Science.gov

Sample records for absolute timing error

  1. Relative errors can cue absolute visuomotor mappings.

    PubMed

    van Dam, Loes C J; Ernst, Marc O

    2015-12-01

    When repeatedly switching between two visuomotor mappings, e.g. in a reaching or pointing task, adaptation tends to speed up over time. That is, when the error in the feedback corresponds to a mapping switch, fast adaptation occurs. Yet, what is learned, the relative error or the absolute mappings? When switching between mappings, errors with a size corresponding to the relative difference between the mappings will occur more often than other large errors. Thus, we could learn to correct more for errors with this familiar size (Error Learning). On the other hand, it has been shown that the human visuomotor system can store several absolute visuomotor mappings (Mapping Learning) and can use associated contextual cues to retrieve them. Thus, when contextual information is present, no error feedback is needed to switch between mappings. Using a rapid pointing task, we investigated how these two types of learning may each contribute when repeatedly switching between mappings in the absence of task-irrelevant contextual cues. After training, we examined how participants changed their behaviour when a single error probe indicated either the often-experienced error (Error Learning) or one of the previously experienced absolute mappings (Mapping Learning). Results were consistent with Mapping Learning despite the relative nature of the error information in the feedback. This shows that errors in the feedback can have a double role in visuomotor behaviour: they drive the general adaptation process by making corrections possible on subsequent movements, as well as serve as contextual cues that can signal a learned absolute mapping. PMID:26280315

  2. Clock time is absolute and universal

    NASA Astrophysics Data System (ADS)

    Shen, Xinhang

    2015-09-01

    A critical error is found in the Special Theory of Relativity (STR): mixing up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation on physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time; it can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the product of the abstract time and the frequency of oscillation. After Lorentz Transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, and the resulting product, i.e. the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of Lorentz Transformation. The Lorentz invariance of the displayed times of clocks can further prove, within the framework of STR, that our Earth-based standard physical time is absolute, universal and independent of inertial reference frames, as confirmed both by the physical fact of the universal synchronization of clocks on the GPS satellites and clocks on the earth, and by the theoretical existence of the absolute and universal Galilean time in STR, which has proved that time dilation and space contraction are pure illusions of STR. The existence of the absolute and universal time in STR directly denies that the reference-frame-dependent abstract time of STR is the physical time; therefore, STR is wrong and none of its predictions can ever happen in the physical world.

  3. Combined Use of Absolute and Differential Seismic Arrival Time Data to Improve Absolute Event Location

    NASA Astrophysics Data System (ADS)

    Myers, S.; Johannesson, G.

    2012-12-01

    Arrival time measurements based on waveform cross correlation are becoming more common as advanced signal processing methods are applied to seismic data archives and real-time data streams. Waveform correlation can precisely measure the time difference between the arrival of two phases, and differential time data can be used to constrain the relative location of events. Absolute locations are needed for many applications, which generally requires the use of absolute time data. Current methods for measuring absolute time data are approximately two orders of magnitude less precise than differential time measurements. To exploit the strengths of both absolute and differential time data, we extend our multiple-event location method Bayesloc, which previously used absolute time data only, to include the use of differential time measurements that are based on waveform cross correlation. Fundamentally, Bayesloc is a formulation of the joint probability over all parameters comprising the multiple-event location system. The Markov-Chain Monte Carlo method is used to sample from the joint probability distribution given arrival data sets. The differential time component of Bayesloc includes scaling a stochastic estimate of differential time measurement precision based on the waveform correlation coefficient for each datum. For a regional-distance synthetic data set with absolute and differential time measurement errors of 0.25 seconds and 0.01 second, respectively, epicenter location accuracy is improved from an average of 1.05 km when solely absolute time data are used to 0.28 km when absolute and differential time data are used jointly (73% improvement). The improvement in absolute location accuracy is the result of conditionally limiting absolute location probability regions based on the precise relative position with respect to neighboring events. Bayesloc estimates of data precision are found to be accurate for the synthetic test, with absolute and differential time measurement
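The quoted 73% figure follows directly from the two accuracy numbers in the abstract:

```python
# Consistency check of the quoted improvement: going from an average
# epicenter error of 1.05 km (absolute times only) to 0.28 km (joint
# absolute + differential times) is a ~73% reduction.
absolute_only_km = 1.05
joint_km = 0.28
improvement = 1 - joint_km / absolute_only_km
print(f"{improvement:.0%}")   # prints "73%"
```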

  4. On the Error Sources in Absolute Individual Antenna Calibrations

    NASA Astrophysics Data System (ADS)

    Aerts, Wim; Baire, Quentin; Bilich, Andria; Bruyninx, Carine; Legrand, Juliette

    2013-04-01

    field) multipath errors, both during calibration and later on at the station, absolute sub-millimeter positioning with GPS is not (yet) possible. References [1] G. Wübbena, M. Schmitz, G. Boettcher, C. Schumann, "Absolute GNSS Antenna Calibration with a Robot: Repeatability of Phase Variations, Calibration of GLONASS and Determination of Carrier-to-Noise Pattern", International GNSS Service: Analysis Center workshop, 8-12 May 2006, Darmstadt, Germany. [2] P. Zeimetz, H. Kuhlmann, "On the Accuracy of Absolute GNSS Antenna Calibration and the Conception of a New Anechoic Chamber", FIG Working Week 2008, 14-19 June 2008, Stockholm, Sweden. [3] P. Zeimetz, H. Kuhlmann, L. Wanninger, V. Frevert, S. Schön and K. Strauch, "Ringversuch 2009", 7th GNSS-Antennen-Workshop, 19-20 March 2009, Dresden, Germany.

  5. Absolute vs. relative error characterization of electromagnetic tracking accuracy

    NASA Astrophysics Data System (ADS)

    Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet

    2010-02-01

    Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the
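The measurement-phase error computation described above can be sketched generically: compare a measured inter-sensor distance against the calibrated phantom geometry. The names and numbers below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical sketch of the relative-error computation: the phantom
# calibration (in a "clean" EM environment) fixes the true inter-sensor
# spacing; during measurement, the deviation of the observed spacing
# from that known geometry is the displacement error.
calibrated_dist = 50.0                 # mm, from clean-field registration
p1 = np.array([0.0, 0.0, 0.0])         # measured sensor positions (mm)
p2 = np.array([49.2, 1.0, 0.5])        # slightly distorted by the EM field
measured_dist = np.linalg.norm(p2 - p1)
displacement_error = abs(measured_dist - calibrated_dist)  # ~0.79 mm
```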

  6. Absolute Plate Velocities from Seismic Anisotropy: Importance of Correlated Errors

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Zheng, L.; Kreemer, C.

    2014-12-01

    The orientation of seismic anisotropy inferred beneath the interiors of plates may provide a means to estimate the motions of the plate relative to the deeper mantle. Here we analyze a global set of shear-wave splitting data to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. The errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma⁻¹ (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2°) differs insignificantly from that for continental lithosphere (σ=21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4°) than for continental lithosphere (σ=14.7°). Two of the slowest-moving plates, Antarctica (vRMS=4 mm a⁻¹, σ=29°) and Eurasia (vRMS=3 mm a⁻¹, σ=33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈5 mm a⁻¹ to result in seismic anisotropy useful for estimating plate motion.

  7. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
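The contrast between the two metrics can be illustrated with a toy decomposition on synthetic residuals (this is a generic sketch, not the study's imperviousness model):

```python
import numpy as np

# Illustrative comparison of absolute (ABS) and squared (SQ) error on the
# same synthetic model residuals. The SQ error decomposes exactly into
# bias^2 + variance; ABS error admits no such clean additive split.
rng = np.random.default_rng(0)
obs = rng.normal(10.0, 2.0, 1000)          # synthetic "observations"
pred = obs + rng.normal(0.5, 1.0, 1000)    # model with bias 0.5 plus noise

residuals = pred - obs
mae = np.mean(np.abs(residuals))           # ABS error metric
mse = np.mean(residuals ** 2)              # SQ error metric

bias = np.mean(residuals)                  # systematic error
var = np.var(residuals)                    # model sensitivity
# Exact additive decomposition holds for SQ error only:
assert np.isclose(mse, bias ** 2 + var)
```

Note that RMSE (the square root of SQ error) always exceeds or equals MAE, which is one face of the "overstating" effect the abstract describes.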

  8. Absolute plate velocities from seismic anisotropy: Importance of correlated errors

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Gordon, Richard G.; Kreemer, Corné

    2014-09-01

    The errors in plate motion azimuths inferred from shear wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma⁻¹ (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ = 19.2°) differs insignificantly from that for continental lithosphere (σ = 21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ = 7.4°) than for continental lithosphere (σ = 14.7°). Two of the slowest-moving plates, Antarctica (vRMS = 4 mm a⁻¹, σ = 29°) and Eurasia (vRMS = 3 mm a⁻¹, σ = 33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈ 5 mm a⁻¹ to result in seismic anisotropy useful for estimating plate motion. The tendency of observed azimuths on the Arabia plate to be counterclockwise of plate motion may provide information about the direction and amplitude of superposed asthenospheric flow or about anisotropy in the lithospheric mantle.

  9. Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles

    ERIC Educational Resources Information Center

    Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner

    2016-01-01

    This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…

  10. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

    We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.
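The phase and time figures quoted above are mutually consistent given the Crab pulsar's rotation period of roughly 33.6 ms (a value assumed here; it is not stated in the abstract):

```python
# Consistency check: a phase lead of 0.01025 period, at the Crab pulsar's
# ~33.6 ms rotation period (assumed, not from the abstract), corresponds
# to roughly 344 microseconds, matching the quoted time lead.
crab_period_s = 33.6e-3        # approximate Crab period in the RXTE era
phase_lead = 0.01025           # fraction of a period (from the abstract)
lead_us = phase_lead * crab_period_s * 1e6
print(round(lead_us))          # prints 344
```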

  11. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  12. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  13. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
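The grid-refinement idea behind such error control can be sketched in one dimension. This is a generic illustration under simplified assumptions, not the article's algorithm (which uses three rectilinear grids and controls both absolute and relative error on the two-dimensional five-point stencil):

```python
import numpy as np

# Sketch: solve u'' = f on [0,1] with homogeneous boundary conditions
# using the standard central-difference stencil at two resolutions, then
# use the coarse/fine discrepancy at shared nodes as an error estimate.
def solve_poisson_1d(f, n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)            # interior grid points
    # Tridiagonal matrix of the central-difference approximation to d2/dx2
    A = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

f = lambda x: -np.pi**2 * np.sin(np.pi * x)  # exact solution: sin(pi x)
x_c, u_c = solve_poisson_1d(f, 50)           # coarse grid
x_f, u_f = solve_poisson_1d(f, 101)          # fine grid (n_f = 2*n_c + 1)
# With n_f = 2*n_c + 1, every other fine node coincides with a coarse node
err_est = np.max(np.abs(u_f[1::2] - u_c))    # grid-difference error estimate
```

The estimate shrinks as O(h²) with the stencil's truncation error, which is what makes multi-resolution comparison usable as an error indicator.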

  14. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy to allow climate change observations to survive data gaps exist at NIST in the laboratory, but still need demonstration that they can move successfully to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  15. Time-resolved Absolute Velocity Quantification with Projections

    PubMed Central

    Langham, Michael C.; Jain, Varsha; Magland, Jeremy F.; Wehrli, Felix W.

    2010-01-01

    Quantitative information on time-resolved blood velocity along the femoral/popliteal artery can provide clinical information on peripheral arterial disease and complement MR angiography since not all stenoses are hemodynamically significant. The key disadvantages of the most widely used approach to time-resolve pulsatile blood flow by cardiac-gated velocity-encoded gradient-echo imaging are gating errors and long acquisition time. Here we demonstrate a rapid non-triggered method that quantifies absolute velocity on the basis of phase difference between successive velocity-encoded projections after selectively removing the background static tissue signal via a reference image. The tissue signal from the reference image’s center k-space line is isolated by masking out the vessels in the image domain. The performance of the technique, in terms of reproducibility and agreement with results obtained with conventional phase contrast (PC)-MRI, was evaluated at 3T field strength with a variable-flow-rate phantom and in vivo on the triphasic velocity waveforms at several segments along the femoral and popliteal arteries. Additionally, time-resolved flow velocity was quantified in five healthy subjects and compared against gated PC-MRI results. To illustrate clinical feasibility the proposed method was shown to be able to identify hemodynamic abnormalities and impaired reactivity in a diseased femoral artery. For both phantom and in vivo studies, velocity measurements were within 1.5 cm/s and the coefficient of variation was less than 5% in an in vivo reproducibility study. In five healthy subjects, the average differences in mean peak velocities and their temporal locations were within 1 cm/s and 10 ms compared to gated PC-MRI. In conclusion, the proposed method provides temporally-resolved arterial velocity with a temporal resolution of 20 ms with minimal post-processing. PMID:20677235
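For context, velocity-encoded MRI methods such as the gated PC-MRI used as the comparison here map phase onto velocity linearly through the user-chosen velocity-encoding parameter (VENC), with ±VENC spanning ±π of phase. A minimal sketch of that standard relation, with hypothetical numbers:

```python
import numpy as np

# Standard phase-contrast relation (a hedged sketch, not this paper's
# projection-based pipeline): velocity is proportional to the measured
# phase difference, scaled so that +/-VENC corresponds to +/-pi radians.
def velocity_from_phase(delta_phi_rad, venc_cm_s):
    return venc_cm_s * delta_phi_rad / np.pi

# Example: a phase difference of pi/2 with VENC = 100 cm/s gives 50 cm/s.
v = velocity_from_phase(np.pi / 2, 100.0)
```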

  16. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  17. Demonstrating the error budget for the Climate Absolute Radiance and Refractivity Observatory through solar irradiance measurements

    NASA Astrophysics Data System (ADS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2015-09-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a testbed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  18. Preliminary Error Budget for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; Gubbels, Timothy; Barnes, Robert

    2011-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to those while viewing the sun, relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration including a link to absolute solar irradiance measurements. The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those in the IPCC Report. A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables, including: 1) Surface temperature and atmospheric temperature profile 2) Atmospheric water vapor profile 3) Far infrared water vapor greenhouse 4) Aerosol properties and anthropogenic aerosol direct radiative forcing 5) Total and spectral solar

  19. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
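For reference, the original formulations that the abstract says are being generalized are commonly written as below. This is a hedged sketch of one widely used form; the exact definitions, and the generalization for negative values, should be taken from the article itself.

```python
import numpy as np

# Sketch of common formulations of the normalized mean bias factor (NMBF)
# and normalized mean absolute error factor (NMAEF). The denominator
# switches depending on whether the model over- or underestimates, which
# is what makes the metrics symmetric; these forms assume positive data.
def nmbf(model, obs):
    m, o = np.sum(model), np.sum(obs)
    return m / o - 1.0 if m >= o else 1.0 - o / m

def nmaef(model, obs):
    denom = np.sum(obs) if np.sum(model) >= np.sum(obs) else np.sum(model)
    return np.sum(np.abs(model - obs)) / denom

obs = np.array([1.0, 2.0, 3.0])
over = obs * 1.5                 # uniform 50% overestimate -> NMBF = +0.5
under = obs / 1.5                # reciprocal underestimate -> NMBF = -0.5
```

The symmetric interpretation is the point: a factor-of-1.5 overestimate and a factor-of-1.5 underestimate yield NMBF values of equal magnitude and opposite sign.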

  20. Henry More and the development of absolute time.

    PubMed

    Thomas, Emily

    2015-12-01

    This paper explores the nature, development and influence of the first English account of absolute time, put forward in the mid-seventeenth century by the 'Cambridge Platonist' Henry More. Against claims in the literature that More does not have an account of time, this paper sets out More's evolving account and shows that it reveals the lasting influence of Plotinus. Further, this paper argues that More developed his views on time in response to his adoption of Descartes' vortex cosmology and cosmogony, providing new evidence of More's wider project to absorb Cartesian natural philosophy into his Platonic metaphysics. Finally, this paper argues that More should be added to the list of sources that later English thinkers - including Newton and Samuel Clarke - drew on in constructing their absolute accounts of time. PMID:26568082

  1. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. PMID:23962670

  2. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. PMID:23962670

  3. Effective connectivity associated with auditory error detection in musicians with absolute pitch

    PubMed Central

    Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.

    2014-01-01

    It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network compromising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in feedback pitch error detection/correction process. Our results suggest that modulation of left to right STG connections are important in the identification of self-voice error and sensory motor integration in AP musicians. We also identify reduced connectivity of left hemisphere PM to STG connections in AP and RP groups during the error detection and corrections process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings here also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere. PMID:24634644

  4. Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification

    EPA Science Inventory

    Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...

  5. An analysis of spacecraft data time tagging errors

    NASA Technical Reports Server (NTRS)

    Fang, A. C.

    1975-01-01

    An in-depth examination of the timing and telemetry in just one spacecraft points out the genesis of various types of timing errors and serves as a guide in the design of future timing/telemetry systems. The principal sources of timing errors are examined carefully and described in detail. Estimates of these errors are also made and presented. It is found that the timing errors within the telemetry system are larger than the total timing errors resulting from all other sources.

  6. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    SciTech Connect

    Gustafson, William I.; Yu, Shaocai

    2012-10-23

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under- and overestimation. Two examples are the normalized mean bias factor and the normalized mean absolute error factor. However, the original formulations of these metrics are only valid for datasets with positive means. This paper presents a methodology to use and interpret the metrics with datasets that have negative means. The updated formulations give results identical to the original formulations for the case of positive means, so researchers are encouraged to use the updated formulations going forward without introducing ambiguity.
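
    As a concrete illustration, the original positive-mean formulations can be sketched in a few lines; the generalized negative-mean forms introduced by the paper are not reproduced here. The symmetry is visible in a factor-of-2 test: a factor-of-2 overestimate and a factor-of-2 underestimate give bias factors of equal magnitude and opposite sign.

```python
import numpy as np

def nmbf(model, obs):
    """Normalized mean bias factor (original positive-mean form).

    Symmetric: a factor-of-2 overestimate gives +1, a factor-of-2
    underestimate gives -1.
    """
    m, o = np.mean(model), np.mean(obs)
    if m >= o:
        return m / o - 1.0
    return 1.0 - o / m

def nmaef(model, obs):
    """Normalized mean absolute error factor (original positive-mean form)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    denom = np.sum(obs) if model.mean() >= obs.mean() else np.sum(model)
    return np.sum(np.abs(model - obs)) / denom

# Symmetry check with a factor-of-2 over- and underestimate
obs = np.array([1.0, 2.0, 3.0])
print(nmbf(2 * obs, obs), nmbf(0.5 * obs, obs))
```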

  7. 75 FR 15371 - Time Error Correction Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ... Energy Regulatory Commission 18 CFR Part 40 Time Error Correction Reliability Standard March 18, 2010... section 215 of the Federal Power Act, the Commission proposes to remand the proposed revised Time Error... Commission proposes to remand the Time Error Correction Reliability Standard (BAL-004-1) developed by...

  8. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
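
    The comparison logic can be sketched abstractly: each generator's 1-second pulse latches a free-running timer, and the pairwise error is the latched difference wrapped to ±0.5 s and quantized to the timer resolution. The function and offsets below are illustrative, not the instrument's firmware.

```python
def pulse_error(t_a, t_b, resolution=2e-6):
    """Timing error between two 1-second pulses, wrapped to +/-0.5 s
    and quantized to the timer resolution (2 us here)."""
    err = (t_b - t_a + 0.5) % 1.0 - 0.5          # wrap into [-0.5, 0.5)
    return round(err / resolution) * resolution  # quantize to timer ticks

# Three asynchronous generators with small fixed offsets (seconds)
offsets = {"gen0": 0.0, "gen1": 138e-6, "gen2": -52e-6}
print(pulse_error(offsets["gen0"], offsets["gen1"]))  # gen1 leads gen0 by 138 us
```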

  9. Alterations in Error-Related Brain Activity and Post-Error Behavior over Time

    ERIC Educational Resources Information Center

    Themanson, Jason R.; Rosen, Peter J.; Pontifex, Matthew B.; Hillman, Charles H.; McAuley, Edward

    2012-01-01

    This study examines the relation between the error-related negativity (ERN) and post-error behavior over time in healthy young adults (N = 61). Event-related brain potentials were collected during two sessions of an identical flanker task. Results indicated changes in ERN and post-error accuracy were related across task sessions, with more…

  10. Time dependent corrections to absolute gravity determinations in the establishment of modern gravity control

    NASA Astrophysics Data System (ADS)

    Dykowski, Przemyslaw; Krynski, Jan

    2015-04-01

    The establishment of modern gravity control using exclusively the absolute method of gravity determination has significant advantages (e.g. accuracy, time efficiency) over control established mostly with relative gravity measurements. The newly modernized gravity control in Poland consists of 28 fundamental stations (laboratory) and 168 base stations (PBOG14 - located in the field). Gravity at the fundamental stations was surveyed with the FG5-230 gravimeter of the Warsaw University of Technology, and at the base stations with the A10-020 gravimeter of the Institute of Geodesy and Cartography, Warsaw. This work concerns absolute gravity determinations at the base stations. Although free of common relative measurement errors (e.g. instrumental drift) and effects of network adjustment, absolute gravity determinations for the establishment of gravity control require advanced corrections due to time-dependent factors, i.e. tidal and ocean loading corrections, atmospheric corrections and hydrological corrections, which were not taken into account when establishing the previous gravity control in Poland. Currently available services and software allow one to determine high-accuracy, high-temporal-resolution corrections for atmospheric (based on digital weather models, e.g. ECMWF) and hydrological (based on hydrological models, e.g. GLDAS/Noah) gravitational and loading effects. These corrections are mostly used for processing observations with Superconducting Gravimeters in the Global Geodynamics Project. For the area of Poland the atmospheric correction based on weather models can differ from the standard atmospheric correction by as much as ±2 µGal. The hydrological model shows an annual variability of ±8 µGal. In addition, the standard tidal correction may differ from the one obtained from the local tidal model (based on tidal observations); at Borowa Gora Observatory this difference reaches the level of ±1.5 µGal. Overall the sum of atmospheric and

  11. Recovery of absolute phases for the fringe patterns of three selected wavelengths with improved anti-error capability

    NASA Astrophysics Data System (ADS)

    Long, Jiale; Xi, Jiangtao; Zhang, Jianmin; Zhu, Ming; Cheng, Wenqing; Li, Zhongwei; Shi, Yusheng

    2016-09-01

    In a recently published work, we proposed a technique to recover the absolute phase maps of fringe patterns with two selected fringe wavelengths. To achieve higher anti-error capability, that method requires employing fringe patterns with longer wavelengths; however, longer wavelengths may degrade the signal-to-noise ratio (SNR) of the surface measurement. In this paper, we propose a new approach to unwrap phase maps from their wrapped versions based on the use of fringes with three different wavelengths, characterized by improved anti-error capability and SNR. While the previous method works on two phase maps obtained from six-step phase-shifting profilometry (PSP) (thus requiring 12 fringe patterns), the proposed technique performs very well on three phase maps from three-step PSP, requiring only nine fringe patterns and hence being more efficient. Moreover, the advantages of the two-wavelength method (simple implementation and flexibility in the use of fringe patterns) are also preserved. Theoretical analysis and experimental results are presented to confirm the effectiveness of the proposed method.
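
    The three-wavelength algorithm itself is not given in the abstract. As background, the underlying two-wavelength (equivalent-wavelength) unwrapping idea can be sketched on synthetic fringes; the wavelengths in pixels below are made up, and the sketch is valid only while the beat phase stays within one period.

```python
import numpy as np

TWO_PI = 2 * np.pi

def wrap(phi):
    """Wrap a phase into [-pi, pi)."""
    return (phi + np.pi) % TWO_PI - np.pi

def unwrap_two_wavelength(phi1, phi2, lam1, lam2):
    """Recover the absolute phase of the lam1 fringes from two wrapped
    phase maps, valid while the beat ("equivalent") phase is itself
    unambiguous, i.e. over a range < lam_eq = lam1*lam2/(lam2-lam1)."""
    lam_eq = lam1 * lam2 / (lam2 - lam1)
    phi_eq = (phi1 - phi2) % TWO_PI            # beat phase in [0, 2*pi)
    k = np.round((lam_eq / lam1 * phi_eq - phi1) / TWO_PI)  # fringe order
    return phi1 + TWO_PI * k

lam1, lam2 = 28.0, 32.0                 # fringe wavelengths in pixels (made up)
x = np.arange(0, 200)                   # range < lam_eq = 224 pixels
true_phase = TWO_PI * x / lam1
phi1, phi2 = wrap(true_phase), wrap(TWO_PI * x / lam2)
rec = unwrap_two_wavelength(phi1, phi2, lam1, lam2)
print(np.max(np.abs(rec - true_phase)))  # float roundoff only
```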

  12. Multi-channel data acquisition system with absolute time synchronization

    NASA Astrophysics Data System (ADS)

    Włodarczyk, Przemysław; Pustelny, Szymon; Budker, Dmitry; Lipiński, Marcin

    2014-11-01

    We present a low-cost, stand-alone, global-time-synchronized data acquisition system. Our prototype allows recording up to four analog signals with 16-bit resolution in variable ranges and a maximum sampling rate of 1000 S/s. The system simultaneously acquires readouts of external sensors, e.g. a magnetometer or thermometer. A complete data set, including a header containing a timestamp, is stored on a Secure Digital (SD) card or transmitted to a computer over Universal Serial Bus (USB). The estimated time accuracy of the data acquisition is better than ±200 ns. The device is intended for use in a global network of optical magnetometers (the Global Network of Optical Magnetometers for Exotic physics - GNOME), which aims to search for signals heralding physics beyond the Standard Model that could be generated by ordinary spin coupling to exotic particles or by anomalous spin interactions.
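
    The abstract specifies only that each record carries a timestamp header plus four 16-bit channel samples. Assuming a hypothetical byte layout (this is not the GNOME on-disk format), such a record could be packed and parsed like this:

```python
import struct

# Hypothetical record layout (made up, not the GNOME format): a header
# holding a 64-bit GPS timestamp in nanoseconds, then four signed
# 16-bit ADC samples, little-endian, no padding.
RECORD = struct.Struct("<Q4h")

def pack_record(t_ns, channels):
    return RECORD.pack(t_ns, *channels)

def unpack_record(buf):
    t_ns, *channels = RECORD.unpack(buf)
    return t_ns, channels

buf = pack_record(1_400_000_000_123_456_789, [100, -200, 300, -400])
print(len(buf))            # 16 bytes: 8 (timestamp) + 4 * 2 (samples)
print(unpack_record(buf))
```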

  13. Absolute GPS Time Event Generation and Capture for Remote Locations

    NASA Astrophysics Data System (ADS)

    HIRES Collaboration

    The HiRes experiment operates fixed-location and portable lasers at remote desert locations to generate calibration events. One physics goal of HiRes is to search for unusual showers. These may appear similar to upward or horizontally pointing laser tracks used for atmospheric calibration. It is therefore necessary to remove all of these calibration events from the HiRes detector data stream in a physics-blind manner. A robust and convenient "tagging" method is to generate the calibration events at precisely known times. To facilitate this tagging method we have developed the GPSY (Global Positioning System YAG) module. It uses a GPS receiver, an embedded processor and additional timing logic to generate laser triggers at arbitrary programmed times and frequencies with better than 100 ns accuracy. The GPSY module has two trigger outputs (one-microsecond resolution) to trigger the laser flash-lamp and Q-switch and one event capture input (25 ns resolution). The GPSY module can be programmed either by a front-panel menu-based interface or by a host computer via an RS232 serial interface. The latter also allows for computer logging of generated and captured event times. Details of the design and the implementation of these devices will be presented. Motivation: Air showers represent a small fraction, much less than a percent, of the total High Resolution Fly's Eye data sample. The bulk of the sample is calibration data. Most of this calibration data is generated by two types of systems that use lasers. One type sends light directly to the detectors via optical fibers to monitor detector gains (Girard 2001). The other sends a beam of light into the sky and the scattered light that reaches the detectors is used to monitor atmospheric effects (Wiencke 1998). It is important that these calibration events be cleanly separated from the rest of the sample both to provide a complete set of monitoring information, and more
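
    The tagging idea, separating events whose timestamps match the programmed trigger schedule from candidate physics events, can be sketched as follows. The tolerance and times are illustrative, not HiRes values beyond the quoted 100 ns accuracy.

```python
import bisect

def split_calibration_events(event_times, trigger_times, tol=1e-7):
    """Separate events whose timestamps match a programmed laser trigger
    to within `tol` seconds (100 ns here) from candidate physics events."""
    triggers = sorted(trigger_times)
    cal, physics = [], []
    for t in event_times:
        i = bisect.bisect_left(triggers, t)
        # Distance to the nearest programmed trigger time
        near = min((abs(t - triggers[j]) for j in (i - 1, i)
                    if 0 <= j < len(triggers)), default=float("inf"))
        (cal if near <= tol else physics).append(t)
    return cal, physics

triggers = [10.0, 10.5, 11.0]            # programmed GPS trigger times (s)
events = [10.00000005, 10.7, 11.0, 11.2]
cal, phys = split_calibration_events(events, triggers)
print(cal, phys)
```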

  14. Effects of knowledge of results (KR) frequency in the learning of a timing skill: absolute versus relative KR frequency.

    PubMed

    Vieira, Márcio M; Ugrinowitsch, Herbert; Oliveira, Fernanda S; Gallo, Lívia G; Benda, Rodolfo N

    2012-10-01

    The interaction between the amount of practice and the frequency of Knowledge of Results (KR) was investigated in a timing skill. In the acquisition phase the task involved 90 trials of releasing a knob and transporting three tennis balls from three near recipients to three far ones in a specific sequence and target time. The retention test, performed 24 hr later, had the same sequence of transport but required a new target time. In both phases, absolute error and standard deviation plus constant error were measured. The five groups differed in frequency of KR and amount of practice. The results showed that intermediate as well as higher frequencies of KR elicited better performance during the retention test. PMID:23265002

  16. A Mechanism for Error Detection in Speeded Response Time Tasks

    ERIC Educational Resources Information Center

    Holroyd, Clay B.; Yeung, Nick; Coles, Michael G. H.; Cohen, Jonathan D.

    2005-01-01

    The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors…

  17. Overproduction timing errors in expert dancers.

    PubMed

    Minvielle-Moncla, Joëlle; Audiffren, Michel; Macar, Françoise; Vallet, Cécile

    2008-07-01

    The authors investigated how expert dancers achieve accurate timing under various conditions. They designed the conditions to interfere with the dancers' attention to time and to test the explanation of the interference effect provided in the attentional model of time processing. Participants were 17 expert contemporary dancers who performed a freely chosen duration while walking and executing a bilateral cyclic arm movement over a given distance. The dancers reproduced that duration in different situations of interference. The process yielded temporal overproductions, validating the attentional model and extending its application to expert populations engaged in complex motor situations. The finding that the greatest overproduction occurred in the transfer-with-improvisation condition suggests that improvisation within a time deadline requires specific training.

  18. Overcoming time-integration errors in SINDA's FWDBCK solution routine

    NASA Technical Reports Server (NTRS)

    Skladany, J. T.; Costello, F. A.

    1984-01-01

    The FWDBCK time step, which is usually chosen intuitively to achieve adequate accuracy at reasonable computational costs, can in fact lead to large errors. NASA observed such errors in solving cryogenic problems on the COBE spacecraft, but a similar error is also demonstrated for a single node radiating to space. An algorithm has been developed for selecting the time step during the course of the simulation. The error incurred when the time derivative is replaced by the FWDBCK time difference can be estimated from the Taylor-Series expression for the temperature. The algorithm selects the time step to keep this error small. The efficacy of the method is demonstrated on the COBE and single-node problems.
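
    A minimal sketch of this idea on the single-node-radiating-to-space example: the step is chosen so that the leading Taylor error term (Δt²/2)·|T̈| stays below a tolerance. The constants and tolerance are invented for illustration; this is not the SINDA algorithm itself.

```python
SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
k = SIGMA * 0.9 * 1.0 / 500.0    # eps * A / C for the node (made-up values)

def f(T):
    """dT/dt for a single node radiating to space: C dT/dt = -eps*sigma*A*T^4."""
    return -k * T**4

def step_size(T, tol):
    """Pick dt so the leading Taylor error term (dt^2 / 2) * |T''| <= tol."""
    Tdd = abs(-4 * k * T**3 * f(T))     # T'' = f'(T) * f(T)
    return (2 * tol / Tdd) ** 0.5

def integrate(T0, t_end, tol=1e-3):
    t, T = 0.0, T0
    while t < t_end:
        dt = min(step_size(T, tol), t_end - t)
        if dt <= 1e-12:                 # guard against float stagnation
            break
        T += dt * f(T)                  # forward (explicit) Euler step
        t += dt
    return T

# Exact solution of dT/dt = -k T^4 is T(t) = (T0^-3 + 3 k t)^(-1/3)
print(integrate(300.0, 1000.0))
```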

  19. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    PubMed Central

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  20. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates, for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.
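
    For a scalar system the suboptimal-versus-optimal comparison can be carried out directly by iterating the error-variance recursion; the numbers below are illustrative and not taken from the paper.

```python
def steady_state_mse(a, q, r, gain=None, iters=2000):
    """Steady-state estimation error variance for the scalar system
    x_{k+1} = a x_k + w (var q), y_k = x_k + v (var r), filtered with a
    fixed gain; gain=None iterates the optimal (Kalman) gain instead."""
    p = q
    for _ in range(iters):
        p_pred = a * a * p + q
        k = p_pred / (p_pred + r) if gain is None else gain
        p = (1 - k) ** 2 * p_pred + k * k * r   # Joseph-form update
    return p

a, q, r = 0.9, 1.0, 1.0
p_opt = steady_state_mse(a, q, r)            # optimal Kalman filter
p_sub = steady_state_mse(a, q, r, gain=0.3)  # mismatched fixed-gain filter
print(p_opt, p_sub)   # the suboptimal MSE is never below the optimal one
```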

  1. Error Representation in Time For Compressible Flow Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    Time plays an essential role in most real-world fluid mechanics problems, e.g. turbulence, combustion, acoustic noise, moving geometries, blast waves, etc. Time-dependent calculations now dominate the computational landscape at the various NASA Research Centers, but the accuracy of these computations is often not well understood. In this presentation, we investigate error representation (and error control) for time-periodic problems as a prelude to investigating the feasibility of error control for stationary statistics and space-time averages. These statistics and averages (e.g. time-averaged lift and drag forces) are often the output quantities sought by engineers. For systems such as the Navier-Stokes equations, pointwise error estimates deteriorate rapidly with increasing Reynolds number, while statistics and averages may remain well behaved.

  2. Disentangling timing and amplitude errors in streamflow simulations

    NASA Astrophysics Data System (ADS)

    Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin

    2016-09-01

    This article introduces an improvement in the Series Distance (SD) approach for the improved discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow and periods of rise and recession in hydrological events. Within these periods, it determines the distance of two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs, a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart, and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude of errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. This suggests that the combined use of time and magnitude errors to
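
    A toy decomposition in the spirit of SD, though far simpler (it matches only the event peaks rather than all hydrologically similar points), can be sketched on synthetic hydrographs:

```python
import numpy as np

def peak_errors(obs, sim):
    """Toy split of hydrograph mismatch into a timing error (peak-time
    offset, in time steps) and a magnitude error (peak-value difference),
    using the global peak of each series. The full Series Distance method
    matches many hydrologically similar points, not just the peak."""
    i_obs, i_sim = int(np.argmax(obs)), int(np.argmax(sim))
    dt = i_sim - i_obs
    dq = sim[i_sim] - obs[i_obs]
    return dt, dq

t = np.arange(100.0)
obs = 5 * np.exp(-0.5 * ((t - 40) / 6) ** 2)  # synthetic event peaking at t=40
sim = 4 * np.exp(-0.5 * ((t - 44) / 6) ** 2)  # simulation: 4 steps late, too low
print(peak_errors(obs, sim))
```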

  3. A mechanism for error detection in speeded response time tasks.

    PubMed

    Holroyd, Clay B; Yeung, Nick; Coles, Michael G H; Cohen, Jonathan D

    2005-05-01

    The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors by identifying states of the system associated with negative value. The mechanism is formalized in a computational model based on a recent theoretical framework for understanding error processing in humans (C. B. Holroyd & M. G. H. Coles, 2002). The model is used to simulate behavioral and event-related brain potential data in a speeded response time task, and the results of the simulation are compared with empirical data.

  4. Sources of error in picture naming under time pressure.

    PubMed

    Lloyd-Jones, Toby J; Nettlemill, Mandy

    2007-06-01

    We used a deadline procedure to investigate how time pressure may influence the processes involved in picture naming. The deadline exaggerated errors found under naming without deadline. There were also category differences in performance between living and nonliving things and, in particular, for animals versus fruit and vegetables. The majority of errors were visually and semantically related to the target (e.g. celery-asparagus), and there was a greater proportion of these errors made to living things. Importantly, there were also more visual-semantic errors to animals than to fruit and vegetables. In addition, there was a smaller number of pure semantic errors (e.g., nut-bolt), which were made predominantly to nonliving things. The different kinds of error were correlated with different variables. Overall, visual-semantic errors were associated with visual complexity and visual similarity, whereas pure semantic errors were associated with imageability and age of acquisition. However, for animals, visual-semantic errors were associated with visual complexity, whereas for fruit and vegetables they were associated with visual similarity. We discuss these findings in terms of theories of category-specific semantic impairment and models of picture naming. PMID:17848037

  5. On the Time Step Error of the DSMC

    NASA Astrophysics Data System (ADS)

    Hokazono, Tomokuni; Kobayashi, Seijiro; Ohsawa, Tomoki; Ohwada, Taku

    2003-05-01

    The time step truncation error of the DSMC is examined numerically. Contrary to the claim of [S.V. Bogomolov, U.S.S.R. Comput. Math. Math. Phys., Vol. 28, 79 (1988)] and in agreement with that of [T. Ohwada, J. Comput. Phys., Vol. 139, 1 (1998)], it is demonstrated that the error of the conventional DSMC per time step Δt is not O(Δt^3) but O(Δt^2). Further, it is shown that the error of the DSMC is reduced to O(Δt^3) by applying Strang's splitting for partial differential equations to the Boltzmann equation. The error resulting from the boundary condition, which is not studied in the above-mentioned theoretical studies, is also discussed.
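
    The order difference can be reproduced on a toy linear system (not the Boltzmann equation): for non-commuting operators, first-order (Lie) splitting has per-step error O(Δt^2), while Strang's symmetrized splitting reduces it to O(Δt^3), i.e. first- versus second-order global accuracy. Nilpotent matrices are used below so that each sub-step exponential is exact and the measured error comes from the splitting alone.

```python
import numpy as np

# Non-commuting nilpotent operators: exp(t A) = I + t A exactly.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
I = np.eye(2)

def exact(t):
    """exp(t(A+B)) for A+B = [[0,1],[1,0]]."""
    return np.array([[np.cosh(t), np.sinh(t)], [np.sinh(t), np.cosh(t)]])

def propagate(dt, n, strang):
    if strang:  # Strang: half step A, full step B, half step A
        step = (I + dt / 2 * A) @ (I + dt * B) @ (I + dt / 2 * A)
    else:       # Lie: full step A, then full step B
        step = (I + dt * B) @ (I + dt * A)
    U = I
    for _ in range(n):
        U = step @ U
    return U

def err(n, strang):
    return np.linalg.norm(propagate(1.0 / n, n, strang) - exact(1.0))

# Halving dt roughly halves the Lie error but quarters the Strang error.
print(err(100, False) / err(200, False))  # ~2 (first order)
print(err(100, True) / err(200, True))    # ~4 (second order)
```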

  6. Absolute frequency measurement at the 10^-16 level based on the international atomic time

    NASA Astrophysics Data System (ADS)

    Hachisu, H.; Fujieda, M.; Kumagai, M.; Ido, T.

    2016-06-01

    Referring to International Atomic Time (TAI), we measured the absolute frequency of the 87Sr lattice clock with an uncertainty of 1.1 × 10^-15. Unless an optical clock is operated continuously over the five days of the TAI grid, the dead-time uncertainty must be evaluated in order to use the available five-day average of the local frequency reference. We distributed intermittent measurements homogeneously over the five-day grid of TAI, by which the dead-time uncertainty was reduced to the low 10^-16 level. Three campaigns of five (or four)-day consecutive measurements yielded an absolute frequency of the 87Sr clock transition of 429 228 004 229 872.85 (47) Hz, where the systematic uncertainty of the 87Sr optical frequency standard amounts to 8.6 × 10^-17.

  7. Perturbative approach to continuous-time quantum error correction

    NASA Astrophysics Data System (ADS)

    Ippoliti, Matteo; Mazza, Leonardo; Rizzi, Matteo; Giovannetti, Vittorio

    2015-04-01

    We present a discussion of the continuous-time quantum error correction introduced by J. P. Paz and W. H. Zurek [Proc. R. Soc. A 454, 355 (1998), 10.1098/rspa.1998.0165]. We study the general Lindbladian which describes the effects of both noise and error correction in the weak-noise (or strong-correction) regime through a perturbative expansion. We use this tool to derive quantitative aspects of the continuous-time dynamics both in general and through two illustrative examples: the three-qubit and five-qubit stabilizer codes, which can be independently solved by analytical and numerical methods and then used as benchmarks for the perturbative approach. The perturbatively accessible time frame features a short initial transient in which error correction is ineffective, followed by a slow decay of the information content consistent with the known facts about discrete-time error correction in the limit of fast operations. This behavior is explained in the two case studies through a geometric description of the continuous transformation of the state space induced by the combined action of noise and error correction.

  8. Absolute value optimization to estimate phase properties of stochastic time series

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1977-01-01

    Most existing deconvolution techniques are incapable of determining phase properties of wavelets from time series data; to assure a unique solution, minimum phase is usually assumed. It is demonstrated, for moving average processes of order one, that deconvolution filtering using the absolute value norm provides an estimate of the wavelet shape that has the correct phase character when the random driving process is nonnormal. Numerical tests show that this result probably applies to more general processes.

  9. Characterizing Complex Time Series from the Scaling of Prediction Error.

    NASA Astrophysics Data System (ADS)

    Hinrichs, Brant Eric

    This thesis concerns characterizing complex time series from the scaling of prediction error. We use the global modeling technique of radial basis function approximation to build models from a state-space reconstruction of a time series that otherwise appears complicated or random (i.e. aperiodic, irregular). Prediction error as a function of prediction horizon is obtained from the model using the direct method. The relationship between the underlying dynamics of the time series and the logarithmic scaling of prediction error as a function of prediction horizon is investigated. We use this relationship to characterize the dynamics of both a model chaotic system and physical data from the optic tectum of an attentive pigeon exhibiting the important phenomena of nonstationary neuronal oscillations in response to visual stimuli.
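
    A minimal sketch of the approach: fit a radial basis function model to a chaotic system (the logistic map here, a stand-in for the thesis's data) and measure how the direct-method prediction error grows with prediction horizon. The centers, widths, and data sizes are invented for illustration.

```python
import numpy as np

# Data: the chaotic logistic map x -> 4 x (1 - x)
x = np.empty(600)
x[0] = 0.3
for i in range(599):
    x[i + 1] = 4 * x[i] * (1 - x[i])

# Gaussian radial-basis-function model of the one-step map
centers = np.linspace(0.0, 1.0, 20)
width = 0.1

def design(u):
    return np.exp(-((u[:, None] - centers[None, :]) / width) ** 2)

w, *_ = np.linalg.lstsq(design(x[:-1]), x[1:], rcond=None)

def predict(u, horizon):
    for _ in range(horizon):        # iterate the one-step model
        u = design(u) @ w
    return u

# Mean prediction error vs. horizon: grows with horizon because the
# map's positive Lyapunov exponent amplifies the small model error.
start = x[100:500]
errors = {h: np.mean(np.abs(predict(start, h) - x[100 + h:500 + h]))
          for h in (1, 4, 8)}
print(errors)
```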

  10. The Impact of Medical Interpretation Method on Time and Errors

    PubMed Central

    Kapelusznik, Luciano; Prakash, Kavitha; Gonzalez, Javier; Orta, Lurmag Y.; Tseng, Chi-Hong; Changrani, Jyotsna

    2007-01-01

    Background Twenty-two million Americans have limited English proficiency. Interpreting for limited English proficient patients is intended to enhance communication and delivery of quality medical care. Objective Little is known about the impact of various interpreting methods on interpreting speed and errors. This investigation addresses this important gap. Design Four scripted clinical encounters were used to enable the comparison of equivalent clinical content. These scripts were run across four interpreting methods, including remote simultaneous, remote consecutive, proximate consecutive, and proximate ad hoc interpreting. The first 3 methods utilized professional, trained interpreters, whereas the ad hoc method utilized untrained staff. Measurements Audiotaped transcripts of the encounters were coded, using a prespecified algorithm to determine medical error and linguistic error, by coders blinded to the interpreting method. Encounters were also timed. Results Remote simultaneous medical interpreting (RSMI) encounters averaged 12.72 vs 18.24 minutes for the next fastest mode (proximate ad hoc) (p = 0.002). There were 12 times more medical errors of moderate or greater clinical significance among utterances in non-RSMI encounters compared to RSMI encounters (p = 0.0002). Conclusions While limited by the small number of interpreters involved, our study found that RSMI resulted in fewer medical errors and was faster than non-RSMI methods of interpreting. PMID:17957418

  11. Heat conduction errors and time lag in cryogenic thermometer installations

    NASA Technical Reports Server (NTRS)

    Warshawsky, I.

    1973-01-01

    Installation practices are recommended that will increase rate of heat exchange between the thermometric sensing element and the cryogenic fluid and that will reduce the rate of undesired heat transfer to higher-temperature objects. Formulas and numerical data are given that help to estimate the magnitude of heat-conduction errors and of time lag in response.
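
    The flavor of such estimates can be conveyed with the textbook fin-conduction model of an immersed thermometer stem (my choice of illustration; the report's own formulas and data may differ): the temperature error at the sensing tip falls off as 1/cosh(mL), where m = sqrt(hP/(kA)), so deeper immersion or better fluid coupling shrinks the error exponentially.

```python
import math

def conduction_error(t_mount, t_fluid, h, perim, k, area, length):
    """Tip temperature error of a thermometer stem modeled as a fin:

        error = (T_mount - T_fluid) / cosh(m * L),  m = sqrt(h*P / (k*A))

    h: fluid heat-transfer coefficient, perim/area: stem cross-section
    perimeter and area, k: stem conductivity, length: immersion depth."""
    m = math.sqrt(h * perim / (k * area))
    return (t_mount - t_fluid) / math.cosh(m * length)
```

    For example, a stem anchored at a 300 K mount and immersed in 77 K fluid shows the error dropping sharply when the immersion depth doubles.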

  12. Real-Time Minimization of Tracking Error for Aircraft Systems

    NASA Technical Reports Server (NTRS)

    Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John

    2013-01-01

    This technology presents a novel, stable, discrete-time adaptive law for flight control in a direct adaptive control (DAC) framework. When no errors are present, the original control design is tuned for optimal performance. Adaptive control works toward restoring nominal performance whenever the design has modeling uncertainties/errors or when the vehicle undergoes a substantial flight-configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion and that the component is not returning toward its reference value according to the expected controller characteristics, then the neural network (NN) model of aircraft operation may be changed.
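
    The update idea can be shown with a deliberately tiny stand-in: a scalar plant with an unknown gain, a nominal inverse, and a gradient-style update of one augmentation parameter that drives the tracking error to zero. This is a generic direct-adaptive sketch of the concept, not NASA's NN-based law:

```python
def run_adaptive(plant_gain=2.0, nominal_gain=1.0, rate=0.05, steps=200):
    """Scalar direct adaptive control sketch.  The true plant is
    y = plant_gain * u, but the controller inverts an assumed gain:
    u = (r + w) / nominal_gain.  The augmentation parameter w is updated
    by gradient descent on the squared tracking error e = r - y, so y
    converges to the reference r despite the gain mismatch."""
    w, r = 0.0, 1.0                  # augmentation parameter, reference
    for _ in range(steps):
        u = (r + w) / nominal_gain   # nominal inversion + augmentation
        y = plant_gain * u           # true (mismatched) plant response
        e = r - y                    # tracking error
        w += rate * e                # gradient-style parameter update
    return y
```

    With a 2x gain mismatch, the augmentation settles at w = -0.5 and the output converges to the reference.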

  13. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
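
    Two of the numbers above reduce to simple arithmetic: the -0.83 percent figure is a fractional decrease from the true current, and a W-value is deposited energy divided by the number of ion pairs produced. The sketch below shows only that bookkeeping; the input values in the test are illustrative, not the experiment's raw data:

```python
def true_current(measured_amps, frac_error=-0.0083):
    """Undo a fractional systematic error: measured = true * (1 + err),
    so true = measured / (1 + err)."""
    return measured_amps / (1.0 + frac_error)

def w_value_ev(energy_deposited_ev, ion_pairs):
    """Mean energy (eV) required to produce one ion pair."""
    return energy_deposited_ev / ion_pairs
```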

  14. Comparison of different standards for real-time PCR-based absolute quantification.

    PubMed

    Dhanasekaran, S; Doherty, T Mark; Kenneth, John

    2010-03-31

    Quantitative real-time PCR (qPCR) is a powerful tool for both research and diagnostics, with the advantage, compared to relative quantification, of providing an absolute copy number for a particular target. However, reliable standards are essential for qPCR. In this study, we have compared four types of commonly used standards--PCR products (with and without purification) and cloned target sequences (circular and linear plasmid)--for their stability during storage (using percentage of variance in copy numbers, PCR efficiency and regression curve correlation coefficient (R²)) using hydrolysis probe (TaqMan) chemistry. Results, expressed as copy numbers/µl, are presented from a sample human system in which absolute levels of HuPO (reference gene) and the cytokine gene IFN-γ were measured. To ensure the suitability and stability of the four standards, the experiments were performed at 0, 7 and 14 day intervals and repeated 6 times. We found that the copy numbers varied (due to degradation of standards) over time during storage at 4 °C and -20 °C, which affected PCR efficiency significantly. The cloned target sequences were noticeably more stable than the PCR products, which could lead to substantial variance in results using standards constructed by different routes. Standard quality and stability should be routinely tested for assays using qPCR.
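
    Absolute standards like these start from a copy-number calculation. The standard conversion from a DNA mass concentration to copies/µl (general molecular-biology arithmetic, not specific to this paper) is:

```python
AVOGADRO = 6.022e23           # molecules per mole
MEAN_BP_MASS = 660.0          # average g/mol per base pair of dsDNA

def copies_per_ul(conc_ng_per_ul, length_bp):
    """Copies/µl = (ng/µl * 1e-9 g/ng) / (bp * 660 g/mol) * Avogadro."""
    grams_per_ul = conc_ng_per_ul * 1e-9
    mol_per_ul = grams_per_ul / (length_bp * MEAN_BP_MASS)
    return mol_per_ul * AVOGADRO
```

    A 3 kb plasmid at 10 ng/µl works out to about 3e9 copies/µl, the kind of stock from which a standard dilution series is made.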

  15. Evaluation of absolute quantitation by nonlinear regression in probe-based real-time PCR

    PubMed Central

    Goll, Rasmus; Olsen, Trine; Cui, Guanglin; Florholmen, Jon

    2006-01-01

    Background In real-time PCR data analysis, the cycle threshold (CT) method is currently the gold standard. This method is based on an assumption of equal PCR efficiency in all reactions, and precision may suffer if this condition is not met. Nonlinear regression analysis (NLR) or curve fitting has therefore been suggested as an alternative to the cycle threshold method for absolute quantitation. The advantages of NLR are that the individual sample efficiency is simulated by the model and that absolute quantitation is possible without a standard curve, releasing reaction wells for unknown samples. However, the calculation method has not been evaluated systematically and has not previously been applied to a TaqMan platform. Aim: To develop and evaluate an automated NLR algorithm capable of generating batch production regression analysis. Results Total RNA samples extracted from human gastric mucosa were reverse transcribed and analysed for TNFA, IL18 and ACTB by TaqMan real-time PCR. Fluorescence data were analysed by the regular CT method with a standard curve, and by NLR with a positive control for conversion of fluorescence intensity to copy number, and for this purpose an automated algorithm was written in SPSS syntax. Eleven separate regression models were tested, and the output data was subjected to Altman-Bland analysis. The Altman-Bland analysis showed that the best regression model yielded quantitative data with an intra-assay variation of 58% vs. 24% for the CT derived copy numbers, and with a mean inter-method deviation of × 0.8. Conclusion NLR can be automated for batch production analysis, but the CT method is more precise for absolute quantitation in the present setting. The observed inter-method deviation is an indication that assessment of the fluorescence conversion factor used in the regression method can be improved. However, the versatility depends on the level of precision required, and in some settings the increased cost effectiveness of NLR
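
    One way to see what "individual sample efficiency" means: in the exponential phase, fluorescence follows F_n ≈ F0 · E^n, so a log-linear fit over exponential-phase cycles recovers E and F0 for each reaction separately. The sketch below is a deliberately simplified stand-in for that idea, not the paper's full sigmoid NLR model:

```python
import math

def fit_efficiency(cycles, fluor):
    """Least-squares line through (cycle, ln F): the slope is ln(E) and
    the intercept is ln(F0).  Returns (E, F0) for one reaction."""
    n = len(cycles)
    ys = [math.log(f) for f in fluor]
    mx = sum(cycles) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(cycles, ys))
             / sum((x - mx) ** 2 for x in cycles))
    intercept = my - slope * mx
    return math.exp(slope), math.exp(intercept)
```

    On synthetic data with E = 1.9, the per-sample fit recovers both the efficiency and the starting quantity.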

  16. Time-series modeling and prediction of global monthly absolute temperature for environmental decision making

    NASA Astrophysics Data System (ADS)

    Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun

    2013-03-01

    A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
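
    The deterministic part of a DSC-style model (polynomial trend plus Fourier seasonal terms) reduces to ordinary least squares; the SARIMA step for the residual is omitted here. A minimal formulation of my own, using NumPy, not the authors' exact specification:

```python
import numpy as np

def fit_trend_plus_seasonal(t, y, poly_deg=1, period=12.0, harmonics=2):
    """Build the design matrix [1, t, ..., t^d, sin(2*pi*h*t/T),
    cos(2*pi*h*t/T), ...] and solve by ordinary least squares.
    Returns (coefficients, fitted values)."""
    cols = [t ** k for k in range(poly_deg + 1)]
    for h in range(1, harmonics + 1):
        w = 2.0 * np.pi * h * t / period
        cols += [np.sin(w), np.cos(w)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X @ beta
```

    On a noiseless synthetic monthly series (linear trend plus an annual cycle), the fit recovers the trend slope and reproduces the signal.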

  17. Calculation of Retention Time Tolerance Windows with Absolute Confidence from Shared Liquid Chromatographic Retention Data

    PubMed Central

    Boswell, Paul G.; Abate-Pella, Daniel; Hewitt, Joshua T.

    2015-01-01

    Compound identification by liquid chromatography-mass spectrometry (LC-MS) is a tedious process, mainly because authentic standards must be run on a user’s system to be able to confidently reject a potential identity from its retention time and mass spectral properties. Instead, it would be preferable to use shared retention time/index data to narrow down the identity, but shared data cannot be used to reject candidates with an absolute level of confidence because the data are strongly affected by differences between HPLC systems and experimental conditions. However, a technique called “retention projection” was recently shown to account for many of the differences. In this manuscript, we discuss an approach to calculate appropriate retention time tolerance windows for projected retention times, potentially making it possible to exclude candidates with an absolute level of confidence, without needing to have authentic standards of each candidate on hand. In a range of multi-segment gradients and flow rates run among seven different labs, the new approach calculated tolerance windows that were significantly more appropriate for each retention projection than global tolerance windows calculated for retention projections or linear retention indices. Though there were still some small differences between the labs that evidently were not taken into account, the calculated tolerance windows only needed to be relaxed by 50% to make them appropriate for all labs. Even then, 42% of the tolerance windows calculated in this study without standards were narrower than those required by WADA for positive identification, where standards must be run contemporaneously. PMID:26292624

  18. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

    Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
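
    The defining feature of output-error estimation is that the cost compares the measured output with a *simulated* model output over the whole record, rather than with a one-step equation error. A deliberately tiny sketch (brute-force search over one parameter of a first-order model; the paper's real-time algorithm is far more sophisticated):

```python
def simulate(a, u, x0=0.0):
    """Simulate a first-order model x[k+1] = a*x[k] + (1-a)*u[k] and
    return its output sequence."""
    x, out = x0, []
    for uk in u:
        x = a * x + (1.0 - a) * uk
        out.append(x)
    return out

def fit_output_error(y, u, grid):
    """Output-error criterion: pick the parameter whose full simulated
    response best matches the measured output in the least-squares
    sense (here by brute-force grid search)."""
    def cost(a):
        return sum((yk - mk) ** 2 for yk, mk in zip(y, simulate(a, u)))
    return min(grid, key=cost)
```

    Fitting data generated with a = 0.8 recovers the parameter exactly, since the true value lies on the grid.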

  19. The Question of Absolute Space and Time Directions in Relation to Molecular Chirality, Parity Violation, and Biomolecular Homochirality

    SciTech Connect

    Quack, Martin

    2001-03-21

    The questions of the absolute directions of space and time or the “observability” of absolute time direction, as well as absolute handedness (left or right), are related to the fundamental symmetries of physics C, P, T as well as their combinations, in particular CPT, and their violations, such as parity violation. At the same time there is a relation to certain still open questions in chemistry concerning the fundamental physical-chemical principles of molecular chirality and in biochemistry concerning the selection of homochirality in evolution. In the lecture we shall introduce the concepts and then report new theoretical results from our work on parity violation in chiral molecules, showing order of magnitude increases with respect to previously accepted values. We discuss as well our current experimental efforts. We shall briefly mention the construction of an absolute molecular clock.

  1. Relationship between Brazilian airline pilot errors and time of day.

    PubMed

    de Mello, M T; Esteves, A M; Pires, M L N; Santos, D C; Bittencourt, L R A; Silva, R S; Tufik, S

    2008-12-01

    Flight safety is one of the most important and frequently discussed issues in aviation. Recent accident inquiries have raised questions as to how the work of flight crews is organized and the extent to which these conditions may have been contributing factors to accidents. Fatigue is based on physiologic limitations, which are reflected in performance deficits. The purpose of the present study was to provide an analysis of the periods of the day in which pilots working for a commercial airline presented major errors. Errors made by 515 captains and 472 co-pilots were analyzed using data from flight operation quality assurance systems. To analyze the times of day (shifts) during which incidents occurred, we divided the 24-hour light-dark cycle into four periods: morning, afternoon, night, and early morning. The differences in risk during the day were reported as the ratio of morning to afternoon, morning to night, and morning to early morning error rates. For the purposes of this research, level 3 events alone were taken into account, since these were the most serious, in which company operational limits were exceeded or established procedures were not followed. According to airline flight schedules, 35% of flights take place in the morning period, 32% in the afternoon, 26% at night, and 7% in the early morning. Data showed that the risk of errors increased by almost 50% in the early morning relative to the morning period (ratio of 1:1.46). For the afternoon the ratio was 1:1.04, and for the night a ratio of 1:1.05 was found. These results showed that the early morning period carried a greater risk of attention problems and fatigue.
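
    The reported ratios are per-flight error rates normalized to the morning period, so unequal flight volumes across shifts do not bias the comparison. The computation is a one-liner; the error and flight counts in the test below are illustrative values chosen to reproduce a ratio near 1:1.46, not the study's data:

```python
def rate_ratio(errors_ref, flights_ref, errors_cmp, flights_cmp):
    """(errors per flight in comparison period) divided by
    (errors per flight in reference period)."""
    return (errors_cmp / flights_cmp) / (errors_ref / flights_ref)
```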

  2. Alignment between seafloor spreading directions and absolute plate motions through time

    NASA Astrophysics Data System (ADS)

    Williams, Simon E.; Flament, Nicolas; Müller, R. Dietmar

    2016-02-01

    The history of seafloor spreading in the ocean basins provides a detailed record of relative motions between Earth's tectonic plates since Pangea breakup. Determining how tectonic plates have moved relative to the Earth's deep interior is more challenging. Recent studies of contemporary plate motions have demonstrated links between relative plate motion and absolute plate motion (APM), and with seismic anisotropy in the upper mantle. Here we explore the link between spreading directions and APM since the Early Cretaceous. We find a significant alignment between APM and spreading directions at mid-ocean ridges; however, the degree of alignment is influenced by geodynamic setting, and is strongest for mid-Atlantic spreading ridges between plates that are not directly influenced by time-varying slab pull. In the Pacific, significant mismatches between spreading and APM direction may relate to a major plate-mantle reorganization. We conclude that spreading fabric can be used to improve models of APM.

  3. A California statewide three-dimensional seismic velocity model from both absolute and differential times

    USGS Publications Warehouse

    Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.

    2010-01-01

    We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP/VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE direction and 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.

  4. Absolute calibration method for nanosecond-resolved, time-streaked, fiber optic light collection, spectroscopy systems

    NASA Astrophysics Data System (ADS)

    Johnston, Mark D.; Oliver, Bryan V.; Droemer, Darryl W.; Frogget, Brent; Crain, Marlon D.; Maron, Yitzhak

    2012-08-01

    This paper describes a convenient and accurate method to calibrate fast (<1 ns resolution) streaked, fiber optic light collection, spectroscopy systems. Such systems are inherently difficult to calibrate due to the lack of sufficiently intense, calibrated light sources. Such a system is used to collect spectral data on plasmas generated in electron beam diodes fielded on the RITS-6 accelerator (8-12 MV, 140-200 kA) at Sandia National Laboratories. On RITS, plasma light is collected through a small diameter (200 μm) optical fiber and recorded on a fast streak camera at the output of a 1 meter Czerny-Turner monochromator. For this paper, a 300 W xenon short arc lamp (Oriel Model 6258) was used as the calibration source. Since the radiance of the xenon arc varies from cathode to anode, just the area around the tip of the cathode ("hotspot") was imaged onto the fiber, to produce the highest intensity output. To compensate for chromatic aberrations, the signal was optimized at each wavelength measured. Output power was measured using 10 nm bandpass interference filters and a calibrated photodetector. These measurements give power at discrete wavelengths across the spectrum, and when linearly interpolated, provide a calibration curve for the lamp. The shape of the spectrum is determined by the collective response of the optics, monochromator, and streak tube across the spectral region of interest. The ratio of the spectral curve to the measured bandpass filter curve at each wavelength produces a correction factor (Q) curve. This curve is then applied to the experimental data and the resultant spectra are given in absolute intensity units (photons/s/cm²/sr/nm). Error analysis shows this method to be accurate to within ±20%, which represents a high level of accuracy for this type of measurement.
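
    The correction-factor step reduces to a pointwise ratio at the filter wavelengths plus linear interpolation onto the data's wavelength grid. A sketch of that bookkeeping (the function names and the piecewise-linear interpolation are my assumptions; the ratio definition follows the text):

```python
def q_curve(ref_power, measured_power):
    """Q(lambda) = known lamp output / system-measured response, one
    value per bandpass-filter wavelength."""
    return [r / m for r, m in zip(ref_power, measured_power)]

def interp(x, xs, ys):
    """Piecewise-linear interpolation of (xs, ys) at x; xs ascending."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside calibration range")

def calibrate(data_wl, counts, filt_wl, q):
    """Scale raw streak-camera counts by the interpolated Q curve to
    obtain spectra in absolute intensity units."""
    return [c * interp(w, filt_wl, q) for w, c in zip(data_wl, counts)]
```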

  6. An Integrated Model of Choices and Response Times in Absolute Identification

    ERIC Educational Resources Information Center

    Brown, Scott D.; Marley, A. A. J.; Donkin, Christopher; Heathcote, Andrew

    2008-01-01

    Recent theoretical developments in the field of absolute identification have stressed differences between relative and absolute processes, that is, whether stimulus magnitudes are judged relative to a shorter term context provided by recently presented stimuli or a longer term context provided by the entire set of stimuli. The authors developed a…

  7. Frequency-domain analysis of absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Svitlov, S.

    2012-12-01

    An absolute gravimeter is analysed as a linear time-invariant system in the frequency domain. Frequency responses of absolute gravimeters are derived analytically based on the propagation of the complex exponential signal through their linear measurement functions. Depending on the model of motion and the number of time-distance coordinates, an absolute gravimeter is considered as a second-order (three-level scheme) or third-order (multiple-level scheme) low-pass filter. It is shown that the behaviour of an atom absolute gravimeter in the frequency domain corresponds to that of the three-level corner-cube absolute gravimeter. Theoretical results are applied for evaluation of random and systematic measurement errors and optimization of an experiment. The developed theory agrees with known results of an absolute gravimeter analysis in the time and frequency domains and can be used for measurement uncertainty analyses, building of vibration-isolation systems and synthesis of digital filtering algorithms.

  8. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  9. Supercontinent cycles and the calculation of absolute palaeolongitude in deep time.

    PubMed

    Mitchell, Ross N; Kilian, Taylor M; Evans, David A D

    2012-02-08

    Traditional models of the supercontinent cycle predict that the next supercontinent--'Amasia'--will form either where Pangaea rifted (the 'introversion' model) or on the opposite side of the world (the 'extroversion' models). Here, by contrast, we develop an 'orthoversion' model whereby a succeeding supercontinent forms 90° away, within the great circle of subduction encircling its relict predecessor. A supercontinent aggregates over a mantle downwelling but then influences global-scale mantle convection to create an upwelling under the landmass. We calculate the minimum moment of inertia about which oscillatory true polar wander occurs owing to the prolate shape of the non-hydrostatic Earth. By fitting great circles to each supercontinent's true polar wander legacy, we determine that the arc distances between successive supercontinent centres (the axes of the respective minimum moments of inertia) are 88° for Nuna to Rodinia and 87° for Rodinia to Pangaea--as predicted by the orthoversion model. Supercontinent centres can be located back into Precambrian time, providing fixed points for the calculation of absolute palaeolongitude over billion-year timescales. Palaeogeographic reconstructions additionally constrained in palaeolongitude will provide increasingly accurate estimates of ancient plate motions and palaeobiogeographic affinities.

  11. ABSOLUTE TIMING OF THE CRAB PULSAR WITH THE INTEGRAL/SPI TELESCOPE

    SciTech Connect

    Molkov, S.; Jourdain, E.; Roques, J. P.

    2010-01-01

    We have investigated the pulse shape evolution of the Crab pulsar emission in the hard X-ray domain of the electromagnetic spectrum. In particular, we have studied the alignment of the Crab pulsar phase profiles measured in the hard X-rays and in other wavebands. To obtain the hard X-ray pulse profiles, we have used six years (2003-2009, with a total exposure of about 4 Ms) of publicly available data from the SPI telescope on-board the International Gamma-Ray Astrophysics Laboratory observatory, folded with the pulsar time solution derived from the Jodrell Bank Crab Pulsar Monthly Ephemeris. We found that the main pulse in the hard X-ray 20-100 keV energy band leads the radio one by 8.18 ± 0.46 milliperiods in phase, or 275 ± 15 µs in time. Quoted errors represent only statistical uncertainties. Our systematic error is estimated to be ~40 µs and is mainly caused by the radio measurement uncertainties. In hard X-rays, the average distance between the main pulse and interpulse on the phase plane is 0.3989 ± 0.0009. To compare our findings in hard X-rays with the soft 2-20 keV X-ray band, we have used data from quasi-simultaneous Crab observations with the proportional counter array monitor on-board the Rossi X-Ray Timing Explorer mission. The time lag and the pulse separation values measured in the 3-20 keV band are 0.00933 ± 0.00016 (corresponding to 310 ± 6 µs) and 0.40016 ± 0.00028 parts of the cycle, respectively. While the pulse separation values measured in soft X-rays and hard X-rays agree, the time lags are statistically different. Additional analysis shows that the delay between the radio and X-ray signals varies with energy in the 2-300 keV energy range. We explain such behavior as due to the superposition of two independent components responsible for the Crab pulsed emission in this energy band.
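
    The phase-to-time conversion is a one-liner once the pulsar period is known; the abstract's own numbers (275 µs for 8.18 milliperiods) imply a period near 33.6 ms, consistent with the Crab's well-known ~33-34 ms rotation period in this era:

```python
def phase_lag_to_time(milliperiods, period_s):
    """Convert a phase lag expressed in milliperiods (thousandths of a
    rotation) into seconds, given the pulsar period."""
    return milliperiods * 1e-3 * period_s
```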

  12. [Time order error and position effect of a standardized stimulus in discrimination of short time duration].

    PubMed

    Rammsayer, T; Wittkowski, K M

    1990-01-01

    In comparative judgments of two successively presented time intervals ranging from 30 to 70 ms, a time-order error (TOE) as well as a systematic effect of the position of the standard stimulus (constant position error, CPE) were demonstrated. The effects proved to be independent. Contrary to Vierordt's law, a negative TOE was found. When the standard interval was presented first, an increased hit rate resulting in a positive CPE was established. Furthermore, a test statistic is introduced that allows analysis of experiments utilizing all available information of a subject's psychometric function.

  13. Method for quantum-jump continuous-time quantum error correction

    NASA Astrophysics Data System (ADS)

    Hsu, Kung-Chuan; Brun, Todd A.

    2016-02-01

    Continuous-time quantum error correction (CTQEC) is a technique for protecting quantum information against decoherence, where both the decoherence and error correction processes are considered continuous in time. Given any [[n,k,d

  14. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of employing agency errors; time limitations. (a) Agency's discovery of error. Upon discovery of an... it, but, in any event, the agency must act promptly in doing so. (b) Participant's discovery of error. If an agency fails to discover an error of which a participant has knowledge involving the correct...

  15. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...

  16. Using Graphs for Fast Error Term Approximation of Time-varying Datasets

    SciTech Connect

    Nuber, C; LaMar, E C; Pascucci, V; Hamann, B; Joy, K I

    2003-02-27

We present a method for the efficient computation and storage of approximations of error tables used for error estimation of a region between different time steps in time-varying datasets. The error between two time steps is defined as the distance between the data of these time steps. Error tables are used to look up the error between different time steps of a time-varying dataset, especially when run-time error computation is expensive. However, even the generation of error tables itself can be expensive. For n time steps, the exact error look-up table (which stores the error values for all pairs of time steps in a matrix) has a memory complexity and pre-processing time complexity of O(n²), and O(1) for error retrieval. Our approximate error look-up table approach uses trees, where the leaf nodes represent original time steps, and interior nodes contain an average (or best-representative) of the children nodes. The error computed on an edge of a tree describes the distance between the two nodes on that edge. Evaluating the error between two different time steps requires traversing a path between the two leaf nodes, and accumulating the errors on the traversed edges. For n time steps, this scheme has a memory complexity and pre-processing time complexity of O(n log n), a significant improvement over the exact scheme; the error retrieval complexity is O(log n). As we do not need to calculate all possible n² error terms, our approach is a fast way to generate the approximation.
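The tree scheme described in this abstract can be sketched in a few lines (an illustrative toy, not the paper's implementation; the names `Node`, `build_tree`, and `approx_error` are hypothetical): leaves hold the original time steps, each interior node stores the mean of its children, each edge stores the child-to-parent error, and the error between two time steps is approximated by summing edge errors along the leaf-to-leaf path.

```python
import math

class Node:
    """Tree node: leaves hold original time steps, interior nodes a mean."""
    def __init__(self, data):
        self.data = data        # representative data vector
        self.parent = None
        self.edge_error = 0.0   # error on the edge to the parent

def distance(a, b):
    # error metric between two time steps (Euclidean distance here)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_tree(leaves):
    """Pair nodes bottom-up; each parent stores the mean of its children."""
    level = list(leaves)
    while len(level) > 1:
        parents = []
        for i in range(0, len(level), 2):
            group = level[i:i + 2]
            mean = [sum(v) / len(group) for v in zip(*(n.data for n in group))]
            parent = Node(mean)
            for child in group:
                child.parent = parent
                child.edge_error = distance(child.data, mean)
            parents.append(parent)
        level = parents
    return level[0]

def approx_error(leaf_a, leaf_b):
    """Accumulate edge errors on the path between two leaves."""
    partial, node, acc = {}, leaf_a, 0.0
    while node is not None:
        partial[id(node)] = acc     # error from leaf_a up to this node
        acc += node.edge_error
        node = node.parent
    node, acc = leaf_b, 0.0
    while id(node) not in partial:  # climb until the common ancestor
        acc += node.edge_error
        node = node.parent
    return acc + partial[id(node)]

steps = [[float(t), 2.0 * t] for t in range(8)]   # 8 toy time steps
leaves = [Node(s) for s in steps]
build_tree(leaves)
# For adjacent leaves the path error equals the exact distance here,
# because the interior mean of two points is their midpoint.
print(round(approx_error(leaves[0], leaves[1]), 3))  # → 2.236
```

The O(n log n) storage comes from keeping only one edge error per node instead of all n² pairwise entries; the price is that the path sum is only an approximation of the true pairwise error.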

  17. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  18. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors.

    PubMed

    Waugh, C J; Rosenberg, M J; Zylstra, A B; Frenje, J A; Séguin, F H; Petrasso, R D; Glebov, V Yu; Sangster, T C; Stoeckl, C

    2015-05-01

Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3 m nTOF detector. In addition, comparison of these and other shots indicates that a significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation, as the main source of the yield calibration error is particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and the Laser Mégajoule. PMID:26026524

  2. Repeated quantum error correction on a continuously encoded qubit by real-time feedback

    NASA Astrophysics Data System (ADS)

    Cramer, J.; Kalb, N.; Rol, M. A.; Hensen, B.; Blok, M. S.; Markham, M.; Twitchen, D. J.; Hanson, R.; Taminiau, T. H.

    2016-05-01

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.

  3. Error Correction for Foot Clearance in Real-Time Measurement

    NASA Astrophysics Data System (ADS)

    Wahab, Y.; Bakar, N. A.; Mazalan, M.

    2014-04-01

Mobility performance level, fall-related injuries, unrevealed disease and aging stage can be detected through examination of the gait pattern. The gait pattern is normally directly related to the condition of the lower limbs, in addition to other significant factors. For that reason, the foot is the most important part for an in situ gait-analysis measurement system and thus directly affects the gait pattern. This paper reviews the development of an ultrasonic system with error correction using an inertial measurement unit for real-life measurement of foot clearance. The paper begins with the related literature, where the necessity of the measurement is introduced, followed by the methodology, the problem, and its solution. Next, the paper explains the experimental setup for error correction using the proposed instrumentation, together with results and discussion. Finally, the planned future work is outlined.

  4. Method and apparatus for detecting timing errors in a system oscillator

    DOEpatents

    Gliebe, Ronald J.; Kramer, William R.

    1993-01-01

    A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
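The comparison the patent describes can be illustrated with a toy digital model (names and signal encoding are hypothetical, not from the patent): sample the oscillator and a delayed reference pattern, and report the indices where they disagree.

```python
def detect_timing_error(osc_samples, ref_samples, delay):
    """Compare the oscillator signal against the reference delayed by
    `delay` samples; return the indices where they disagree."""
    errors = []
    for i in range(delay, min(len(osc_samples), len(ref_samples) + delay)):
        if osc_samples[i] != ref_samples[i - delay]:
            errors.append(i)
    return errors

ref = [0, 1] * 8                      # expected clock pattern
good = [0, 0] + ref                   # oscillator delayed by 2, no glitch
bad = list(good); bad[7] ^= 1         # inject a single timing glitch
print(detect_timing_error(good, ref, 2))  # → []
print(detect_timing_error(bad, ref, 2))   # → [7]
```

In the patented hardware this comparison is done by a circuit rather than software, with an LED driven by the mismatch signal.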

  5. Precision errors, least significant change, and monitoring time interval in pediatric measurements of bone mineral density, body composition, and mechanostat parameters by GE lunar prodigy.

    PubMed

    Jaworski, Maciej; Pludowski, Pawel

    2013-01-01

The dual-energy X-ray absorptiometry (DXA) method is widely used in pediatrics in the study of bone density and body composition. However, there is a limit to how precisely DXA can estimate bone and body composition measures in children. The study aimed to (1) evaluate precision errors for bone mineral density, bone mass and bone area, body composition, and mechanostat parameters, (2) assess the relationships between precision errors and anthropometric parameters, and (3) calculate "least significant change" and "monitoring time interval" values for DXA measures in children over a wide age range (5-18 yr) using a GE Lunar Prodigy densitometer. Absolute precision error values differed between the thin and standard technical modes of DXA measurement and depended on age, body weight, and height. In contrast, relative precision error values expressed as percentages were similar for the thin and standard modes (except total body bone mineral density [TBBMD]) and were not related to anthropometric variables (except TBBMD). In conclusion, owing to the stability of percentage coefficient-of-variation values over a wide age range, precision error expressed as a percentage, rather than absolute error, appears convenient in the pediatric population.
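For reference, the "least significant change" (LSC) and "monitoring time interval" (MTI) mentioned above are conventionally defined as follows (standard densitometry conventions, e.g. as used by the ISCD; the abstract does not spell them out):

```latex
% 2.77 \approx 1.96\sqrt{2}: 95\% confidence for the difference of two measurements
\mathrm{LSC} = 2.77 \times \mathrm{PE},
\qquad
\mathrm{MTI} = \frac{\mathrm{LSC}}{\text{expected change per year}}
```

where PE is the precision error (as a CV% or in absolute units); a measured change smaller than the LSC cannot be distinguished from measurement noise.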

  6. Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements

    NASA Astrophysics Data System (ADS)

    Deeg, H. J.

    2015-06-01

Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived: σ_P = σ_T (12 / (N³ - N))^(1/2), where σ_P is the period error, σ_T the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, where epoch errors are quoted for the first timing measurement, are prone to an overestimation of the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is completed, such central epochs should be the preferred way to quote linear ephemerides. While this work was motivated by the analysis of eclipse timing measurements in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which the determination of a zero point, a constant period, and the associated errors is needed.
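The quoted formula is easy to check with a small Monte Carlo experiment (an illustrative sketch, not the paper's code): simulate strictly periodic timings with Gaussian noise, fit the period as the least-squares slope against cycle number, and compare the scatter of the fitted periods with σ_P = σ_T √(12/(N³ − N)).

```python
import math
import random

def period_error(sigma_t, n):
    # sigma_P = sigma_T * sqrt(12 / (N^3 - N)), as quoted above
    return sigma_t * math.sqrt(12.0 / (n ** 3 - n))

def fit_period(times):
    # ordinary least-squares slope of timing vs. cycle number
    n = len(times)
    xbar = (n - 1) / 2.0
    ybar = sum(times) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    sxy = sum((x - xbar) * (y - ybar) for x, y in enumerate(times))
    return sxy / sxx

random.seed(1)
N, P, sigma_T = 100, 1.0, 0.01
periods = []
for _ in range(2000):
    times = [i * P + random.gauss(0.0, sigma_T) for i in range(N)]
    periods.append(fit_period(times))
mean = sum(periods) / len(periods)
empirical = math.sqrt(sum((p - mean) ** 2 for p in periods) / (len(periods) - 1))
print(round(period_error(sigma_T, N) / empirical, 1))  # → 1.0
```

The ratio of the analytic to the empirical period scatter comes out at 1.0, consistent with the formula (the slope-error variance of a least-squares line with unit cycle spacing is σ_T²/Σ(x−x̄)² = 12σ_T²/(N³−N)).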

  7. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  8. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  9. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  10. Ambient Temperature Changes and the Impact to Time Measurement Error

    NASA Astrophysics Data System (ADS)

    Ogrizovic, V.; Gucevic, J.; Delcev, S.

    2012-12-01

Measurements in Geodetic Astronomy are mainly outdoors and performed during the night, when the temperature often decreases very quickly. Time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt handler routine processes the received UTC impulses in real time and calculates the clock parameters. The characteristics of the computer's quartz clock are influenced by temperature changes of the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a stable temperature environment, at temperatures near 20 °C.

  11. Exposure measurement error in time-series studies of air pollution: concepts and consequences.

    PubMed Central

    Zeger, S L; Thomas, D; Dominici, F; Samet, J M; Schwartz, J; Dockery, D; Cohen, A

    2000-01-01

Misclassification of exposure is a well-recognized inherent limitation of epidemiologic studies of disease and the environment. For many agents of interest, exposures take place over time and in multiple locations; accurately estimating the relevant exposures for an individual participant in epidemiologic studies is often daunting, particularly within the limits set by feasibility, participant burden, and cost. Researchers have taken steps to deal with the consequences of measurement error by limiting the degree of error through a study's design, estimating the degree of error using a nested validation study, and adjusting for measurement error in statistical analyses. In this paper, we address measurement error in observational studies of air pollution and health. Because measurement error may have substantial implications for interpreting epidemiologic studies on air pollution, particularly the time-series analyses, we developed a systematic conceptual formulation of the problem of measurement error in epidemiologic studies of air pollution and then considered the consequences within this formulation. When possible, we used available relevant data to make simple estimates of measurement error effects. This paper provides an overview of measurement errors in linear regression, distinguishing two extremes of a continuum: Berkson versus classical-type errors, and the univariate versus the multivariate predictor case. We then propose one conceptual framework for the evaluation of measurement errors in the log-linear regression used for time-series studies of particulate air pollution and mortality and identify three main components of error. We present new simple analyses of data on exposures to particulate matter < 10 μm in aerodynamic diameter from the Particle Total Exposure Assessment Methodology Study. Finally, we summarize open questions regarding measurement error and suggest the kind of additional data necessary to address them.
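The Berkson/classical distinction can be made concrete with a toy regression simulation (illustrative only, not the paper's analysis): classical error is added to the true exposure before it is observed and attenuates the regression slope toward zero, whereas Berkson error, where the true exposure scatters around the observed value, leaves the slope essentially unbiased.

```python
import random

def slope(xs, ys):
    # ordinary least-squares slope of ys on xs
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sxy / sxx

random.seed(0)
n, beta = 20000, 2.0

# Classical error: we observe w = x + noise and regress y on w.
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * xi + random.gauss(0.0, 0.5) for xi in x]
w = [xi + random.gauss(0.0, 1.0) for xi in x]
b_classical = slope(w, y)   # attenuated toward beta * 1/(1+1) = 1.0

# Berkson error: the true exposure is x = w + noise; w is what we observe.
w2 = [random.gauss(0.0, 1.0) for _ in range(n)]
x2 = [wi + random.gauss(0.0, 1.0) for wi in w2]
y2 = [beta * xi + random.gauss(0.0, 0.5) for xi in x2]
b_berkson = slope(w2, y2)   # approximately unbiased: close to beta = 2

print(round(b_classical, 2), round(b_berkson, 2))
```

With equal exposure and error variances, the classical slope is attenuated by the factor var(x)/(var(x)+var(err)) = 1/2, while the Berkson slope stays near the true value, which is why the two error types have such different consequences in time-series studies.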

  12. Solid-state track recorder dosimetry device to measure absolute reaction rates and neutron fluence as a function of time

    DOEpatents

    Gold, Raymond; Roberts, James H.

    1989-01-01

    A solid state track recording type dosimeter is disclosed to measure the time dependence of the absolute fission rates of nuclides or neutron fluence over a period of time. In a primary species an inner recording drum is rotatably contained within an exterior housing drum that defines a series of collimating slit apertures overlying windows defined in the stationary drum through which radiation can enter. Film type solid state track recorders are positioned circumferentially about the surface of the internal recording drum to record such radiation or its secondary products during relative rotation of the two elements. In another species both the recording element and the aperture element assume the configuration of adjacent disks. Based on slit size of apertures and relative rotational velocity of the inner drum, radiation parameters within a test area may be measured as a function of time and spectra deduced therefrom.

  13. Absolute perfusion measurements and associated iodinated contrast agent time course in brain metastasis: a study for contrast-enhanced radiotherapy.

    PubMed

    Obeid, Layal; Deman, Pierre; Tessier, Alexandre; Balosso, Jacques; Estève, François; Adam, Jean-François

    2014-04-01

    Contrast-enhanced radiotherapy is an innovative treatment that combines the selective accumulation of heavy elements in tumors with stereotactic irradiations using medium energy X-rays. The radiation dose enhancement depends on the absolute amount of iodine reached in the tumor and its time course. Quantitative, postinfusion iodine biodistribution and associated brain perfusion parameters were studied in human brain metastasis as key parameters for treatment feasibility and quality. Twelve patients received an intravenous bolus of iodinated contrast agent (CA) (40 mL, 4 mL/s), followed by a steady-state infusion (160 mL, 0.5 mL/s) to ensure stable intratumoral amounts of iodine during the treatment. Absolute iodine concentrations and quantitative perfusion maps were derived from 40 multislice dynamic computed tomography (CT) images of the brain. The postinfusion mean intratumoral iodine concentration (over 30 minutes) reached 1.94 ± 0.12 mg/mL. Reasonable correlations were obtained between these concentrations and the permeability surface area product and the cerebral blood volume. To our knowledge, this is the first quantitative study of CA biodistribution versus time in brain metastasis. The study shows that suitable and stable amounts of iodine can be reached for contrast-enhanced radiotherapy. Moreover, the associated perfusion measurements provide useful information for the patient recruitment and management processes.

  14. Analysis of static and time-varying polarization errors in the multiangle spectropolarimetric imager.

    PubMed

    Mahler, Anna-Britt; Diner, David J; Chipman, Russell A

    2011-05-10

Multiangle Spectropolarimetric Imager (MSPI) sensitivity to static and time-varying polarization errors is examined. For a system without noise, static polarization errors are accurately represented by the calibration coefficients and therefore do not impede correct mapping of measured to input Stokes vectors. But noise is invariably introduced during the detection process, and static polarization errors reduce the system's signal-to-noise ratio (SNR) by increasing noise sensitivity. Noise sensitivity is minimized by minimizing the condition number of the system data reduction matrix [Appl. Opt. 41, 619 (2002)]. The sensitivity of condition numbers to static polarization errors is presented. The condition number of the nominal MSPI data reduction matrix is approximately 1.1 or less for all fields. The increase in the condition number above 1 results primarily from quarter-wave-plate and mirror-coating retardance magnitude errors. The sensitivity of the degree of linear polarization (DoLP) error with respect to time-varying diattenuation and retardance error was used to set a time-varying diattenuation magnitude tolerance of 0.005 and a time-varying retardance magnitude tolerance of ±0.2°. A Monte Carlo simulation of the calibration and measurements using anticipated static and time-varying errors indicates that MSPI has a probability of 0.9 of meeting its 0.005 DoLP uncertainty requirement.

  15. Correlated errors in geodetic time series: Implications for time-dependent deformation

    USGS Publications Warehouse

    Langbein, J.; Johnson, H.

    1997-01-01

In addition, the seasonal noise can be as large as 3 mm in amplitude but typically is less than 0.5 mm. Because of the presence of random-walk noise in these time series, modeling and interpretation of the geodetic data must account for this source of error. By way of example we show that estimating the time-varying strain tensor (a form of spatial averaging) from geodetic data having both random-walk and white-noise error components results in seemingly significant variations in the rate of strain accumulation; spatial averaging does reduce the size of both noise components but not their relative influence on the resulting strain accumulation model. Copyright 1997 by the American Geophysical Union.

  16. Reward prediction error signals associated with a modified time estimation task.

    PubMed

    Holroyd, Clay B; Krigolson, Olave E

    2007-11-01

    The feedback error-related negativity (fERN) is a component of the human event-related brain potential (ERP) elicited by feedback stimuli. A recent theory holds that the fERN indexes a reward prediction error signal associated with the adaptive modification of behavior. Here we present behavioral and ERP data recorded from participants engaged in a modified time estimation task. As predicted by the theory, our results indicate that fERN amplitude reflects a reward prediction error signal and that the size of this error signal is correlated across participants with changes in task performance.

  17. Keep calm and be patient: The influence of anxiety and time on post-error adaptations.

    PubMed

    Van der Borght, Liesbet; Braem, Senne; Stevens, Michaël; Notebaert, Wim

    2016-02-01

Individual differences in anxiety and punishment sensitivity have an impact on electrophysiological markers of error processing and the orienting of attention to threatening information. However, it remains unclear how these individual differences influence behavioral adaptations to errors. Therefore, we set out to investigate the influence of anxiety and punishment sensitivity on post-error adaptations, and whether this influence depends on the time people get to adapt. We tested 99 participants using a Simon task with randomized inter-trial intervals. Significant post-error slowing (PES) was found at all time intervals. However, in line with previous research, PES reduced over time. While PES did not interact with anxiety or punishment sensitivity, the pattern of post-error accuracy depended on anxiety. There was a clear post-error accuracy decrease at the shortest interval, but individuals with low trait-anxiety scores showed a reversed effect (i.e., a post-error accuracy increase) at a longer interval. These results suggest that people have trouble disengaging attention from an error, which can be overcome with time and low anxiety.

  19. Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained.

    PubMed

    Yang, Yana; Hua, Changchun; Guan, Xinping

    2016-03-01

Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of such teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; tele-surgery, for example, needs high-speed and high-precision control to guarantee the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, several desirable properties, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence, i.e., synchronization errors that converge to zero as time goes to infinity, can be achieved with error-constrained control. Clearly, finite-time convergence is more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for a teleoperation system with position error constraints. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.

  20. A novel double-focusing time-of-flight mass spectrometer for absolute recoil ion cross sections measurements.

    PubMed

    Sigaud, L; de Jesus, V L B; Ferreira, Natalia; Montenegro, E C

    2016-08-01

In this work, the inclusion of an Einzel-like lens inside the time-of-flight drift tube of a standard mass spectrometer coupled to a gas cell, used to study ionization of atoms and molecules by electron impact, is described. Both this lens and a conical collimator provide further focusing of the ions and charged molecular fragments inside the spectrometer, allowing much better resolution in the time-of-flight spectra and leading to a separation of a single mass-to-charge unit up to 100 a.m.u. The procedure to obtain the overall absolute efficiency of the spectrometer and micro-channel plate detector is also discussed. PMID:27587105

  2. A novel double-focusing time-of-flight mass spectrometer for absolute recoil ion cross sections measurements

    NASA Astrophysics Data System (ADS)

    Sigaud, L.; de Jesus, V. L. B.; Ferreira, Natalia; Montenegro, E. C.

    2016-08-01

    In this work, the inclusion of an Einzel-like lens inside the time-of-flight drift tube of a standard mass spectrometer coupled to a gas cell (used to study ionization of atoms and molecules by electron impact) is described. Both this lens and a conical collimator further focus the ions and charged molecular fragments inside the spectrometer, yielding much better resolution in the time-of-flight spectra and allowing separation of a single mass-to-charge unit up to 100 a.m.u. The procedure for obtaining the overall absolute efficiency of the spectrometer and micro-channel plate detector is also discussed.

  3. Automatic Time Stepping with Global Error Control for Groundwater Flow Models

    SciTech Connect

    Tang, Guoping

    2008-09-01

    An automatic time stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element method. A stability factor is involved in the error estimate, and it is used to adapt the time step and control the global temporal error of the backward difference method. The stability factor can be estimated by solving a dual problem; it is not sensitive to the accuracy of the dual solution, so the computational overhead can be minimized by solving the dual problem with large time steps. Numerical experiments demonstrate the application and performance of the automatic time stepping scheme, whose implementation can improve the accuracy and efficiency of groundwater flow models.
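
    The adaptive loop the abstract describes (estimate the temporal error of a step, then grow or shrink the step size to hold a tolerance) can be sketched generically. The sketch below uses step doubling on a linear test equation as the error estimator; it is a stand-in for, not an implementation of, the paper's dG a posteriori estimate and stability factor.

```python
import math

def backward_euler_adaptive(u0, k, t_end, tol, dt0=0.1):
    """Integrate du/dt = -k*u with backward Euler, adapting dt so the
    estimated local truncation error per step stays below tol.
    The error is estimated by comparing one full step against two half
    steps (step doubling)."""
    def step(u, dt):
        # backward Euler for the linear test equation has a closed form
        return u / (1.0 + k * dt)

    t, u, dt = 0.0, u0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        u_full = step(u, dt)
        u_half = step(step(u, dt / 2), dt / 2)
        err = abs(u_full - u_half)  # local error estimate
        if err <= tol:
            t += dt
            u = u_half  # keep the more accurate two-half-step solution
        # grow or shrink dt toward the error target, with safety bounds
        dt *= min(2.0, max(0.25, 0.9 * tol / max(err, 1e-15)))
    return u

u = backward_euler_adaptive(1.0, 1.0, 1.0, 1e-5)
```

    For this test problem the exact solution is exp(-t), so the adaptively stepped result lands close to exp(-1) while rejected steps are shrunk automatically.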

  4. Repeated quantum error correction by real-time feedback on continuously encoded qubits

    NASA Astrophysics Data System (ADS)

    Cramer, Julia; Kalb, Norbert; Rol, M. Adriaan; Hensen, Bas; Blok, Machiel S.; Markham, Matthew; Twitchen, Daniel J.; Hanson, Ronald; Taminiau, Tim H.

    Because quantum information is extremely fragile, large-scale quantum information processing requires constant error correction. To be compatible with universal fault-tolerant computations, it is essential that quantum states remain encoded at all times and that errors are actively corrected. I will present such active quantum error correction in a hybrid quantum system based on the nitrogen vacancy (NV) center in diamond. We encode a logical qubit in three long-lived nuclear spins, detect errors by multiple non-destructive measurements using the optically active NV electron spin, and correct them by real-time feedback. By combining these new capabilities with recent advances in spin control, multiple cycles of error correction can be performed within the dephasing time. We investigate both coherent and incoherent errors and show that the error-corrected logical qubit can indeed store quantum states longer than the best spin used in the encoding. Furthermore, I will present our latest results on increasing the number of qubits in the encoding, as required for quantum error correction of both phase- and bit-flip errors.

  5. Repeated quantum error correction on a continuously encoded qubit by real-time feedback.

    PubMed

    Cramer, J; Kalb, N; Rol, M A; Hensen, B; Blok, M S; Markham, M; Twitchen, D J; Hanson, R; Taminiau, T H

    2016-01-01

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing. PMID:27146630

  6. Repeated quantum error correction on a continuously encoded qubit by real-time feedback

    PubMed Central

    Cramer, J.; Kalb, N.; Rol, M. A.; Hensen, B.; Blok, M. S.; Markham, M.; Twitchen, D. J.; Hanson, R.; Taminiau, T. H.

    2016-01-01

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing. PMID:27146630
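
    The encode/measure-parity/feedback cycle in these records can be illustrated classically. This is a toy bit-flip repetition code, with the two parity checks standing in for the non-destructive stabilizer measurements and the lookup table for the real-time feedback; it sketches the majority-vote principle only, not the NV-centre experiment (which protects against phase errors on quantum superpositions).

```python
def syndrome(bits):
    """Parity checks Z1Z2 and Z2Z3 of the three-bit repetition code,
    a classical stand-in for the non-destructive parity measurements."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Map the syndrome to the single bit most likely to have flipped
    and undo the flip (majority-vote feedback)."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

# a logical 0 encoded as 000, suffering an error on the middle bit,
# is restored by one round of syndrome measurement plus feedback
state = correct([0, 1, 0])
```

    Any single flip is corrected; an error-free codeword such as [1, 1, 1] produces a trivial syndrome and is left untouched.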

  7. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction; however, a residual error between the real value and the prediction result remains. We therefore designed a two-states neural network model to compensate for this residual error, which could be used in the prevention of sudden death and metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model; in particular, small-sample time series were predicted more accurately than with the standard MLP model.
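
    The two-stage idea (a primary predictor plus a second model trained on its residual, with the final output being their sum) can be sketched with ordinary least-squares fits standing in for the paper's back-propagation networks. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = 2.0 * t + 0.3 * np.sin(6 * np.pi * t)  # signal a linear stage under-fits

# stage 1: a deliberately simple predictor (linear fit), leaving residual error
c1 = np.polyfit(t, y, 1)
stage1 = np.polyval(c1, t)
residual = y - stage1

# stage 2: a second model fitted to the residual of stage 1
c2 = np.polyfit(t, residual, 9)
stage2 = np.polyval(c2, t)

# compensated prediction = stage-1 prediction + predicted residual
compensated = stage1 + stage2
rmse1 = np.sqrt(np.mean((y - stage1) ** 2))
rmse2 = np.sqrt(np.mean((y - compensated) ** 2))
```

    Because the second stage can only reduce the residual it is fitted to, the compensated RMSE is no worse than the single-stage RMSE, which is the effect the paper exploits.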

  8. Neither One-Time Negative Screening Tests nor Negative Colposcopy Provides Absolute Reassurance against Cervical Cancer

    PubMed Central

    Castle, Philip E.; Rodríguez, Ana C.; Burk, Robert D.; Herrero, Rolando; Hildesheim, Allan; Solomon, Diane; Sherman, Mark E.; Jeronimo, Jose; Alfaro, Mario; Morales, Jorge; Guillén, Diego; Hutchinson, Martha L.; Wacholder, Sholom; Schiffman, Mark

    2009-01-01

    A population sample of 10,049 women living in Guanacaste, Costa Rica was recruited into a natural history of human papillomavirus (HPV) and cervical neoplasia study in 1993–4. At the enrollment visit, we applied multiple state-of-the-art cervical cancer screening methods to detect prevalent cervical cancer and to prevent subsequent cervical cancers by the timely detection and treatment of precancerous lesions. Women were screened at enrollment with 3 kinds of cytology (often reviewed by more than one pathologist), visual inspection, and Cervicography. Any positive screening test led to colposcopic referral and biopsy and/or excisional treatment of CIN2 or worse. We retrospectively tested stored specimens with an early HPV test (Hybrid Capture Tube Test) and for >40 HPV genotypes using a research PCR assay. We followed women typically 5–7 years and some up to 11 years. Nonetheless, sixteen cases of invasive cervical cancer were diagnosed during follow-up. Six cancer cases were failures at enrollment to detect abnormalities by cytology screening; three of the six were also negative at enrollment by sensitive HPV DNA testing. Seven cancers represent failures of colposcopy to diagnose cancer or a precancerous lesion in screen-positive women. Finally, three cases arose despite attempted excisional treatment of precancerous lesions. Based on this evidence, we suggest that no current secondary cervical cancer prevention technology, applied once in a previously under-screened population, is likely to be 100% efficacious in preventing incident diagnoses of invasive cervical cancer. PMID:19569231

  9. Calibration method of the time synchronization error of many data acquisition nodes in the chained system

    NASA Astrophysics Data System (ADS)

    Jiang, Jia-jia; Duan, Fa-jie; Chen, Jin; Zhang, Chao; Wang, Kai; Chang, Zong-jie

    2012-08-01

    Time synchronization is very important in a distributed chained seismic acquisition system with a large number of data acquisition nodes (DANs). The time synchronization error has two causes. On the one hand, there is a large accumulated propagation delay when commands propagate from the analysis and control system to multiple distant DANs, which makes it impossible for different DANs to receive the same command synchronously. Unfortunately, the propagation delay of commands (PDCs) varies in different application environments. On the other hand, the phase jitter of both the master clock and the clock recovery phase-locked loop, which is designed to extract the timing signal, may also cause the time synchronization error. In this paper, in order to achieve accurate time synchronization, a novel calibration method is proposed which can align the PDCs of all of the DANs in real time and overcome the time synchronization error caused by the phase jitter. Firstly, we give a quantitative analysis of the time synchronization error caused by both the PDCs and the phase jitter. Secondly, we propose a back and forth model (BFM) and a transmission delay measurement method (TDMM) to overcome these difficulties. Furthermore, the BFM is designed as the hardware configuration to measure the PDCs and calibrate the time synchronization error. The TDMM is used to measure the PDCs accurately. Thirdly, in order to overcome the time synchronization error caused by the phase jitter, a compression and mapping algorithm (CMA) is presented. Finally, based on the proposed BFM, TDMM and CMA, a united calibration algorithm is developed to overcome the time synchronization error caused by both the PDCs and the phase jitter. The simulation experiment results show the effectiveness of the calibration method proposed in this paper.
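
    The back-and-forth measurement idea can be sketched in a few lines. Assuming a symmetric channel and a known node turnaround time (both simplifications relative to the paper's BFM and TDMM), the master estimates each node's one-way propagation delay from a round trip and then issues per-node hold-offs so that all DANs start acquiring at the same absolute instant. Function names and numbers are illustrative.

```python
def measure_one_way_delay(send_ts, echo_rx_ts, node_turnaround):
    """Estimate one-way propagation delay from a back-and-forth exchange:
    the master timestamps transmission and echo reception, subtracts the
    node's known turnaround time, and splits the remainder symmetrically."""
    return (echo_rx_ts - send_ts - node_turnaround) / 2.0

def holdoff_per_node(delays):
    """Per-node hold-off after command reception so that every node in
    the chain starts sampling at the same absolute time despite unequal
    accumulated propagation delays."""
    latest = max(delays)
    return [latest - d for d in delays]

# a command echoed back 10 time units after transmission, with a 2-unit
# turnaround inside the node, implies a 4-unit one-way delay
delay = measure_one_way_delay(0.0, 10.0, 2.0)
offsets = holdoff_per_node([1.0, 2.5, 4.0])
```

    The node farthest down the chain gets a zero hold-off; nearer nodes wait out the difference, which is exactly the calibration the abstract describes for the PDCs.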

  10. Absolute rate constant determinations for the deactivation of O/1D/ by time resolved decay of O/1D/ yields O/3P/ emission

    NASA Technical Reports Server (NTRS)

    Davidson, J. A.; Sadowski, C. M.; Schiff, H. I.; Howard, C. J.; Schmeltekopf, A. L.; Jennings, D. A.; Streit, G. E.

    1976-01-01

    Absolute rate constants for the deactivation of O(1D) atoms by some atmospheric gases have been determined by observing the time-resolved emission of O(1D) at 630 nm. O(1D) atoms were produced by the dissociation of ozone via repetitive laser pulses at 266 nm. Absolute rate constants for the relaxation of O(1D) at 298 K are reported for N2, O2, CO2, O3, H2, D2, CH4, HCl, NH3, H2O, N2O, and Ne. The results obtained are compared with previous relative and absolute measurements reported in the literature.
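
    The kinetics behind such measurements can be sketched: each time-resolved emission trace decays as exp(-k_obs*t), and the bimolecular rate constant is the slope of k_obs against quencher concentration (a Stern-Volmer-type analysis). All numbers below are illustrative, not values from the paper.

```python
import numpy as np

def decay_rate(t, intensity):
    """Pseudo-first-order decay rate from a time-resolved emission trace,
    via a linear fit of ln(intensity) against time."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)
    return -slope

k0 = 50.0    # zero-quencher decay rate, s^-1 (illustrative)
k_q = 30.0   # quenching rate constant, s^-1 per unit concentration (illustrative)
conc = np.array([1.0, 2.0, 4.0, 8.0])  # quencher density, arbitrary units
t = np.linspace(0.0, 5e-3, 100)

# synthetic traces obeying k_obs = k0 + k_q * [M]
k_obs = np.array([decay_rate(t, np.exp(-(k0 + k_q * c) * t)) for c in conc])

# the rate constant is recovered as the slope of k_obs vs concentration
k_fit, k0_fit = np.polyfit(conc, k_obs, 1)
```

    With noiseless synthetic traces the fit recovers the input constants exactly; real data add scatter, which is why the paper compares against previous relative and absolute measurements.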

  11. Spectral characteristics of time-dependent orbit errors in altimeter height measurements

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1993-01-01

    A mean reference surface and time-dependent orbit errors are estimated simultaneously for each exact-repeat ground track from the first two years of Geosat sea level estimates based on the Goddard Earth model (GEM)-T2 orbits. Motivated by orbit theory and empirical analysis of Geosat data, the time-dependent orbit errors are modeled as 1 cycle per revolution (cpr) sinusoids with slowly varying amplitude and phase. The method recovers the known 'bow tie effect' introduced by the existence of force model errors within the precision orbit determination (POD) procedure used to generate the GEM-T2 orbits. The bow tie pattern of 1-cpr orbit errors is characterized by small amplitudes near the middle and larger amplitudes (up to 160 cm in the 2 yr of data considered here) near the ends of each 5- to 6-day orbit arc over which the POD force model is integrated. A detailed examination of these bow tie patterns reveals the existence of daily modulations of the amplitudes of the 1-cpr sinusoid orbit errors with typical and maximum peak-to-peak ranges of about 14 cm and 30 cm, respectively. The method also identifies a daily variation in the mean orbit error with typical and maximum peak-to-peak ranges of about 6 and 30 cm, respectively, that is unrelated to the predominant 1-cpr orbit error. Application of the simultaneous solution method to the much less accurate Geosat height estimates based on the Naval Astronautics Group orbits concludes that the accuracy of POD is not important for collinear altimetric studies of time-dependent mesoscale variability (wavelengths shorter than 1000 km), as long as the time-dependent orbit errors are dominated by 1-cpr variability and a long-arc (several orbital periods) orbit error estimation scheme such as that presented here is used.
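
    Estimating a 1-cpr orbit error of this kind reduces to linear least squares, since A*cos(w*t - phi) expands into cos and sin basis functions plus a mean offset. A minimal sketch on synthetic data; the 6037 s orbital period is approximate for Geosat, and the amplitude and phase below are invented.

```python
import numpy as np

def fit_one_cpr(t, h, period):
    """Least-squares fit of a mean offset plus a 1 cycle-per-revolution
    sinusoid, h(t) ~ m + A*cos(2*pi*t/T) + B*sin(2*pi*t/T), to height
    residuals; returns mean, amplitude, and phase."""
    w = 2 * np.pi / period
    G = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    m, A, B = np.linalg.lstsq(G, h, rcond=None)[0]
    return m, np.hypot(A, B), np.arctan2(B, A)

T = 6037.0  # approximate Geosat orbital period, seconds
t = np.linspace(0.0, 3 * T, 500)
h = 0.10 + 0.80 * np.cos(2 * np.pi * t / T - 0.5)  # synthetic 80 cm 1-cpr error
m, amp, ph = fit_one_cpr(t, h, T)
```

    Letting the amplitude and phase vary slowly from arc to arc, as in the paper, turns this single fit into the long-arc estimation scheme that reveals the bow tie pattern.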

  12. Calibration of diffuse correlation spectroscopy with a time-resolved near-infrared technique to yield absolute cerebral blood flow measurements

    PubMed Central

    Diop, Mamadou; Verdecchia, Kyle; Lee, Ting-Yim; St Lawrence, Keith

    2011-01-01

    A primary focus of neurointensive care is the prevention of secondary brain injury, mainly caused by ischemia. A noninvasive bedside technique for continuous monitoring of cerebral blood flow (CBF) could improve patient management by detecting ischemia before brain injury occurs. A promising technique for this purpose is diffuse correlation spectroscopy (DCS) since it can continuously monitor relative perfusion changes in deep tissue. In this study, DCS was combined with a time-resolved near-infrared technique (TR-NIR) that can directly measure CBF using indocyanine green as a flow tracer. With this combination, the TR-NIR technique can be used to convert DCS data into absolute CBF measurements. The agreement between the two techniques was assessed by concurrent measurements of CBF changes in piglets. A strong correlation between CBF changes measured by TR-NIR and changes in the scaled diffusion coefficient measured by DCS was observed (R2 = 0.93) with a slope of 1.05 ± 0.06 and an intercept of 6.4 ± 4.3% (mean ± standard error). PMID:21750781

  13. Leptin in whales: validation and measurement of mRNA expression by absolute quantitative real-time PCR.

    PubMed

    Ball, Hope C; Holmes, Robert K; Londraville, Richard L; Thewissen, Johannes G M; Duff, Robert Joel

    2013-01-01

    Leptin is the primary hormone in mammals that regulates adipose stores. Arctic adapted cetaceans maintain enormous adipose depots, suggesting possible modifications of leptin or receptor function. Determining expression of these genes is the first step to understanding the extreme physiology of these animals, and the uniqueness of these animals presents special challenges in estimating and comparing expression levels of mRNA transcripts. Here, we compare expression of two model genes, leptin and leptin-receptor gene-related product (OB-RGRP), using two quantitative real-time PCR (qPCR) methods: "relative" and "absolute". To assess the expression of leptin and OB-RGRP in cetacean tissues, we first examined how relative expression of those genes might differ when normalized to four common endogenous control genes. We performed relative expression qPCR assays measuring the amplification of these two model target genes relative to amplification of 18S ribosomal RNA (18S), ubiquitously expressed transcript (Uxt), ribosomal protein 9 (Rs9) and ribosomal protein 15 (Rs15) endogenous controls. Results demonstrated significant differences in the expression of both genes when different control genes were employed, emphasizing a limitation of relative qPCR assays, especially in studies where differences in physiology and/or a lack of knowledge regarding levels and patterns of expression of common control genes may possibly affect data interpretation. To validate the absolute quantitative qPCR methods, we evaluated the effects of plasmid structure, the purity of the plasmid standard preparation and the influence of type of qPCR "background" material on qPCR amplification efficiencies and copy number determination of both model genes, in multiple tissues from one male bowhead whale. Results indicate that linear plasmids are more reliable than circular plasmid standards, that there were no significant differences in copy number estimation based upon the background material used, and that the use of

  14. An Error Model for High-Time Resolution Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Sapiano, M.; Adler, R. F.; Huffman, G. J.; Tian, Y.

    2013-12-01

    A new error scheme (PUSH: Precipitation Uncertainties for Satellite Hydrology) is presented to provide global estimates of errors for high time resolution, merged precipitation products. Errors are estimated for the widely used Tropical Rainfall Monitoring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 product at daily/0.25° resolution, using the high quality NOAA CPC-UNI gauge analysis as the benchmark. Each of the following four scenarios is explored and explicitly modeled: correct no-precipitation detection (both satellite and gauges detect no precipitation), missed precipitation (satellite records a zero, but it is incorrect), false alarm (satellite detects precipitation, but the reference is zero), and hit (both satellite and gauges detect precipitation). Results over Oklahoma show that the estimated probability distributions are able to reproduce the probability density functions of the benchmark precipitation, in terms of both expected values and quantiles. PUSH adequately captures missed precipitation and false detection uncertainties, reproduces the spatial pattern of the error, and shows a good agreement between observed and estimated errors. The resulting error estimates could be attached to the standard products for the scientific community to use. Investigation is underway to: 1) test the approach in different regions of the world; 2) verify the ability of the model to discern the systematic and random components of the error; 3) and evaluate the model performance when higher time-resolution satellite products (i.e., 3-hourly) are employed.
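
    The four scenarios form a simple contingency split of each satellite-gauge pair; in PUSH each branch then receives its own error model. A minimal sketch of the split only (the rain/no-rain threshold and the values are arbitrary):

```python
def classify(satellite, gauge, threshold=0.0):
    """Assign a (satellite, gauge) precipitation pair to one of the four
    scenarios modeled separately in the PUSH scheme."""
    sat_wet, ref_wet = satellite > threshold, gauge > threshold
    if not sat_wet and not ref_wet:
        return "correct no-precipitation"
    if not sat_wet and ref_wet:
        return "missed precipitation"
    if sat_wet and not ref_wet:
        return "false alarm"
    return "hit"

# one invented daily pair per scenario (mm/day)
pairs = [(0.0, 0.0), (0.0, 4.2), (1.3, 0.0), (6.0, 5.1)]
labels = [classify(s, g) for s, g in pairs]
```

    In the full scheme, the "hit" branch carries a conditional error distribution for the rain amount, while the missed-precipitation and false-alarm branches model detection failures.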

  15. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
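
    The SIMEX idea itself is compact: deliberately add extra measurement error at several multiples lambda of the known error variance, watch how the estimate degrades as lambda grows, and extrapolate the trend back to lambda = -1 (no measurement error). A minimal sketch for a linear regression slope attenuated by covariate error; the paper applies the same idea to MSM weights and outcome models, and everything below is synthetic.

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0),
                n_sim=200, seed=0):
    """SIMEX correction of a slope: add noise of variance lambda*sigma_u^2,
    average the refitted slopes, and extrapolate a quadratic in lambda
    back to lambda = -1."""
    rng = np.random.default_rng(seed)
    lams, slopes = [0.0], [np.polyfit(x_obs, y, 1)[0]]
    for lam in lambdas:
        sims = [np.polyfit(x_obs + rng.normal(0, np.sqrt(lam) * sigma_u,
                                              x_obs.size), y, 1)[0]
                for _ in range(n_sim)]
        lams.append(lam)
        slopes.append(np.mean(sims))
    return np.polyval(np.polyfit(lams, slopes, 2), -1.0)

# true slope 2.0; x is observed with error, attenuating the naive fit
rng = np.random.default_rng(1)
x_true = rng.normal(0, 1, 2000)
y = 2.0 * x_true + rng.normal(0, 0.2, 2000)
x_obs = x_true + rng.normal(0, 0.5, 2000)

naive = np.polyfit(x_obs, y, 1)[0]
corrected = simex_slope(x_obs, y, sigma_u=0.5)
```

    The naive slope is biased toward zero by roughly the classical attenuation factor; the extrapolated SIMEX estimate recovers most of that bias, mirroring the "practically unbiased" behavior the simulations report for low-to-moderate error.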

  16. Error criteria for cross validation in the context of chaotic time series prediction.

    PubMed

    Lim, Teck Por; Puthusserypady, Sadasivan

    2006-03-01

    The prediction of a chaotic time series over a long horizon is commonly done by iterating one-step-ahead prediction. Prediction can be implemented using machine learning methods, such as radial basis function networks. Typically, cross validation is used to select prediction models based on mean squared error. The bias-variance dilemma dictates that there is an inevitable tradeoff between bias and variance. However, invariants of chaotic systems are unchanged by linear transformations; thus, the bias component may be irrelevant to model selection in the context of chaotic time series prediction. Hence, the use of error variance for model selection, instead of mean squared error, is examined. Clipping is introduced, as a simple way to stabilize iterated predictions. It is shown that using the error variance for model selection, in combination with clipping, may result in better models.
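
    Clipping in iterated prediction is simply projecting each predicted value back onto the observed data range before feeding it to the next step. A toy sketch with a logistic-map-like system and a deliberately biased one-step model (all numbers invented):

```python
def iterate_with_clipping(model, x0, n_steps, lo, hi):
    """Iterate one-step-ahead predictions, clipping each output back to
    the observed range [lo, hi] so that small model errors cannot send
    the trajectory off to infinity."""
    xs = [x0]
    for _ in range(n_steps):
        nxt = model(xs[-1])
        xs.append(min(hi, max(lo, nxt)))
    return xs

# one-step model of the logistic map with a small systematic bias
model = lambda x: 4.0 * x * (1 - x) + 0.05

clipped = iterate_with_clipping(model, 0.3, 50, 0.0, 1.0)

# without clipping, the same iteration leaves [0, 1] within a few steps
raw = 0.3
for _ in range(5):
    raw = model(raw)
```

    The clipped trajectory stays inside the attractor's range indefinitely, which is why clipping stabilizes long-horizon iterated prediction.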

  17. Relevance of time-varying and time-invariant retrieval error sources on the utility of spaceborne soil moisture products

    NASA Astrophysics Data System (ADS)

    Crow, Wade T.; Koster, Randal D.; Reichle, Rolf H.; Sharif, Hatim O.

    2005-12-01

    Errors in remotely-sensed soil moisture retrievals originate from a combination of time-invariant and time-varying sources. For land modeling applications such as forecast initialization, some of the impact of time-invariant sources can be removed given known differences between observed and modeled soil moisture climatologies. Nevertheless, the distinction is seldom made when evaluating remotely-sensed soil moisture products. Here we describe an Observing System Simulation Experiment (OSSE) for radiometer-only soil moisture products derived from the NASA Hydrosphere States (Hydros) mission where the impact of time-invariant errors is explicitly removed via the linear rescaling of retrievals. OSSE results for the 575,000 km2 Red-Arkansas River Basin indicate that climatological rescaling may significantly reduce the perceived magnitude of Hydros soil moisture retrieval errors and expands the geographic areas over which retrievals demonstrate value for land surface modeling applications.
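
    The climatological rescaling step referred to here is a linear map matching the retrievals' long-term mean and variance to the model's, which removes time-invariant bias and amplitude errors while leaving the time-varying signal (and its time-varying errors) intact. A minimal sketch on synthetic data:

```python
import numpy as np

def rescale_to_model_climatology(retrievals, model_series):
    """Linearly rescale satellite retrievals so their long-term mean and
    standard deviation match the land model's soil moisture climatology."""
    r_mean, r_std = retrievals.mean(), retrievals.std()
    m_mean, m_std = model_series.mean(), model_series.std()
    return m_mean + (retrievals - r_mean) * (m_std / r_std)

rng = np.random.default_rng(0)
truth = 0.25 + 0.05 * rng.standard_normal(1000)   # model climatology (m^3/m^3)
# retrievals with an offset, damped dynamic range, and time-varying noise
retr = 0.10 + 0.5 * (truth - 0.25) + 0.01 * rng.standard_normal(1000)
scaled = rescale_to_model_climatology(retr, truth)
```

    After rescaling, only the time-varying error component remains, which is the part relevant to the perceived retrieval error in the OSSE.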

  18. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.

    PubMed

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time operation, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly degrade the real-time accuracy of SINS. By analyzing the characteristics of these periodic oscillation errors, a method for oscillation error restriction of SINS based on forecasted time series is proposed. The method obtains multiple sets of navigation solutions with different phase delays by means of a forecasted time series acquired from the measurement data of the inertial measurement unit (IMU). The forecasted time series is obtained by least-squares curve fitting, while small angular motion interference is identified and removed during initial alignment. Finally, the periodic oscillation errors are restricted by exploiting the principle that averaging a periodic signal with a copy of itself delayed by half a period cancels the oscillation. Simulation and test results show that the method performs well in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283
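
    The cancellation principle at the heart of the method is easy to verify: a sample and its half-period-delayed counterpart have oscillation terms of opposite sign, so their mean retains only the slowly varying part. A minimal sketch (the 100-sample period is arbitrary, not a Schuler period):

```python
import math

def halfwave_average(signal, period_samples):
    """Average each sample with the one half an oscillation period
    earlier; a periodic error of that period cancels exactly, while a
    slowly varying component survives."""
    half = period_samples // 2
    return [(signal[i] + signal[i - half]) / 2.0
            for i in range(half, len(signal))]

period = 100  # oscillation period in samples (illustrative)
# constant true value 5.0 corrupted by a Schuler-like oscillation
sig = [5.0 + 0.8 * math.sin(2 * math.pi * i / period) for i in range(400)]
cleaned = halfwave_average(sig, period)
```

    Since sin(x) + sin(x - pi) = 0 at every sample, the cleaned series collapses to the constant true value up to floating-point rounding.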

  19. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series

    PubMed Central

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time, and accuracy are the key technical indexes of evaluating comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays in virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on least square method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted on account of the principle of eliminating the periodic oscillation signal with a half-wave delay by mean value. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283

  20. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  1. Unavoidable Errors: A Spatio-Temporal Analysis of Time-Course and Neural Sources of Evoked Potentials Associated with Error Processing in a Speeded Task

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2008-01-01

    The detection of errors is known to be associated with two successive neurophysiological components in EEG, with an early time-course following motor execution: the error-related negativity (ERN/Ne) and late positivity (Pe). The exact cognitive and physiological processes contributing to these two EEG components, as well as their functional…

  2. First photoelectron timing error evaluation of a new scintillation detector model

    SciTech Connect

    Petrick, N.; Clinthorne, N.H.; Rogers, W.L.; Hero, A.O. III (Div. of Nuclear Medicine)

    1991-04-01

    In this paper, a previously developed general timing-system model for a scintillation detector is experimentally evaluated. The detector consists of a scintillator and a photodetector such as a photomultiplier tube or an avalanche photodiode. The model uses a Poisson point process to characterize the light output from the scintillator. The timing model was used to simulate a BGO scintillator with a Burle 8575 PMT using first-photoelectron timing detection. Evaluation of the model consisted of comparing the RMS error from the simulations with the error of the actual detector system. The authors find that the general model agrees well with the measured error of the BGO/8575 PMT detector. In addition, the optimal threshold is found to depend on the energy of the scintillation: in the low-energy part of the spectrum a low threshold is optimal, while for higher-energy pulses the optimal threshold increases.
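
    The Poisson-process picture can be simulated directly: photoelectron arrival times follow the scintillator's exponential decay, and first-photoelectron timing takes the earliest arrival as the event time. A toy Monte Carlo illustrating why timing jitter shrinks as the pulse energy (photoelectron count) grows; the decay constant and counts are illustrative, not fitted BGO parameters.

```python
import random

def first_photoelectron_times(n_events, n_pe, tau, rng):
    """For each simulated event, draw n_pe photoelectron arrival times
    from an exponential decay with time constant tau and keep the
    earliest one (first-photoelectron detection)."""
    return [min(rng.expovariate(1.0 / tau) for _ in range(n_pe))
            for _ in range(n_events)]

def rms(xs):
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

rng = random.Random(0)
tau = 300.0  # scintillation decay constant in ns (illustrative)

# higher-energy pulses yield more photoelectrons, tightening the timing:
# the minimum of n exponentials is exponential with mean tau/n
jitter_low = rms(first_photoelectron_times(2000, 50, tau, rng))
jitter_high = rms(first_photoelectron_times(2000, 500, tau, rng))
```

    This energy dependence of the timing spread is the same effect that makes the optimal discriminator threshold energy-dependent in the paper's evaluation.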

  3. Detection and absolute quantitation of Tomato torrado virus (ToTV) by real time RT-PCR.

    PubMed

    Herrera-Vásquez, José Angel; Rubio, Luis; Alfaro-Fernández, Ana; Debreczeni, Diana Elvira; Font-San-Ambrosio, Isabel; Falk, Bryce W; Ferriol, Inmaculada

    2015-09-01

    Tomato torrado virus (ToTV) causes serious damage to the tomato industry and significant economic losses. A quantitative real-time reverse-transcription polymerase chain reaction (RT-qPCR) method using primers and a specific TaqMan(®) MGB probe for ToTV was developed for sensitive detection and quantitation of different ToTV isolates. A standard curve using RNA transcripts enabled absolute quantitation, with a dynamic range from 10(4) to 10(10) ToTV RNA copies/ng of total RNA. The specificity of the RT-qPCR was tested with twenty-three ToTV isolates from tomato (Solanum lycopersicum L.), and black nightshade (Solanum nigrum L.) collected in Spain, Australia, Hungary and France, which covered the genetic variation range of this virus. This new RT-qPCR assay enables a reproducible, sensitive and specific detection and quantitation of ToTV, which can be a valuable tool in disease management programs and epidemiological studies.
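
    Absolute quantitation with a standard curve works by regressing Ct on log10 copy number over a dilution series of the RNA transcript standards, then inverting the fitted line for unknown samples. A minimal sketch: the 10-fold dilution series spans the 10^4 to 10^10 dynamic range reported for the assay, but the Ct values and the ideal slope of -3.32 (100% amplification efficiency) are illustrative, not the paper's data.

```python
import math

def fit_standard_curve(copies, ct):
    """Least-squares line Ct = slope*log10(copies) + intercept from a
    dilution series of transcript standards."""
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx, my = sum(x) / n, sum(ct) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ct))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Absolute copy number of an unknown sample from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

# hypothetical 10-fold dilution series, 10^4 .. 10^10 copies
standards = [10 ** k for k in range(4, 11)]
cts = [40.0 - 3.32 * (math.log10(c) - 4) for c in standards]
slope, intercept = fit_standard_curve(standards, cts)
```

    Inverting the curve at a measured Ct then yields ToTV RNA copies per nanogram of total RNA once normalized to the template input.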

  4. Model-based evaluation of microbial mass fractions: effect of absolute anaerobic reaction time on microbial mass fractions.

    PubMed

    Tunçal, Tolga

    2010-04-14

    Although enhanced biological phosphorus removal (EBPR) processes are popular methods for nutrient control, the unstable treatment performance of full-scale systems is still not well understood. In this study, the interaction between the electron acceptors present at the start of the anaerobic phase of an EBPR system and the amount of organic acids generated from simple substrate (rbsCOD) was investigated in a full-scale wastewater treatment plant. Quantification of microbial groups, including phosphorus-accumulating microorganisms (PAOs), denitrifying PAOs (DPAOs), glycogen-accumulating microorganisms (GAOs) and ordinary heterotrophic microorganisms (OHOs), was based on a modified dynamic model. The intracellular phosphorus content of PAOs was also determined by executing mass balances for the biological stages of the plant. The EBPR activities observed in the plant and in batch tests (under idealized conditions) were also compared with each other statistically. Modelling efforts indicated that using the absolute anaerobic reaction time (η1) instead of the nominal anaerobic reaction time (η) to estimate the amount of substrate available to PAOs significantly improved model accuracy. Another interesting result of the study was the difference in EBPR characteristics observed under idealized and real conditions. PMID:20480829

  5. Benchmarking flood models from space in near real-time: accommodating SRTM height measurement errors with low resolution flood imagery

    NASA Astrophysics Data System (ADS)

    Schumann, G.; di Baldassarre, G.; Alsdorf, D.; Bates, P. D.

    2009-04-01

    In February 2000, the Shuttle Radar Topography Mission (SRTM) measured the elevation of most of the Earth's surface with spatially continuous sampling and an absolute vertical accuracy better than 9 m. The vertical error has been shown to change with topographic complexity, being less important over flat terrain. This allows water surface slopes to be measured and associated discharge volumes to be estimated for open channels in large basins, such as the Amazon. Building on these capabilities, this paper demonstrates that near real-time coarse resolution radar imagery of a recent flood event on a 98 km reach of the River Po (Northern Italy), combined with SRTM terrain height data, leads to a water slope remarkably similar to that derived by combining the radar image with highly accurate airborne laser altimetry. Moreover, it is shown that this space-borne flood wave approximation compares well to a hydraulic model and thus allows the performance of the latter, calibrated on a previous event, to be assessed when applied to an event of different magnitude in near real-time. These results are not only of great importance to real-time flood management and flood forecasting but also support the upcoming Surface Water and Ocean Topography (SWOT) mission that will routinely provide water levels and slopes with higher precision around the globe.

  6. Empirical versus time stepping with embedded error control for density-driven flow in porous media

    NASA Astrophysics Data System (ADS)

    Younes, Anis; Ackerer, Philippe

    2010-08-01

    Modeling density-driven flow in porous media may require very long computation times due to the nonlinear coupling between flow and transport equations. Time stepping schemes are often used to adapt the time step size in order to reduce the computational cost of the simulation. In this work, the empirical time stepping scheme, which adapts the time step size according to the performance of the iterative nonlinear solver, is compared to an adaptive time stepping scheme in which the time step length is controlled by the temporal truncation error. Results of the simulations of the Elder problem show that (1) the empirical time stepping scheme can lead to inaccurate results even with a small convergence criterion, (2) accurate results are obtained when the time step size selection is based on truncation error control, (3) a non-iterative scheme with proper time step management can be faster and lead to a more accurate solution than the standard iterative procedure with empirical time stepping, and (4) the temporal truncation error can have a significant effect on the results and can be considered one of the reasons for the differences observed in the Elder numerical results.
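
Truncation-error-controlled time stepping of the kind the authors favour can be illustrated with classic step-doubling: compare one full step with two half steps, accept when the estimated local error is below tolerance, and rescale the step. A minimal Python sketch on a scalar ODE; the safety factor and growth bounds are conventional choices, not taken from the paper:

```python
import math

def adaptive_euler(f, y0, t_end, dt0, tol):
    """Explicit Euler with step-doubling local error control: a full step
    of size dt is compared with two half steps; the step is accepted when
    the estimate is below tol, and dt is rescaled by a safety-factor rule."""
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_full = y + dt * f(t, y)                        # one Euler step
        y_half = y + 0.5 * dt * f(t, y)                  # two half steps
        y_two = y_half + 0.5 * dt * f(t + 0.5 * dt, y_half)
        err = abs(y_two - y_full)                        # local error estimate
        if err <= tol or dt < 1e-12:
            t += dt
            y = 2.0 * y_two - y_full                     # Richardson extrapolation
        # rescale dt (Euler's local truncation error scales like dt^2)
        dt *= 0.9 * min(2.0, max(0.2, math.sqrt(tol / max(err, 1e-16))))
    return y

# Example: dy/dt = -y, exact solution exp(-t)
y_end = adaptive_euler(lambda t, y: -y, 1.0, 2.0, 0.1, 1e-5)
```

Rejected steps cost an extra function evaluation but keep the local error bounded, which is exactly the accuracy-for-work tradeoff the abstract describes.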

  7. Structure and dating errors in the geologic time scale and periodicity in mass extinctions

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1989-01-01

    Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.

  8. Linear time-dependent reference intervals where there is measurement error in the time variable-a parametric approach.

    PubMed

    Gillard, Jonathan

    2015-12-01

    This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects.
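
A standard estimator for a straight line when the covariate itself carries measurement error, in contrast to ordinary least squares, is Deming regression. Whether this matches the article's exact parametric model is an assumption; the sketch below is purely illustrative of the errors-in-variables idea:

```python
import numpy as np

def deming_regression(x, y, delta=1.0):
    """Errors-in-variables straight-line fit. `delta` is the assumed ratio
    of the error variance in y to the error variance in x; ordinary least
    squares ignores the x-error entirely, biasing the slope toward zero."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y)[0, 1]
    slope = ((syy - delta * sxx) +
             np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Noise-free sanity check: y = 2x + 1 must be recovered exactly
s, b = deming_regression([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

With delta = 1 this is orthogonal regression; reference limits built on such a fit shift consistently with the covariate, which is the equal-treatment property the article argues for.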

  9. Variation in reading error in P times for explosions with body-wave magnitude

    NASA Astrophysics Data System (ADS)

    Douglas, A.; Young, J. B.; Bowers, D.; Lewis, M.

    2005-09-01

    The differences between true travel-times of P and times predicted from travel-time tables (path effects) can be estimated for groups of closely spaced explosions with known hypocentres and origin times, if the onsets are observed at large signal-to-noise ratios (SNR) and read by analysts. Reading error can also be estimated and is usually assumed to be normally distributed with zero mean. Two experiments have been carried out to look at how reading error in P times from explosions varies with magnitude - taken as a measure of SNR - when read by analysts and by automatic systems. Although at low magnitudes there is some evidence of analyst readings being biased late, the largest variation in reading error with magnitude is found for automatic systems. The results show just how difficult it can be to estimate path effects free from observational bias, at least using bulletin data. The current programme to estimate path effects to improve epicentre location for verification of the Comprehensive Test Ban needs to include checks to ensure that apparent variations in path effects with location are not due to bias from systematic reading error.

  10. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  12. On the effect of timing errors in run length codes. [redundancy removal algorithms for digital channels

    NASA Technical Reports Server (NTRS)

    Wilkins, L. C.; Wintz, P. A.

    1975-01-01

    Many redundancy removal algorithms employ some sort of run length code. Blocks of timing words are coded with synchronization words inserted between blocks. The probability of incorrectly reconstructing a sample because of a channel error in the timing data is a monotonically nondecreasing function of time since the last synchronization word. In this paper we compute the 'probability that the accumulated magnitude of timing errors equal zero' as a function of time since the last synchronization word for a zero-order predictor (ZOP). The result is valid for any data source that can be modeled by a first-order Markov chain and any digital channel that can be modeled by a channel transition matrix. An example is presented.
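
The paper's quantity, the probability that the accumulated timing error is zero n timing words after the last synchronization word, can be computed by propagating a state distribution through a transition matrix. A toy Python sketch; the ±1-sample error model and the error probability are illustrative stand-ins, not the paper's Markov source or channel models:

```python
import numpy as np

def prob_zero_error(n_words, p_err, max_err=20):
    """P(accumulated timing error == 0) after each of n_words timing words.

    Toy channel: each word shifts the accumulated error by -1, 0 or +1
    samples with probabilities p_err, 1 - 2*p_err, p_err; the error state
    is truncated to [-max_err, +max_err]."""
    size = 2 * max_err + 1
    T = np.zeros((size, size))
    for i in range(size):
        T[i, i] = 1.0 - 2.0 * p_err
        if i > 0:
            T[i, i - 1] = p_err
        if i < size - 1:
            T[i, i + 1] = p_err
    T[0, 0] += p_err                 # keep the boundary rows stochastic
    T[-1, -1] += p_err
    state = np.zeros(size)
    state[max_err] = 1.0             # zero accumulated error at the sync word
    probs = []
    for _ in range(n_words):
        state = state @ T
        probs.append(float(state[max_err]))
    return probs

probs = prob_zero_error(50, 0.01)
```

As the abstract states, the resulting sequence is monotonically nonincreasing in the number of words since the last synchronization word.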

  13. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding.

  14. Interference peak detection based on FPGA for real-time absolute distance ranging with dual-comb lasers

    NASA Astrophysics Data System (ADS)

    Ni, Kai; Dong, Hao; Zhou, Qian; Xu, Mingfei; Li, Xinghui; Wu, Guanhao

    2015-08-01

    Absolute distance measurement using dual femtosecond comb lasers can achieve higher accuracy and faster measurement speed, which makes it more and more attractive. The data processing flow consists of four steps: interference peak detection, fast Fourier transform (FFT), phase fitting and compensation for the index of refraction. A real-time data processing system based on a Field-Programmable Gate Array (FPGA) for dual-comb ranging has been newly developed. The design and implementation of the interference peak detection algorithm in FPGA and the Verilog language is introduced in this paper; it is viewed as the most complicated part and an important guarantee of system precision and reliability. An adaptive sliding window is used to detect peaks. During detection, the algorithm stores 16 sample data as a detection unit and calculates the average of each unit. The average result is used to determine the vertical center height of the sliding window. The algorithm estimates the noise intensity of each detection unit, and then calculates the average noise strength of 128 successive units. This noise average is used to calculate the signal-to-noise ratio of the current working environment, which in turn adjusts the height of the sliding window. The adaptive sliding window helps to eliminate false peaks caused by noise. The whole design is pipelined, which improves the real-time throughput of the overall peak detection module. It runs at up to 140 MHz in the FPGA, and a peak can be detected within 16 clock cycles of its appearance.
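
The FPGA flow can be prototyped in software before committing to Verilog. The sketch below mirrors the described structure (16-sample detection units, a noise average over 128 units, an adaptive window height); the specific threshold rule and the factor `k` are assumptions for illustration, not the authors' exact logic:

```python
import numpy as np

def detect_peaks(signal, unit=16, noise_units=128, k=5.0):
    """Adaptive sliding-window peak detection, software sketch.

    Each 16-sample unit's mean sets the vertical centre of the window;
    the mean of the last 128 unit noise estimates scales its height."""
    peaks, noise_history = [], []
    for u in range(len(signal) // unit):
        block = signal[u * unit:(u + 1) * unit]
        centre = block.mean()
        noise_history.append(block.std())
        noise = float(np.mean(noise_history[-noise_units:]))
        threshold = centre + k * max(noise, 1e-12)   # adaptive window height
        idx = int(np.argmax(block))
        if block[idx] > threshold:                   # reject noise-level bumps
            peaks.append(u * unit + idx)
    return peaks

# Synthetic interferogram: weak noise floor plus one strong pulse
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 0.01, 4096)
sig[1000] = 1.0
found = detect_peaks(sig)
```

Because the threshold tracks the local noise estimate rather than a fixed level, the same detector works as the signal-to-noise ratio of the environment changes, which is the point of the adaptive window.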

  15. Leptin in Whales: Validation and Measurement of mRNA Expression by Absolute Quantitative Real-Time PCR

    PubMed Central

    Ball, Hope C.; Holmes, Robert K.; Londraville, Richard L.; Thewissen, Johannes G. M.; Duff, Robert Joel

    2013-01-01

    Leptin is the primary hormone in mammals that regulates adipose stores. Arctic adapted cetaceans maintain enormous adipose depots, suggesting possible modifications of leptin or receptor function. Determining expression of these genes is the first step to understanding the extreme physiology of these animals, and the uniqueness of these animals presents special challenges in estimating and comparing expression levels of mRNA transcripts. Here, we compare expression of two model genes, leptin and leptin-receptor gene-related product (OB-RGRP), using two quantitative real-time PCR (qPCR) methods: “relative” and “absolute”. To assess the expression of leptin and OB-RGRP in cetacean tissues, we first examined how relative expression of those genes might differ when normalized to four common endogenous control genes. We performed relative expression qPCR assays measuring the amplification of these two model target genes relative to amplification of 18S ribosomal RNA (18S), ubiquitously expressed transcript (Uxt), ribosomal protein 9 (Rs9) and ribosomal protein 15 (Rs15) endogenous controls. Results demonstrated significant differences in the expression of both genes when different control genes were employed; emphasizing a limitation of relative qPCR assays, especially in studies where differences in physiology and/or a lack of knowledge regarding levels and patterns of expression of common control genes may possibly affect data interpretation. To validate the absolute quantitative qPCR methods, we evaluated the effects of plasmid structure, the purity of the plasmid standard preparation and the influence of type of qPCR “background” material on qPCR amplification efficiencies and copy number determination of both model genes, in multiple tissues from one male bowhead whale. Results indicate that linear plasmids are more reliable than circular plasmid standards, no significant differences in copy number estimation based upon background material used, and

  16. Measurement of the Absolute Magnitude and Time Courses of Mitochondrial Membrane Potential in Primary and Clonal Pancreatic Beta-Cells.

    PubMed

    Gerencser, Akos A; Mookerjee, Shona A; Jastroch, Martin; Brand, Martin D

    2016-01-01

    The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions.

  17. Influence of measurement errors on temperature-based death time determination.

    PubMed

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2011-07-01

    Temperature-based methods represent essential tools in forensic death time determination. Empirical double exponential models have gained wide acceptance because they are highly flexible and simple to handle. The most established model commonly used in forensic practice was developed by Henssge. It contains three independent variables: the body mass, the environmental temperature, and the initial body core temperature. The present study investigates the influence of variations in the input data (environmental temperature, initial body core temperature, core temperature, time) on the standard deviation of the model-based estimates of the time since death. Two different approaches were used for calculating the standard deviation: the law of error propagation and the Monte Carlo method. Errors in environmental temperature measurements as well as deviations of the initial rectal temperature were identified as major sources of inaccuracies in model based death time estimation.
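
Both error-propagation routes the authors compare are easy to reproduce for the Henssge model. A Python sketch of the Monte Carlo route, assuming the standard double-exponential with T0 = 37.2 °C and the mass constant for ambient temperatures up to 23 °C; the input error magnitudes are illustrative, not values from the study:

```python
import math, random

T0 = 37.2  # assumed initial rectal temperature (deg C)

def henssge_q(t, mass):
    """Standardised cooling Q(t) of Henssge's double-exponential model
    (mass constant for ambient temperatures up to 23 deg C)."""
    B = -1.2815 * mass ** -0.625 + 0.0284
    return 1.25 * math.exp(B * t) - 0.25 * math.exp(5.0 * B * t)

def death_time(T_rect, T_amb, mass, T_init=T0):
    """Invert Q(t) = (T_rect - T_amb)/(T_init - T_amb) by bisection
    (Q decreases monotonically after its initial plateau)."""
    target = (T_rect - T_amb) / (T_init - T_amb)
    lo, hi = 0.0, 200.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if henssge_q(mid, mass) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def monte_carlo_sd(T_rect, T_amb, mass, sd_amb=1.0, sd_init=0.5, n=2000):
    """Monte Carlo standard deviation of the death-time estimate under
    Gaussian errors in ambient and initial temperature (illustrative)."""
    rng = random.Random(42)
    est = [death_time(T_rect, rng.gauss(T_amb, sd_amb), mass,
                      rng.gauss(T0, sd_init)) for _ in range(n)]
    mean = sum(est) / n
    return math.sqrt(sum((e - mean) ** 2 for e in est) / (n - 1))
```

Perturbing one input at a time in `monte_carlo_sd` reproduces the study's finding qualitatively: the estimate is most sensitive to the ambient and initial temperature assumptions.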

  18. Roughness/error trade-offs in neural network time series models

    NASA Astrophysics Data System (ADS)

    Gustafson, Steven C.; Little, Gordon R.; Loomis, John S.; Tuthill, Theresa A.

    1997-04-01

    Radial basis function neural network models of a time series may be developed or trained using samples from the series. Each model is a continuous curve that can be used to represent the series or predict future values. Model development requires a tradeoff between a measure of roughness of the curve and a measure of its error relative to the samples. For roughness defined as the root integrated squared second derivative and for error defined as the root sum squared deviation (which are among the most common definitions), an optimal tradeoff conjecture is proposed and illustrated. The conjecture states that the curve that minimizes roughness subject to given error is a weighted mean of the least squares line and the natural cubic spline through the samples.
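
The conjecture is easy to probe numerically: blend the least-squares line with the natural cubic interpolating spline and evaluate both functionals. A sketch using NumPy/SciPy, where the roughness integral is approximated by finite differences on a fine grid:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def blend_curve(x, y, alpha):
    """Weighted mean of the least-squares line (weight alpha) and the
    natural cubic interpolating spline (weight 1 - alpha)."""
    line = np.polyfit(x, y, 1)
    spline = CubicSpline(x, y, bc_type='natural')
    return lambda t: alpha * np.polyval(line, t) + (1.0 - alpha) * spline(t)

def roughness_and_error(curve, x, y, grid=2001):
    """Roughness: root integrated squared second derivative (finite
    differences); error: root sum of squared deviations at the samples."""
    t = np.linspace(x[0], x[-1], grid)
    h = t[1] - t[0]
    d2 = np.diff(curve(t), 2) / h ** 2
    rough = float(np.sqrt(np.sum(d2 ** 2) * h))
    err = float(np.sqrt(np.sum((curve(x) - y) ** 2)))
    return rough, err

x = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * x)
r_spline, e_spline = roughness_and_error(blend_curve(x, y, 0.0), x, y)
r_line, e_line = roughness_and_error(blend_curve(x, y, 1.0), x, y)
```

Sweeping alpha from 0 to 1 traces the conjectured tradeoff frontier between the zero-error spline and the zero-roughness line.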

  19. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  20. A neighbourhood analysis based technique for real-time error concealment in H.264 intra pictures

    NASA Astrophysics Data System (ADS)

    Beesley, Steven T. C.; Grecos, Christos; Edirisinghe, Eran

    2007-02-01

    H.264s extensive use of context-based adaptive binary arithmetic or variable length coding makes streams highly susceptible to channel errors, a common occurrence over networks such as those used by mobile devices. Even a single bit error will cause a decoder to discard all stream data up to the next fixed length resynchronisation point, the worst scenario is that an entire slice is lost. In cases where retransmission and forward error concealment are not possible, a decoder should conceal any erroneous data in order to minimise the impact on the viewer. Stream errors can often be spotted early in the decode cycle of a macroblock which if aborted can provide unused processor cycles, these can instead be used to conceal errors at minimal cost, even as part of a real time system. This paper demonstrates a technique that utilises Sobel convolution kernels to quickly analyse the neighbourhood surrounding erroneous macroblocks before performing a weighted multi-directional interpolation. This generates significantly improved statistical (PSNR) and visual (IEEE structural similarity) results when compared to the commonly used weighted pixel value averaging. Furthermore it is also computationally scalable, both during analysis and concealment, achieving maximum performance from the spare processing power available.
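
The neighbourhood analysis can be sketched outside a decoder: accumulate Sobel responses around the lost macroblock, then weight horizontal versus vertical boundary interpolation by the measured edge orientation. The weighting rule below is a simplification of the paper's weighted multi-directional interpolation, for illustration only:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def sobel_energy(patch):
    """Summed squared 3x3 Sobel responses over a patch (valid pixels)."""
    ex = ey = 0.0
    h, w = patch.shape
    for i in range(h - 2):
        for j in range(w - 2):
            win = patch[i:i + 3, j:j + 3]
            ex += float(np.sum(win * SOBEL_X)) ** 2   # vertical-edge energy
            ey += float(np.sum(win * SOBEL_Y)) ** 2   # horizontal-edge energy
    return ex, ey

def conceal(img, top, left, size):
    """Fill a lost size x size block by blending vertical and horizontal
    interpolation of its boundary pixels, weighted by the edge orientation
    of the rows just above and below the block."""
    out = img.copy()
    ctx = np.vstack([img[top - 4:top], img[top + size:top + size + 4]])
    ex, ey = sobel_energy(ctx)
    w_v = (ex + 1e-12) / (ex + ey + 2e-12)   # vertical edges -> interpolate vertically
    w_h = 1.0 - w_v
    for r in range(size):
        for c in range(size):
            a = (r + 1) / (size + 1)
            b = (c + 1) / (size + 1)
            vert = (1 - a) * img[top - 1, left + c] + a * img[top + size, left + c]
            horiz = (1 - b) * img[top + r, left - 1] + b * img[top + r, left + size]
            out[top + r, left + c] = w_v * vert + w_h * horiz
    return out

# An image whose rows are constant (intensity ramps vertically): horizontal
# edges dominate, so the blend leans on horizontal interpolation and the
# lost 16x16 block is reconstructed almost exactly.
clean = np.tile(np.arange(32.0)[:, None], (1, 32))
corrupt = clean.copy()
corrupt[8:24, 8:24] = 0.0
restored = conceal(corrupt, 8, 8, 16)
```

Plain weighted pixel averaging ignores orientation and would blur across the dominant edges; weighting by Sobel energy preserves them, which is where the paper's PSNR and structural-similarity gains come from.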

  1. Mitigation of Second-Order Ionospheric Error for Real-Time PPP Users in Europe

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed

    2016-07-01

    Currently, the international global navigation satellite system (GNSS) real-time service (IGS-RTS) products are used extensively for real-time precise point positioning and ionosphere modeling applications. The major challenge of dual-frequency real-time precise point positioning (RT-PPP) is that the solution requires a relatively long time to converge to centimeter-level accuracy. This long convergence time results essentially from the un-modeled high-order ionospheric errors. To overcome this challenge, a method for mitigating the second-order ionospheric delay, which represents the bulk of the high-order ionospheric errors, is proposed for RT-PPP users in Europe. A real-time regional ionospheric model (RT-RIM) over Europe is developed using the IGS-RTS precise satellite orbit and clock products. GPS observations from a regional network consisting of 60 IGS and EUREF reference stations are processed using the Bernese 5.2 software package in order to extract the real-time vertical total electron content (RT-VTEC). The proposed RT-RIM has a spatial and temporal resolution of 1° × 1° and 15 minutes, respectively. In order to investigate the effect of the second-order ionospheric delay on the RT-PPP solution, new GPS data sets from other reference stations are used. The examined stations are selected to represent different latitudes. The GPS observations are corrected for the second-order ionospheric errors using the extracted RT-VTEC values. In addition, the IGS-RTS precise orbit and clock products are used to account for the satellite orbit and clock errors, respectively. It is shown that the RT-PPP convergence time and positioning accuracy are improved when the second-order ionospheric delay is accounted for.

  2. The Impact of Strategy Instruction and Timing of Estimates on Low and High Working-Memory Capacity Readers' Absolute Monitoring Accuracy

    ERIC Educational Resources Information Center

    Linderholm, Tracy; Zhao, Qin

    2008-01-01

    Working-memory capacity, strategy instruction, and timing of estimates were investigated for their effects on absolute monitoring accuracy, which is the difference between estimated and actual reading comprehension test performance. Participants read two expository texts under one of two randomly assigned reading strategy instruction conditions…

  3. Impact of gradient timing error on the tissue sodium concentration bioscale measured using flexible twisted projection imaging

    NASA Astrophysics Data System (ADS)

    Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.

    2011-12-01

    The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which is known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ~16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.

  4. Height Estimation and Error Assessment of Inland Water Level Time Series calculated by a Kalman Filter Approach using Multi-Mission Satellite Altimetry

    NASA Astrophysics Data System (ADS)

    Schwatke, Christian; Dettmering, Denise; Boergens, Eva

    2015-04-01

    compare our results with gauges and external inland altimeter databases (e.g. Hydroweb). We obtain very high correlations between absolute water level height time series from altimetry and gauges. Moreover, the comparisons of water level heights are also used to validate the error assessment. More than 200 water level time series have already been computed and made publicly available via the "Database for Hydrological Time Series of Inland Waters" (DAHITI) at http://dahiti.dgfi.tum.de.
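
The Kalman filter step for a single target can be reduced to a scalar random-walk model: predict with process noise, then update with each multi-mission height and its observation variance; the posterior variance doubles as the per-epoch error assessment. A minimal sketch, not DAHITI's actual implementation, with illustrative noise parameters:

```python
import numpy as np

def kalman_water_level(times, heights, obs_var, process_var=0.01):
    """Scalar random-walk Kalman filter for a water level time series.

    State: level h with h_k = h_{k-1} + w_k, Var(w_k) = process_var * dt.
    Returns filtered levels and their formal standard deviations."""
    h, P = heights[0], obs_var[0]
    levels, sigmas = [h], [float(np.sqrt(P))]
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        P = P + process_var * dt              # predict (random-walk dynamics)
        K = P / (P + obs_var[k])              # Kalman gain
        h = h + K * (heights[k] - h)          # update with the new height
        P = (1.0 - K) * P
        levels.append(h)
        sigmas.append(float(np.sqrt(P)))
    return np.array(levels), np.array(sigmas)

# Constant true level of 100 m observed with 10 cm noise at daily epochs
rng = np.random.default_rng(1)
times = np.arange(50.0)
heights = 100.0 + rng.normal(0.0, 0.1, 50)
levels, sigmas = kalman_water_level(times, heights, np.full(50, 0.01),
                                    process_var=1e-4)
```

Mission-dependent observation variances let precise and coarse altimeters be fused in the same update, with the less precise missions simply receiving smaller gains.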

  5. Time that tells: critical clock-drawing errors for dementia screening

    PubMed Central

    Lessig, Mary C.; Scanlan, James M.; Nazemi, Hamid; Borson, Soo

    2009-01-01

    Background Clock-drawing tests are popular components of dementia screens but no single scoring system has been universally accepted. We sought to identify an optimal subset of clock errors for dementia screening and compare them with three other systems representative of the existing wide variations in approach (Shulman, Mendez, Wolf-Klein), as well as with the CDT system used in the Mini-Cog, which combines clock drawing with delayed recall. Methods The clock drawings of an ethnolinguistically and educationally diverse sample (N = 536) were analyzed for the association of 24 different errors with the presence and severity of dementia defined by independent research criteria. The final sample included 364 subjects with ≥5 years of education, as preliminary examination suggested different error patterns in subjects with 0–4 years of education and inadequate numbers of normal controls for reliable analysis. Results Eleven of 24 errors were significantly associated with dementia in subjects with ≥5 years of education, and six were combined to identify dementia with 88% specificity and 71% sensitivity: inaccurate time setting, no hands, missing numbers, number substitutions or repetitions, or refusal to attempt clock drawing. Time setting was the most prevalent error at all dementia stages; refusal occurred only in moderate and severe dementia; and ethnicity and language of administration had no effect. All critical errors increased in frequency with dementia stage. This simplified scoring system had much better specificity than two other systems (88% vs 39% for Mendez's system and 63% for Shulman's) and much better sensitivity than Wolf-Klein's (71% vs 51%). Stepwise logistic regression found the simplified system to be more strongly predictive of dementia than the three other CDT systems. Substituting the new CDT algorithm for that used in the original CDT Mini-Cog improved the Mini-Cog's specificity from 89 to 93% with minimal change in
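
The simplified system reduces to checking a drawing against a fixed error set. A sketch of that decision rule in Python; the identifiers paraphrase the abstract's six critical errors and the any-error-present rule is an assumption, not an official coding scheme:

```python
# Six critical errors of the simplified scoring system, paraphrased from
# the abstract (identifier names are illustrative, not a standard code set)
CRITICAL_ERRORS = {
    "inaccurate_time_setting",
    "no_hands",
    "missing_numbers",
    "number_substitutions",
    "number_repetitions",
    "refusal_to_draw",
}

def clock_screen_positive(observed_errors):
    """Flag a clock drawing as abnormal if any critical error is present
    (assumed decision rule; the paper reports 88% specificity and 71%
    sensitivity for the combined six-error system)."""
    return bool(CRITICAL_ERRORS.intersection(observed_errors))
```

Non-critical stylistic flaws (e.g. uneven number spacing) fall outside the set and do not trigger a positive screen.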

  6. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Title 5 (Administrative Personnel), vol. 3, 2012-01-01: Claims for correction of Board or TSP record keeper errors; time limitations. Section 1605.22, FEDERAL RETIREMENT THRIFT INVESTMENT BOARD, CORRECTION OF ADMINISTRATIVE ERRORS, Board or TSP Record Keeper Errors § 1605.22 Claims for correction of Board or...

  7. Measurement of the Absolute Magnitude and Time Courses of Mitochondrial Membrane Potential in Primary and Clonal Pancreatic Beta-Cells.

    PubMed

    Gerencser, Akos A; Mookerjee, Shona A; Jastroch, Martin; Brand, Martin D

    2016-01-01

    The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions. PMID:27404273

  9. Time lapse imaging of water content with geoelectrical methods: on the interest of working with absolute water content data

    NASA Astrophysics Data System (ADS)

    Dumont, Gaël; Pilawski, Tamara; Robert, Tanguy; Hermans, Thomas; Garré, Sarah; Nguyen, Frederic

    2016-04-01

    Electrical resistivity tomography (ERT) is a suitable method to estimate the water content of a waste material and to detect changes in water content. Various ERT profiles, both static and time-lapse, were acquired on a landfill during the Minerve project. In the literature, the relative change of resistivity (Δρ/ρ) is generally computed. For saline or heat tracer tests in the saturated zone, Δρ/ρ can easily be translated into pore water conductivity or underground temperature changes (provided that the initial salinity or temperature condition is homogeneous over the ERT panel extension). For water content changes in the vadose zone resulting from an infiltration event or injection experiment, many authors also work with Δρ/ρ or the relative change of water content Δθ/θ (linked to the change of resistivity through a single parameter: the Archie's law exponent "m"). This quantity is not influenced by the underground temperature and pore fluid conductivity (ρw) conditions but is influenced by the initial water content distribution. Therefore, one never knows whether a loss of Δθ/θ signal marks the limit of the infiltration front or merely more humid initial conditions. Another approach to understanding the infiltration process is to assess the absolute change of water content (Δθ). This requires the direct computation of the water content of the waste from the resistivity data. For that purpose, we used petrophysical laws calibrated with laboratory experiments and our knowledge of the in situ temperature and pore fluid conductivity. Then, we investigated water content changes in the waste material after a rainfall event (Δθ = Δθ/θ · θ). This new observation is truly representative of the quantity of water infiltrated into the waste material. However, the uncertainty in the pore fluid conductivity value may influence the computed water content changes (Δθ = k·m√(ρw), where "m" is the Archie's law exponent
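
    The resistivity-to-water-content conversion this approach relies on can be sketched with a simplified Archie-type relation; the functional form, the exponent m, and the illustrative resistivity values below are assumptions for illustration, not the calibrated petrophysical law used in the study:

```python
def water_content(rho_bulk, rho_w, m=1.5, a=1.0):
    """Invert a simplified Archie-type relation, rho_bulk = a*rho_w*theta**(-m),
    for the volumetric water content theta (form and parameters assumed)."""
    return (a * rho_w / rho_bulk) ** (1.0 / m)

# absolute water content change between two time-lapse ERT snapshots
theta_before = water_content(120.0, rho_w=2.0)  # illustrative ohm.m values
theta_after = water_content(90.0, rho_w=2.0)    # lower resistivity => wetter
d_theta = theta_after - theta_before            # positive => net infiltration
```

    Working with the absolute Δθ in this way requires the temperature and pore fluid conductivity inputs the abstract describes, which is exactly where its stated uncertainty enters.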

  10. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, type I error rates, and acceptable coverage rates, regardless of the true random-effects distribution, and avoid the serious variance underestimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
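
    The cluster-bootstrap resampling scheme is straightforward to sketch: resample whole clusters with replacement, recompute the estimate on each replicate, and take the spread of the replicates as the SE. In this toy a plain mean stands in for the Cox coefficient, which would require re-fitting a survival model per replicate:

```python
import random
import statistics

def cluster_bootstrap_se(clusters, estimator, n_boot=500, seed=0):
    """Cluster bootstrap: resample whole clusters with replacement and
    take the spread of the replicate estimates as the standard error."""
    rng = random.Random(seed)
    replicates = []
    for _ in range(n_boot):
        resampled = [rng.choice(clusters) for _ in range(len(clusters))]
        pooled = [x for cluster in resampled for x in cluster]
        replicates.append(estimator(pooled))
    return statistics.stdev(replicates)

# five clusters of within-cluster-correlated observations
clusters = [[1.0, 1.2], [2.0, 2.1], [0.5, 0.4], [1.5, 1.6], [3.0, 2.9]]
se = cluster_bootstrap_se(clusters, statistics.mean)
```

    Keeping clusters intact is what preserves the within-cluster correlation that a naive observation-level bootstrap would destroy.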

  11. Error correction in short time steps during the application of quantum gates

    NASA Astrophysics Data System (ADS)

    de Castro, L. A.; Napolitano, R. d. J.

    2016-04-01

    We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to the interaction with a noisy environment during quantum gates without modifying the codification used for memory qubits. Using a perturbation treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation in short time steps intercalated by correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even for the cases when the division of the quantum gate in short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.

  12. Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
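
    The intuition behind a color-based error metric can be sketched as follows: coherence is judged on the colors the transfer function assigns rather than on the raw scalars, so a region whose values all map to the same color counts as coherent. This is a hypothetical illustration of the idea, not the paper's exact metric:

```python
def scalar_error(block):
    """Scalar-based metric: range of the raw data values in a block."""
    return max(block) - min(block)

def color_error(block, transfer):
    """Color-based metric: the largest per-channel spread of the colors
    the transfer function assigns to the block's values."""
    colors = [transfer(v) for v in block]
    return max(max(c[i] for c in colors) - min(c[i] for c in colors)
               for i in range(len(colors[0])))

# step transfer function: all values below 0.5 map to the same blue, so a
# block confined to [0, 0.5) is perfectly coherent in color space even
# though its raw scalars differ
transfer = lambda v: (0.0, 0.0, 1.0) if v < 0.5 else (1.0, 0.0, 0.0)
block = [0.1, 0.2, 0.4]
s_err = scalar_error(block)           # nonzero: scalar metric sees variation
c_err = color_error(block, transfer)  # zero: color metric sees coherence
```

    A block like this could be rendered flat-shaded (or reused across time steps) under the color-based metric but not under the scalar one.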

  13. A time dependent approach for removing the cell boundary error in elliptic homogenization problems

    NASA Astrophysics Data System (ADS)

    Arjmand, Doghonay; Runborg, Olof

    2016-06-01

    This paper concerns the cell-boundary error present in multiscale algorithms for elliptic homogenization problems. Typical multiscale methods have two essential components: a macro and a micro model. The micro model is used to upscale parameter values which are missing in the macro model. To solve the micro model, boundary conditions are required on the boundary of the microscopic domain. Imposing a naive boundary condition leads to O (ε / η) error in the computation, where ε is the size of the microscopic variations in the media and η is the size of the micro-domain. The removal of this error in modern multiscale algorithms still remains an important open problem. In this paper, we present a time-dependent approach which is general in terms of dimension. We provide a theorem which shows that we have arbitrarily high order convergence rates in terms of ε / η in the periodic setting. Additionally, we present numerical evidence showing that the method improves the O (ε / η) error to O (ε) in general non-periodic media.

  14. Tracking a Quantum Error Syndrome in Real Time: Quantum Jumps of Photon Parity

    NASA Astrophysics Data System (ADS)

    Schoelkopf, Robert

    2015-03-01

    Dramatic progress has been made in the last decade and a half towards realizing solid-state systems for quantum information processing with superconducting quantum circuits. Artificial atoms (or qubits) based on Josephson junctions have improved their coherence times more than 100,000-fold, have been entangled, and used to perform simple quantum algorithms. The next challenge for the field is demonstrating quantum error correction that actually improves the lifetimes, a necessary step for building more complex systems. I will describe recent experiments with superconducting circuits, where we store quantum information in the form of Schrodinger cat states of a microwave cavity, containing up to 100 photons. Using an ancilla qubit, we then monitor the gradual death of these cats, photon by photon, by observing the first jumps of photon number parity. This represents the first continuous observation of a quantum error syndrome, and may enable new approaches to quantum information based on photonic qubits. The performance of this error-monitoring system and the prospects for reaching "breakeven," where quantum error correction improves the lifetime of stored information, will be discussed. This work was performed with many collaborators at Yale University, and supported by the Army Research Office, the Laboratory for Physical Sciences, and the NSF.

  15. Absolutely calibrated, time-resolved measurements of soft x rays using transmission grating spectrometers at the Nike Laser Facility

    NASA Astrophysics Data System (ADS)

    Weaver, J. L.; Feldman, U.; Seely, J. F.; Holland, G.; Serlin, V.; Klapisch, M.; Columbant, D.; Mostovych, A.

    2001-12-01

    Accurate simulation of pellet implosions for direct drive inertial confinement fusion requires benchmarking the codes with experimental data. The Naval Research Laboratory (NRL) has begun to measure the absolute intensity of radiation from laser irradiated targets to provide critical information for the radiatively preheated pellet designs developed by the Nike laser group. Two main diagnostics for this effort are two spectrometers incorporating three detection systems. While both spectrometers use 2500 lines/mm transmission gratings, one instrument is coupled to a soft x-ray streak camera and the other is coupled to both an absolutely calibrated Si photodiode array and a charge coupled device (CCD) camera. Absolute calibration of spectrometer components has been undertaken at the National Synchrotron Light Source at Brookhaven National Laboratories. Currently, the system has been used to measure the spatially integrated soft x-ray flux as a function of target material, laser power, and laser spot size. A comparison between measured and calculated flux for Au and CH targets shows reasonable agreement to one-dimensional modeling for two laser power densities.

  16. Error Analysis of the IGS repro2 Station Position Time Series

    NASA Astrophysics Data System (ADS)

    Rebischung, P.; Ray, J.; Benoist, C.; Metivier, L.; Altamimi, Z.

    2015-12-01

    Eight Analysis Centers (ACs) of the International GNSS Service (IGS) have completed a second reanalysis campaign (repro2) of the GNSS data collected by the IGS global tracking network back to 1994, using the latest available models and methodology. The AC repro2 contributions include in particular daily terrestrial frame solutions, available for the first time at sub-weekly resolution over the full IGS history. The AC solutions, comprising positions for 1848 stations with daily polar motion coordinates, were combined to form the IGS contribution to the next release of the International Terrestrial Reference Frame (ITRF2014). Inter-AC position consistency is excellent, about 1.5 mm horizontal and 4 mm vertical. The resulting daily combined frames were then stacked into a long-term cumulative frame assuming generally linear motions, which constitutes the GNSS input to the ITRF2014 inter-technique combination. A special challenge involved identifying the many position discontinuities, averaging about 1.8 per station. A stacked periodogram of the station position residual time series from this long-term solution reveals a number of unexpected spectral lines (harmonics of the GPS draconitic year, fortnightly tidal lines) on top of a white+flicker background noise and strong seasonal variations. In this study, we will present results from station- and AC-specific analyses of the noise and periodic errors present in the IGS repro2 station position time series. So as to better understand their sources, and in view of developing a spatio-temporal error model, we will focus in particular on the spatial distribution of the noise characteristics and of the periodic errors. By computing AC-specific long-term frames and analyzing the respective residual time series, we will additionally study how the characteristics of the noise and of the periodic errors depend on the adopted analysis strategy and reduction software.
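
    A stacked periodogram of the kind described can be sketched as an average of per-station periodograms: periodic errors common to all stations reinforce, while station-specific noise averages down. The evenly sampled toy series and the injected 351.4-day line (the GPS draconitic year) are assumptions for illustration; real IGS residual series are daily, gappy, and usually analyzed with Lomb-Scargle-type estimators:

```python
import numpy as np

def stacked_periodogram(series_list, dt=1.0):
    """Average the FFT periodograms of several equally sampled residual
    series; shared periodic errors reinforce across stations."""
    n = len(series_list[0])
    freqs = np.fft.rfftfreq(n, d=dt)
    power = np.zeros(freqs.size)
    for s in series_list:
        spec = np.fft.rfft(s - np.mean(s))
        power += np.abs(spec) ** 2 / n
    return freqs, power / len(series_list)

# toy: 20 stations sharing a weak 351.4-day line buried in white noise
rng = np.random.default_rng(1)
t = np.arange(1024.0)  # daily samples
series = [np.sin(2 * np.pi * t / 351.4) + rng.normal(0.0, 1.0, t.size)
          for _ in range(20)]
freqs, power = stacked_periodogram(series)
peak = freqs[1:][np.argmax(power[1:])]  # skip the DC bin
```

    The recovered peak sits at the nearest frequency bin to 1/351.4 cycles per day, illustrating how stacking pulls a common line out of per-station noise.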

  17. SEPARABLE RESPONSES TO ERROR, AMBIGUITY, AND REACTION TIME IN CINGULO-OPERCULAR TASK CONTROL REGIONS

    PubMed Central

    Neta, Maital; Schlaggar, Bradley L.; Petersen, Steven E.

    2014-01-01

    The dorsal anterior cingulate (dACC), along with the closely affiliated anterior insula/frontal operculum have been demonstrated to show three types of task control signals across a wide variety of tasks. One of these signals, a transient signal that is thought to represent performance feedback, shows greater activity to error than correct trials. Other work has found similar effects for uncertainty/ambiguity or conflict, though some argue that dACC activity is, instead, modulated primarily by other processes more reflected in reaction time. Here, we demonstrate that, rather than a single explanation, multiple information processing operations are crucial to characterizing the function of these brain regions, by comparing operations within a single paradigm. Participants performed two tasks in an fMRI experimental session: (1) deciding whether or not visually presented word pairs rhyme, and (2) rating auditorily presented single words as abstract or concrete. A pilot was used to identify ambiguous stimuli for both tasks (e.g., word pair: BASS/GRACE; single word: CHANGE). We found greater cingulo-opercular activity for errors and ambiguous trials than clear/correct trials, with a robust effect of reaction time. The effects of error and ambiguity remained when reaction time was regressed out, although the differences decreased. Further stepwise regression of response consensus (agreement across participants for each stimulus; a proxy for ambiguity) decreased differences between ambiguous and clear trials, but left error-related differences almost completely intact. These observations suggest that trial-wise responses in cingulo-opercular regions monitor multiple performance indices, including accuracy, ambiguity, and reaction time. PMID:24887509

  18. PULSAR TIMING ERRORS FROM ASYNCHRONOUS MULTI-FREQUENCY SAMPLING OF DISPERSION MEASURE VARIATIONS

    SciTech Connect

    Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Dolch, T.

    2015-03-10

    Free electrons in the interstellar medium cause frequency-dependent delays in pulse arrival times due to both scattering and dispersion. Multi-frequency measurements are used to estimate and remove dispersion delays. In this paper, we focus on the effect of any non-simultaneity of multi-frequency observations on dispersive delay estimation and removal. Interstellar density variations combined with changes in the line of sight from pulsar and observer motions cause dispersion measure (DM) variations with an approximately power-law power spectrum, augmented in some cases by linear trends. We simulate time series, estimate the magnitude and statistical properties of timing errors that result from non-simultaneous observations, and derive prescriptions for data acquisition that are needed in order to achieve a specified timing precision. For nearby, highly stable pulsars, measurements need to be simultaneous to within about one day in order for the timing error from asynchronous DM correction to be less than about 10 ns. We discuss how timing precision improves when increasing the number of dual-frequency observations used in DM estimation for a given epoch. For a Kolmogorov wavenumber spectrum, we find about a factor of two improvement in precision timing when increasing from two to three observations but diminishing returns thereafter.
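
    The dispersive delay underlying these timing errors follows the standard cold-plasma scaling t = K · DM / ν², with K ≈ 4.149 × 10³ s MHz² pc⁻¹ cm³. A minimal sketch of the residual timing error left behind when the DM used for correction is off by δDM (as happens when the two bands sample a varying DM at different epochs):

```python
K_DM = 4.148808e3  # dispersion constant, s MHz^2 pc^-1 cm^3

def dispersive_delay(dm, freq_mhz):
    """Cold-plasma dispersive delay (seconds) at an observing frequency
    given in MHz, for a dispersion measure dm in pc cm^-3."""
    return K_DM * dm / freq_mhz ** 2

# timing error from applying a DM that is wrong by delta_dm
def dm_timing_error(delta_dm, freq_mhz):
    return dispersive_delay(delta_dm, freq_mhz)

err = dm_timing_error(1e-5, 1400.0)  # a 1e-5 pc cm^-3 offset at 1.4 GHz
```

    A δDM of only 10⁻⁵ pc cm⁻³ already leaves roughly 20 ns of delay error at 1.4 GHz, the same order as the ~10 ns precision target discussed in the abstract.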

  19. Delay-Compensation Flip-Flop with In-situ Error Monitoring for Low-Power and Timing-Error-Tolerant Circuit Design

    NASA Astrophysics Data System (ADS)

    Hirose, Kenichiro; Manzawa, Yasuo; Goshima, Masahiro; Sakai, Shuichi

    2008-04-01

    With the continuous downscaling of transistors, process variation and power consumption have become major issues. Dynamic voltage and frequency scaling (DVFS) with in-situ timing-error monitoring is an effective method that addresses both issues. However, the conventional implementations of this method, which are mainly based on duplicated circuits, have some implementation-specific constraints. In this paper, the authors propose a delay-compensation flip-flop (DCFF) that does not use duplicated circuit components. It monitors timing errors by directly checking the transient timings of signals. The DCFF adjusts the rising-edge timings of the clock to avoid timing errors and compensates the timing margins between successive stages. Simulations using simulation program with integrated circuit emphasis (SPICE) indicated that the DCFF can operate in a wider supply voltage range than the conventional implementation of DVFS with in-situ timing-error monitoring. A 2.5 × 2.5 mm² test chip was designed by using a 0.18 µm 5-metal process. An essential circuit component of the DCFF was implemented using semi-custom gate-array chips and its operation was verified. Although more detailed and varied simulations and actual measurements are required as future work, DCFFs can be effectively applied to process-variation tolerance and low-power computation and to optimize the design margin and resolve the false-path problem.

  20. Real-Time Baseline Error Estimation and Correction for GNSS/Strong Motion Seismometer Integration

    NASA Astrophysics Data System (ADS)

    Li, C. Y. N.; Groves, P. D.; Ziebart, M. K.

    2014-12-01

    Accurate and rapid estimation of permanent surface displacement is required immediately after a slip event for earthquake monitoring or tsunami early warning. It is difficult to achieve the necessary accuracy and precision at high- and low-frequencies using GNSS or seismometry alone. GNSS and seismic sensors can be integrated to overcome the limitations of each. Kalman filter algorithms with displacement and velocity states have been developed to combine GNSS and accelerometer observations to obtain the optimal displacement solutions. However, the sawtooth-like phenomena caused by the bias or tilting of the sensor decrease the accuracy of the displacement estimates. A three-dimensional Kalman filter algorithm with an additional baseline error state has been developed. An experiment with both a GNSS receiver and a strong motion seismometer mounted on a movable platform and subjected to known displacements was carried out. The results clearly show that the additional baseline error state enables the Kalman filter to estimate the instrument's sensor bias and tilt effects and correct the state estimates in real time. Furthermore, the proposed Kalman filter algorithm has been validated with data sets from the 2010 Mw 7.2 El Mayor-Cucapah Earthquake. The results indicate that the additional baseline error state can not only eliminate the linear and quadratic drifts but also reduce the sawtooth-like effects from the displacement solutions. The conventional zero-mean baseline-corrected results cannot show the permanent displacements after an earthquake; the two-state Kalman filter can only provide stable and optimal solutions if the strong motion seismometer had not been moved or tilted by the earthquake. Yet the proposed Kalman filter can achieve the precise and accurate displacements by estimating and correcting for the baseline error at each epoch. The integration filters out noise-like distortions and thus improves the real-time detection and measurement capability
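
    The idea of augmenting the filter with a baseline-error state can be sketched in one dimension: the state holds displacement, velocity, and an accelerometer bias; the biased acceleration drives the prediction, and GNSS displacement updates make the bias observable. The dynamics, noise settings, and noise-free simulation below are illustrative assumptions, not the paper's three-dimensional filter:

```python
import numpy as np

dt, b_true = 0.1, 0.05  # time step [s], true accelerometer bias [m/s^2]
F = np.array([[1.0, dt, -0.5 * dt**2],   # the bias state is subtracted
              [0.0, 1.0, -dt],           # from the measured acceleration
              [0.0, 0.0, 1.0]])
B = np.array([0.5 * dt**2, dt, 0.0])     # biased accel drives prediction
H = np.array([[1.0, 0.0, 0.0]])          # GNSS observes displacement only
Q = np.diag([1e-6, 1e-6, 1e-8])          # process noise (tuning assumption)
R = np.array([[1e-4]])                   # GNSS displacement variance

x, P = np.zeros(3), np.eye(3)
d_true = v_true = 0.0
for k in range(600):
    a_true = np.sin(0.5 * k * dt)        # arbitrary true motion
    d_true += v_true * dt + 0.5 * a_true * dt**2
    v_true += a_true * dt
    a_meas = a_true + b_true             # accelerometer reads a biased value
    # predict with the biased accelerometer; the filter absorbs the offset
    # into the third (baseline-error) state
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q
    # update with the GNSS displacement
    y = np.array([d_true]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
est_bias = x[2]  # approaches b_true as position evidence accumulates
```

    Without the bias state, the uncorrected accelerometer offset would integrate into the quadratic displacement drift the abstract describes; estimating it per epoch removes that drift in real time.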

  1. Mixed control for perception and action: timing and error correction in rhythmic ball-bouncing.

    PubMed

    Siegler, I A; Bazile, C; Warren, W H

    2013-05-01

    The task of bouncing a ball on a racket was adopted as a model system for investigating the behavioral dynamics of rhythmic movement, specifically how perceptual information modulates the dynamics of action. Two experiments, with sixteen participants each, were carried out to definitively answer the following questions: How are passive stability and active stabilization combined to produce stable behavior? What informational quantities are used to actively regulate the two main components of the action-the timing of racket oscillation and the correction of errors in bounce height? We used a virtual ball-bouncing setup to simultaneously perturb gravity (g) and ball launch velocity (v b) at impact. In Experiment 1, we tested the control of racket timing by varying the ball's upward half-period t up while holding its peak height h p constant. Conversely, in Experiment 2, we tested error correction by varying h p while holding t up constant. Participants adopted a mixed control mode in which information in the ball's trajectory is used to actively stabilize behavior on a cycle-by-cycle basis, in order to keep the system within or near the passively stable region. The results reveal how these adjustments are visually controlled: the period of racket oscillation is modulated by the half-period of the ball's upward flight, and the change in racket velocity from the previous impact (via a change in racket amplitude) is governed by the error to the target. PMID:23515627

  2. Finite-approximation-error-based discrete-time iterative adaptive dynamic programming.

    PubMed

    Wei, Qinglai; Wang, Fei-Yue; Liu, Derong; Yang, Xiong

    2014-12-01

    In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The generalized value iteration algorithm permits an arbitrary positive semi-definite function to initialize it, which overcomes the disadvantage of traditional value iteration algorithms. When the iterative control law and iterative performance index function in each iteration cannot accurately be obtained, for the first time a new "design method of the convergence criteria" for the finite-approximation-error-based generalized value iteration algorithm is established. A suitable approximation error can be designed adaptively to make the iterative performance index function converge to a finite neighborhood of the optimal performance index function. Neural networks are used to implement the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the developed method. PMID:25265640
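
    In the exact tabular case, the generalized value iteration the paper builds on reduces to classical value iteration started from an arbitrary (here positive) initial value function. The toy reward-maximizing MDP below is a stand-in for the paper's neural-network-approximated optimal control setting:

```python
# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
}
gamma = 0.9
V = {0: 5.0, 1: 5.0}  # arbitrary positive initialization, as the scheme permits
for _ in range(200):
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values())
         for s in transitions}
# optimal policy keeps choosing action 1: V* = 1 / (1 - gamma) = 10
```

    The paper's contribution is precisely what this exact version hides: when each iterate is only computed approximately (by neural networks), convergence criteria must be designed so the iterates still reach a neighborhood of the optimum.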

  3. Effects of dating errors on nonparametric trend analyses of speleothem time series

    NASA Astrophysics Data System (ADS)

    Mudelsee, M.; Fohlmeister, J.; Scholz, D.

    2012-10-01

    A fundamental problem in paleoclimatology is to take fully into account the various error sources when examining proxy records with quantitative methods of statistical time series analysis. Records from dated climate archives such as speleothems add extra uncertainty from the age determination to the other sources that consist in measurement and proxy errors. This paper examines three stalagmite time series of oxygen isotopic composition (δ18O) from two caves in western Germany, the series AH-1 from the Atta Cave and the series Bu1 and Bu4 from the Bunker Cave. These records carry regional information about past changes in winter precipitation and temperature. U/Th and radiocarbon dating reveals that they cover the later part of the Holocene, the past 8.6 thousand years (ka). We analyse centennial- to millennial-scale climate trends by means of nonparametric Gasser-Müller kernel regression. Error bands around fitted trend curves are determined by combining (1) block bootstrap resampling to preserve noise properties (shape, autocorrelation) of the δ18O residuals and (2) timescale simulations (models StalAge and iscam). The timescale error influences on centennial- to millennial-scale trend estimation are not excessively large. We find a "mid-Holocene climate double-swing", from warm to cold to warm winter conditions (6.5 ka to 6.0 ka to 5.1 ka), with warm-cold amplitudes of around 0.5‰ δ18O; this finding is documented by all three records with high confidence. We also quantify the Medieval Warm Period (MWP), the Little Ice Age (LIA) and the current warmth. Our analyses cannot unequivocally support the conclusion that current regional winter climate is warmer than that during the MWP.
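
    Step (1), block bootstrap resampling, can be sketched as a moving-block bootstrap: replicates are built by concatenating randomly chosen contiguous blocks, so short-range autocorrelation survives within each block. The block length and toy residuals are arbitrary illustrative choices:

```python
import random

def moving_block_bootstrap(residuals, block_len, rng):
    """Build a resampled series by concatenating randomly chosen contiguous
    blocks, preserving autocorrelation within each block."""
    n = len(residuals)
    starts = list(range(n - block_len + 1))
    out = []
    while len(out) < n:
        s = rng.choice(starts)
        out.extend(residuals[s:s + block_len])
    return out[:n]

rng = random.Random(42)
resid = [0.1, -0.2, 0.05, 0.3, -0.1, 0.0, 0.2, -0.3]  # toy d18O residuals
replicate = moving_block_bootstrap(resid, block_len=3, rng=rng)
```

    Refitting the trend curve to many such replicates (combined with the timescale simulations of step 2) yields the error bands around the kernel-regression trend.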

  5. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.

  6. Accounting for baseline differences and measurement error in the analysis of change over time.

    PubMed

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. PMID:23900718

  8. Adaptive correction method for an OCXO and investigation of analytical cumulative time error upper bound.

    PubMed

    Zhou, Hui; Kunz, Thomas; Schwartz, Howard

    2011-01-01

    Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit more inaccurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically and comparison results between the analytical and simulated upper bound are provided. The results show that the analytical upper bound can serve as a practical guide for system designers. PMID:21244973
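
    The recursive prediction error step can be illustrated with a minimal recursive least-squares sketch, here identifying a first-order drift model y[k] = a*y[k-1] + b + noise; the model, gains, and data are illustrative stand-ins, not the paper's oscillator model:

```python
import random

def rls_identify(ys, lam=1.0):
    """Recursive least squares, the simplest recursive prediction error
    scheme, fitting y[k] = a*y[k-1] + b. Returns the estimates [a, b]."""
    theta = [0.0, 0.0]                      # parameter estimates [a, b]
    P = [[1e6, 0.0], [0.0, 1e6]]            # large initial covariance
    for k in range(1, len(ys)):
        phi = [ys[k - 1], 1.0]              # regressor vector
        Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
                P[1][0]*phi[0] + P[1][1]*phi[1]]
        denom = lam + phi[0]*Pphi[0] + phi[1]*Pphi[1]
        K = [Pphi[0]/denom, Pphi[1]/denom]  # gain vector
        err = ys[k] - (theta[0]*phi[0] + theta[1]*phi[1])  # prediction error
        theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
        P = [[(P[i][j] - K[i]*Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta

# synthetic drift record with true a = 0.9, b = 0.5
random.seed(0)
y, ys = 5.0, []
for _ in range(2000):
    y = 0.9*y + 0.5 + random.gauss(0.0, 0.1)
    ys.append(y)
a_hat, b_hat = rls_identify(ys)
```

    On the synthetic record the estimates settle close to the true a = 0.9 and b = 0.5; a forgetting factor lam < 1 would let the estimates track slow parameter drift.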

  9. Adaptive correction method for an OCXO and investigation of analytical cumulative time error upper bound.

    PubMed

    Zhou, Hui; Kunz, Thomas; Schwartz, Howard

    2011-01-01

    Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit more inaccurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically and comparison results between the analytical and simulated upper bound are provided. The results show that the analytical upper bound can serve as a practical guide for system designers.

  10. Delays without Mistakes: Response Time and Error Distributions in Dual-Task

    PubMed Central

    Kamienkowski, Juan Esteban; Sigman, Mariano

    2008-01-01

    Background When two tasks are presented within a short interval, a delay in the execution of the second task has been systematically observed. Psychological theorizing has argued that while sensory and motor operations can proceed in parallel, the coordination between these modules establishes a processing bottleneck. This model predicts that the timing but not the characteristics (duration, precision, variability…) of each processing stage are affected by interference. Thus, a critical test of this hypothesis is to explore whether the quality of the decision is unaffected by a concurrent task. Methodology/Principal Findings In number comparison–as in most decision comparison tasks with a scalar measure of the evidence–the extent to which two stimuli can be discriminated is determined by their ratio, referred to as the Weber fraction. We investigated performance in a rapid succession of two non-symbolic comparison tasks (number comparison and tone discrimination) in which error rates in both tasks could be manipulated parametrically from chance to almost perfect. We observed that dual-task interference has a massive effect on RT but does not affect the error rates, or the distribution of errors as a function of the evidence. Conclusions/Significance Our results imply that while the decision process itself is delayed during multiple task execution, its workings are unaffected by task interference, providing strong evidence in favor of a sequential model of task execution. PMID:18787706
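
    The ratio-dependence of discrimination and the bottleneck's effect on timing can be made concrete with two toy functions (the Weber fraction w and the stage durations below are illustrative, not values fitted in the study):

```python
import math

def p_error(n1, n2, w=0.15):
    """Probability of misjudging which numerosity is larger; under
    Weber's law it depends only on the ratio n1/n2 (w is an assumed
    Weber fraction)."""
    d = abs(math.log(n1 / n2)) / (math.sqrt(2.0) * w)
    return 0.5 * math.erfc(d / math.sqrt(2.0))

def rt_second_task(soa, t1_central=0.3, base=0.4):
    """Serial-bottleneck prediction for the second task's response time:
    its central stage waits until the first task's central stage
    (lasting t1_central s) has finished, so RT grows as the stimulus
    onset asynchrony (soa) shrinks. Durations are illustrative."""
    return base + max(0.0, t1_central - soa)
```

    The error rate is a function of the stimulus ratio alone, while the bottleneck only shifts RT: exactly the dissociation the study reports.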

  11. Dynamic time warping in phoneme modeling for fast pronunciation error detection.

    PubMed

    Miodonska, Zuzanna; Bugdol, Marcin D; Krecichwost, Michal

    2016-02-01

    The presented paper describes a novel approach to the detection of pronunciation errors. It makes use of the modeling of well-pronounced and mispronounced phonemes by means of the Dynamic Time Warping (DTW) algorithm. Four approaches that make use of the DTW phoneme modeling were developed to detect pronunciation errors: Variations of the Word Structure (VoWS), Normalized Phoneme Distances Thresholding (NPDT), Furthest Segment Search (FSS) and Normalized Furthest Segment Search (NFSS). The performance evaluation of each module was carried out using a speech database of correctly and incorrectly pronounced words in the Polish language, with up to 10 patterns of every trained word from a set of 12 words having different phonetic structures. The performance of DTW modeling was compared to Hidden Markov Models (HMM) that were used for the same four approaches (VoWS, NPDT, FSS, NFSS). The average error rate (AER) was the lowest for DTW with NPDT (AER=0.287) and scored better than HMM with FSS (AER=0.473), which was the best result for HMM. The DTW modeling was faster than HMM for all four approaches. This technique can be used for computer-assisted pronunciation training systems that can work with a relatively small training speech corpus (less than 20 patterns per word) to support speech therapy at home.
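
    The core of the approach is the DTW distance itself; a minimal version is sketched below (real systems compare sequences of acoustic feature vectors such as MFCC frames rather than scalars, and the normalized thresholding here only mirrors the spirit of NPDT):

```python
def dtw_distance(a, b):
    """O(len(a)*len(b)) dynamic time warping distance between two 1-D
    sequences, with the usual step pattern."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # deletion
                                 D[i][j - 1],      # insertion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def is_mispronounced(utterance, patterns, threshold):
    """Flag an utterance whose length-normalized DTW distance to every
    correct-pronunciation pattern exceeds a threshold (illustrative)."""
    best = min(dtw_distance(utterance, p) / (len(utterance) + len(p))
               for p in patterns)
    return best > threshold
```

    Because the warping path absorbs local timing differences, a correctly pronounced but slower utterance still matches its pattern at near-zero cost.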

  13. Sieve Estimation of Constant and Time-Varying Coefficients in Nonlinear Ordinary Differential Equation Models by Considering Both Numerical Error and Measurement Error

    PubMed Central

    Xue, Hongqi; Miao, Hongyu; Wu, Hulin

    2010-01-01

    This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge–Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^{-1/(p∧4)}, where p∧4 denotes min(p, 4), the numerical error is negligible compared to the measurement error. This result provides theoretical guidance in the selection of the step size for numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as that of the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics. PMID:21132064
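
    The numerical-solution-based NLS estimator can be sketched for a scalar ODE dy/dt = -θy, with a classical Runge–Kutta solver and a simple grid search standing in for the optimizer (the model, grid, and data below are illustrative):

```python
import math

def rk4_solve(f, y0, ts, theta):
    """4th-order Runge-Kutta approximation of the ODE solution at ts."""
    ys = [y0]
    for k in range(len(ts) - 1):
        h, t, y = ts[k + 1] - ts[k], ts[k], ys[-1]
        k1 = f(t, y, theta)
        k2 = f(t + h / 2, y + h / 2 * k1, theta)
        k3 = f(t + h / 2, y + h / 2 * k2, theta)
        k4 = f(t + h, y + h * k3, theta)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return ys

def nls_fit(f, y0, ts, data, grid):
    """Numerical-solution-based NLS: pick the parameter whose RK4
    solution minimizes the sum of squared residuals."""
    return min(grid, key=lambda th: sum(
        (yi - yh) ** 2 for yi, yh in zip(data, rk4_solve(f, y0, ts, th))))

def decay(t, y, th):
    return -th * y

ts = [0.1 * k for k in range(51)]
data = [math.exp(-0.5 * t) for t in ts]      # noise-free data, true θ = 0.5
grid = [0.01 * k for k in range(1, 101)]
theta_hat = nls_fit(decay, 1.0, ts, data, grid)
```

    With a 4th-order solver and this step size, the numerical error is far below any realistic measurement error, which is the regime the paper's step-size condition guarantees.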

  14. Measurement error in time-series analysis: a simulation study comparing modelled and monitored data

    PubMed Central

    2013-01-01

    Background Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Methods Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003–2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban log_e(daily 1-hour maximum NO2). Results When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background log_e(NO2) and 38% for rural log_e(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural log_e(NO2) but more marked for urban log_e(NO2
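
    The attenuation reported above is the classic consequence of (approximately) classical measurement error. A stripped-down simulation, using linear rather than Poisson regression for brevity, shows the coefficient shrinking toward zero as independent error is added to the exposure (all numbers are illustrative):

```python
import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

random.seed(1)
beta, n = 2.0, 20000
x = [random.gauss(0.0, 1.0) for _ in range(n)]        # true exposure
y = [beta * xi + random.gauss(0.0, 0.5) for xi in x]  # outcome

slopes = {}
for sd_err in (0.0, 1.0):
    # exposure observed with classical (additive, independent) error
    x_obs = [xi + random.gauss(0.0, sd_err) for xi in x]
    slopes[sd_err] = ols_slope(x_obs, y)
# expected attenuation factor: var(x) / (var(x) + sd_err**2) -> 0.5 here
```

    With unit-variance error on a unit-variance exposure the slope halves, mirroring how a single distant monitor attenuates the health-effect estimate.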

  15. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    SciTech Connect

    Kertzscher, Gustavo Andersen, Claus E.; Tanderup, Kari

    2014-05-15

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied to two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was

  16. Representation of layer-counted proxy records as probability densities on error-free time axes

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2016-04-01

    Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the
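
    The accumulation of layer-counting errors can be sketched with a simple independent-miscount model (the per-layer probabilities are illustrative): each counted layer contributes a small miscount variance, so the age uncertainty grows roughly as the square root of the number of layers.

```python
import math

def age_sigma(n_layers, p_miss=0.01, p_double=0.01):
    """Standard deviation of the accumulated age error after counting
    n_layers quasi-annual layers, assuming independent per-layer
    probabilities of missing a layer or counting one twice."""
    var_per_layer = p_miss * (1 - p_miss) + p_double * (1 - p_double)
    return math.sqrt(n_layers * var_per_layer)
```

    Under this toy model the dating uncertainty at 40,000 counted layers is over six times that at 1,000 layers, which is why aligning old events across layer-counted records is delicate.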

  17. Standardization of Gene Expression Quantification by Absolute Real-Time qRT-PCR System Using a Single Standard for Marker and Reference Genes.

    PubMed

    Zhou, Yi-Hong; Raj, Vinay R; Siegel, Eric; Yu, Liping

    2010-08-16

    In the last decade, genome-wide gene expression data has been collected from a large number of cancer specimens. In many studies utilizing either microarray-based or knowledge-based gene expression profiling, both the validation of candidate genes and the identification and inclusion of biomarkers in prognosis-modeling has employed real-time quantitative PCR on reverse transcribed mRNA (qRT-PCR) because of its inherent sensitivity and quantitative nature. In qRT-PCR data analysis, an internal reference gene is used to normalize the variation in input sample quantity. The relative quantification method used in current real-time qRT-PCR analysis fails to ensure data comparability pivotal in identification of prognostic biomarkers. By employing an absolute qRT-PCR system that uses a single standard for marker and reference genes (SSMR) to achieve absolute quantification, we showed that the normalized gene expression data is comparable and independent of variations in the quantities of sample as well as the standard used for generating standard curves. We compared two sets of normalized gene expression data with same histological diagnosis of brain tumor from two labs using relative and absolute real-time qRT-PCR. Base-10 logarithms of the gene expression ratio relative to ACTB were evaluated for statistical equivalence between tumors processed by two different labs. The results showed an approximate comparability for normalized gene expression quantified using a SSMR-based qRT-PCR. Incomparable results were seen for the gene expression data using relative real-time qRT-PCR, due to inequality in molar concentration of two standards for marker and reference genes. Overall results show that SSMR-based real-time qRT-PCR ensures comparability of gene expression data much needed in establishment of prognostic/predictive models for cancer patients-a process that requires large sample sizes by combining independent sets of data.
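
    Absolute quantification from a qPCR standard curve can be sketched as follows: fit Ct against log10(copies) for serial dilutions of the standard, then invert the fit for unknowns. Using one shared standard for marker and reference genes (the SSMR idea) makes the two copy numbers, and hence their ratio, directly comparable. All values below are illustrative:

```python
def fit_standard_curve(log10_copies, cts):
    """Least-squares line Ct = a*log10(copies) + b; a is about -3.32
    for a perfectly efficient PCR (10-fold dilution costs 3.32 cycles)."""
    n = len(cts)
    mx, my = sum(log10_copies) / n, sum(cts) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(log10_copies, cts))
         / sum((x - mx) ** 2 for x in log10_copies))
    return a, my - a * mx

def copies_from_ct(ct, a, b):
    """Invert the standard curve to get an absolute copy number."""
    return 10.0 ** ((ct - b) / a)

# ideal serial dilution of a single shared standard
dils = [2.0, 3.0, 4.0, 5.0, 6.0]                  # log10 copies
cts = [38.0 - 3.32 * d for d in dils]
a, b = fit_standard_curve(dils, cts)
marker = copies_from_ct(38.0 - 3.32 * 4.5, a, b)  # unknown marker gene
ref = copies_from_ct(38.0 - 3.32 * 5.0, a, b)     # reference gene (e.g. ACTB)
ratio = marker / ref
```

    Because both genes are read off the same curve, any error in the standard's assumed concentration cancels in the ratio, which is the comparability property the abstract emphasizes.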

  18. Real-Time Determination of Absolute Frequency in Continuous-Wave Terahertz Radiation with a Photocarrier Terahertz Frequency Comb Induced by an Unstabilized Femtosecond Laser

    NASA Astrophysics Data System (ADS)

    Minamikawa, Takeo; Hayashi, Kenta; Mizuguchi, Tatsuya; Hsieh, Yi-Da; Abdelsalam, Dahi Ghareab; Mizutani, Yasuhiro; Yamamoto, Hirotsugu; Iwata, Tetsuo; Yasui, Takeshi

    2016-05-01

    A practical method for the absolute frequency measurement of continuous-wave terahertz (CW-THz) radiation uses a photocarrier terahertz frequency comb (PC-THz comb) because of its ability to realize real-time, precise measurement without the need for cryogenic cooling. However, the requirement for precise stabilization of the repetition frequency (f_rep) and/or use of dual femtosecond lasers hinders its practical use. In this article, based on the fact that an equal interval between PC-THz comb modes is always maintained regardless of the fluctuation in f_rep, the PC-THz comb induced by an unstabilized laser was used to determine the absolute frequency f_THz of CW-THz radiation. Using an f_rep-free-running PC-THz comb, the f_THz of the frequency-fixed or frequency-fluctuated active frequency multiplier chain CW-THz source was determined at a measurement rate of 10 Hz with a relative accuracy of 8.2 × 10^-13 and a relative precision of 8.8 × 10^-12 with respect to a rubidium frequency standard. Furthermore, f_THz was correctly determined even when fluctuating over a range of 20 GHz. The proposed method enables the use of any commercial femtosecond laser for the absolute frequency measurement of CW-THz radiation.
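
    The underlying arithmetic of comb-based absolute frequency measurement: the CW-THz frequency equals an integer multiple of the comb spacing f_rep plus the measured beat note, and the mode number is fixed by any coarse frequency estimate. The sign ambiguity of the beat and the fluctuating-f_rep bookkeeping of the paper are omitted in this sketch, and the numbers are illustrative:

```python
def comb_absolute_frequency(f_rep, f_beat, f_coarse):
    """Determine the comb mode number m from a coarse estimate, then
    return the absolute frequency f = m*f_rep + f_beat."""
    m = round((f_coarse - f_beat) / f_rep)
    return m * f_rep + f_beat

f_rep = 100e6                       # 100 MHz comb mode spacing
true_f = 300_012_300_000.0          # ~0.3 THz CW source
f_beat = true_f % f_rep             # 12.3 MHz beat note
f_meas = comb_absolute_frequency(f_rep, f_beat, true_f + 3e6)
```

    A coarse estimate only needs to be good to within half the mode spacing (50 MHz here) for the mode number, and hence the absolute frequency, to come out exactly right.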

  19. Post-event human decision errors: operator action tree/time reliability correlation

    SciTech Connect

    Hall, R E; Fragola, J; Wreathall, J

    1982-11-01

    This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations.
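
    A time reliability correlation maps the time available for a decision to a non-response probability; one common functional form is a lognormal response-time distribution. The parameters below are illustrative, not values from the report:

```python
import math

def p_nonresponse(t_available, median=60.0, sigma=0.8):
    """Probability that the crew has NOT yet made the required decision
    t_available seconds into the event, using a lognormal model of the
    time needed to respond."""
    z = (math.log(t_available) - math.log(median)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

    At the median response time the failure probability is 0.5 by construction, and it falls steeply as more time becomes available, which is the dominating-factor assumption of the operator action tree approach.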

  20. A new global time-variable gravity mascon solution: Signal and error analysis

    NASA Astrophysics Data System (ADS)

    Loomis, B.; Luthcke, S. B.; Sabaka, T. J.

    2014-12-01

    The latest time-variable global gravity mascon solution product from the NASA Goddard Space Flight Center is described and analyzed. This most recent solution is estimated directly from the reduction of the GRACE L1B RL2 data with an optimized set of arc parameters and the full noise covariance. The mascons are estimated monthly with 1-arc-degree equal-area sampling where anisotropic spatial constraints are applied to maximize the recovery of signal while minimizing noise and signal leakage across the geographic constraint region boundaries. Analysis of the solution signals and errors is presented at global and regional scales, and comparisons to the GRACE project solutions and independent models are presented. Time series of cryospheric and hydrologic regions are analyzed with the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm, which adaptively sifts the signal into intrinsic frequency-ordered modes. Lastly, the impact of different solution components is discussed.

  1. A wearable device for real-time motion error detection and vibrotactile instructional cuing.

    PubMed

    Lee, Beom-Chan; Chen, Shu; Sienko, Kathleen H

    2011-08-01

    We have developed a mobile instrument for motion instruction and correction (MIMIC) that enables an expert (i.e., physical therapist) to map his/her movements to a trainee (i.e., patient) in a hands-free fashion. MIMIC comprises an expert module (EM) and a trainee module (TM). Both the EM and TM are composed of six-degree-of-freedom inertial measurement units, microcontrollers, and batteries. The TM also has an array of actuators that provide the user with vibrotactile instructional cues. The expert wears the EM, and his/her relevant body position is computed by an algorithm based on an extended Kalman filter that provides asymptotic state estimation. The captured expert body motion information is transmitted wirelessly to the trainee, and based on the computed difference between the expert and trainee motion, directional instructions are displayed via vibrotactile stimulation to the skin. The trainee is instructed to move in the direction of the vibration sensation until the vibration is eliminated. Two proof-of-concept studies involving young, healthy subjects were conducted using a simplified version of the MIMIC system (pre-specified target trajectories representing ideal expert movements and only two actuators) during anterior-posterior trunk movements. The first study was designed to investigate the effects of changing the expert-trainee error thresholds (0.5°, 1.0°, and 1.5°) and varying the nature of the control signal (proportional, proportional plus derivative). Expert-subject cross-correlation values were maximized (0.99) and average position errors (0.33°) and time delays (0.2 s) were minimized when the controller used a 0.5° error threshold and proportional plus derivative feedback control signal. The second study used the best performing activation threshold and control signal determined from the first study to investigate subject performance when the motion task complexity and speed were varied. Subject performance decreased as motion
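
    The proportional-plus-derivative cueing logic with an activation threshold can be sketched as follows (the gains, threshold, and signal names are illustrative, not the MIMIC implementation):

```python
def cue_direction(error_deg, d_error_dps, threshold=0.5, kp=1.0, kd=0.05):
    """Vibrotactile cue from the expert-trainee trunk-tilt error:
    0 means no vibration (inside the dead zone), +1/-1 indicate which
    side should vibrate, i.e. which way the trainee must move."""
    u = kp * error_deg + kd * d_error_dps   # PD control signal
    if abs(u) < threshold:
        return 0                            # error small enough: no cue
    return 1 if u > 0 else -1
```

    The derivative term lets the cue fire early when the error is growing quickly, which is consistent with the study's finding that PD feedback outperformed proportional feedback.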

  2. A Research on Errors in Two-way Satellite Time and Frequency Transfer

    NASA Astrophysics Data System (ADS)

    Wu, W. J.

    2013-07-01

    The two-way satellite time and frequency transfer (TWSTFT) is one of the most accurate means for remote clock comparison, with an uncertainty in time of less than 1 ns and a relative uncertainty in frequency of about 10^{-14} d^{-1}. The transmission paths of signals between two stations are almost symmetrical in the TWSTFT. In principle, most path delays cancel out, which guarantees the high accuracy of TWSTFT. With the development of TWSTFT and the increase in the frequency of observations, it has been shown that the diurnal variation of systematic errors is about 1-3 ns in the TWSTFT. This problem has become a hot topic of research around the world. By using the data of the Transfer Satellite Orbit Determination Net (TSODN) and international TWSTFT links, the systematic errors are studied in detail as follows: (1) The atmospheric effect. This includes ionospheric and tropospheric effects. The tropospheric effect is very small, and it can be ignored. The ionospheric error can be corrected by using the IGS ionosphere product. The variations of the ionospheric effect are about 0-0.05 ns and 0-0.7 ns at KU band and C band, respectively, and have diurnal variation characteristics. (2) The equipment time delay. The equipment delay is closely related to temperature, showing a linear relation at normal temperature. Its outdoor part exhibits diurnal variation with the environment temperature. The various effects related to the modem are studied, and some resolutions are proposed. (3) The satellite transponder effect. This effect is studied by using the data of international TWSTFT links. Analysis shows that different satellite transponders can greatly increase the amplitude of the diurnal variation in one TWSTFT link. This is the major reason for the diurnal variation in the TWSTFT. The function fitting method is used to basically solve this problem. (4) The satellite motion effect. The geostationary
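
    The path-symmetry cancellation at the heart of TWSTFT reduces to a half-difference of the two stations' time-interval counter readings; the sketch below ignores equipment delays (whose temperature sensitivity the abstract discusses) and the Sagnac correction, and the numbers are illustrative:

```python
def twstft_offset(tic_a, tic_b, sagnac_corr=0.0):
    """Clock difference A - B from the two stations' time-interval
    counter readings. Each station measures its own 1 PPS against the
    signal received from the other; with symmetric paths the up- and
    down-link delays cancel in the half-difference."""
    return 0.5 * (tic_a - tic_b) + sagnac_corr

# simulate: true offset 37.5 ns, identical 0.252 s path delay both ways
delta, path = 37.5e-9, 0.252
tic_a = delta + path     # reading at A of the signal from B
tic_b = -delta + path    # reading at B of the signal from A
offset = twstft_offset(tic_a, tic_b)
```

    The quarter-second path delay drops out entirely; only asymmetries (atmosphere, equipment, transponder, satellite motion) survive, which is exactly why the abstract's four error sources matter.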

  3. Error in estimation of rate and time inferred from the early amniote fossil record and avian molecular clocks.

    PubMed

    van Tuinen, Marcel; Hadly, Elizabeth A

    2004-08-01

    The best reconstructions of the history of life will use both molecular time estimates and fossil data. Errors in molecular rate estimation typically are unaccounted for, and no attempts have been made to quantify this uncertainty comprehensively. Here, focus is primarily on fossil calibration error because this error is least well understood and nearly universally disregarded. Our quantification of errors in the synapsid-diapsid calibration illustrates that although some error can derive from geological dating of sedimentary rocks, the absence of good stem fossils makes phylogenetic error the most critical. We therefore propose the use of calibration ages that are based on the first undisputed synapsid and diapsid. This approach yields minimum age estimates and standard errors of 306.1 ± 8.5 MYR for the divergence leading to birds and mammals. Because this upper bound overlaps with the recent use of 310 MYR, we do not support the notion that several metazoan divergence times are significantly overestimated because of serious miscalibration (sensu Lee 1999). However, the propagation of relevant errors reduces the statistical significance of the pre-K-T boundary diversification of many bird lineages despite retaining similar point time estimates. Our results demand renewed investigation into suitable loci and fossil calibrations for constructing evolutionary timescales.

  4. An examination of exposure measurement error from air pollutant spatial variability in time-series studies.

    PubMed

    Sarnat, Stefanie E; Klein, Mitchel; Sarnat, Jeremy A; Flanders, W Dana; Waller, Lance A; Mulholland, James A; Russell, Armistead G; Tolbert, Paige E

    2010-03-01

    Relatively few studies have evaluated the effects of heterogeneous spatiotemporal pollutant distributions on health risk estimates in time-series analyses that use data from a central monitor to assign exposures. We present a method for examining the effects of exposure measurement error relating to spatiotemporal variability in ambient air pollutant concentrations on air pollution health risk estimates in a daily time-series analysis of emergency department visits in Atlanta, Georgia. We used Poisson generalized linear models to estimate associations between current-day pollutant concentrations and circulatory emergency department visits for the 1998-2004 time period. Data from monitoring sites located in different geographical regions of the study area and at different distances from several urban geographical subpopulations served as alternative measures of exposure. We observed associations for spatially heterogeneous pollutants (CO and NO(2)) using data from several different urban monitoring sites. These associations were not observed when using data from the most rural site, located 38 miles from the city center. In contrast, associations for spatially homogeneous pollutants (O(3) and PM(2.5)) were similar, regardless of the monitoring site location. We found that monitoring site location and the distance of a monitoring site to a population of interest did not meaningfully affect estimated associations for any pollutant when using data from urban sites located within 20 miles from the population center under study. However, for CO and NO(2), these factors were important when using data from rural sites located ≥30 miles from the population center, most likely owing to exposure measurement error. Overall, our findings lend support to the use of pollutant data from urban central sites to assess population exposures within geographically dispersed study populations in Atlanta and similar cities. PMID:19277071

  5. Verdict: Time-Dependent Density Functional Theory "Not Guilty" of Large Errors for Cyanines.

    PubMed

    Jacquemin, Denis; Zhao, Yan; Valero, Rosendo; Adamo, Carlo; Ciofini, Ilaria; Truhlar, Donald G

    2012-04-10

    We assess the accuracy of eight Minnesota density functionals (M05 through M08-SO) and two others (PBE and PBE0) for the prediction of electronic excitation energies of a family of four cyanine dyes. We find that time-dependent density functional theory (TDDFT) with the five most recent of these functionals (from M06-HF through M08-SO) is able to predict excitation energies for cyanine dyes within 0.10-0.36 eV accuracy with respect to the most accurate available Quantum Monte Carlo calculations, providing a comparable accuracy to the latest generation of CASPT2 calculations, which have errors of 0.16-0.34 eV. Therefore previous conclusions that TDDFT cannot treat cyanine dyes reasonably accurately must be revised.

  6. GPS receivers timing data processing using neural networks: optimal estimation and errors modeling.

    PubMed

    Mosavi, M R

    2007-10-01

    The Global Positioning System (GPS) is a network of satellites, whose original purpose was to provide accurate navigation, guidance, and time transfer to military users. The past decade has also seen rapid concurrent growth in civilian GPS applications, including farming, mining, surveying, marine, and outdoor recreation. One of the most significant of these civilian applications is commercial aviation. A stand-alone civilian user enjoys an accuracy of 100 meters and 300 nanoseconds before, and 25 meters and 200 nanoseconds after, Selective Availability (SA) was turned off. In some applications, high accuracy is required. In this paper, five Neural Networks (NNs) are proposed for acceptable noise reduction of GPS receivers' timing data. The paper uses actual collected data to evaluate the performance of the methods. An experimental test setup is designed and implemented for this purpose. The experimental results obtained from a Coarse Acquisition (C/A)-code single-frequency GPS receiver strongly support the potential of the methods to give highly accurate timing. The quality of the obtained results is very good: the GPS timing RMS error reduces to less than 120 and 40 nanoseconds, with and without SA. PMID:18098370

  8. Wind induced errors on solid precipitation measurements: an evaluation using time-dependent turbulence simulations

    NASA Astrophysics Data System (ADS)

    Colli, Matteo; Lanza, Luca Giovanni; Rasmussen, Roy; Mireille Thériault, Julie

    2014-05-01

    Among the different environmental sources of error for ground-based solid precipitation measurements, wind is mainly responsible for a large reduction of the catching performance. This is due to the aerodynamic response of the gauge, which disturbs the originally undisturbed airflow and deforms the snowflake trajectories. Composite gauge/wind-shield measuring configurations improve the collection efficiency (CE) at low wind speeds (Uw), but the performance achievable under severe airflow velocities and the role of turbulence still have to be explained. This work aims to assess the wind-induced errors of a Geonor T200B vibrating-wire gauge equipped with a single Alter shield. This is a common measuring system for solid precipitation and constitutes the R3 reference system in the ongoing WMO Solid Precipitation InterComparison Experiment (SPICE). The analysis is carried out by adopting advanced Computational Fluid Dynamics (CFD) tools for the numerical simulation of the turbulent airflow in the proximity of the catching section of the gauge. The airflow patterns were computed by running both time-dependent (Large Eddy Simulation) and time-independent (Reynolds-Averaged Navier-Stokes) simulations on the Yellowstone high-performance computing system of the National Center for Atmospheric Research. The evaluation of CE under different Uw conditions was obtained by running a Lagrangian model for the calculation of the snowflake trajectories, building on the simulated airflow patterns. Particular attention has been paid to the sensitivity of the trajectories to different snow particle sizes and water contents (corresponding to dry and wet snow). The results will be illustrated in comparative form between the different methodologies adopted and the existing in-field CE evaluations based on double-shield reference gauges.
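    The trajectory-based evaluation of CE can be illustrated with a toy Lagrangian model: particles fall at their terminal velocity through an assumed wind-scaled updraft above the gauge orifice, and CE is the fraction still caught. All numbers below are illustrative assumptions, not values from the CFD study:

```python
import numpy as np

def collection_efficiency(Uw, w_t, R=0.1, k=0.4, n=400, dt=0.002, steps=8000):
    """Toy Lagrangian estimate of gauge collection efficiency (CE).

    Uw  : horizontal wind speed (m/s)
    w_t : particle terminal fall speed (m/s), ~1.0 dry snow, ~2.5 wet snow
    R   : orifice half-width (m); k scales the assumed wind-induced updraft.
    """
    # Aim particles so that in undisturbed air they would land across the orifice.
    x = np.linspace(-R, R, n) - Uw * (1.0 / w_t)
    z = np.full(n, 1.0)
    landed_x = np.full(n, np.nan)
    for _ in range(steps):
        active = z > 0.0
        if not active.any():
            break
        # Gaussian updraft bump above the orifice, scaled with wind speed:
        # a crude stand-in for the flow deformation caused by the gauge body.
        w_air = k * Uw * np.exp(-(x / R) ** 2) * (z < 0.5)
        x = x + np.where(active, Uw * dt, 0.0)
        z = z + np.where(active, (w_air - w_t) * dt, 0.0)
        just_landed = active & (z <= 0.0)
        landed_x[just_landed] = x[just_landed]
    return float(np.mean(np.abs(landed_x) <= R))

for w_t, label in ((1.0, "dry"), (2.5, "wet")):
    ces = [collection_efficiency(Uw, w_t) for Uw in (1.0, 3.0, 5.0)]
    print(label, [f"{ce:.2f}" for ce in ces])
```

    Even this crude model reproduces the qualitative behaviour in the abstract: CE drops as wind speed grows, and slower-falling dry snow is deflected more than wet snow.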

  9. Statistical modelling of forecast errors for multiple lead-times and a system of reservoirs

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjorn; Steinsland, Ingelin; Kolberg, Sjur

    2010-05-01

    Water resources management, e.g. the operation of reservoirs, is based among other inputs on forecasts of inflow provided by a precipitation-runoff model. The forecasted inflow is normally given as a single value, even though it is uncertain. There is a growing interest in accounting for uncertain information in decision support systems, e.g. when deciding how to operate a hydropower reservoir to maximize the gain. One challenge is to develop decision support systems that can use uncertain information. The contribution from the hydrological modeler is to derive a forecast distribution (from which uncertainty intervals can be computed) for the inflow predictions. In this study we constructed a statistical model for the forecast errors of daily inflow into a system of four hydropower reservoirs in Ulla-Førre in Western Norway. A distributed hydrological model was applied to generate the inflow forecasts using weather forecasts provided by ECM for lead times up to 10 days. The precipitation forecasts were corrected for systematic bias. A statistical model based on auto-regressive innovations for Box-Cox-transformed observations and forecasts was constructed for the forecast errors. The parameters of the statistical model were conditioned on climate and the internal snow state in the hydrological model. The model was evaluated according to the reliability of the forecast distribution, the width of the forecast distribution, and the efficiency of the median forecast for the 10 lead times and the four catchments. The results had to be interpreted carefully, since the inflow data have a large uncertainty.
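    The error-model idea can be sketched as follows: forecast errors are modelled as an AR(1) process in Box-Cox-transformed space and used to produce a forecast interval. The transform parameter, AR coefficient, and synthetic data below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 0.3  # assumed Box-Cox parameter (a real application would estimate it)

def boxcox(x):
    return (x ** lam - 1.0) / lam

def inv_boxcox(y):
    return (lam * y + 1.0) ** (1.0 / lam)

# Synthetic daily inflows (m3/s) and forecasts whose errors are AR(1) in
# transformed space, mimicking the persistence of hydrological forecast errors.
n = 1500
obs = 50 + 30 * np.abs(np.sin(np.arange(n) / 40.0)) + rng.gamma(2.0, 3.0, n)
phi_true, sigma_true = 0.7, 0.15
e = np.zeros(n)
for t in range(1, n):
    e[t] = phi_true * e[t - 1] + rng.normal(0.0, sigma_true)
fc = inv_boxcox(boxcox(obs) - e)

# Fit the AR(1) error model from (forecast, observation) pairs.
err = boxcox(obs) - boxcox(fc)
phi_hat = float(err[:-1] @ err[1:] / (err[:-1] @ err[:-1]))
sigma_hat = float((err[1:] - phi_hat * err[:-1]).std())

# 90% forecast interval for the next day, combining the newest forecast with
# the previous day's known error.
z90 = 1.645
mu = boxcox(fc[-1]) + phi_hat * err[-2]
lo, hi = inv_boxcox(mu - z90 * sigma_hat), inv_boxcox(mu + z90 * sigma_hat)
print(f"phi {phi_hat:.2f}, sigma {sigma_hat:.2f}, 90% interval ({lo:.1f}, {hi:.1f})")
```

    The Box-Cox transform keeps the error model roughly Gaussian on strictly positive inflows, and the AR term narrows the interval when recent errors are known; the paper additionally conditions the parameters on climate and snow state.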

  10. Audibility of dispersion error in room acoustic finite-difference time-domain simulation as a function of simulation distance.

    PubMed

    Saarelma, Jukka; Botts, Jonathan; Hamilton, Brian; Savioja, Lauri

    2016-04-01

    Finite-difference time-domain (FDTD) simulation has been a popular area of research in room acoustics due to its capability to simulate wave phenomena in a wide bandwidth directly in the time domain. A downside of the method is that it introduces a direction- and frequency-dependent error to the simulated sound field due to the non-linear dispersion relation of the discrete system. In this study, the perceptual threshold of the dispersion error is measured in three-dimensional FDTD schemes as a function of simulation distance. Dispersion error is evaluated for three different explicit, non-staggered FDTD schemes using the numerical wavenumber in the direction of the worst-case error of each scheme. It is found that the thresholds for the different schemes do not vary significantly when the phase velocity error level is fixed. The thresholds are found to vary significantly between the different sound samples. The measured threshold for the audibility of dispersion error at the probability level of 82% correct discrimination for three-alternative forced choice is found to be 9.1 m of propagation in a free field, which corresponds to a maximum group delay error of 1.8 ms at 20 kHz with the chosen phase velocity error level of 2%. PMID:27106330
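    The quoted numbers can be reproduced from the dispersion relation of the standard rectilinear (SRL) scheme along its worst-case (axial) direction. A sketch, where the 256 kHz sample rate is an assumed value chosen to give roughly the 2% phase velocity error at 20 kHz:

```python
import math

c = 343.0                      # speed of sound (m/s)
fs = 256_000.0                 # assumed sample rate (~2% phase error at 20 kHz)
courant = 1.0 / math.sqrt(3)   # SRL scheme at its 3-D stability limit
dt = 1.0 / fs
dx = c * dt / courant

def numerical_wavenumber(f):
    """Axial (worst-case) numerical wavenumber of the SRL scheme, from
    sin(w*dt/2) = courant * sin(k*dx/2)."""
    s = math.sin(math.pi * f * dt) / courant
    return 2.0 / dx * math.asin(s)

f = 20_000.0
w = 2 * math.pi * f
k = numerical_wavenumber(f)
v_phase = w / k
# Group velocity from differentiating the dispersion relation.
v_group = c * math.cos(k * dx / 2) / math.cos(w * dt / 2)

dist = 9.1  # propagation distance at the measured audibility threshold
phase_err = 1 - v_phase / c
group_delay_err = dist / v_group - dist / c
print(f"phase velocity error at 20 kHz: {100 * phase_err:.1f}%")
print(f"group delay error after {dist} m: {1000 * group_delay_err:.2f} ms")
```

    With these assumptions the group delay error after 9.1 m comes out close to the 1.8 ms figure in the abstract, since the group velocity error is several times larger than the phase velocity error near the dispersive limit.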

  12. A wearable device for real-time motion error detection and vibrotactile instructional cuing.

    PubMed

    Lee, Beom-Chan; Chen, Shu; Sienko, Kathleen H

    2011-08-01

    We have developed a mobile instrument for motion instruction and correction (MIMIC) that enables an expert (i.e., physical therapist) to map his/her movements to a trainee (i.e., patient) in a hands-free fashion. MIMIC comprises an expert module (EM) and a trainee module (TM). Both the EM and TM are composed of six-degree-of-freedom inertial measurement units, microcontrollers, and batteries. The TM also has an array of actuators that provide the user with vibrotactile instructional cues. The expert wears the EM, and his/her relevant body position is computed by an algorithm based on an extended Kalman filter that provides asymptotic state estimation. The captured expert body motion information is transmitted wirelessly to the trainee, and based on the computed difference between the expert and trainee motion, directional instructions are displayed via vibrotactile stimulation to the skin. The trainee is instructed to move in the direction of the vibration sensation until the vibration is eliminated. Two proof-of-concept studies involving young, healthy subjects were conducted using a simplified version of the MIMIC system (pre-specified target trajectories representing ideal expert movements and only two actuators) during anterior-posterior trunk movements. The first study was designed to investigate the effects of changing the expert-trainee error thresholds (0.5°, 1.0°, and 1.5°) and varying the nature of the control signal (proportional, proportional plus derivative). Expert-subject cross-correlation values were maximized (0.99) and average position errors (0.33°) and time delays (0.2 s) were minimized when the controller used a 0.5° error threshold and a proportional plus derivative feedback control signal. The second study used the best-performing activation threshold and control signal determined from the first study to investigate subject performance when the motion task complexity and speed were varied. Subject performance decreased as motion
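    The cueing logic described above can be sketched as a simple thresholded PD rule. Only the 0.5° dead zone and the proportional-plus-derivative structure come from the study; the gains and the exact control law below are assumptions:

```python
def vibrotactile_command(error_deg, d_error_deg_s, threshold=0.5, kp=1.0, kd=0.2):
    """Thresholded PD cueing: inside the dead zone no cue is given; outside,
    the sign selects the direction cue and the magnitude scales vibration
    intensity (clipped to 1.0). kp and kd are assumed, illustrative gains."""
    if abs(error_deg) < threshold:
        return 0.0, None
    u = kp * error_deg + kd * d_error_deg_s
    intensity = min(abs(u), 1.0)
    direction = "forward" if u > 0 else "backward"
    return intensity, direction

print(vibrotactile_command(0.2, 0.0))   # inside the dead zone: no vibration
print(vibrotactile_command(2.0, 0.5))   # cue the trainee to move forward
```

    The derivative term lets the cue anticipate a growing error, which is consistent with the study's finding that proportional-plus-derivative feedback reduced time delays.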

  13. How the propagation of error through stochastic counters affects time discrimination and other psychophysical judgments.

    PubMed

    Killeen, P R; Taylor, T J

    2000-07-01

    The performance of fallible counters is investigated in the context of pacemaker-counter models of interval timing. Failure to reliably transmit signals from one stage of a counter to the next generates periodicity in mean and variance of counts registered, with means power functions of input and standard deviations approximately proportional to the means (Weber's law). The transition diagrams and matrices of the counter are self-similar: Their eigenvalues have a fractal form and closely approximate Julia sets. The distributions of counts registered and of hitting times approximate Weibull densities, which provide the foundation for a signal-detection model of discrimination. Different schemes for weighting the values of each stage may be established by conditioning. As higher order stages of a cascade come on-line the veridicality of lower order stages degrades, leading to scale-invariance in error. The capacity of a counter is more likely to be limited by fallible transmission between stages than by a paucity of stages. Probabilities of successful transmission between stages of a binary counter around 0.98 yield predictions consistent with performance in temporal discrimination and production and with channel capacities for identification of unidimensional stimuli.
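    A minimal Monte Carlo sketch of such a fallible binary counter; the per-carry transmission probability, stage count, and pulse count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fallible_count(n_pulses, p=0.98, n_stages=12):
    """Binary counter in which each carry between stages is transmitted with
    probability p; a lost carry silently drops part of the count."""
    bits = np.zeros(n_stages, dtype=int)
    for _ in range(n_pulses):
        carry = True                      # incoming pulse to stage 0
        for i in range(n_stages):
            if not carry:
                break
            if i > 0 and rng.random() > p:
                break                     # carry lost between stages i-1 and i
            bits[i] ^= 1
            carry = bits[i] == 0          # stage overflowed: propagate carry
    return int(bits @ (2 ** np.arange(n_stages)))

true_n = 200
counts = np.array([fallible_count(true_n) for _ in range(300)])
print(f"true count {true_n}: mean {counts.mean():.1f}, sd {counts.std():.1f}")
```

    Because a lost carry near the top of the cascade drops a large power of two, variability grows with the magnitude of the count, which is the intuition behind the Weber-law behaviour the paper derives.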

  14. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  15. Detecting and Correcting Errors in Rapid Aiming Movements: Effects of Movement Time, Distance, and Velocity

    ERIC Educational Resources Information Center

    Sherwood, David E.

    2010-01-01

    According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback results in an error signal that serves as a basis for a correction. The main question addressed in the current study was how…

  16. Sub-micron absolute distance measurements in sub-millisecond times with dual free-running femtosecond Er fiber-lasers.

    PubMed

    Liu, Tze-An; Newbury, Nathan R; Coddington, Ian

    2011-09-12

    We demonstrate a simplified dual-comb LIDAR setup for precision absolute ranging that can achieve a ranging precision of 2 μm in 140 μs acquisition time. With averaging, the precision drops below 1 μm at 0.8 ms and below 200 nm at 20 ms. The system can measure the distance to multiple targets with negligible dead zones and a ranging ambiguity of 1 meter. The system is much simpler than a previous coherent dual-comb LIDAR because the two combs are replaced by free-running, saturable-absorber-based femtosecond Er fiber lasers, rather than tightly phase-locked combs, with the entire time base provided by a single 10-digit frequency counter. Despite the simpler design, the system provides a factor of three improved performance over the previous coherent dual comb LIDAR system. PMID:21935219
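    The reported precision figures are consistent with white-noise averaging, where precision improves as the square root of the averaging time. A quick check under that assumption:

```python
import math

# Quoted single-shot performance: 2 um ranging precision in a 140 us acquisition.
sigma0, T0 = 2e-6, 140e-6

def precision(T):
    """Expected precision after averaging for time T, assuming white
    (uncorrelated) measurement noise so that precision scales as 1/sqrt(T)."""
    return sigma0 * math.sqrt(T0 / T)

for T in (140e-6, 0.8e-3, 20e-3):
    print(f"averaging {T * 1e3:6.2f} ms -> precision {precision(T) * 1e9:6.0f} nm")
```

    The 1/sqrt(T) model predicts roughly 0.84 μm at 0.8 ms and roughly 170 nm at 20 ms, matching the "below 1 μm" and "below 200 nm" figures in the abstract.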

  18. Comprehensive panel of real-time TaqMan polymerase chain reaction assays for detection and absolute quantification of filoviruses, arenaviruses, and New World hantaviruses.

    PubMed

    Trombley, Adrienne R; Wachter, Leslie; Garrison, Jeffrey; Buckley-Beason, Valerie A; Jahrling, Jordan; Hensley, Lisa E; Schoepp, Randal J; Norwood, David A; Goba, Augustine; Fair, Joseph N; Kulesh, David A

    2010-05-01

    Viral hemorrhagic fever is caused by a diverse group of single-stranded, negative-sense or positive-sense RNA viruses belonging to the families Filoviridae (Ebola and Marburg), Arenaviridae (Lassa, Junin, Machupo, Sabia, and Guanarito), and Bunyaviridae (hantavirus). Disease characteristics in these families mark each with the potential to be used as a biological threat agent. Because other diseases have similar clinical symptoms, specific laboratory diagnostic tests are necessary to provide a differential diagnosis during outbreaks and to institute acceptable quarantine procedures. We designed 48 TaqMan-based polymerase chain reaction (PCR) assays for specific and absolute quantitative detection of multiple hemorrhagic fever viruses. Forty-six assays were determined to be virus-specific, and two were designated as pan assays for Marburg virus. The limit of detection for the assays ranged from 10 to 0.001 plaque-forming units (PFU)/PCR. Although these real-time hemorrhagic fever virus assays are qualitative (presence of target), they are also quantitative: they measure a single DNA/RNA target sequence in an unknown sample and express the final result as an absolute value (e.g., viral load, PFU, or copies/mL) on the basis of the concentrations of standard samples, and so can be used in viral load, vaccine, and antiviral drug studies.
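    Absolute quantification with real-time PCR typically relies on a standard curve of Ct versus log concentration; a sketch with assumed dilution-series values (not data from the paper):

```python
import numpy as np

# Assumed standard-curve data: Ct values measured for 10-fold dilutions of a
# quantified standard (PFU per reaction). Numbers are illustrative only.
std_conc = np.array([1e4, 1e3, 1e2, 1e1, 1e0])
std_ct = np.array([18.1, 21.5, 24.9, 28.2, 31.6])

# Linear fit Ct = intercept + slope * log10(concentration); a slope near
# -3.32 Ct per decade corresponds to ~100% amplification efficiency.
slope, intercept = np.polyfit(np.log10(std_conc), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0

def quantify(ct):
    """Absolute quantification: map a sample's Ct back to a concentration."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope {slope:.2f} Ct/decade, efficiency {efficiency:.0%}")
print(f"sample at Ct 23.0 -> {quantify(23.0):.0f} PFU/reaction")
```

    This is how a qualitative presence/absence signal becomes an absolute value such as PFU or copies/mL: the unknown's Ct is interpolated on the curve built from quantified standards.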

  19. Satellite-station time synchronization information based real-time orbit error monitoring and correction of navigation satellite in Beidou System

    NASA Astrophysics Data System (ADS)

    He, Feng; Zhou, ShanShi; Hu, XiaoGong; Zhou, JianHua; Liu, Li; Guo, Rui; Li, XiaoJie; Wu, Shan

    2014-07-01

    Satellite-station two-way time comparison is a typical design in the BeiDou System (BDS) that differs significantly from other satellite navigation systems. As a two-way method, BDS time synchronization is hardly influenced by satellite orbit error, atmosphere delay, tracking station coordinate error, or measurement model error. Meanwhile, single-way time comparison can be realized through Multi-satellite Precise Orbit Determination (MPOD) with pseudo-range and carrier phase from monitor receivers. It has been shown with the 3GEO/2IGSO constellation that the radial orbit error is reflected in the difference between two-way and single-way time comparison, which may offer a substitute for orbit evaluation by SLR. In this article, the relation between orbit error and the difference of two-way and single-way time comparison is illustrated for the whole BDS constellation. Given the all-weather, real-time operation mode of two-way time comparison, the orbit error can be quantifiably monitored in real time by comparing two-way and single-way time synchronization. In addition, the orbit error can be predicted and corrected over a short time based on its periodic characteristic. Experiments with GEO and IGSO satellites show that the prediction accuracy of the signal in space can be obviously improved when the predicted orbit error is sent to users through the navigation message: the UERE, including terminal error, is reduced by 0.1 m to 0.4 m, and the average accuracy is improved by more than 27%. Although it remains hard to improve the accuracy of Precise Orbit Determination (POD) and orbit prediction because of the confined tracking network and the difficulties in dynamic model optimization, this paper proposes a practical method for orbit accuracy improvement based on the reflection of orbit error in two-way time comparison.
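    The core idea, that the radial orbit error appears in the difference between single-way and two-way comparisons, can be illustrated with simulated numbers (all values are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated truth over 200 epochs (units: ns of light time, invented values).
n = 200
clock_offset = 5.0 + 0.01 * np.arange(n)                   # satellite clock offset
radial_err = 2.0 * np.sin(2 * np.pi * np.arange(n) / 100)  # periodic orbit error

# Two-way comparison: uplink and downlink path delays cancel, so the orbit
# error drops out and only the clock offset (plus noise) remains.
two_way = clock_offset + rng.normal(0, 0.05, n)
# Single-way comparison (from MPOD pseudoranges): the radial orbit error
# maps directly into the measured offset.
single_way = clock_offset + radial_err + rng.normal(0, 0.05, n)

recovered = single_way - two_way          # reflects the radial orbit error
rms = float(np.sqrt(np.mean((recovered - radial_err) ** 2)))
print(f"RMS of recovered vs true radial error: {rms:.3f} ns")
```

    Because the difference is available continuously from the two-way links, the periodic error it exposes can be fitted and extrapolated for the short-term correction the paper describes.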

  20. Correcting incompatible DN values and geometric errors in nighttime lights time series images

    SciTech Connect

    Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric L.

    2014-09-19

    The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activity at large scales. However, incompatible digital number (DN) values and geometric errors severely limit the application of nighttime lights image data to multi-year quantitative research. In this study we extend and improve previous work on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly, finding that sum light (the summed DN value of pixels in a nighttime lights image) shows clear increasing trends where GDP growth rates are relatively large but neither increases nor decreases where GDP growth rates are relatively small. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, by analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced apparent nighttime lights development in 1992-1997 and 2001-2008, while the US suffered nighttime lights decay over large areas after 2001.
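    Inter-calibration of DN values is commonly done by regressing each satellite-year against a reference year with a second-order polynomial; a sketch with synthetic DN data (the model form follows common practice for DMSP-OLS, not necessarily this paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference-year DN values (DMSP-OLS DNs span 0-63) and the same scene seen by
# another satellite-year with a different, nonlinear sensor response + noise.
ref = rng.integers(0, 64, 3000).astype(float)
target = 0.6 * ref + 0.006 * ref**2 + rng.normal(0.0, 1.0, ref.size)

# Second-order polynomial inter-calibration against the reference year:
# DN_adjusted = c0 + c1*DN + c2*DN^2
c2, c1, c0 = np.polyfit(target, ref, 2)
adjusted = c0 + c1 * target + c2 * target**2

before = float(np.mean(np.abs(target - ref)))
after = float(np.mean(np.abs(adjusted - ref)))
print(f"mean |DN difference| vs reference: before {before:.2f}, after {after:.2f}")
```

    After this step, DN values from different satellite-years sit on a common radiometric scale, which is what makes multi-year sums of light comparable.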

  1. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  2. Absolute concentrations of highly vibrationally excited OH(υ = 9 + 8) in the mesopause region derived from the TIMED/SABER instrument

    NASA Astrophysics Data System (ADS)

    Mast, Jeffrey; Mlynczak, Martin G.; Hunt, Linda A.; Marshall, B. Thomas; Mertens, Christoper J.; Russell, James M.; Thompson, R. Earl; Gordley, Larry L.

    2013-02-01

    Abstract <span class="hlt">Absolute</span> concentrations (cm-3) of highly vibrationally excited hydroxyl (OH) are derived from measurements of the volume emission rate of the υ = 9 + 8 states of the OH radical made by the SABER instrument on the <span class="hlt">TIMED</span> satellite. SABER has exceptionally sensitive measurement precision that corresponds to an ability to detect changes in volume emission rate on the order of ~5 excited OH molecules per cm3. Peak zonal annual mean concentrations observed by SABER exceed 1000 cm-3 at night and 225 cm-3 during the day. Measurements since 2002 show an apparent altitude-dependent variation of the night OH(υ = 9 + 8) concentrations with the 11 year solar cycle, with concentrations decreasing below ~ 95 km from 2002 to 2008. These observations provide a global database for evaluating photochemical model computations of OH abundance, reaction kinetics, and rates and mechanisms responsible for maintaining vibrationally excited OH in the mesopause region.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1812195W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1812195W"><span id="translatedtitle">Continuous Gravity Monitoring in South America with Superconducting and <span class="hlt">Absolute</span> Gravimeters: More than 12 years <span class="hlt">time</span> series at station TIGO/Concepcion (Chile)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wziontek, Hartmut; Falk, Reinhard; Hase, Hayo; Armin, Böer; Andreas, Güntner; Rongjiang, Wang</p> <p>2016-04-01</p> <p>As part of the Transportable Integrated Geodetic Observatory (TIGO) of BKG, the superconducting gravimeter SG 038 was set up in December 2002 at station Concepcion / Chile to record temporal gravity variations with highest precision. 
Since May 2006 the <span class="hlt">time</span> series was supported by weekly observations with the <span class="hlt">absolute</span> gravimeter FG5-227, proving the large seasonal variations of up to 30 μGal and establishing a gravity reference station in South America. With the move of the whole observatory to the new location near to La Plata / Argentina the series was terminated. Results of almost continuously monitoring gravity variations for more than 12 years are presented. Seasonal variations are interpreted with respect of global and local water storage changes and the impact of the 8.8 Maule Earthquake in February 2010 is discussed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016ASPC..503..233C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016ASPC..503..233C"><span id="translatedtitle">SkyProbe: Real-<span class="hlt">Time</span> Precision Monitoring in the Optical of the <span class="hlt">Absolute</span> Atmospheric Absorption on the Telescope Science and Calibration Fields</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cuillandre, J.-C.; Magnier, E.; Sabin, D.; Mahoney, B.</p> <p>2016-05-01</p> <p>Mauna Kea is known for its pristine seeing conditions but sky transparency can be an issue for science operations since at least 25% of the observable (i.e. open dome) nights are not photometric, an effect mostly due to high-altitude cirrus. Since 2001, the original single channel SkyProbe mounted in parallel on the Canada-France-Hawaii Telescope (CFHT) has gathered one V-band exposure every minute during each observing night using a small CCD camera offering a very wide field of view (35 sq. deg.) 
encompassing the region pointed by the telescope for science operations, and exposures long enough (40 seconds) to capture at least 100 stars of Hipparcos' Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). The measurement of the true atmospheric absorption is achieved within 2%, a key advantage over all-sky direct thermal infrared imaging detection of clouds. The <span class="hlt">absolute</span> measurement of the true atmospheric absorption by clouds and particulates affecting the data being gathered by the telescope's main science instrument has proven crucial for decision making in the CFHT queued service observing (QSO) representing today all of the telescope <span class="hlt">time</span>. Also, science exposures taken in non-photometric conditions are automatically registered for a new observation at a later date at 1/10th of the original exposure <span class="hlt">time</span> in photometric conditions to ensure a proper final <span class="hlt">absolute</span> photometric calibration. Photometric standards are observed only when conditions are reported as being perfectly stable by SkyProbe. 
The more recent dual color system (simultaneous B & V bands) will offer a better characterization of the sky properties above Mauna Kea and should enable a better detection of the thinnest cirrus (absorption down to 0.01 mag., or 1%).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26132165','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26132165"><span id="translatedtitle">Using a Novel <span class="hlt">Absolute</span> Ontogenetic Age Determination Technique to Calculate the <span class="hlt">Timing</span> of Tooth Eruption in the Saber-Toothed Cat, Smilodon fatalis.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wysocki, M Aleksander; Feranec, Robert S; Tseng, Zhijie Jack; Bjornsson, Christopher S</p> <p>2015-01-01</p> <p>Despite the superb fossil record of the saber-toothed cat, Smilodon fatalis, ontogenetic age determination for this and other ancient species remains a challenge. The present study utilizes a new technique, a combination of data from stable oxygen isotope analyses and micro-computed tomography, to establish the eruption rate for the permanent upper canines in Smilodon fatalis. The results imply an eruption rate of 6.0 millimeters per month, which is similar to a previously published average enamel growth rate of the S. fatalis upper canines (5.8 millimeters per month). Utilizing the upper canine growth rate, the upper canine eruption rate, and a previously published tooth replacement sequence, this study calculates <span class="hlt">absolute</span> ontogenetic age ranges of tooth development and eruption in S. fatalis. The <span class="hlt">timing</span> of tooth eruption is compared between S. fatalis and several extant conical-toothed felids, such as the African lion (Panthera leo). Results suggest that the permanent dentition of S. 
fatalis, except for the upper canines, was fully erupted by 14 to 22 months, and that the upper canines finished erupting at about 34 to 41 months. Based on these developmental age calculations, S. fatalis individuals less than 4 to 7 months of age were not typically preserved at Rancho La Brea. On the whole, S. fatalis appears to have had delayed dental development compared to dental development in similar-sized extant felids. This technique for <span class="hlt">absolute</span> ontogenetic age determination can be replicated in other ancient species, including non-saber-toothed taxa, as long as the <span class="hlt">timing</span> of growth initiation and growth rate can be determined for a specific feature, such as a tooth, and that growth period overlaps with the development of the other features under investigation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4489498','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4489498"><span id="translatedtitle">Using a Novel <span class="hlt">Absolute</span> Ontogenetic Age Determination Technique to Calculate the <span class="hlt">Timing</span> of Tooth Eruption in the Saber-Toothed Cat, Smilodon fatalis</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wysocki, M. Aleksander; Feranec, Robert S.; Tseng, Zhijie Jack; Bjornsson, Christopher S.</p> <p>2015-01-01</p> <p>Despite the superb fossil record of the saber-toothed cat, Smilodon fatalis, ontogenetic age determination for this and other ancient species remains a challenge. The present study utilizes a new technique, a combination of data from stable oxygen isotope analyses and micro-computed tomography, to establish the eruption rate for the permanent upper canines in Smilodon fatalis. 
The results imply an eruption rate of 6.0 millimeters per month, which is similar to a previously published average enamel growth rate of the S. fatalis upper canines (5.8 millimeters per month). Utilizing the upper canine growth rate, the upper canine eruption rate, and a previously published tooth replacement sequence, this study calculates <span class="hlt">absolute</span> ontogenetic age ranges of tooth development and eruption in S. fatalis. The <span class="hlt">timing</span> of tooth eruption is compared between S. fatalis and several extant conical-toothed felids, such as the African lion (Panthera leo). Results suggest that the permanent dentition of S. fatalis, except for the upper canines, was fully erupted by 14 to 22 months, and that the upper canines finished erupting at about 34 to 41 months. Based on these developmental age calculations, S. fatalis individuals less than 4 to 7 months of age were not typically preserved at Rancho La Brea. On the whole, S. fatalis appears to have had delayed dental development compared to dental development in similar-sized extant felids. This technique for <span class="hlt">absolute</span> ontogenetic age determination can be replicated in other ancient species, including non-saber-toothed taxa, as long as the <span class="hlt">timing</span> of growth initiation and growth rate can be determined for a specific feature, such as a tooth, and that growth period overlaps with the development of the other features under investigation. 
PMID:26132165</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/22465724','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/22465724"><span id="translatedtitle">Motoneuron axon pathfinding <span class="hlt">errors</span> in zebrafish: Differential effects related to concentration and <span class="hlt">timing</span> of nicotine exposure</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.</p> <p>2015-04-01</p> <p>Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding <span class="hlt">errors</span> in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding <span class="hlt">errors</span> could occur independently of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding <span class="hlt">errors</span>, in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the 3 subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected.
We then identified morphologically distinct pathfinding <span class="hlt">errors</span> that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding <span class="hlt">errors</span> coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner. </p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://pubs.usgs.gov/of/1972/0235/report.pdf','USGSPUBS'); return false;" href="http://pubs.usgs.gov/of/1972/0235/report.pdf"><span id="translatedtitle">Analysis of potential <span class="hlt">errors</span> in real-<span class="hlt">time</span> streamflow data and methods of data verification by digital computer</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Lystrom, David J.</p> <p>1972-01-01</p> <p>Various methods of verifying real-<span class="hlt">time</span> streamflow data are outlined in part II. Relatively large <span class="hlt">errors</span> (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller <span class="hlt">errors</span> can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. 
The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-<span class="hlt">time</span> data users can choose a suitable level of verification.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/27039281','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/27039281"><span id="translatedtitle">Lunch-<span class="hlt">time</span> food choices in preschoolers: Relationships between <span class="hlt">absolute</span> and relative intakes of different food categories, and appetitive characteristics and weight.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Carnell, S; Pryor, K; Mais, L A; Warkentin, S; Benson, L; Cheng, R</p> <p>2016-08-01</p> <p>Children's appetitive characteristics measured by parent-report questionnaires are reliably associated with body weight, as well as behavioral tests of appetite, but relatively little is known about relationships with food choice. As part of a larger preloading study, we served 4-5year olds from primary school classes five school lunches at which they were presented with the same standardized multi-item meal. Parents completed Child Eating Behavior Questionnaire (CEBQ) sub-scales assessing satiety responsiveness (CEBQ-SR), food responsiveness (CEBQ-FR) and enjoyment of food (CEBQ-EF), and children were weighed and measured. Despite differing preload conditions, children showed remarkable consistency of intake patterns across all five meals with day-to-day intra-class correlations in <span class="hlt">absolute</span> and percentage intake of each food category ranging from 0.78 to 0.91. Higher CEBQ-SR was associated with lower mean intake of all food categories across all five meals, with the weakest association apparent for snack foods. 
Higher CEBQ-FR was associated with higher intake of white bread and fruits and vegetables, and higher CEBQ-EF was associated with greater intake of all categories, with the strongest association apparent for white bread. Analyses of intake of each food group as a percentage of total intake, treated here as an index of the child's choice to consume relatively more or relatively less of each different food category when composing their total lunch-<span class="hlt">time</span> meal, further suggested that children who were higher in CEBQ-SR ate relatively more snack foods and relatively less fruits and vegetables, while children with higher CEBQ-EF ate relatively less snack foods and relatively more white bread. Higher <span class="hlt">absolute</span> intakes of white bread and snack foods were associated with higher BMI z-score. CEBQ sub-scale associations with food intake variables were largely unchanged by controlling for daily metabolic needs. However, descriptive comparisons of lunch intakes with</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004AGUSM.H31A..24H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004AGUSM.H31A..24H"><span id="translatedtitle">A Two-Dimensional Space-<span class="hlt">Time</span> Satellite Rainfall <span class="hlt">Error</span> Model and its Application to Land Surface Simulations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hossain, F.; Anagnostou, E. N.; Wang, D.</p> <p>2004-05-01</p> <p>A comprehensive Satellite Rainfall <span class="hlt">Error</span> Model (SREM-2D) is developed that models the two-dimensional space-<span class="hlt">time</span> <span class="hlt">error</span> structure of satellite rain retrievals.
The <span class="hlt">error</span> structure is decomposed into the following components: (1) Sensor's detection structure for rain and no rain; (2) Sensor's spatial structure of detection for rain and no rain; (3) Sensor's spatial structure for rainfall retrieval, and (4) Sensor's temporal structure for the mean field retrieval <span class="hlt">error</span>. On the basis of Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), parameters of the <span class="hlt">error</span> structure for Passive Microwave (PM) and Infra-red (IR) sensors are derived over the Southern United States. A demonstration of the utility of SREM-2D is shown by coupling SREM-2D with the Community Land Model (CLM) over a 40000 km2 area in Oklahoma. SREM-2D is found to be a very elegant and valuable tool for formulating scientific questions related to the understanding of propagation of satellite rainfall <span class="hlt">error</span> in land surface simulations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://medlineplus.gov/ency/article/003649.htm','NIH-MEDLINEPLUS'); return false;" href="https://medlineplus.gov/ency/article/003649.htm"><span id="translatedtitle">Eosinophil count - <span class="hlt">absolute</span></span></a></p> <p><a target="_blank" href="http://medlineplus.gov/">MedlinePlus</a></p> <p></p> <p></p> <p>Eosinophils; <span class="hlt">Absolute</span> eosinophil count ... the white blood cell count to give the <span class="hlt">absolute</span> eosinophil count. ... than 500 cells per microliter (cells/mcL). Normal value ranges may vary slightly among different laboratories. 
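The calculation this snippet alludes to is simply the total white blood cell count scaled by the eosinophil percentage from the differential; a minimal Python sketch follows. The example values are illustrative, and the roughly 500 cells/mcL upper limit quoted above varies by laboratory.

```python
def absolute_eosinophil_count(wbc_per_mcl, eosinophil_percent):
    """Absolute eosinophil count (cells/mcL) from the total white blood
    cell count and the eosinophil percentage of the differential."""
    if not 0 <= eosinophil_percent <= 100:
        raise ValueError("eosinophil_percent must be between 0 and 100")
    return wbc_per_mcl * eosinophil_percent / 100.0

# Illustrative example: 7,000 WBC/mcL with 4% eosinophils.
print(absolute_eosinophil_count(7000, 4))  # -> 280.0 cells/mcL
```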
Talk ...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/937503','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/937503"><span id="translatedtitle"><span class="hlt">Errors</span> in determination of soil water content using <span class="hlt">time</span>-domain reflectometry caused by soil compaction around wave guides</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Ghezzehei, T.A.</p> <p>2008-05-29</p> <p>Application of <span class="hlt">time</span> domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may have impact on measurement <span class="hlt">errors</span>, to our knowledge, there has not been any quantification of this effect. In this paper, we introduce a method that estimates this <span class="hlt">error</span> by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density to evolution of soil water retention characteristics. Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the <span class="hlt">time</span> of installation. The relative <span class="hlt">error</span> in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. 
Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement <span class="hlt">errors</span> of using a standard three-prong TDR waveguide could be up to 10%. We also show that the <span class="hlt">error</span> scales linearly with the ratio of rod radius to the interradius spacing.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008WRR....44.8451G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008WRR....44.8451G"><span id="translatedtitle"><span class="hlt">Errors</span> in determination of soil water content using <span class="hlt">time</span> domain reflectometry caused by soil compaction around waveguides</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ghezzehei, Teamrat A.</p> <p>2008-08-01</p> <p>Application of <span class="hlt">time</span> domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may have impact on measurement <span class="hlt">errors</span>, to our knowledge, there has not been any quantification of this effect. In this paper, we introduce a method that estimates this <span class="hlt">error</span> by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density to evolution of soil water retention characteristics. 
Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the <span class="hlt">time</span> of installation. The relative <span class="hlt">error</span> in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement <span class="hlt">errors</span> of using a standard three-prong TDR waveguide could be up to 10%. We also show that the <span class="hlt">error</span> scales linearly with the ratio of rod radius to the interradius spacing.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007AGUFM.H21L..06G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007AGUFM.H21L..06G"><span id="translatedtitle"><span class="hlt">Errors</span> in determination of soil water content using <span class="hlt">time</span>-domain reflectometry caused by soil compaction around wave guides</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ghezzehei, T. A.</p> <p>2007-12-01</p> <p>Application of <span class="hlt">time</span>-domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR wave guides may have impact on measurement <span class="hlt">errors</span>, to our knowledge, there has not been any quantification of this effect.
In this presentation, we introduce a combined mechanical-hydrological method that estimates the measurement <span class="hlt">error</span>. Our analysis indicates that the soil compaction pattern depends on the mechanical properties of the soil at the <span class="hlt">time</span> of installation. The relative <span class="hlt">error</span> in water content measurement depends on the compaction pattern as well as the water content and water retention characteristics of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature show that the measurement <span class="hlt">errors</span> of using a standard three-prong TDR wave guide could be up to 10 percent. We also show that the <span class="hlt">error</span> scales linearly with the ratio of rod radius to the inter-radius spacing.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MeScT..27j5103W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MeScT..27j5103W"><span id="translatedtitle">Real-<span class="hlt">time</span> modeling and online filtering of the stochastic <span class="hlt">error</span> in a fiber optic current transducer</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Lihui; Wei, Guangjin; Zhu, Yunan; Liu, Jian; Tian, Zhengqi</p> <p>2016-10-01</p> <p>The stochastic <span class="hlt">error</span> characteristics of a fiber optic current transducer (FOCT) influence the relay protection, electric-energy metering, and other devices in the spacer layer. Real-<span class="hlt">time</span> modeling and online filtering of the FOCT’s stochastic <span class="hlt">error</span> tends to be an effective method for improving the measurement accuracy of the FOCT. This paper first pretreats and statistically inspects the FOCT data.
Then, the model order is set by the AIC principle to establish an ARMA(2,1) model, and the model's applicability is tested. Finally, a Kalman filter is adopted to reduce the noise in the FOCT data. The results of the experiment and the simulation demonstrate that there is a notable decrease in the stochastic <span class="hlt">error</span> after <span class="hlt">time</span> series modeling and Kalman filtering. In addition, the mean variance is decreased by two orders of magnitude. As quantified by the total variance method, all the stochastic <span class="hlt">error</span> coefficients decrease: the BI by 41.4%, the RRW by 67.5%, and the RR by 53.4%. Consequently, the method effectively reduces the stochastic <span class="hlt">error</span> and improves the measurement accuracy of the FOCT.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2011AGUFMSA41A1834M&link_type=ABSTRACT','NASAADS'); return false;" href="http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2011AGUFMSA41A1834M&link_type=ABSTRACT"><span id="translatedtitle"><span class="hlt">Absolute</span> Populations of Highly Vibrationally Excited OH(υ=8 + υ=9) in the Night Mesopause Region Derived from the <span class="hlt">TIMED</span>/SABER Instrument from 2002 to 2010</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mast, J. C.; Mlynczak, M. G.; Marshall, B. T.; Thompson, R. E.; Mertens, C. J.; Hunt, L. A.; Russell, J. M.; Gordley, L. L.</p> <p>2011-12-01</p> <p>We present global distributions of v = 9 + v = 8 nighttime vibrationally excited hydroxyl concentrations as measured by the SABER instrument on board the <span class="hlt">TIMED</span> spacecraft. These states are formed directly by the reaction of atomic hydrogen and ozone in the terrestrial mesopause region.
SABER measures the limb radiance from the Δv = 2 transitions in a channel centered near 2.0 μm, specifically the sum of the 9 -> 7 and 8 -> 6 transitions. The limb radiances are inverted to yield the volume emission rates from the sum of the v = 8 and 9 states of the hydroxyl molecule. The Einstein coefficients for spontaneous emission for these two transitions are essentially identical. Thus, dividing the derived volume emission rate by the Einstein coefficient yields the <span class="hlt">absolute</span> populations of these states (molecules per cubic cm). Nine full years of data are presented in this paper. Over this <span class="hlt">time</span>, the globally averaged OH(v = 8 + v = 9) populations have varied relative to the nine-year mean by only a few percent. We conclude that despite substantial solar variability over this <span class="hlt">time</span> period, the apparently small variation of the highly vibrationally excited hydroxyl populations implies that atomic hydrogen, atomic oxygen, temperature, and density adjust in such a way as to keep the product of the atomic hydrogen concentration, the ozone concentration, and the rate coefficient for their reaction essentially constant.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=FREUND&pg=3&id=EJ836431','ERIC'); return false;" href="http://eric.ed.gov/?q=FREUND&pg=3&id=EJ836431"><span id="translatedtitle">Continued Driving and <span class="hlt">Time</span> to Transition to Nondriver Status through <span class="hlt">Error</span>-Specific Driving Restrictions</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Freund, Barbara; Petrakos, Davithoula</p> <p>2008-01-01</p> <p>We developed driving restrictions that are linked to specific driving <span class="hlt">errors</span>, allowing cognitively impaired individuals to continue to
independently meet mobility needs while minimizing risk to themselves and others. The purpose of this project was to evaluate the efficacy and duration expectancy of these restrictions in promoting safe continued…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1990Natur.344..734S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1990Natur.344..734S"><span id="translatedtitle">Nonlinear forecasting as a way of distinguishing chaos from measurement <span class="hlt">error</span> in <span class="hlt">time</span> series</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sugihara, George; May, Robert M.</p> <p>1990-04-01</p> <p>An approach is presented for making short-term predictions about the trajectories of chaotic dynamical systems. The method is applied to data on measles, chickenpox, and marine phytoplankton populations, to show how apparent noise associated with deterministic chaos can be distinguished from sampling <span class="hlt">error</span> and other sources of externally induced environmental noise.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4541960','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4541960"><span id="translatedtitle">Real-<span class="hlt">Time</span> PPP Based on the Coupling Estimation of Clock Bias and Orbit <span class="hlt">Error</span> with Broadcast Ephemeris</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan</p> <p>2015-01-01</p> <p>Satellite orbit <span class="hlt">error</span> and clock bias are the keys to precise point positioning (PPP). 
The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-<span class="hlt">time</span> performance. However, real-<span class="hlt">time</span> positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-<span class="hlt">time</span> PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit <span class="hlt">error</span>. The projection of orbit <span class="hlt">error</span> onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit <span class="hlt">error</span> can be absorbed by the clock bias and the effects of residual orbit <span class="hlt">error</span> on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit <span class="hlt">error</span> is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-<span class="hlt">time</span> satellite clock bias coupled with orbit <span class="hlt">error</span>. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-<span class="hlt">time</span> PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-<span class="hlt">time</span> products. 
The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26205276','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26205276"><span id="translatedtitle">Real-<span class="hlt">Time</span> PPP Based on the Coupling Estimation of Clock Bias and Orbit <span class="hlt">Error</span> with Broadcast Ephemeris.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan</p> <p>2015-07-22</p> <p>Satellite orbit <span class="hlt">error</span> and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-<span class="hlt">time</span> performance. However, real-<span class="hlt">time</span> positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-<span class="hlt">time</span> PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit <span class="hlt">error</span>. The projection of orbit <span class="hlt">error</span> onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. 
Therefore, in satellite clock estimation, part of the orbit <span class="hlt">error</span> can be absorbed by the clock bias and the effects of residual orbit <span class="hlt">error</span> on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit <span class="hlt">error</span> is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-<span class="hlt">time</span> satellite clock bias coupled with orbit <span class="hlt">error</span>. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-<span class="hlt">time</span> PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-<span class="hlt">time</span> products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. 
The new approach can change the</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_9");'>9</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li class="active"><span>11</span></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_11 --> <div id="page_12" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li class="active"><span>12</span></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="221"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://pubs.er.usgs.gov/publication/70117138','USGSPUBS'); return false;" href="http://pubs.er.usgs.gov/publication/70117138"><span id="translatedtitle">Accuracy of travel <span class="hlt">time</span> distribution (TTD) models as affected by TTD complexity, observation <span class="hlt">errors</span>, and model and tracer selection</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. 
Jeffrey; Landon, Matthew K.</p> <p>2014-01-01</p> <p>Analytical models of the travel <span class="hlt">time</span> distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation <span class="hlt">errors</span>, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction <span class="hlt">errors</span> were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the watertable to a well, and three other established analytical models. The relative influence of the <span class="hlt">error</span> sources (TTD complexity, observation <span class="hlt">error</span>, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected <span class="hlt">errors</span> of the estimated TTDs. However, prediction <span class="hlt">errors</span> for NO3− and median age depended more on tracer concentration <span class="hlt">errors</span>. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. 
Studies using TTD models should focus on the factors that most strongly affect the desired predictions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5061461','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5061461"><span id="translatedtitle">One-Class Classification-Based Real-<span class="hlt">Time</span> Activity <span class="hlt">Error</span> Detection in Smart Homes</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Das, Barnan; Cook, Diane J.; Krishnan, Narayanan C.; Schmitter-Edgecombe, Maureen</p> <p>2016-01-01</p> <p>Caring for individuals with dementia is frequently associated with extreme physical and emotional stress, which often leads to depression. Smart home technology and advances in machine learning techniques can provide innovative solutions to reduce caregiver burden. One key service that caregivers provide is prompting individuals with memory limitations to initiate and complete daily activities. We hypothesize that sensor technologies combined with machine learning techniques can automate the process of providing reminder-based interventions. The first step towards automated interventions is to detect when an individual faces difficulty with activities. We propose machine learning approaches based on one-class classification that learn normal activity patterns. When we apply these classifiers to activity patterns that were not seen before, the classifiers are able to detect activity <span class="hlt">errors</span>, which represent potential prompt situations. 
We validate our approaches on smart home sensor data obtained from older adult participants, some of whom faced difficulties performing routine activities and thus committed <span class="hlt">errors</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AdWR...49...46H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AdWR...49...46H"><span id="translatedtitle">Non-iterative adaptive <span class="hlt">time</span>-stepping scheme with temporal truncation <span class="hlt">error</span> control for simulating variable-density flow</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hirthe, Eugenia M.; Graf, Thomas</p> <p>2012-12-01</p> <p>The automatic non-iterative second-order <span class="hlt">time</span>-stepping scheme based on the temporal truncation <span class="hlt">error</span> proposed by Kavetski et al. [Kavetski D, Binning P, Sloan SW. Non-iterative <span class="hlt">time</span>-stepping schemes with adaptive truncation <span class="hlt">error</span> control for the solution of Richards equation. Water Resour Res 2002;38(10):1211, http://dx.doi.org/10.1029/2001WR000720.] is implemented into the code of the HydroGeoSphere model. This <span class="hlt">time</span>-stepping scheme is applied for the first <span class="hlt">time</span> to the low-Rayleigh-number thermal Elder problem of free convection in porous media [van Reeuwijk M, Mathias SA, Simmons CT, Ward JD. Insights from a pseudospectral approach to the Elder problem. Water Resour Res 2009;45:W04416, http://dx.doi.org/10.1029/2008WR007421.], and to the solutal [Shikaze SG, Sudicky EA, Schwartz FW. Density-dependent solute transport in discretely-fractured geological media: is prediction possible? J Contam Hydrol 1998;34:273-91] problem of free convection in fractured-porous media. 
Numerical simulations demonstrate that the proposed scheme efficiently limits the temporal truncation <span class="hlt">error</span> to a user-defined tolerance by controlling the <span class="hlt">time</span>-step size. The non-iterative second-order <span class="hlt">time</span>-stepping scheme can be applied to (i) thermal and solutal variable-density flow problems, (ii) linear and non-linear density functions, and (iii) problems including porous and fractured-porous media.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013cpgt.book....3S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013cpgt.book....3S"><span id="translatedtitle"><span class="hlt">Error</span> Analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Scherer, Philipp O. J.</p> <p></p> <p>Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding <span class="hlt">errors</span>. This kind of numerical <span class="hlt">error</span> can be avoided in principle by using arbitrary precision arithmetics or symbolic algebra programs. But this is unpractical in many cases due to the increase in computing <span class="hlt">time</span> and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger <span class="hlt">errors</span> since series expansions have to be truncated and iterations accumulate the <span class="hlt">errors</span> of the individual steps. In addition, the precision of input data from an experiment is limited. 
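Two of the effects described here are easy to reproduce in IEEE-754 double precision: representation error in decimal fractions, and catastrophic cancellation when nearly equal quantities are subtracted (a generic illustration, not an example from the chapter):

```python
import math

# representation error: 0.1 and 0.2 are not exact binary fractions
exact = (0.1 + 0.2 == 0.3)                  # False on IEEE-754 doubles

# catastrophic cancellation: (1 - cos x) / x**2 -> 0.5 as x -> 0,
# but for tiny x, cos x rounds to exactly 1.0 and every digit is lost
x = 1e-8
naive = (1.0 - math.cos(x)) / x**2          # evaluates to 0.0
stable = 2.0 * (math.sin(x / 2.0) / x)**2   # algebraically identical form
```

The rewritten form avoids the subtraction of nearly equal numbers by using the identity 1 - cos x = 2 sin²(x/2), recovering the correct limit of 0.5.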
In this chapter we study the influence of numerical <span class="hlt">errors</span> on the uncertainties of the calculated results and the stability of simple algorithms.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26173857','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26173857"><span id="translatedtitle">A statistical model for measurement <span class="hlt">error</span> that incorporates variation over <span class="hlt">time</span> in the target measure, with application to nutritional epidemiology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor</p> <p>2015-11-30</p> <p>Most statistical methods that adjust analyses for measurement <span class="hlt">error</span> assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with <span class="hlt">time</span>. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the <span class="hlt">time</span>-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. 
We conclude that accounting for the <span class="hlt">time</span> element in measurement <span class="hlt">error</span> problems is potentially important.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3114292','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3114292"><span id="translatedtitle">The Dorsal Medial Frontal Cortex is Sensitive to <span class="hlt">Time</span> on Task, Not Response Conflict or <span class="hlt">Error</span> Likelihood</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Grinband, Jack; Savitsky, Judith; Wager, Tor D.; Teichert, Tobias; Ferrera, Vincent P.; Hirsch, Joy</p> <p>2011-01-01</p> <p>The dorsal medial frontal cortex (dMFC) is highly active during choice behavior. Though many models have been proposed to explain dMFC function, the conflict monitoring model is the most influential. It posits that dMFC is primarily involved in detecting interference between competing responses thus signaling the need for control. It accurately predicts increased neural activity and response <span class="hlt">time</span> (RT) for incompatible (high-interference) vs. compatible (low-interference) decisions. However, it has been shown that neural activity can increase with <span class="hlt">time</span> on task, even when no decisions are made. Thus, the greater dMFC activity on incompatible trials may stem from longer RTs rather than response conflict. This study shows that (1) the conflict monitoring model fails to predict the relationship between <span class="hlt">error</span> likelihood and RT, and (2) the dMFC activity is not sensitive to congruency, <span class="hlt">error</span> likelihood, or response conflict, but is monotonically related to <span class="hlt">time</span> on task. 
PMID:21168515</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19790007418','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19790007418"><span id="translatedtitle">Directional <span class="hlt">errors</span> of movements and their correction in a discrete tracking task. [pilot reaction <span class="hlt">time</span> and sensorimotor performance</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.</p> <p>1978-01-01</p> <p>Subjects can correct their own <span class="hlt">errors</span> of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction <span class="hlt">time</span>, choice reaction <span class="hlt">time</span>, and <span class="hlt">error</span> correction <span class="hlt">time</span> were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). 
Vibration and alcohol increase both the simple and choice reaction <span class="hlt">times</span> but not the <span class="hlt">error</span> correction <span class="hlt">time</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016ChJOL..34.1383Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016ChJOL..34.1383Y"><span id="translatedtitle"><span class="hlt">Absolute</span> geostrophic currents in global tropical oceans</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Lina; Yuan, Dongliang</p> <p>2016-11-01</p> <p>A set of <span class="hlt">absolute</span> geostrophic current (AGC) data for the period January 2004 to December 2012 are calculated using the P-vector method based on monthly gridded Argo profiles in the world tropical oceans. The AGCs agree well with altimeter geostrophic currents, Ocean Surface Current Analysis-Real <span class="hlt">time</span> currents, and moored current-meter measurements at 10-m depth, based on which the classical Sverdrup circulation theory is evaluated. Calculations have shown that <span class="hlt">errors</span> of wind stress calculation, AGC transport, and depth ranges of vertical integration cannot explain non-Sverdrup transport, which is mainly in the subtropical western ocean basins and equatorial currents near the Equator in each ocean basin (except the North Indian Ocean, where the circulation is dominated by monsoons). 
The identified non-Sverdrup transport is thereby robust and attributed to the joint effect of baroclinicity and relief of the bottom (JEBAR) and mesoscale eddy nonlinearity.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007JSDD....1..491K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007JSDD....1..491K"><span id="translatedtitle">The Study of the Tether Motion with <span class="hlt">Time</span>-Varying Length Using the <span class="hlt">Absolute</span> Nodal Coordinate Formulation with Multiple Nonlinear <span class="hlt">Time</span> Scales</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kawaguti, Keisuke; Terumichi, Yoshiaki; Takehara, Shoichiro; Kaczmarczyk, Stefan; Sogabe, Kiyoshi</p> <p></p> <p>In this study, the modeling and formulation for tether motion with <span class="hlt">time</span>-varying length, large rotation, large displacement and large deformation are proposed. A tether or cable is an important element in lift systems and construction machines for transportation, and is often used with a <span class="hlt">time</span>-varying length. In some cases, these systems are large and the tether has a long length, large deformation and large displacement. The dynamic behavior of a tether in extension and retraction using the proposed method is discussed in this paper. In the passage through resonance, significant tether motions with large rotation and large deformation result. In the analysis of this phenomenon, the transient fluctuations of the motion amplitudes are examined and compared with the corresponding steady state motions.
The accuracy and the cost of the calculations are also verified by comparison with the experimental results.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFMEP52A..05D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFMEP52A..05D"><span id="translatedtitle">Placing <span class="hlt">Absolute</span> <span class="hlt">Timing</span> on Basin Incision Adjacent to the Colorado Front Range: Results from Meteoric and in Situ 10BE Dating</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duehnforth, M.; Anderson, R. S.; Ward, D.</p> <p>2010-12-01</p> <p>A sequence of six levels of gravel-capped surfaces, mapped as Pliocene to Holocene in age, are cut into Cretaceous shale in the northwestern part of the Denver Basin immediately adjacent to the Colorado Front Range (CFR). The existing relative age constraints and terrace correlations suggest that the incision of the Denver Basin occurred at a steady and uniform rate of 0.1 mm yr-1 since the Pliocene. As <span class="hlt">absolute</span> ages in this landscape are rare, they have the potential to test the reliability of the existing chronology, and to illuminate the detailed history of incision. We explore the <span class="hlt">timing</span> of basin incision and the variability of geomorphic process rates through <span class="hlt">time</span> by dating the three highest surfaces at the northwestern edge of the Denver Basin using both in situ and meteoric 10Be concentrations. As the tectonic conditions have not changed since the Pliocene, much of the variability of generation and abandonment of alluvial surfaces likely reflects the influence of glacial-interglacial climate variations. 
We selected Gunbarrel Hill (mapped as pre-Rocky Flats (Pliocene)), Table Mountain (mapped as Rocky Flats (early Pleistocene)), and the Pioneer surface (mapped as Verdos (Pleistocene, ~640 ka)) as sample locations. We took two amalgamated clast samples on the Gunbarrel Hill surface, and dated depth profiles using meteoric and in situ 10Be on the Table Mountain and Pioneer surfaces. In addition, we measured the in situ 10Be concentrations of 6 boulder samples from the Table Mountain surface. We find that all three surfaces are significantly younger than expected and that in situ and meteoric age measurements largely agree with each other. The samples from the pre-Rocky Flats site (Gunbarrel Hill) show ages of 250 and 310 ka, ignoring post-depositional surface erosion. The ages of the Table Mountain and Pioneer sites fall within the 120 to 150 ka window. These <span class="hlt">absolute</span> ages overlap with the <span class="hlt">timing</span> of the penultimate glaciation during marine isotope stage (MIS) 6</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19900039520&hterms=percent+error&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dpercent%2Berror','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19900039520&hterms=percent+error&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dpercent%2Berror"><span id="translatedtitle">Sampling <span class="hlt">errors</span> for satellite-derived tropical rainfall - Monte Carlo study using a space-<span class="hlt">time</span> stochastic model</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.</p> <p>1990-01-01</p> <p>Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given 
area only intermittently. This sampling <span class="hlt">error</span> inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this <span class="hlt">error</span> is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional <span class="hlt">time</span>-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling <span class="hlt">errors</span> found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling <span class="hlt">error</span> is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006AGUFMSM13A0346P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006AGUFMSM13A0346P"><span id="translatedtitle">Single Particle-Photon Imaging Detector With 4-Dimensional Output: <span class="hlt">Absolute</span> <span class="hlt">Time</span>-of-hit, X-Y Position, and PHA: Applications in Space Science Instruments</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Paschalidis, N. P.; Mitchell, D. G.; Brandt, P. 
C.</p> <p>2006-12-01</p> <p>A detector that can simultaneously measure the <span class="hlt">time</span>, position, and pulse height analysis (PHA) of single particles/photons with high resolution and speed is a strong enabling technology for many space science instruments, such as energetic neutral atom imagers, low energy neutrals, energetic particle spectrometers, ion/electron plasma analyzers, UV spectrographs, mass spectrometers, laser range finding imagers, and X-ray imagers. This presentation describes one such 4-dimensional detector based on micro-channel plates (MCPs), delay line anodes, precise <span class="hlt">time</span>-of-flight electronics, and charge integration electronics for PHA. More specifically, the detector includes: a) an MCP in 2-stack or Z-stack configuration for the particle/photon detection; b) an optional thin foil or photo-cathode in front of the MCP to increase the detection efficiency for particles or photons, respectively; c) a novel 1D or 2D delay line anode adaptable to almost any geometry and physical size of the common instruments mentioned above; d) fast <span class="hlt">time</span> of flight (TOF) electronics for the <span class="hlt">absolute</span> <span class="hlt">time</span> of hit and the X-Y position determination; e) fast charge integration electronics for PHA of the total charge released by the MCP (under certain circumstances the PHA gives information about the particle mass, such as for protons, He and oxygen, cross-calibrated against UV light, which typically gives a single-electron distribution); f) FPGA electronics for digital data acquisition and handling; and g) standard MATLAB software for data analysis and visualization in a standalone application. The detector achieves a <span class="hlt">time</span>-of-hit accuracy <50ps and an X-Y position resolution <20um in a field of 2048 x 2048 pixels (2048 for 1D), with adjustable speeds from 10MHz at 256 x 256 pixels to 1MHz at 2048 x 2048 pixels. The total-charge analysis uses 10 bits.
The detector can be used in its full 4D configuration, such as in a TOF imaging particle analyzer (i.e., an ENA imager), or in a reduced configuration, such as in a UV spectrograph with X-Y position only. Typical</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhyD..258...47B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhyD..258...47B"><span id="translatedtitle">Real-<span class="hlt">time</span> prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and <span class="hlt">error</span> analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.</p> <p>2013-09-01</p> <p>The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the <span class="hlt">timing</span> of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting <span class="hlt">errors</span> in the wind velocity field, which compound <span class="hlt">errors</span> in LCS forecasting. In this study, we reveal the cumulative effects of <span class="hlt">errors</span> in (short-term) wind field forecasts on the finite-<span class="hlt">time</span> Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters.
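For concreteness, the FTLE referred to above is the exponential growth rate of the largest principal stretch of the flow-map Jacobian; a self-contained sketch on a steady analytic saddle flow (a toy velocity field, not the forecast winds used in the paper):

```python
import numpy as np

def velocity(p):
    # steady saddle flow: stretching along x, compression along y
    x, y = p
    return np.array([x, -y])

def flow_map(p0, T, n_steps=200):
    """Advect a particle with RK4 from t=0 to t=T."""
    p, dt = np.array(p0, float), T / n_steps
    for _ in range(n_steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

def ftle(p0, T, h=1e-4):
    """Finite-time Lyapunov exponent from the flow-map Jacobian."""
    J = np.empty((2, 2))
    for j, e in enumerate(np.eye(2)):
        J[:, j] = (flow_map(p0 + h * e, T) - flow_map(p0 - h * e, T)) / (2 * h)
    C = J.T @ J                        # Cauchy-Green strain tensor
    lam_max = np.linalg.eigvalsh(C)[-1]
    return np.log(lam_max) / (2.0 * abs(T))

sigma = ftle([0.3, 0.2], T=2.0)   # ~1.0, the stretch rate of this saddle
```

Ridges of this scalar field over a grid of initial conditions are the (repelling, for forward time) LCS candidates whose forecast sensitivity the study quantifies.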
Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know whether an LCS passed over a region and does not need to know the precise passage <span class="hlt">time</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009ams..book..145K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009ams..book..145K"><span id="translatedtitle"><span class="hlt">Absolute</span> High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-<span class="hlt">Time</span> Aerial Video Imagery for Geo-referenced Orthophoto Registration</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter</p> <p></p> <p>This paper describes an <span class="hlt">absolute</span> localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to couple an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve an <span class="hlt">absolute</span> localisation of the robotic team.
Besides the discussion of the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data combined with live video images from an onboard camera to register the local video images against a priori registered orthophotos. This yields a precise, driftless <span class="hlt">absolute</span> localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19720049545&hterms=formula&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dformula','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19720049545&hterms=formula&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dformula"><span id="translatedtitle">Simplified formula for mean cycle-slip <span class="hlt">time</span> of phase-locked loops with steady-state phase <span class="hlt">error</span>.</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Tausworthe, R. C.</p> <p>1972-01-01</p> <p>Previous work shows that the mean <span class="hlt">time</span> from lock to a slipped cycle of a phase-locked loop is given by a certain double integral. Accurate numerical evaluation of this formula for the second-order loop is extremely vexing because the difference between exponentially large quantities is involved. This article demonstrates a method in which a much-reduced-precision program can be used to obtain the mean first-cycle slip <span class="hlt">time</span> for a loop of arbitrary degree tracking at a specified SNR and steady-state phase <span class="hlt">error</span>.
It also presents a simple approximate formula that is asymptotically tight at higher loop SNR.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EJASP2012...94K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EJASP2012...94K"><span id="translatedtitle">Performance analysis for <span class="hlt">time</span>-frequency MUSIC algorithm in presence of both additive noise and array calibration <span class="hlt">errors</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim</p> <p>2012-12-01</p> <p>This article deals with the application of the Spatial <span class="hlt">Time</span>-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the method using the data covariance matrix when the received array signals are subject to calibration <span class="hlt">errors</span> in a non-stationary environment. A unified analytical expression for the Direction Of Arrival (DOA) estimation <span class="hlt">error</span> is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on the algorithm performance.
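For reference, the covariance-based MUSIC estimator being compared against can be sketched for a narrowband uniform linear array; this is a generic textbook version, without the calibration errors or time-frequency averaging analysed in the article:

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) complex snapshots; d: spacing/wavelength."""
    n, snaps = X.shape
    R = X @ X.conj().T / snaps               # sample covariance matrix
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, : n - n_sources]            # noise-subspace eigenvectors
    grid = np.linspace(-90.0, 90.0, 361)     # candidate DOAs in degrees
    k = np.arange(n)
    a = np.exp(2j * np.pi * d * np.outer(k, np.sin(np.deg2rad(grid))))
    denom = np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
    return grid, 1.0 / denom                 # peaks at the source DOAs

# one source at +20 deg on an 8-element, half-wavelength array (synthetic)
rng = np.random.default_rng(0)
n, snaps = 8, 200
a0 = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(20.0)))
s = rng.normal(size=snaps) + 1j * rng.normal(size=snaps)
noise = 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
X = np.outer(a0, s) + noise
grid, P = music_spectrum(X, n_sources=1)
doa_hat = grid[np.argmax(P)]
```

The STFD variant discussed above replaces the sample covariance R with averaged spatial time-frequency matrices, which is what changes the noise and perturbation sensitivity.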
It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and for the same SPR both methods give similar performance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19787345','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19787345"><span id="translatedtitle">Comparison of <span class="hlt">error</span>-amplification and haptic-guidance training techniques for learning of a <span class="hlt">timing</span>-based motor task by healthy individuals.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Milot, Marie-Hélène; Marchal-Crespo, Laura; Green, Christopher S; Cramer, Steven C; Reinkensmeyer, David J</p> <p>2010-03-01</p> <p>Performance <span class="hlt">errors</span> drive motor learning for many tasks. Some researchers have suggested that reducing performance <span class="hlt">errors</span> with haptic guidance can benefit learning by demonstrating correct movements, while others have suggested that artificially increasing <span class="hlt">errors</span> will force faster and more complete learning. This study compared the effect of these two techniques--haptic guidance and <span class="hlt">error</span> amplification--as healthy subjects learned to play a computerized pinball-like game. The game required learning to press a button using wrist movement at the correct <span class="hlt">time</span> to make a flipper hit a falling ball to a randomly positioned target. <span class="hlt">Errors</span> were decreased or increased using a robotic device that retarded or accelerated wrist movement, based on sensed movement initiation <span class="hlt">timing</span> <span class="hlt">errors</span>. 
After training with either <span class="hlt">error</span> amplification or haptic guidance, subjects significantly reduced their <span class="hlt">timing</span> <span class="hlt">errors</span> and generalized learning to untrained targets. However, for a subset of more skilled subjects, training with amplified <span class="hlt">errors</span> produced significantly greater learning than training with the reduced <span class="hlt">errors</span> associated with haptic guidance, while for a subset of less skilled subjects, training with haptic guidance seemed to benefit learning more. These results suggest that both techniques can enhance performance of a <span class="hlt">timing</span> task, but learning is optimized by training subjects with the technique appropriate to their baseline skill level.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25536847','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25536847"><span id="translatedtitle">Effects of compatible versus competing rhythmic grouping on <span class="hlt">errors</span> and <span class="hlt">timing</span> variability in speech.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Katsika, Argyro; Shattuck-Hufnagel; Mooshammer, Christine; Tiede, Mark; Goldstein, Louis</p> <p>2014-12-01</p> <p>In typical speech, words are grouped into prosodic constituents. This study investigates how such grouping interacts with segmental sequencing patterns in the production of repetitive word sequences. We experimentally manipulated grouping behavior using a rhythmic repetition task to elicit speech for perceptual and acoustic analysis to test the hypothesis that prosodic structure and patterns of segmental alternation can interact in the production planning process.
Talkers produced alternating sequences of two words (top cop) and non-alternating controls (top top and cop cop), organized into six-word sequences. These sequences were further organized into prosodic groupings of three two-word pairs or two three-word triples by means of visual cues and audible metronome clicks. Results for six speakers showed more speech <span class="hlt">errors</span> in triples, that is, when pairwise word alternation was mismatched with prosodic subgrouping in triples. This result suggests that the planning process for the segmental units of an utterance interacts with the planning process for the prosodic grouping of its words. It also highlights the importance of extending commonly used experimental speech elicitation methods to include more complex prosodic patterns, in order to evoke the kinds of interaction between prosodic structure and planning that occur in the production of lexical forms in continuous communicative speech.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=%22Survival+model%22&id=EJ737239','ERIC'); return false;" href="http://eric.ed.gov/?q=%22Survival+model%22&id=EJ737239"><span id="translatedtitle">A Note on Standard <span class="hlt">Errors</span> for Survival Curves in Discrete-<span class="hlt">Time</span> Survival Analysis</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Zwick, Rebecca; Sklar, Jeffrey C.</p> <p>2005-01-01</p> <p>Cox (1972) proposed a discrete-<span class="hlt">time</span> survival model that is somewhat analogous to the proportional hazards model for continuous <span class="hlt">time</span>. 
Efron (1988) showed that this model can be estimated using ordinary logistic regression software, and Singer and Willett (1993) provided a detailed illustration of a particularly flexible form of the model that…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=ray+AND+bradbury&id=EJ659027','ERIC'); return false;" href="http://eric.ed.gov/?q=ray+AND+bradbury&id=EJ659027"><span id="translatedtitle">Ray Bradbury's "The Kilimanjaro Device": The Need To Correct the <span class="hlt">Errors</span> of <span class="hlt">Time</span>.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Logsdon, Loren</p> <p>2002-01-01</p> <p>Asks students reading "The Kilimanjaro Device" to focus on how it illustrates a most central idea about the human condition: how to best live in <span class="hlt">time</span> so that our lives are happy, fulfilled and right. Presents background for "The Kilimanjaro Device," discusses Bradbury's rescue mission, and the principle of good <span class="hlt">timing</span>. 
Concludes by pointing out…</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li class="active"><span>12</span></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li class="active"><span>13</span></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/22102141','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/22102141"><span id="translatedtitle"><span class="hlt">Time</span>-resolved in vivo luminescence dosimetry for online <span class="hlt">error</span> detection in pulsed dose-rate brachytherapy</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari</p> <p>2009-11-15</p> <p>Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo <span 
class="hlt">time</span>-resolved (1 s <span class="hlt">time</span> resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with <sup>192</sup>Ir) were monitored. The treatments comprised 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery <span class="hlt">errors</span> (interchanged guide tubes or applicator movements from ±5 to ±15 mm) were simulated in software in order to assess the ability of the system to detect <span class="hlt">errors</span>. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. 
Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15263709','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15263709"><span id="translatedtitle">Precision goniometer equipped with a 22-bit <span class="hlt">absolute</span> rotary encoder.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xiaowei, Z; Ando, M; Jidong, W</p> <p>1998-05-01</p> <p>The calibration of a compact precision goniometer equipped with a 22-bit <span class="hlt">absolute</span> rotary encoder is presented. The goniometer is a modified Huber 410 goniometer: the diffraction angles can be coarsely generated by a stepping-motor-driven worm gear and precisely interpolated by a piezoactuator-driven tangent arm. The angular accuracy of the precision rotary stage was evaluated with an autocollimator. It was shown that the deviation from circularity of the rolling bearing utilized in the precision rotary stage restricts the angular positioning accuracy of the goniometer, and results in an angular accuracy ten <span class="hlt">times</span> larger than the angular resolution of 0.01 arcsec. The 22-bit encoder was calibrated by an incremental rotary encoder. 
It became evident that the accuracy of the <span class="hlt">absolute</span> encoder is approximately 18 bits due to systematic <span class="hlt">errors</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013SPIE.8893E..0MS','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013SPIE.8893E..0MS"><span id="translatedtitle">Assessing <span class="hlt">error</span> sources for Landsat <span class="hlt">time</span> series analysis for tropical test sites in Viet Nam and Ethiopia</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schultz, Michael; Verbesselt, Jan; Herold, Martin; Avitabile, Valerio</p> <p>2013-10-01</p> <p>Researchers who use remotely sensed data can spend half of their total effort analysing prior data. If this data preprocessing does not match the application, this <span class="hlt">time</span> spent on data analysis can increase considerably and can lead to inaccuracies. Despite the existence of a number of methods for pre-processing Landsat <span class="hlt">time</span> series, each method has shortcomings, particularly for mapping forest changes under varying illumination, data availability and atmospheric conditions. Based on the requirements of mapping forest changes as defined by the United Nations (UN) Reducing Emissions from Forest Degradation and Deforestation (REDD) program, the accurate reporting of the spatio-temporal properties of these changes is necessary. We compared the impact of three fundamentally different radiometric preprocessing techniques, Moderate Resolution Atmospheric TRANsmission (MODTRAN), Second Simulation of a Satellite Signal in the Solar Spectrum (6S), and simple Dark Object Subtraction (DOS), on mapping forest changes using Landsat <span class="hlt">time</span> series data. 
A modification of Breaks For Additive Season and Trend (BFAST) monitor was used to jointly map the spatial and temporal agreement of forest changes at test sites in Ethiopia and Viet Nam. The suitability of the pre-processing methods for the forest change drivers present at each site was assessed using recently captured Ground Truth and high resolution data (1000 points). A method for creating robust generic forest maps for use in the sampling design is presented. An assessment of <span class="hlt">error</span> sources identified haze as a major source of commission <span class="hlt">error</span> in <span class="hlt">time</span> series analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JHyd..519.2722L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JHyd..519.2722L"><span id="translatedtitle">An integrated <span class="hlt">error</span> parameter estimation and lag-aware data assimilation scheme for real-<span class="hlt">time</span> flood forecasting</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Yuan; Ryu, Dongryeol; Western, Andrew W.; Wang, Q. J.; Robertson, David E.; Crow, Wade T.</p> <p>2014-11-01</p> <p>For operational flood forecasting, discharge observations may be assimilated into a hydrologic model to improve forecasts. However, the performance of conventional filtering schemes can be degraded by ignoring the <span class="hlt">time</span> lag between soil moisture and discharge responses. This has led to ongoing development of more appropriate ways to implement sequential data assimilation. In this paper, an ensemble Kalman smoother (EnKS) with fixed <span class="hlt">time</span> window is implemented for the GR4H hydrologic model (modèle du Génie Rural à 4 paramètres Horaire) to update current and antecedent model states. 
Model and observation <span class="hlt">error</span> parameters are estimated through the maximum a posteriori method constrained by prior information drawn from flow gauging data. When evaluated in a hypothetical forecasting mode using observed rainfall, the EnKS is found to be more stable and to produce more accurate discharge forecasts than a standard ensemble Kalman filter (EnKF), reducing the mean of the ensemble root mean squared <span class="hlt">error</span> (MRMSE) by 13-17%. The latter tends to over-correct current model states and leads to spurious peaks and oscillations in discharge forecasts. When evaluated in a real-<span class="hlt">time</span> forecasting mode using rainfall forecasts from a numerical weather prediction model, the benefit of the EnKS is reduced as uncertainty in rainfall forecasts becomes dominant, especially at large forecast lead <span class="hlt">times</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3883681','PMC'); return false;" href="http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3883681"><span id="translatedtitle">Measuring Software <span class="hlt">Timing</span> <span class="hlt">Errors</span> in the Presentation of Visual Stimuli in Cognitive Neuroscience Experiments</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego; Matute, Helena</p> <p>2014-01-01</p> <p>Because of the features provided by an abundance of specialized experimental software packages, personal computers have become prominent and powerful tools in cognitive research. Most of these programs have mechanisms to control the precision and accuracy with which visual stimuli are presented as well as the response <span class="hlt">times</span>. 
However, external factors, often related to the technology used to display the visual information, can have a noticeable impact on the actual performance and may be easily overlooked by researchers. The aim of this study is to measure the precision and accuracy of the <span class="hlt">timing</span> mechanisms of some of the most popular software packages used in a typical laboratory scenario, in order to assess whether the presentation <span class="hlt">times</span> configured by researchers differ from the measured <span class="hlt">times</span> by more than the hardware limitations would predict. Despite the apparent precision and accuracy of the results, important issues related to <span class="hlt">timing</span> setups in the presentation of visual stimuli were found, and they should be taken into account by researchers in their experiments. PMID:24409318</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.8321E..06H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.8321E..06H"><span id="translatedtitle">Measurement <span class="hlt">error</span> analysis of taxi meter</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu</p> <p>2011-12-01</p> <p>The <span class="hlt">error</span> test of the taximeter covers two aspects: (1) a test of the taximeter's <span class="hlt">time</span> <span class="hlt">error</span>, and (2) a test of the machine's distance <span class="hlt">error</span> in use. The paper first describes the working principle of the meter and of the <span class="hlt">error</span> verification device. Based on JJG517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine <span class="hlt">error</span> and test <span class="hlt">error</span> of the taxi meter. 
Methods for detecting <span class="hlt">time</span> <span class="hlt">error</span> and distance <span class="hlt">error</span> are discussed as well. Standard uncertainty components (Class A) are evaluated under repeatability conditions, while standard uncertainty components (Class B) are evaluated from repeated measurements under differing conditions. Comparison and analysis of the results show that the meter complies with JJG517-2009, "Taximeter Verification Regulation", which improves accuracy and efficiency considerably. In practice, the meter not only compensates for limited accuracy but also helps ensure fair transactions between drivers and passengers, enhancing the value of the taxi as a mode of transportation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19760026758','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19760026758"><span id="translatedtitle">Secondary task for full flight simulation incorporating tasks that commonly cause pilot <span class="hlt">error</span>: <span class="hlt">Time</span> estimation</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rosch, E.</p> <p>1975-01-01</p> <p>The task of <span class="hlt">time</span> estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of <span class="hlt">time</span> estimates are associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. 
The relationships between the length and variability of <span class="hlt">time</span> estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/23586876','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/23586876"><span id="translatedtitle"><span class="hlt">Absolute</span> biological needs.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>McLeod, Stephen</p> <p>2014-07-01</p> <p><span class="hlt">Absolute</span> needs (as against instrumental needs) are independent of the ends, goals and purposes of personal agents. Against the view that the only needs are instrumental needs, David Wiggins and Garrett Thomson have defended <span class="hlt">absolute</span> needs on the grounds that the verb 'need' has instrumental and <span class="hlt">absolute</span> senses. While remaining neutral about it, this article does not adopt that approach. Instead, it suggests that there are <span class="hlt">absolute</span> biological needs. The <span class="hlt">absolute</span> nature of these needs is defended by appeal to: their objectivity (as against mind-dependence); the universality of the phenomenon of needing across the plant and animal kingdoms; the impossibility that biological needs depend wholly upon the exercise of the abilities characteristic of personal agency; the contention that the possession of biological needs is prior to the possession of the abilities characteristic of personal agency. 
Finally, three philosophical usages of 'normative' are distinguished. On two of these, to describe a phenomenon or claim as 'normative' is to describe it as value-dependent. A description of a phenomenon or claim as 'normative' in the third sense does not entail such value-dependency, though it leaves open the possibility that value depends upon the phenomenon or upon the truth of the claim. It is argued that while survival needs (or claims about them) may well be normative in this third sense, they are normative in neither of the first two. Thus, the idea of <span class="hlt">absolute</span> need is not inherently normative in either of the first two senses. PMID:23586876</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19980237140','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19980237140"><span id="translatedtitle">To Err is Normable: The Computation of Frequency-Domain <span class="hlt">Error</span> Bounds from <span class="hlt">Time</span>-Domain Data</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard</p> <p>1998-01-01</p> <p>This paper exploits the relationships among the <span class="hlt">time</span>-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. 
A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling <span class="hlt">errors</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=motion+AND+estimation&pg=2&id=EJ699352','ERIC'); return false;" href="http://eric.ed.gov/?q=motion+AND+estimation&pg=2&id=EJ699352"><span id="translatedtitle">Learning to Detect <span class="hlt">Error</span> in Movement <span class="hlt">Timing</span> Using Physical and Observational Practice</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Black, Charles B.; Wright, David L.; Magnuson, Curt E.; Brueckner, Sebastian</p> <p>2005-01-01</p> <p>Three experiments assessed the possibility that a physical practice participant's ability to render appropriate movement <span class="hlt">timing</span> estimates may be hindered relative to that of participants who merely observed. 
Results from these experiments revealed that observers and physical practice participants executed and estimated the overall durations of movement…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/26010673','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/26010673"><span id="translatedtitle">Different types of <span class="hlt">errors</span> in saccadic task are sensitive to either <span class="hlt">time</span> of day or chronic sleep restriction.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wachowicz, Barbara; Beldzik, Ewa; Domagalik, Aleksandra; Fafrowicz, Magdalena; Gawlowska, Magda; Janik, Justyna; Lewandowska, Koryna; Oginska, Halszka; Marek, Tadeusz</p> <p>2015-01-01</p> <p>Circadian rhythms and restricted sleep length affect cognitive functions and, consequently, the performance of day-to-day activities. To date, no more than a few studies have explored the consequences of these factors on oculomotor behaviour. We implemented a spatial cuing paradigm in an eye tracking experiment conducted at four <span class="hlt">times</span> of day, after one week of rested wakefulness and after one week of chronic partial sleep restriction. Our aim was to verify whether these conditions affect the frequency of various types of saccadic task <span class="hlt">errors</span>. Interestingly, we found that failures in response selection, i.e. premature responses and direction <span class="hlt">errors</span>, were prone to <span class="hlt">time</span> of day variations, whereas failures in response execution, i.e. omissions and commissions, were considerably affected by sleep deprivation. The former can be linked to the cue facilitation mechanism, while the latter to wake state instability and the diminished ability of top-down inhibition. 
Together, these results may be interpreted in terms of the distinctive sensitivity of the orienting and alerting systems to fatigue. Saccadic eye movements proved to be a novel and effective measure with which to study the susceptibility of attentional systems to <span class="hlt">time</span> factors; this approach is thus recommended for future research. PMID:26010673</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=20000064542&hterms=alexander+bruce&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dalexander%2Bbruce','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=20000064542&hterms=alexander+bruce&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dalexander%2Bbruce"><span id="translatedtitle">Issues in <span class="hlt">Absolute</span> Spectral Radiometric Calibration: Intercomparison of Eight Sources</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Goetz, Alexander F. H.; Kindel, Bruce; Pilewskie, Peter</p> <p>1998-01-01</p> <p>The application of atmospheric models to AVIRIS and other spectral imaging data to derive surface reflectance requires that the sensor output be calibrated to <span class="hlt">absolute</span> radiance. Uncertainties in <span class="hlt">absolute</span> calibration are to be expected, and claims of 92% accuracy have been published. Measurements of accurate surface albedos and cloud absorption to be used in radiative balance calculations depend critically on knowing the <span class="hlt">absolute</span> spectral-radiometric response of the sensor. The Earth Observing System project is implementing a rigorous program of <span class="hlt">absolute</span> radiometric calibration for all optical sensors. 
Since a number of imaging instruments that provide output in terms of <span class="hlt">absolute</span> radiance are calibrated at different sites, it is important to determine the <span class="hlt">errors</span> that can be expected among calibration sites. Another question exists about the <span class="hlt">errors</span> in the <span class="hlt">absolute</span> knowledge of the exoatmospheric spectral solar irradiance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19820003850','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19820003850"><span id="translatedtitle"><span class="hlt">Time</span>-variable Earth's albedo model characteristics and applications to satellite sampling <span class="hlt">errors</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bartman, F. L.</p> <p>1981-01-01</p> <p>Characteristics of the <span class="hlt">time</span> variable Earth albedo model are described. With the cloud cover multiplying factor adjusted to produce a global annual average albedo of 30.3, the global annual average cloud cover is 45.5 percent. Global annual average sunlit cloud cover is 48.5 percent; nighttime cloud cover is 42.7 percent. Month-to-month global average albedo is almost sinusoidal with maxima in June and December and minima in April and October. Month-to-month variation of sunlit cloud cover is similar, but not in all details. The diurnal variation of global albedo is greatest from November to March; the corresponding variation of sunlit cloud cover is greatest from May to October. Annual average zonal albedos and monthly average zonal albedos are in good agreement with satellite-measured values, with notable differences in the polar regions in some months and at 15 S. The albedo of some 10 deg by 10 deg. areas of the Earth versus zenith angle are described. 
Satellite albedo measurement sampling effects are described in local <span class="hlt">time</span> and in Greenwich mean <span class="hlt">time</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=61470','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=61470"><span id="translatedtitle">Comparing Response <span class="hlt">Time</span>, <span class="hlt">Errors</span>, and Satisfaction Between Text-based and Graphical User Interfaces During Nursing Order Tasks</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Staggers, Nancy; Kobus, David</p> <p>2000-01-01</p> <p>Despite the general adoption of graphical users interfaces (GUIs) in health care, few empirical data document the impact of this move on system users. This study compares two distinctly different user interfaces, a legacy text-based interface and a prototype graphical interface, for differences in nurses' response <span class="hlt">time</span> (RT), <span class="hlt">errors</span>, and satisfaction when the interfaces are used in the performance of computerized nursing order tasks. In a medical center on the East Coast of the United States, 98 randomly selected male and female nurses completed 40 tasks using each interface. Nurses completed four different types of order tasks (create, activate, modify, and discontinue). Using a repeated-measures and Latin square design, the study was counterbalanced for tasks, interface types, and blocks of trials. Overall, nurses had significantly faster response <span class="hlt">times</span> (P < 0.0001) and fewer <span class="hlt">errors</span> (P < 0.0001) using the prototype GUI than the text-based interface. The GUI was also rated significantly higher for satisfaction than the text system, and the GUI was faster to learn (P < 0.0001). 
Therefore, the results indicated that the use of a prototype GUI for nursing orders significantly enhances user performance and satisfaction. Consideration should be given to redesigning older user interfaces to create more modern ones by using human factors principles and input from user-centered focus groups. Future work should examine prospective nursing interfaces for highly complex interactions in computer-based patient records, detail the severity of <span class="hlt">errors</span> made online, and explore designs to optimize interactions in life-critical systems. PMID:10730600</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19880036076&hterms=Variable+stars&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3D%2528Variable%2Bstars%2529','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19880036076&hterms=Variable+stars&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3D%2528Variable%2Bstars%2529"><span id="translatedtitle">Statistical <span class="hlt">error</span> analysis in CCD <span class="hlt">time</span>-resolved photometry with applications to variable stars and quasars</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Howell, Steve B.; Warnock, Archibald, III; Mitchell, Kenneth J.</p> <p>1988-01-01</p> <p>Differential photometric <span class="hlt">time</span> series obtained from CCD frames are tested for intrinsic variability using a newly developed analysis of variance technique. In general, the objects used for differential photometry will not all be of equal magnitude, so the techniques derived here explicitly correct for differences in the measured variances due to photon statistics. Other random-noise terms are also considered. The technique tests for the presence of intrinsic variability without regard to its random or periodic nature. 
It is then applied to observations of the variable stars ZZ Ceti and US 943 and the active extragalactic objects OQ 530, US 211, US 844, LB 9743, and OJ 287.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/20496663','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/20496663"><span id="translatedtitle">[The <span class="hlt">error</span> analysis and experimental verification of laser radar spectrum detection and terahertz <span class="hlt">time</span> domain spectroscopy].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Wen-Tao; Li, Jing-Wen; Sun, Zhi-Hui</p> <p>2010-03-01</p> <p>Terahertz waves (THz, T-ray) lie between the far-infrared and microwave regions of the electromagnetic spectrum, with frequencies from 0.1 to 10 THz. Many chemical agent explosives show characteristic spectral features in the terahertz. Compared with conventional methods of detecting a variety of threats, such as weapons and chemical agents, THz radiation is low frequency and non-ionizing, and does not give rise to safety concerns. The present paper summarizes the latest progress in the application of terahertz <span class="hlt">time</span> domain spectroscopy (THz-TDS) to chemical agent explosives. A device for laser radar detection and real <span class="hlt">time</span> spectrum measurement was designed that measures the laser spectrum on the basis of Fourier optics and optical signal processing. A wedge interferometer was used as the beam splitter to remove the background light, detect the laser, and measure the spectrum. The results indicate that a 10 ns laser radar pulse can be detected; several factors affecting the experiments are also discussed. The combination of laser radar spectrum detection, THz-TDS, modern pattern recognition, and signal processing technology is the developing trend in remote detection of chemical agent explosives.
PMID:20496663</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15597557','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15597557"><span id="translatedtitle">Information systems and human <span class="hlt">error</span> in the lab.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bissell, Michael G</p> <p>2004-01-01</p> <p>Health system costs in clinical laboratories are incurred daily due to human <span class="hlt">error</span>. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human <span class="hlt">error</span>. But merely automating these processes is not enough. To the extent that the introduction of these systems leaves operators with less practice in dealing with unexpected events, or deskilled in problem-solving, new kinds of <span class="hlt">error</span> will likely appear. Clinical laboratories could potentially benefit by integrating findings on human <span class="hlt">error</span> from modern behavioral science into their operations. Fully understanding human <span class="hlt">error</span> requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular <span class="hlt">error</span> at a particular instant cannot be <span class="hlt">absolutely</span> prevented, human <span class="hlt">error</span> rates can be reduced. 
The following principles are key: an understanding of the process of learning in relation to <span class="hlt">error</span>; understanding the origin of <span class="hlt">errors</span> since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing <span class="hlt">errors</span>, at least for a <span class="hlt">time</span>; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of <span class="hlt">error</span>, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real <span class="hlt">time</span> that an <span class="hlt">error</span> has occurred.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9751E..16J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9751E..16J"><span id="translatedtitle">Analysis of position <span class="hlt">error</span> by <span class="hlt">time</span> constant in read-out resistive network for gamma-ray imaging detection system</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jeon, Su-Jin; Park, Chang-In; Son, Byung-Hee; Jung, Mi; Jang, Teak-Jin; Lee, Chun-Sik; Choi, Young-Wan</p> <p>2016-03-01</p> <p>Position-sensitive photomultiplier tubes (PSPMTs) in array are used as gamma ray position detector. Each PMT converts the light of wide spectrum range (100 nm ~ 2500 nm) to electrical signal with amplification. Because detection system size is determined by the number of output channels in the PSPMTs, resistive network has been used for reducing the number of output channels. The photo-generated current is distributed to the four output current pulses according to a ratio by resistance values of resistive network. 
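The charge-division readout described in this abstract can be illustrated with a small sketch (the corner-naming convention, the rectangular-pulse shape, and the single-RC response are simplifying assumptions of ours, not the paper's circuit model):

```python
import math

def anger_position(a, b, c, d):
    # Classic charge-division (Anger-logic) position estimate from the
    # four corner amplitudes A..D of a resistive network; the assumed
    # corner mapping yields coordinates in [-1, 1].
    s = a + b + c + d
    x = ((b + d) - (a + c)) / s
    y = ((a + b) - (c + d)) / s
    return x, y

def rc_peak_fraction(pulse_width, tau):
    # Peak of an RC low-pass response to a rectangular current pulse:
    # a pulse much shorter than tau never reaches full amplitude.
    return 1.0 - math.exp(-pulse_width / tau)
```

When the effective time constants of the four branches differ with input position, the peaks are attenuated unevenly, which is the position-error mechanism the paper analyzes.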
The detected positions are estimated by the peak value of the distributed current pulses. However, due to the parasitic capacitance of the PSPMTs in parallel with the resistors in the resistive network, the <span class="hlt">time</span> constants should be considered. When the duration of the current pulse is not long enough, the peak value of the distributed pulses is reduced and the detected position <span class="hlt">error</span> is increased. In this paper, we analyzed the detected position <span class="hlt">error</span> in the resistive network and the variation of the <span class="hlt">time</span> constant according to the input position of the PSPMTs.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_13 --> <div id="page_14" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="261"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1163731','SCIGOV-STC'); return false;" 
href="http://www.osti.gov/scitech/servlets/purl/1163731"><span id="translatedtitle">Large-Scale Uncertainty and <span class="hlt">Error</span> Analysis for <span class="hlt">Time</span>-dependent Fluid/Structure Interactions in Wind Turbine Applications</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Alonso, Juan J.; Iaccarino, Gianluca</p> <p>2013-08-25</p> <p>The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later <span class="hlt">time</span> (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. 
The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the <span class="hlt">time</span> period of execution of this project: 1. The rigorous determination of an <span class="hlt">error</span> budget comprising numerical <span class="hlt">errors</span> in physical space and statistical <span class="hlt">errors</span> in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1013812','DOE-PATENT-XML'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1013812"><span id="translatedtitle">Comparing range data across the slow-<span class="hlt">time</span> dimension to correct motion measurement <span class="hlt">errors</span> beyond the range resolution of a synthetic aperture radar</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas</p> <p>2010-08-17</p> <p>Motion measurement <span class="hlt">errors</span> that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the <span class="hlt">error</span>. Range profiles can be compared across the slow-<span class="hlt">time</span> dimension of the input data in order to estimate the <span class="hlt">error</span>. 
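The slow-time comparison of range profiles can be sketched as follows (a schematic of the general idea in the patent abstract, not the patented algorithm; the integer-lag correlation and the function names are illustrative assumptions):

```python
def profile_shift(p, q, max_lag):
    # Integer-lag cross-correlation between two range profiles; the
    # best-scoring lag estimates how far the scene shifted in range
    # between the two slow-time positions.
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(p[i] * q[i + lag]
                    for i in range(len(p))
                    if 0 <= i + lag < len(q))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def motion_error_track(profiles, max_lag=5):
    # Accumulate pairwise shifts across slow time into a range-error
    # history; a frequency/phase correction derived from this history
    # would then be applied before range and azimuth compression.
    track, total = [0], 0
    for p, q in zip(profiles, profiles[1:]):
        total += profile_shift(p, q, max_lag)
        track.append(total)
    return track
```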
Once the <span class="hlt">error</span> has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4361849','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4361849"><span id="translatedtitle"><span class="hlt">Error</span>-based Extraction of States and Energy Landscapes from Experimental Single-Molecule <span class="hlt">Time</span>-Series</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki</p> <p>2015-01-01</p> <p>Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM <span class="hlt">time</span>-series. Taking into account empirical <span class="hlt">error</span> and the finite sampling of the <span class="hlt">time</span>-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. 
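The soft, multiple-state assignment at the core of the method can be illustrated with Boltzmann-weighted memberships (a deliberately simplified stand-in for the paper's rate-distortion machinery; the quadratic distortion and the `beta` parameter are our assumptions):

```python
import math

def soft_assign(x, states, beta=2.0):
    # Rate-distortion-style soft assignment: the data point x belongs
    # to every candidate state with weight exp(-beta * distortion),
    # normalized to sum to 1 (beta acts as an inverse temperature).
    weights = [math.exp(-beta * (x - s) ** 2) for s in states]
    z = sum(weights)
    return [w / z for w in weights]
```

Points near a state boundary receive appreciable membership in several states at once, which is how measurement noise is absorbed instead of forcing a hard, possibly wrong, assignment.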
We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system. PMID:25779909</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014cos..rept....9M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014cos..rept....9M"><span id="translatedtitle">Updated <span class="hlt">Absolute</span> Flux Calibration of the COS FUV Modes</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Massa, D.; Ely, J.; Osten, R.; Penton, S.; Aloisi, A.; Bostroem, A.; Roman-Duval, J.; Proffitt, C.</p> <p>2014-03-01</p> <p>We present newly derived point source <span class="hlt">absolute</span> flux calibrations for the COS FUV modes at both the original and second lifetime positions. The analysis includes observations through the Primary Science Aperture (PSA) of the standard stars WD0308-565, GD71, WD1057+729 and WD0947+857 obtained as part of two calibration programs. Data were obtained for all of the gratings at all of the original CENWAVE settings at both the original and second lifetime positions and for the G130M CENWAVE = 1222 at the second lifetime position. Data were also obtained with the FUVB segment for the G130M CENWAVE = 1055 and 1096 settings at the second lifetime position. We also present the derivation of L-flats that were used in processing the data and show that the internal consistency of the primary standards is 1%. 
The accuracy of the <span class="hlt">absolute</span> flux calibrations over the UV are estimated to be 1-2% for the medium resolution gratings, and 2-3% over most of the wavelength range of the G140L grating, although the uncertainty can be as large as 5% or more at some G140L wavelengths. We note that these <span class="hlt">errors</span> are all relative to the optical flux near the V band and small additional <span class="hlt">errors</span> may be present due to inaccuracies in the V band calibration. In addition, these <span class="hlt">error</span> estimates are for the <span class="hlt">time</span> at which the flux calibration data were obtained; the accuracy of the flux calibration at other <span class="hlt">times</span> can be affected by <span class="hlt">errors</span> in the <span class="hlt">time</span> dependent sensitivity (TDS) correction.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27472816','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27472816"><span id="translatedtitle"><span class="hlt">Time</span>-to-contact estimation <span class="hlt">errors</span> among older drivers with useful field of view impairments.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rusch, Michelle L; Schall, Mark C; Lee, John D; Dawson, Jeffrey D; Edwards, Samantha V; Rizzo, Matthew</p> <p>2016-10-01</p> <p>Previous research indicates that useful field of view (UFOV) decline affects older driver performance. In particular, elderly drivers have difficulty estimating oncoming vehicle <span class="hlt">time</span>-to-contact (TTC). The objective of this study was to evaluate how UFOV impairments affect TTC estimates in elderly drivers deciding when to make a left turn across oncoming traffic. 
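The quantity the drivers are judging is, to first order, the remaining gap divided by the closing speed under constant velocity. A minimal sketch (the function names and unit handling are ours, not from the study):

```python
def time_to_contact(gap_m, oncoming_speed_mph):
    # First-order TTC under constant velocity: remaining gap divided
    # by closing speed. 1 mph = 0.44704 m/s exactly.
    closing_mps = oncoming_speed_mph * 0.44704
    return gap_m / closing_mps

def estimation_error(judged_ttc_s, gap_m, oncoming_speed_mph):
    # Signed error of a driver's judged TTC against the physical TTC;
    # positive values mean the driver overestimated the time available.
    return judged_ttc_s - time_to_contact(gap_m, oncoming_speed_mph)
```

For example, at 45 mph (about 20.1 m/s) a gap of roughly 60.4 m corresponds to a TTC of about 3 s.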
TTC estimates were obtained from 64 middle-aged (n=17, age=46±6years) and older (n=37, age=75±6years) licensed drivers with a range of UFOV abilities using interactive scenarios in a fixed-base driving simulator. Each driver was situated in an intersection to turn left across oncoming traffic approaching and disappearing at differing distances (1.5, 3, or 5s) and speeds (45, 55, or 65mph). Drivers judged when each oncoming vehicle would collide with them if they were to turn left. Findings showed that TTC estimates across all drivers, on average, were most accurate for oncoming vehicles travelling at the highest velocities and least accurate for those travelling at the slowest velocities. Drivers with the worst UFOV scores had the least accurate TTC estimates, especially for slower oncoming vehicles. Results suggest age-related UFOV decline impairs older driver judgment of TTC with oncoming vehicles in safety-critical left-turn situations. Our results are compatible with national statistics on older driver crash proclivity at intersections. PMID:27472816</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26599714','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26599714"><span id="translatedtitle">Analytical Calculation of <span class="hlt">Errors</span> in <span class="hlt">Time</span> and Value Perception Due to a Subjective <span class="hlt">Time</span> Accumulator: A Mechanistic Model and the Generation of Weber's Law.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Namboodiri, Vijay Mohan K; Mihalas, Stefan; Hussain Shuler, Marshall G</p> <p>2016-01-01</p> <p>It has been previously shown (Namboodiri, Mihalas, Marton, & Hussain Shuler, 2014) that an evolutionary theory of decision making and <span class="hlt">time</span> perception is capable of explaining numerous behavioral observations regarding how humans and animals decide between differently delayed rewards of differing magnitudes and how they perceive <span class="hlt">time</span>. An implementation of this theory using a stochastic drift-diffusion accumulator model (Namboodiri, Mihalas, & Hussain Shuler, 2014a) showed that <span class="hlt">errors</span> in <span class="hlt">time</span> perception and decision making approximately obey Weber's law for a range of parameters. However, prior calculations did not have a clear mechanistic underpinning. Further, these calculations were only approximate, with the range of parameters being limited. In this letter, we provide a full analytical treatment of such an accumulator model, along with a mechanistic implementation, to calculate the expression of these <span class="hlt">errors</span> for the entirety of the parameter space. In our mechanistic model, Weber's law results from synaptic facilitation and depression within the feedback synapses of the accumulator. 
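The Weber-law signature (timing errors whose spread scales with the interval being timed) can be reproduced by a generic noisy accumulator in which the pacemaker rate drifts from trial to trial. This is a deliberately simplified stand-in for the paper's synaptic facilitation/depression mechanism, with all names and parameters invented for illustration:

```python
import random
import statistics

def perceived_duration(true_steps, rate_sd=0.15, rng=random):
    # Draw one noisy pacemaker rate per trial (slow, trial-to-trial
    # drift): the subjective count then scales with the true interval,
    # so the spread of estimates grows linearly with interval length.
    rate = rng.gauss(1.0, rate_sd)
    return rate * true_steps

def timing_cv(true_steps, trials=2000, seed=1):
    # Coefficient of variation (sd / mean) of the perceived durations;
    # a roughly constant CV across intervals is Weber's law.
    rng = random.Random(seed)
    xs = [perceived_duration(true_steps, rng=rng) for _ in range(trials)]
    return statistics.stdev(xs) / statistics.mean(xs)
```

Had the noise instead been independent per tick, the CV would shrink as the interval grows, violating Weber's law; the multiplicative (rate) noise is what makes it scale-invariant.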
Our theory also makes the prediction that the steepness of temporal discounting can be affected by requiring the precise <span class="hlt">timing</span> of temporal intervals. Thus, by presenting exact quantitative calculations, this work provides falsifiable predictions for future experimental testing.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/1231575','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/1231575"><span id="translatedtitle">The <span class="hlt">absolute</span> path command</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Moody, A.</p> <p>2012-05-11</p> <p>The ap command traverses all symlinks in a given file, directory, or executable name to identify the final <span class="hlt">absolute</span> path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. 
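The behavior described for the leaf of the path can be approximated in a few lines (a minimal reimplementation sketch, not the original `ap` source; the cycle guard and function name are our additions):

```python
import os

def absolute_path_chain(path):
    # Follow symlinks one hop at a time, recording each intermediate
    # target, until a non-link path remains. This yields the symlink
    # chain plus the final absolute path.
    current = os.path.abspath(path)
    chain = [current]
    seen = set()
    while os.path.islink(current):
        if current in seen:  # guard against symlink cycles
            break
        seen.add(current)
        target = os.readlink(current)
        if not os.path.isabs(target):
            target = os.path.join(os.path.dirname(current), target)
        current = os.path.normpath(target)
        chain.append(current)
    return chain
```

Unlike `os.path.realpath`, which returns only the final resolved path, this keeps every intermediate link, which is the chain the description says `ap` can print.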
It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the <span class="hlt">absolute</span> path to a relative directory from the current working directory.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4540038','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4540038"><span id="translatedtitle">Quantitative Electroencephalography Reflects Inattention, Visual <span class="hlt">Error</span> Responses, and Reaction <span class="hlt">Times</span> in Male Patients with Attention Deficit Hyperactivity Disorder</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Roh, Sang-Choong; Park, Eun-Jin; Park, Young-Chun; Yoon, Sun-Kyung; Kang, Joong-Gu; Kim, Do-won; Lee, Seung-Hwan</p> <p>2015-01-01</p> <p>Objective Quantitative electroencephalography (qEEG) has been increasingly used to evaluate patients with attention deficit hyperactivity disorder (ADHD). The aim of this study was to assess the correlation between qEEG data and symptom severity in patients with ADHD. Methods Fifteen patients with ADHD and 20 healthy controls (HCs) were recruited. Electroencephalography was assessed in the resting-state, and qEEG data were obtained in the eyes-closed state. The Korean version of the ADHD Rating Scale (K-ARS) and continuous performance tests (CPTs) were used to assess all participants. Results Theta-band (4–7 Hz) power across the brain was significantly positively correlated with inattention scores on the K-ARS, reaction <span class="hlt">times</span> and commission <span class="hlt">errors</span> on the CPTs in ADHD patients. Gamma-band (31–50 Hz) power was significantly positively correlated with the results of the auditory CPTs in ADHD patients. 
The theta/alpha (8–12 Hz) and theta/beta (13–30 Hz) ratios were significantly negatively correlated with commission and omission <span class="hlt">errors</span> on auditory CPTs in ADHD patients. No significant correlations between qEEG relative power and K-ARS and CPT scores were observed in HCs. Conclusion Our results suggest that qEEG may be a useful adjunctive tool in patients with ADHD. PMID:26243846</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/20876014','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/20876014"><span id="translatedtitle">Adaptive dynamic programming for finite-horizon optimal control of discrete-<span class="hlt">time</span> nonlinear systems with ε-<span class="hlt">error</span> bound.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Fei-Yue; Jin, Ning; Liu, Derong; Wei, Qinglai</p> <p>2011-01-01</p> <p>In this paper, we study the finite-horizon optimal control problem for discrete-<span class="hlt">time</span> nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-<span class="hlt">error</span> bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method. 
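The ε-error-bound stopping rule can be sketched on a toy tabular problem (the scalar system, cost weights, and names here are invented for illustration; the paper's ADP approximates the performance index with neural networks rather than tabulating it exactly):

```python
def finite_horizon_dp(x0, eps=1e-6, max_horizon=100):
    # Toy discrete-time system x' = clamp(x + u) on states -5..5 with
    # stage cost x^2 + 0.1*u^2. V_k(x) is the optimal k-step cost;
    # iteration stops when one more step changes the performance index
    # at x0 by less than eps, which also yields the number of control
    # steps needed to get within the eps-error bound.
    states = range(-5, 6)
    controls = (-1, 0, 1)
    clamp = lambda x: max(-5, min(5, x))
    stage = lambda x, u: x * x + 0.1 * u * u
    v = {x: 0.0 for x in states}  # zero-length horizon
    prev_index = v[x0]
    for k in range(1, max_horizon + 1):
        # Backward value iteration: the comprehension reads the old v.
        v = {x: min(stage(x, u) + v[clamp(x + u)] for u in controls)
             for x in states}
        if abs(v[x0] - prev_index) < eps:
            return v[x0], k - 1  # performance index, optimal step count
        prev_index = v[x0]
    return v[x0], max_horizon
```

From the start state x0 = 4, the index stops improving once the state has been driven to the origin, so the loop terminates well before `max_horizon`.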
PMID:20876014</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4709723','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4709723"><span id="translatedtitle">Locating single-point sources from arrival <span class="hlt">times</span> containing large picking <span class="hlt">errors</span> (LPEs): the virtual field optimization method (VFOM)</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun</p> <p>2016-01-01</p> <p>Microseismic monitoring systems using local location techniques tend to be <span class="hlt">timely</span>, automatic and stable. One basic requirement of these systems is the automatic picking of arrival <span class="hlt">times</span>. However, arrival <span class="hlt">times</span> generated by automated techniques always contain large picking <span class="hlt">errors</span> (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than for the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. 
The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission. PMID:26754955</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25856003','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25856003"><span id="translatedtitle">Resimulation of noise: a precision estimator for least square <span class="hlt">error</span> curve-fitting tested for axial strain <span class="hlt">time</span> constant imaging.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nair, S P; Righetti, R</p> <p>2015-05-01</p> <p>Recent elastography techniques focus on imaging properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require the fitting of temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square <span class="hlt">error</span> (LSE) parameter estimation. It is known that the strain versus <span class="hlt">time</span> relationship for tissues undergoing creep compression is non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to estimate the reliability of non-linear LSE parameter fits. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain <span class="hlt">time</span> constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator.
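The RoN procedure as summarized resembles a parametric bootstrap: fit once, estimate the noise level from the residuals, resimulate noisy data sets around the fitted curve, refit each, and report the spread. A minimal sketch, assuming an exponential creep model and Gaussian noise (neither taken from the paper):

```python
# Sketch of "Resimulation of Noise" (RoN)-style reliability estimation for a
# non-linear LSE fit. The creep model, noise model, and fitting routine are
# illustrative assumptions; the principle is: fit, estimate the noise from
# the residuals, resimulate noisy data around the fitted curve, refit each
# realization, and report the spread of the refitted parameter.
import math, random, statistics

def model(t, amp, tau):                 # assumed creep strain curve
    return amp * (1.0 - math.exp(-t / tau))

def fit_tau(ts, ys):
    # LSE over tau; for each tau the amplitude has a closed-form solution.
    best = None
    for tau in [0.05 * k for k in range(1, 201)]:       # scan 0.05 .. 10
        g = [1.0 - math.exp(-t / tau) for t in ts]
        amp = sum(y * gi for y, gi in zip(ys, g)) / sum(gi * gi for gi in g)
        sse = sum((y - amp * gi) ** 2 for y, gi in zip(ys, g))
        if best is None or sse < best[0]:
            best = (sse, tau, amp)
    return best[1], best[2]

random.seed(1)
ts = [0.1 * k for k in range(1, 101)]
truth = [model(t, 1.0, 2.0) for t in ts]
ys = [y + random.gauss(0.0, 0.02) for y in truth]

tau_hat, amp_hat = fit_tau(ts, ys)
resid = [y - model(t, amp_hat, tau_hat) for t, y in zip(ts, ys)]
sigma = statistics.pstdev(resid)        # noise level estimated from residuals

# Resimulate noise around the fitted curve and refit each realization.
spread = []
for _ in range(50):
    sim = [model(t, amp_hat, tau_hat) + random.gauss(0.0, sigma) for t in ts]
    spread.append(fit_tau(ts, sim)[0])

print(round(tau_hat, 2), round(statistics.stdev(spread), 3))
```

The standard deviation of `spread` is the RoN-style precision estimate derived from a single experiment realization.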
We have also compared results from the RoN derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While the RoN is specifically tested only for axial strain <span class="hlt">time</span> constant imaging, a general algorithm is provided for use in all LSE parameter estimation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/26754955','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/26754955"><span id="translatedtitle">Locating single-point sources from arrival <span class="hlt">times</span> containing large picking <span class="hlt">errors</span> (LPEs): the virtual field optimization method (VFOM).</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun</p> <p>2016-01-01</p> <p>Microseismic monitoring systems using local location techniques tend to be <span class="hlt">timely</span>, automatic and stable. One basic requirement of these systems is the automatic picking of arrival <span class="hlt">times</span>. However, arrival <span class="hlt">times</span> generated by automated techniques always contain large picking <span class="hlt">errors</span> (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. 
The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission. PMID:26754955</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26754955','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26754955"><span id="translatedtitle">Locating single-point sources from arrival <span class="hlt">times</span> containing large picking <span class="hlt">errors</span> (LPEs): the virtual field optimization method (VFOM).</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun</p> <p>2016-01-01</p> <p>Microseismic monitoring systems using local location techniques tend to be <span class="hlt">timely</span>, automatic and stable. One basic requirement of these systems is the automatic picking of arrival <span class="hlt">times</span>. However, arrival <span class="hlt">times</span> generated by automated techniques always contain large picking <span class="hlt">errors</span> (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. 
The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016NatSR...619205L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016NatSR...619205L"><span id="translatedtitle">Locating single-point sources from arrival <span class="hlt">times</span> containing large picking <span class="hlt">errors</span> (LPEs): the virtual field optimization method (VFOM)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun</p> <p>2016-01-01</p> <p>Microseismic monitoring systems using local location techniques tend to be <span class="hlt">timely</span>, automatic and stable. One basic requirement of these systems is the automatic picking of arrival <span class="hlt">times</span>. However, arrival <span class="hlt">times</span> generated by automated techniques always contain large picking <span class="hlt">errors</span> (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. 
In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/2281943','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/2281943"><span id="translatedtitle">Measurement <span class="hlt">error</span> in human dental mensuration.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kieser, J A; Groeneveld, H T; McKee, J; Cameron, N</p> <p>1990-01-01</p> <p>The reliability of human odontometric data was evaluated in a sample of 60 teeth. Three observers, using their own instruments and the same definition of the mesiodistal and buccolingual dimensions were asked to repeat their measurements after 2 months. Precision, or repeatability, was analysed by means of Pearsonian correlation coefficients and mean <span class="hlt">absolute</span> <span class="hlt">error</span> values. Accuracy, or the absence of bias, was evaluated by means of Bland-Altman procedures and attendant Student t-tests, and also by an ANOVA procedure. The present investigation suggests that odontometric data have a high interobserver <span class="hlt">error</span> component. 
Mesiodistal dimensions show greater imprecision and bias than buccolingual measurements. The results of the ANOVA suggest that bias is the result of interobserver <span class="hlt">error</span> and is not due to the <span class="hlt">time</span> between repeated measurements.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9677E..2FL','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9677E..2FL"><span id="translatedtitle">The correction of vibration in frequency scanning interferometry based <span class="hlt">absolute</span> distance measurement system for dynamic measurements</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu</p> <p>2015-10-01</p> <p><span class="hlt">Absolute</span> distance measurement systems are of significant interest in the field of metrology, which could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an <span class="hlt">absolute</span> distance measurement system: it offers high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration on the frequency scanning interferometry based <span class="hlt">absolute</span> distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement <span class="hlt">error</span> more than 10<sup>3</sup> <span class="hlt">times</span> larger than the changes of the optical path difference.
In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency stabilized laser, which runs parallel to the frequency scanning interferometry. The experiment has verified the effectiveness of this method.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2013-title5-vol3/pdf/CFR-2013-title5-vol3-sec1605-22.pdf','CFR2013'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2013-title5-vol3/pdf/CFR-2013-title5-vol3-sec1605-22.pdf"><span id="translatedtitle">5 CFR 1605.22 - Claims for correction of Board or TSP record keeper <span class="hlt">errors</span>; <span class="hlt">time</span> limitations.</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2013&page.go=Go">Code of Federal Regulations, 2013 CFR</a></p> <p></p> <p>2013-01-01</p> <p>... participant or beneficiary. (b) Board's or TSP record keeper's discovery of <span class="hlt">error</span>. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... 
<span class="hlt">error</span> if it is discovered before 30 days after the issuance of the earlier of the most recent...</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li class="active"><span>14</span></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li class="active"><span>15</span></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2011-title5-vol3/pdf/CFR-2011-title5-vol3-sec1605-22.pdf','CFR2011'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2011-title5-vol3/pdf/CFR-2011-title5-vol3-sec1605-22.pdf"><span id="translatedtitle">5 CFR 1605.22 - Claims for correction of Board or TSP record keeper <span class="hlt">errors</span>; <span class="hlt">time</span> limitations.</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2011&page.go=Go">Code of Federal Regulations, 2011 CFR</a></p> <p></p> 
<p>2011-01-01</p> <p>... participant or beneficiary. (b) Board's or TSP record keeper's discovery of <span class="hlt">error</span>. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... <span class="hlt">error</span> if it is discovered before 30 days after the issuance of the earlier of the most recent...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2014-title5-vol3/pdf/CFR-2014-title5-vol3-sec1605-22.pdf','CFR2014'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2014-title5-vol3/pdf/CFR-2014-title5-vol3-sec1605-22.pdf"><span id="translatedtitle">5 CFR 1605.22 - Claims for correction of Board or TSP record keeper <span class="hlt">errors</span>; <span class="hlt">time</span> limitations.</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2014&page.go=Go">Code of Federal Regulations, 2014 CFR</a></p> <p></p> <p>2014-01-01</p> <p>... participant or beneficiary. (b) Board's or TSP record keeper's discovery of <span class="hlt">error</span>. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... 
<span class="hlt">error</span> if it is discovered before 30 days after the issuance of the earlier of the most recent...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/27301704','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/27301704"><span id="translatedtitle"><span class="hlt">Time</span>-resolved <span class="hlt">absolute</span> measurements by electro-optic effect of giant electromagnetic pulses due to laser-plasma interaction in nanosecond regime.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Consoli, F; De Angelis, R; Duvillaret, L; Andreoli, P L; Cipriani, M; Cristofari, G; Di Giorgio, G; Ingenito, F; Verona, C</p> <p>2016-01-01</p> <p>We describe the first electro-optical <span class="hlt">absolute</span> measurements of electromagnetic pulses (EMPs) generated by laser-plasma interaction in nanosecond regime. Laser intensities are inertial-confinement-fusion (ICF) relevant and wavelength is 1054 nm. These are the first direct EMP amplitude measurements with the detector rather close and in direct view of the plasma. A maximum field of 261 kV/m was measured, two orders of magnitude higher than previous measurements by conductive probes on nanosecond regime lasers with much higher energy. The analysis of measurements and of particle-in-cell simulations indicates that signals match the emission of charged particles detected in the same experiment, and suggests that anisotropic particle emission from target, X-ray photoionization and charge implantation on surfaces directly exposed to plasma, could be important EMP contributions. Significant information achieved on EMP features and sources is crucial for future plants of laser-plasma acceleration and inertial-confinement-fusion and for the use as effective plasma diagnostics. 
It also opens to remarkable applications of laser-plasma interaction as intense source of RF-microwaves for studies on materials and devices, EMP-radiation-hardening and electromagnetic compatibility. The demonstrated extreme effectivity of electric-fields detection in laser-plasma context by electro-optic effect, leads to great potential for characterization of laser-plasma interaction and generated Terahertz radiation. PMID:27301704</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4908660','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4908660"><span id="translatedtitle"><span class="hlt">Time</span>-resolved <span class="hlt">absolute</span> measurements by electro-optic effect of giant electromagnetic pulses due to laser-plasma interaction in nanosecond regime</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Consoli, F.; De Angelis, R.; Duvillaret, L.; Andreoli, P. L.; Cipriani, M.; Cristofari, G.; Di Giorgio, G.; Ingenito, F.; Verona, C.</p> <p>2016-01-01</p> <p>We describe the first electro-optical <span class="hlt">absolute</span> measurements of electromagnetic pulses (EMPs) generated by laser-plasma interaction in nanosecond regime. Laser intensities are inertial-confinement-fusion (ICF) relevant and wavelength is 1054 nm. These are the first direct EMP amplitude measurements with the detector rather close and in direct view of the plasma. A maximum field of 261 kV/m was measured, two orders of magnitude higher than previous measurements by conductive probes on nanosecond regime lasers with much higher energy. 
The analysis of measurements and of particle-in-cell simulations indicates that signals match the emission of charged particles detected in the same experiment, and suggests that anisotropic particle emission from target, X-ray photoionization and charge implantation on surfaces directly exposed to plasma, could be important EMP contributions. Significant information achieved on EMP features and sources is crucial for future plants of laser-plasma acceleration and inertial-confinement-fusion and for the use as effective plasma diagnostics. It also opens to remarkable applications of laser-plasma interaction as intense source of RF-microwaves for studies on materials and devices, EMP-radiation-hardening and electromagnetic compatibility. The demonstrated extreme effectivity of electric-fields detection in laser-plasma context by electro-optic effect, leads to great potential for characterization of laser-plasma interaction and generated Terahertz radiation. PMID:27301704</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016NatSR...627889C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016NatSR...627889C"><span id="translatedtitle"><span class="hlt">Time</span>-resolved <span class="hlt">absolute</span> measurements by electro-optic effect of giant electromagnetic pulses due to laser-plasma interaction in nanosecond regime</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Consoli, F.; de Angelis, R.; Duvillaret, L.; Andreoli, P. L.; Cipriani, M.; Cristofari, G.; di Giorgio, G.; Ingenito, F.; Verona, C.</p> <p>2016-06-01</p> <p>We describe the first electro-optical <span class="hlt">absolute</span> measurements of electromagnetic pulses (EMPs) generated by laser-plasma interaction in nanosecond regime. Laser intensities are inertial-confinement-fusion (ICF) relevant and wavelength is 1054 nm. 
These are the first direct EMP amplitude measurements with the detector rather close and in direct view of the plasma. A maximum field of 261 kV/m was measured, two orders of magnitude higher than previous measurements by conductive probes on nanosecond regime lasers with much higher energy. The analysis of measurements and of particle-in-cell simulations indicates that signals match the emission of charged particles detected in the same experiment, and suggests that anisotropic particle emission from target, X-ray photoionization and charge implantation on surfaces directly exposed to plasma, could be important EMP contributions. Significant information achieved on EMP features and sources is crucial for future plants of laser-plasma acceleration and inertial-confinement-fusion and for the use as effective plasma diagnostics. It also opens to remarkable applications of laser-plasma interaction as intense source of RF-microwaves for studies on materials and devices, EMP-radiation-hardening and electromagnetic compatibility. 
The demonstrated extreme effectivity of electric-fields detection in laser-plasma context by electro-optic effect, leads to great potential for characterization of laser-plasma interaction and generated Terahertz radiation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27301704','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27301704"><span id="translatedtitle"><span class="hlt">Time</span>-resolved <span class="hlt">absolute</span> measurements by electro-optic effect of giant electromagnetic pulses due to laser-plasma interaction in nanosecond regime.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Consoli, F; De Angelis, R; Duvillaret, L; Andreoli, P L; Cipriani, M; Cristofari, G; Di Giorgio, G; Ingenito, F; Verona, C</p> <p>2016-06-15</p> <p>We describe the first electro-optical <span class="hlt">absolute</span> measurements of electromagnetic pulses (EMPs) generated by laser-plasma interaction in nanosecond regime. Laser intensities are inertial-confinement-fusion (ICF) relevant and wavelength is 1054 nm. These are the first direct EMP amplitude measurements with the detector rather close and in direct view of the plasma. A maximum field of 261 kV/m was measured, two orders of magnitude higher than previous measurements by conductive probes on nanosecond regime lasers with much higher energy. The analysis of measurements and of particle-in-cell simulations indicates that signals match the emission of charged particles detected in the same experiment, and suggests that anisotropic particle emission from target, X-ray photoionization and charge implantation on surfaces directly exposed to plasma, could be important EMP contributions. 
Significant information achieved on EMP features and sources is crucial for future plants of laser-plasma acceleration and inertial-confinement-fusion and for the use as effective plasma diagnostics. It also opens to remarkable applications of laser-plasma interaction as intense source of RF-microwaves for studies on materials and devices, EMP-radiation-hardening and electromagnetic compatibility. The demonstrated extreme effectivity of electric-fields detection in laser-plasma context by electro-optic effect, leads to great potential for characterization of laser-plasma interaction and generated Terahertz radiation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AdSpR..47..276R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AdSpR..47..276R"><span id="translatedtitle">The use of ionospheric tomography and elevation masks to reduce the overall <span class="hlt">error</span> in single-frequency GPS <span class="hlt">timing</span> applications</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rose, Julian A. R.; Tong, Jenna R.; Allain, Damien J.; Mitchell, Cathryn N.</p> <p>2011-01-01</p> <p>Signals from Global Positioning System (GPS) satellites at the horizon or at low elevations are often excluded from a GPS solution because they experience considerable ionospheric delays and multipath effects. Their exclusion can degrade the overall satellite geometry for the calculations, resulting in greater <span class="hlt">errors</span>; an effect known as the Dilution of Precision (DOP). In contrast, signals from high elevation satellites experience less ionospheric delays and multipath effects. 
The aim is to find a balance in the choice of elevation mask, to reduce the propagation delays and multipath whilst maintaining good satellite geometry, and to use tomography to correct for the ionosphere and thus improve single-frequency GPS <span class="hlt">timing</span> accuracy. GPS data, collected from a global network of dual-frequency GPS receivers, have been used to produce four GPS <span class="hlt">timing</span> solutions, each with a different ionospheric compensation technique. One solution uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to compensate for the ionospheric delay. Maps of ionospheric electron density are produced and used to correct the single-frequency pseudorange observations. This method is compared to a dual-frequency solution and two other single-frequency solutions: one does not include any ionospheric compensation and the other uses the broadcast Klobuchar model. Data from the solar maximum year 2002 and October 2003 have been investigated to display results when the ionospheric delays are large and variable. The study focuses on Europe and results are produced for the chosen test site, VILL (Villafranca, Spain). The effects of excluding all of the GPS satellites below various elevation masks, ranging from 5° to 40°, on <span class="hlt">timing</span> solutions for fixed (static) and mobile (moving) situations are presented. The greatest <span class="hlt">timing</span> accuracies when using the fixed GPS receiver technique are obtained by using a 40° mask, rather than a 5° mask. 
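The geometry trade-off behind the elevation mask can be made concrete by computing the Dilution of Precision for the same constellation under two masks. The satellite azimuths and elevations below are invented for illustration; the geometry matrix and GDOP = sqrt(trace((G^T G)^-1)) follow the standard single-point positioning formulation:

```python
# Sketch of the geometry trade-off behind elevation masks: raising the mask
# removes low-elevation satellites (large ionospheric delay and multipath)
# but worsens the Dilution of Precision. The satellite azimuths/elevations
# are invented for illustration.
import math

def geometry_row(az_deg, el_deg):
    # Unit line-of-sight row (east, north, up, clock) of the geometry matrix.
    az, el = math.radians(az_deg), math.radians(el_deg)
    return [-math.cos(el) * math.sin(az), -math.cos(el) * math.cos(az),
            -math.sin(el), 1.0]

def inverse(m):                         # Gauss-Jordan inverse, small matrices
    n = len(m)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        d = a[col][col]
        a[col] = [x / d for x in a[col]]
        for r in range(n):
            if r != col:
                fac = a[r][col]
                a[r] = [x - fac * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def gdop(sats, mask_deg):
    rows = [geometry_row(az, el) for az, el in sats if el >= mask_deg]
    if len(rows) < 4:                   # fewer than 4 satellites: no solution
        return float("inf")
    gtg = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    return math.sqrt(sum(inverse(gtg)[i][i] for i in range(4)))

sats = [(30, 10), (110, 15), (200, 20), (290, 12),   # low-elevation ring
        (60, 55), (180, 60), (300, 50), (0, 85)]     # high-elevation set
print(round(gdop(sats, 5), 2), round(gdop(sats, 40), 2))
```

Removing the low-elevation ring necessarily increases GDOP (the retained geometry matrix is a subset of the full one), which is exactly the degradation the abstract weighs against the reduced ionospheric and multipath errors.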
The mobile GPS <span class="hlt">timing</span> solutions are most</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=h%26m&id=EJ762071','ERIC'); return false;" href="http://eric.ed.gov/?q=h%26m&id=EJ762071"><span id="translatedtitle">Identifying Autocorrelation Generated by Various <span class="hlt">Error</span> Processes in Interrupted <span class="hlt">Time</span>-Series Regression Designs: A Comparison of AR1 and Portmanteau Tests</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Huitema, Bradley E.; McKean, Joseph W.</p> <p>2007-01-01</p> <p>Regression models used in the analysis of interrupted <span class="hlt">time</span>-series designs assume statistically independent <span class="hlt">errors</span>. Four methods of evaluating this assumption are the Durbin-Watson (D-W), Huitema-McKean (H-M), Box-Pierce (B-P), and Ljung-Box (L-B) tests. These tests were compared with respect to Type I <span class="hlt">error</span> and power under a wide variety of error…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3336196','PMC'); return false;" href="http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3336196"><span id="translatedtitle">Visual cortex combines a stimulus and an <span class="hlt">error</span>-like signal with a proportion that is dependent on <span class="hlt">time</span>, space, and stimulus contrast</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Eriksson, David; Wunderle, Thomas; Schmidt, Kerstin</p> <p>2012-01-01</p> <p>Even though the visual cortex is one of the most studied brain areas, the neuronal code in this area is still not fully understood. 
In the literature, two codes are commonly hypothesized, namely stimulus and predictive (<span class="hlt">error</span>) codes. Here, we examined whether and how these two codes can coexist in a neuron. To this end, we assumed that neurons could predict a constant stimulus across <span class="hlt">time</span> or space, since this is the most fundamental type of prediction. Prediction was examined in <span class="hlt">time</span> using electrophysiology and voltage-sensitive dye imaging in the supragranular layers in area 18 of the anesthetized cat, and in space using a computer model. The distinction between stimulus and <span class="hlt">error</span> codes was made by means of the orientation tuning of the recorded unit. The stimulus was constructed such that a maximum response to the non-preferred orientation indicated an <span class="hlt">error</span> signal, and the maximum response to the preferred orientation indicated a stimulus signal. We demonstrate that a single neuron combines stimulus and <span class="hlt">error</span>-like coding. In addition, we observed that the duration of the <span class="hlt">error</span> coding varies as a function of stimulus contrast. For low contrast the <span class="hlt">error</span>-like coding was prolonged by around 60–100%. Finally, the combination of stimulus and <span class="hlt">error</span> leads to a suboptimal free energy in a recent predictive coding model. We therefore suggest a straightforward modification that can be applied to the free energy model and other predictive coding models. Combining stimulus and <span class="hlt">error</span> might be advantageous because the stimulus code enables a direct stimulus recognition that is free of assumptions whereas the <span class="hlt">error</span> code enables an experience-dependent inference of ambiguous and non-salient stimuli.
PMID:22539918</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AIPC.1692b0014I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AIPC.1692b0014I"><span id="translatedtitle">The generalized STAR(1,1) modeling with <span class="hlt">time</span> correlated <span class="hlt">errors</span> to red-chili weekly prices of some traditional markets in Bandung, West Java</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nisa Fadlilah F., I.; Mukhaiyar, Utriweni; Fahmi, Fauzia</p> <p>2015-12-01</p> <p>Observations at a given location may be linearly influenced by the previous <span class="hlt">times</span> of observations at that location and at neighboring locations, which can be analyzed with the Generalized STAR(1,1) model. In this paper, secondary data on weekly red-chili prices from five main traditional markets in Bandung are used as a case study. The purpose of the GSTAR(1,1) model is to forecast the next-<span class="hlt">time</span> red-chili prices at those markets. The model is identified by the sample space-<span class="hlt">time</span> ACF and space-<span class="hlt">time</span> PACF, and model parameters are estimated by the least squares method. Theoretically, the assumption of independent <span class="hlt">errors</span> simplifies the parameter estimation problem. In practice, however, that assumption is hard to satisfy since the <span class="hlt">errors</span> may be correlated with each other. In red-chili price modeling, the process is therefore considered to have <span class="hlt">time</span>-correlated <span class="hlt">errors</span> forming a martingale difference process, instead of following a normal distribution. Here, we run simulations to investigate the behavior of these <span class="hlt">error</span> assumptions.
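A GSTAR(1,1) recursion of the kind described can be sketched directly; the uniform weight matrix, coefficients, and simulated "prices" below are illustrative assumptions, not the Bandung data:

```python
# Sketch of a GSTAR(1,1) model for five linked markets:
#   z_t = phi10 * z_{t-1} + phi11 * W z_{t-1} + e_t,
# where W row-normalizes each market's neighbours. The weights and
# coefficients are illustrative assumptions, not the Bandung estimates.
import random

N = 5
# Assumed neighbour structure: every market weights the other four equally.
W = [[0.0 if i == j else 0.25 for j in range(N)] for i in range(N)]
phi10, phi11 = 0.5, 0.3

def step(z, noise):
    wz = [sum(W[i][j] * z[j] for j in range(N)) for i in range(N)]
    return [phi10 * z[i] + phi11 * wz[i] + noise[i] for i in range(N)]

random.seed(7)
z0 = [10.0 + i for i in range(N)]
series = [z0]
for _ in range(200):
    series.append(step(series[-1], [random.gauss(0, 0.1) for _ in range(N)]))

# Least-squares estimation of (phi10, phi11) by stacking all markets and
# times: regress z_t(i) on z_{t-1}(i) and (W z_{t-1})(i).
sxx = sxy = syy = sxz = syz = 0.0
for t in range(1, len(series)):
    prev = series[t - 1]
    wz = [sum(W[i][j] * prev[j] for j in range(N)) for i in range(N)]
    for i in range(N):
        x, y, zt = prev[i], wz[i], series[t][i]
        sxx += x * x; sxy += x * y; syy += y * y
        sxz += x * zt; syz += y * zt
det = sxx * syy - sxy * sxy        # 2x2 normal equations, Cramer's rule
est10 = (syy * sxz - sxy * syz) / det
est11 = (sxx * syz - sxy * sxz) / det
print(round(est10, 2), round(est11, 2))
```

With zero noise, a spatially constant price vector is simply scaled by phi10 + phi11 at each step, which makes the role of the two coefficients easy to see.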
Although some results show that the <span class="hlt">errors</span> do not always follow a martingale difference process, this does not degrade the ability of the GSTAR(1,1) model to forecast the red-chili prices at those five markets.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/921934','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/921934"><span id="translatedtitle"><span class="hlt">ABSOLUTE</span> POLARIMETRY AT RHIC.</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>OKADA; BRAVAR, A.; BUNCE, G.; GILL, R.; HUANG, H.; MAKDISI, Y.; NASS, A.; WOOD, J.; ZELENSKI, Z.; ET AL.</p> <p>2007-09-10</p> <p>Precise and <span class="hlt">absolute</span> beam polarization measurements are critical for the RHIC spin physics program. Because all experimental spin-dependent results are normalized by beam polarization, the normalization uncertainty contributes directly to final physics uncertainties. We aimed to perform the beam polarization measurement to an accuracy of {Delta}P{sub beam}/P{sub beam} < 5%. The <span class="hlt">absolute</span> polarimeter consists of a Polarized Atomic Hydrogen Gas Jet Target and left-right pairs of silicon strip detectors and was installed in the RHIC ring in 2004. This system features proton-proton elastic scattering in the Coulomb nuclear interference (CNI) region. Precise measurements of the analyzing power A{sub N} of this process have allowed us to achieve {Delta}P{sub beam}/P{sub beam} = 4.2% in 2005 for the first long spin-physics run. In this report, we describe the entire setup and performance of the system. The procedure of beam polarization measurement and analysis results from 2004-2005 are described. Physics topics of A{sub N} in the CNI region (four-momentum transfer squared 0.001 < -t < 0.032 (GeV/c){sup 2}) are also discussed. 
We point out the current issues and expected optimum accuracy in 2006 and the future.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EPSC...10..717D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EPSC...10..717D"><span id="translatedtitle"><span class="hlt">Absolute</span> magnitudes of trans-neptunian objects</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duffard, R.; Alvarez-candal, A.; Pinilla-Alonso, N.; Ortiz, J. L.; Morales, N.; Santos-Sanz, P.; Thirouin, A.</p> <p>2015-10-01</p> <p>Accurate measurements of diameters of trans-Neptunian objects are extremely complicated to obtain. Radiometric techniques applied to thermal measurements can provide good results, but precise <span class="hlt">absolute</span> magnitudes are needed to constrain diameters and albedos. Our objective is to measure accurate <span class="hlt">absolute</span> magnitudes for a sample of trans-Neptunian objects, many of which have been observed, and modelled, by the "TNOs are cool" team, one of the Herschel Space Observatory key projects granted ~400 hours of observing <span class="hlt">time</span>. We observed 56 objects in the V and R filters, when possible. These data, along with data available in the literature, were used to obtain phase curves and to measure <span class="hlt">absolute</span> magnitudes by assuming a linear trend of the phase curves and considering magnitude variability due to the rotational light-curve. In total we obtained 234 new magnitudes for the 56 objects, 6 of them with no previously reported measurements. 
Including the data from the literature we report a total of 109 <span class="hlt">absolute</span> magnitudes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19840011561&hterms=algorithm+code&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dalgorithm%2Bcode','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19840011561&hterms=algorithm+code&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dalgorithm%2Bcode"><span id="translatedtitle">Simulations for Full Unit-memory and Partial Unit-memory Convolutional Codes with Real-<span class="hlt">time</span> Minimal-byte-<span class="hlt">error</span> Probability Decoding Algorithm</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Vo, Q. D.</p> <p>1984-01-01</p> <p>A program which was written to simulate Real <span class="hlt">Time</span> Minimal-Byte-<span class="hlt">Error</span> Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-<span class="hlt">error</span> probability of FUM codes and to determine the signal to noise (SNR) required to achieve a bit <span class="hlt">error</span> rate (BER) of 10 to the minus 6th power for corresponding concatenated systems. A (6,6/30) FUM code, 6-bit Reed-Solomon code combination was found to achieve the required BER at a SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. 
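The concatenated-coding result above hinges on the familiar BER-versus-SNR tradeoff. As a hedged, self-contained illustration (this is the textbook uncoded-BPSK formula, not the RTMBEP decoder or the FUM/Reed-Solomon system from the abstract):

```python
import math

def bpsk_ber(ebn0_db: float) -> float:
    """Uncoded BPSK bit-error rate on an AWGN channel: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)  # dB -> linear
    # Q(x) = 0.5*erfc(x/sqrt(2)), so Q(sqrt(2*ebn0)) = 0.5*erfc(sqrt(ebn0))
    return 0.5 * math.erfc(math.sqrt(ebn0))

# Uncoded BPSK needs roughly 10.5 dB Eb/N0 for a BER of 1e-6; the abstract's
# concatenated FUM + Reed-Solomon system reaches that BER near 1.9 dB
for snr_db in (0.0, 4.0, 8.0, 10.5):
    print(f"Eb/N0 = {snr_db:4.1f} dB -> BER = {bpsk_ber(snr_db):.2e}")
```

The gap between those two operating points is the coding gain that decoders such as RTMBEP are designed to capture.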
A simulation program was also written to simulate the symbol-<span class="hlt">error</span> probability of these codes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050196615','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050196615"><span id="translatedtitle">Accurate <span class="hlt">Time</span>-Dependent Traveling-Wave Tube Model Developed for Computational Bit-<span class="hlt">Error</span>-Rate Testing</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kory, Carol L.</p> <p>2001-01-01</p> <p> prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the <span class="hlt">time</span>-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit <span class="hlt">error</span> rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real <span class="hlt">time</span> transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. 
This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20040110742','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20040110742"><span id="translatedtitle"><span class="hlt">Absolute</span> Equilibrium Entropy</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shebalin, John V.</p> <p>1997-01-01</p> <p>The entropy associated with <span class="hlt">absolute</span> equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. 
This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26478959','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26478959"><span id="translatedtitle">Stimulus probability effects in <span class="hlt">absolute</span> identification.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kent, Christopher; Lamberts, Koen</p> <p>2016-05-01</p> <p>This study investigated the effect of stimulus presentation probability on accuracy and response <span class="hlt">times</span> in an <span class="hlt">absolute</span> identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response <span class="hlt">times</span>. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response <span class="hlt">time</span> data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in <span class="hlt">absolute</span> identification. Implications for other theories of <span class="hlt">absolute</span> identification are discussed. 
</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://medlineplus.gov/refractiveerrors.html','NIH-MEDLINEPLUS'); return false;" href="https://medlineplus.gov/refractiveerrors.html"><span id="translatedtitle">Refractive <span class="hlt">Errors</span></span></a></p> <p><a target="_blank" href="http://medlineplus.gov/">MedlinePlus</a></p> <p></p> <p></p> <p>... and lens of your eye helps you focus. Refractive <span class="hlt">errors</span> are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive <span class="hlt">errors</span> are Myopia, or nearsightedness - clear vision close up ...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1991NIMPA.304..725J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1991NIMPA.304..725J"><span id="translatedtitle">Field <span class="hlt">error</span> lottery</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>James Elliott, C.; McVey, Brian D.; Quimby, David C.</p> <p>1991-07-01</p> <p>The level of field <span class="hlt">errors</span> in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field <span class="hlt">errors</span> of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field <span class="hlt">error</span> level, beam position monitor <span class="hlt">error</span> level, gap <span class="hlt">errors</span>, defocusing <span class="hlt">errors</span>, energy slew, displacement and pointing <span class="hlt">errors</span>. 
Many effects of these <span class="hlt">errors</span> on relative gain and relative power extraction are displayed and are the essential elements of determining an <span class="hlt">error</span> budget. The random <span class="hlt">errors</span> also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus <span class="hlt">error</span> level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these <span class="hlt">errors</span> are evaluated numerically for comprehensive engineering of the system. In particular, gap <span class="hlt">errors</span> are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly <span class="hlt">time</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1990ifel.confR..17E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1990ifel.confR..17E"><span id="translatedtitle">Field <span class="hlt">error</span> lottery</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Elliott, C. James; McVey, Brian D.; Quimby, David C.</p> <p>1990-11-01</p> <p>The level of field <span class="hlt">errors</span> in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field <span class="hlt">errors</span> of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. 
Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field <span class="hlt">error</span> level, beam position monitor <span class="hlt">error</span> level, gap <span class="hlt">errors</span>, defocusing <span class="hlt">errors</span>, energy slew, displacement, and pointing <span class="hlt">errors</span>. Many effects of these <span class="hlt">errors</span> on relative gain and relative power extraction are displayed and are the essential elements of determining an <span class="hlt">error</span> budget. The random <span class="hlt">errors</span> also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus <span class="hlt">error</span> level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these <span class="hlt">errors</span> are evaluated numerically for comprehensive engineering of the system. In particular, gap <span class="hlt">errors</span> are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly <span class="hlt">time</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/6526941','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/6526941"><span id="translatedtitle">Field <span class="hlt">error</span> lottery</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Elliott, C.J.; McVey, B.; Quimby, D.C.</p> <p>1990-01-01</p> <p>The level of field <span class="hlt">errors</span> in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field <span class="hlt">errors</span> of various types. 
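The "multiple seeds" comparison described in these records can be sketched with a toy Monte Carlo. Everything below is invented for illustration (a stand-in gain metric, not the FELEX code): the point is only that rerunning the same error level under several random seeds exposes the stochastic spread in predicted performance.

```python
import numpy as np

def relative_gain(error_level: float, seed: int, n_segments: int = 200) -> float:
    """Toy stand-in for a performance metric: unit gain degraded by the
    accumulated effect of random field errors (illustrative only, not FELEX)."""
    rng = np.random.default_rng(seed)
    errors = rng.normal(0.0, error_level, n_segments)
    return float(np.exp(-np.sum(errors ** 2)))

# Performance versus error level, with the spread over several seeds
for level in (0.0, 0.01, 0.02, 0.04):
    gains = [relative_gain(level, seed) for seed in range(5)]
    print(f"error level {level:.2f}: gain {np.mean(gains):.3f} +/- {np.std(gains):.3f}")
```

Plotting mean and spread together, as the abstracts describe, separates the systematic error-level trend from seed-to-seed stochasticity when building an error budget.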
These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field <span class="hlt">error</span> level, beam position monitor <span class="hlt">error</span> level, gap <span class="hlt">errors</span>, defocusing <span class="hlt">errors</span>, energy slew, displacement and pointing <span class="hlt">errors</span>. Many effects of these <span class="hlt">errors</span> on relative gain and relative power extraction are displayed and are the essential elements of determining an <span class="hlt">error</span> budget. The random <span class="hlt">errors</span> also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus <span class="hlt">error</span> level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these <span class="hlt">errors</span> are evaluated numerically for comprehensive engineering of the system. In particular, gap <span class="hlt">errors</span> are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly <span class="hlt">time</span>. 
4 refs., 12 figs.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/25873868','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/25873868"><span id="translatedtitle">Comparison of haptic guidance and <span class="hlt">error</span> amplification robotic trainings for the learning of a <span class="hlt">timing</span>-based motor task by healthy seniors.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bouchard, Amy E; Corriveau, Hélène; Milot, Marie-Hélène</p> <p>2015-01-01</p> <p>With age, a decline in the temporal aspect of movement is observed such as a longer movement execution <span class="hlt">time</span> and a decreased <span
class="hlt">timing</span> accuracy. Robotic training can represent an interesting approach to help improve movement <span class="hlt">timing</span> among the elderly. Two types of robotic training-haptic guidance (HG; demonstrating the correct movement for a better movement planning and improved execution of movement) and <span class="hlt">error</span> amplification (EA; exaggerating movement <span class="hlt">errors</span> to have a more rapid and complete learning) have been positively used in young healthy subjects to boost <span class="hlt">timing</span> accuracy. For healthy seniors, only HG training has been used so far where significant and positive <span class="hlt">timing</span> gains have been obtained. The goal of the study was to evaluate and compare the impact of both HG and EA robotic trainings on the improvement of seniors' movement <span class="hlt">timing</span>. Thirty-two healthy seniors (mean age 68 ± 4 years) learned to play a pinball-like game by triggering a one-degree-of-freedom hand robot at the proper <span class="hlt">time</span> to make a flipper move and direct a falling ball toward a randomly positioned target. During HG and EA robotic trainings, the subjects' <span class="hlt">timing</span> <span class="hlt">errors</span> were decreased and increased, respectively, based on the subjects' <span class="hlt">timing</span> <span class="hlt">errors</span> in initiating a movement. Results showed that only HG training benefited learning, but the improvement did not generalize to untrained targets. Also, age had no influence on the efficacy of HG robotic training, meaning that the oldest subjects did not benefit more from HG training than the younger senior subjects. Using HG to teach the correct <span class="hlt">timing</span> of movement seems to be a good strategy to improve motor learning for the elderly as for younger people. 
However, more studies are needed to assess the long-term impact of HG robotic training on improvement in movement <span class="hlt">timing</span>. PMID:25873868</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19890000284&hterms=Hilbert&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DHilbert','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19890000284&hterms=Hilbert&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DHilbert"><span id="translatedtitle"><span class="hlt">Absolute</span> Stability And Hyperstability In Hilbert Space</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wen, John Ting-Yung</p> <p>1989-01-01</p> <p>Theorems on stabilities of feedback control systems proved. 
Paper presents recent developments regarding theorems of <span class="hlt">absolute</span> stability and hyperstability of feedforward-and-feedback control system. Theorems applied in analysis of nonlinear, adaptive, and robust control. Extended to provide sufficient conditions for stability in system including nonlinear feedback subsystem and linear <span class="hlt">time</span>-invariant (LTI) feedforward subsystem, state space of which is Hilbert space, and input and output spaces having finite numbers of dimensions. (In case of <span class="hlt">absolute</span> stability, feedback subsystem memoryless and possibly <span class="hlt">time</span> varying. For hyperstability, feedback system dynamical system.)</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2010-title5-vol3/pdf/CFR-2010-title5-vol3-sec1605-22.pdf','CFR'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2010-title5-vol3/pdf/CFR-2010-title5-vol3-sec1605-22.pdf"><span id="translatedtitle">5 CFR 1605.22 - Claims for correction of Board or TSP record keeper <span class="hlt">errors</span>; <span class="hlt">time</span> limitations.</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2010&page.go=Go">Code of Federal Regulations, 2010 CFR</a></p> <p></p> <p>2010-01-01</p> <p>... participant or beneficiary. (b) Board's or TSP record keeper's discovery of <span class="hlt">error</span>. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... correct it, but, in any event, must act promptly in doing so. 
(c) Participant's or beneficiary's...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/3217208','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/3217208"><span id="translatedtitle">Reconsideration of measurement of <span class="hlt">error</span> in human motor learning.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Crabtree, D A; Antrim, L R</p> <p>1988-10-01</p> <p>Human motor learning is often measured by <span class="hlt">error</span> scores. The convention of using mean <span class="hlt">absolute</span> <span class="hlt">error</span>, mean constant <span class="hlt">error</span>, and variable <span class="hlt">error</span> shows lack of desirable parsimony and interpretability. This paper provides the background of <span class="hlt">error</span> measurement and states criticisms of conventional methodology. A parsimonious model of <span class="hlt">error</span> analysis is provided, along with operationalized interpretations and implications for motor learning. 
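The three conventional scores named in this abstract have standard definitions: mean absolute error (overall accuracy), constant error (signed bias), and variable error (consistency). A minimal sketch, with invented example trial data:

```python
import statistics

def error_scores(signed_errors):
    """Conventional motor-learning error scores for a list of signed
    trial errors (e.g. ms early/late relative to a target)."""
    ae = statistics.mean(abs(e) for e in signed_errors)  # mean absolute error
    ce = statistics.mean(signed_errors)                  # constant error (bias)
    ve = statistics.pstdev(signed_errors)                # variable error (spread)
    return ae, ce, ve

# A performer who is consistently about 20 ms early: large AE and CE, small VE
trials = [-25, -18, -22, -15, -20]
ae, ce, ve = error_scores(trials)
```

Reporting CE and VE together separates bias from inconsistency, which a single AE score conflates; that conflation is part of the interpretability problem the abstract raises.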
Teaching, interpreting, and using <span class="hlt">error</span> scores in research may be simplified and facilitated with the model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://pubs.er.usgs.gov/publication/70024269','USGSPUBS'); return false;" href="http://pubs.er.usgs.gov/publication/70024269"><span id="translatedtitle"><span class="hlt">Absolute</span> <span class="hlt">timing</span> of sulfide and gold mineralization: A comparison of Re-Os molybdenite and Ar-Ar mica methods from the Tintina Gold Belt, Alaska</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Selby, D.; Creaser, R.A.; Hart, C.J.R.; Rombach, C.S.; Thompson, J.F.H.; Smith, M.T.; Bakke, A.A.; Goldfarb, R.J.</p> <p>2002-01-01</p> <p>New Re-Os molybdenite dates from two lode gold deposits of the Tintina Gold Belt, Alaska, provide direct <span class="hlt">timing</span> constraints for sulfide and gold mineralization. At Fort Knox, the Re-Os molybdenite date is identical to the U-Pb zircon age for the host intrusion, supporting an intrusive-related origin for the deposit. However, 40Ar/39Ar dates from hydrothermal and igneous mica are considerably younger. At the Pogo deposit, Re-Os molybdenite dates are also much older than 40Ar/39Ar dates from hydrothermal mica, but dissimilar to the age of local granites. These age relationships indicate that the Re-Os molybdenite method records the <span class="hlt">timing</span> of sulfide and gold mineralization, whereas much younger 40Ar/39Ar dates are affected by post-ore thermal events, slow cooling, and/or systemic analytical effects. 
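A Re-Os molybdenite model age of the kind used in this study follows from the decay law t = ln(1 + 187Os/187Re)/lambda. The sketch below uses the commonly cited decay constant lambda(187Re) of about 1.666e-11 per year; the measured ratio is invented to give an age broadly comparable to Tintina Gold Belt intrusions, not a value from the paper.

```python
import math

LAMBDA_RE187 = 1.666e-11  # 187Re decay constant in 1/yr (commonly used value)

def re_os_model_age_yr(os187_per_re187: float) -> float:
    """Model age in years from a measured radiogenic 187Os/187Re ratio,
    assuming no initial radiogenic Os: t = ln(1 + Os/Re) / lambda."""
    return math.log(1.0 + os187_per_re187) / LAMBDA_RE187

# Hypothetical ratio chosen to give an age near 90 Ma
age_ma = re_os_model_age_yr(1.5e-3) / 1e6
print(f"model age: {age_ma:.1f} Ma")
```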
The results of this study complement a growing body of evidence to indicate that the Re-Os chronometer in molybdenite can be an accurate and robust tool for establishing <span class="hlt">timing</span> relations in ore systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/21611867','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/21611867"><span id="translatedtitle"><span class="hlt">Absolute</span> neutrino mass measurements</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Wolf, Joachim</p> <p>2011-10-06</p> <p>The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0v2{beta}) searches, single {beta}-decay experiments provide a direct, model-independent way to determine the <span class="hlt">absolute</span> neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass of 2.2eV have been set by two experiments in Mainz and Troitsk, using tritium as beta emitter. The next generation tritium {beta}-experiment KATRIN is currently under construction in Karlsruhe/Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2eV. The investigation of a second isotope ({sup 187}Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2eV sensitivity is still in the R and D phase. 
This paper reviews the present status of neutrino-mass measurements with cosmological data, 0v2{beta} decay and single {beta}-decay.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19930041085&hterms=definition+time&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Ddefinition%2Btime','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19930041085&hterms=definition+time&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Ddefinition%2Btime"><span id="translatedtitle">New definitions of pointing stability - ac and dc effects. [constant and <span class="hlt">time</span>-dependent pointing <span class="hlt">error</span> effects on image sensor performance</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.</p> <p>1992-01-01</p> <p>For most imaging sensors, a constant (dc) pointing <span class="hlt">error</span> is unimportant (unless large), but <span class="hlt">time</span>-dependent (ac) <span class="hlt">errors</span> degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. 
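The rms separation described above can be sketched directly: the dc component is the mean line-of-sight offset, the ac component is the rms motion about that mean, and the two add in quadrature to the total rms. A minimal sketch with invented jitter numbers:

```python
import numpy as np

def dc_ac_split(los_history):
    """Split a line-of-sight error history into a dc offset (mean) and an ac
    jitter component (rms about the mean); total_rms**2 == dc**2 + ac**2."""
    x = np.asarray(los_history, dtype=float)
    dc = x.mean()                         # constant pointing offset
    ac = x.std()                          # rms motion about the mean
    total_rms = np.sqrt(np.mean(x ** 2))
    return dc, ac, total_rms

# Invented example: 5 urad constant offset plus 2 urad rms jitter
rng = np.random.default_rng(1)
motion = 5.0 + 2.0 * rng.standard_normal(100_000)
dc, ac, total = dc_ac_split(motion)
```

Only the ac part smears or distorts the image, so a stability specification can budget the dc and ac components separately, as the abstract proposes.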
The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1227359','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1227359"><span id="translatedtitle"><span class="hlt">Absolute</span> nuclear material assay using count distribution (LAMBDA) space</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.</p> <p>2015-12-01</p> <p>A method of <span class="hlt">absolute</span> nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an <span class="hlt">absolute</span> nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an <span class="hlt">absolute</span> nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous <span class="hlt">time</span>-evolving sequence of event-counts by spreading the fission chain distribution in <span class="hlt">time</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1055713','DOE-PATENT-XML'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1055713"><span id="translatedtitle"><span class="hlt">Absolute</span> nuclear material assay using count distribution (LAMBDA) space</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.</p> <p>2012-06-05</p> <p>A method of <span class="hlt">absolute</span> nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing 
an <span class="hlt">absolute</span> nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an <span class="hlt">absolute</span> nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous <span class="hlt">time</span>-evolving sequence of event-counts by spreading the fission chain distribution in <span class="hlt">time</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://pubs.er.usgs.gov/publication/70010730','USGSPUBS'); return false;" href="http://pubs.er.usgs.gov/publication/70010730"><span id="translatedtitle"><span class="hlt">Absolute</span> method of measuring magnetic susceptibility</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Thorpe, A.; Senftle, F.E.</p> <p>1959-01-01</p> <p>An <span class="hlt">absolute</span> method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an <span class="hlt">error</span> of less than 2% can be achieved. ©
1959 The American Institute of Physics.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=1991ITGRS..29..922U&link_type=ABSTRACT','NASAADS'); return false;" href="http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=1991ITGRS..29..922U&link_type=ABSTRACT"><span id="translatedtitle"><span class="hlt">Absolute</span> radiometric calibration of the CCRS SAR</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ulander, Lars M. H.; Hawkins, Robert K.; Livingstone, Charles E.; Lukowski, Tom I.</p> <p>1991-11-01</p> <p>Determining the radar scattering coefficients from SAR (synthetic aperture radar) image data requires <span class="hlt">absolute</span> radiometric calibration of the SAR system. The authors describe an internal calibration methodology for the airborne Canada Centre for Remote Sensing (CCRS) SAR system, based on radar theory, a detailed model of the radar system, and measurements of system parameters. The methodology is verified by analyzing external calibration data acquired over a 6-month period in 1988 by the C-band radar using HH polarization. The results indicate that the overall <span class="hlt">error</span> is +/- 0.8 dB (1-sigma) for incidence angles +/- 20 deg from antenna boresight. 
The dominant <span class="hlt">error</span> contributions are due to the antenna radome and uncertainties in the elevation angle relative to the antenna boresight.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2015APS..DPPJP2035B&link_type=ABSTRACT','NASAADS'); return false;" href="http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2015APS..DPPJP2035B&link_type=ABSTRACT"><span id="translatedtitle">First <span class="hlt">Absolutely</span> Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.</p> <p>2015-11-01</p> <p>An Ion Doppler Spectrometer (IDS) is used on MST for high <span class="hlt">time</span>-resolution passive and active measurements of impurity ion emission. <span class="hlt">Absolutely</span> calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range . A novel optical system was designed to <span class="hlt">absolutely</span> calibrate the IDS. The device uses an UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS with f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the <span class="hlt">absolute</span> Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. 
Previously, a <span class="hlt">time</span>-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central chord calibrations can be characterized with our <span class="hlt">absolute</span> calibration. Calibration <span class="hlt">errors</span> may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4903013','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4903013"><span id="translatedtitle">Color-coded prefilled medication syringes decrease <span class="hlt">time</span> to delivery and dosing <span class="hlt">errors</span> in simulated prehospital pediatric resuscitations: A randomized crossover trial☆, ☆</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Stevens, Allen D.; Hernandez, Caleb; Jones, Seth; Moreira, Maria E.; Blumen, Jason R.; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S.</p> <p>2016-01-01</p> <p>Background Medication dosing <span class="hlt">errors</span> remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing <span class="hlt">errors</span> resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. 
Methods We performed a prospective, block-randomized, cross-over study, where 10 full-<span class="hlt">time</span> paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded-syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Results Median <span class="hlt">time</span> to delivery of all doses for the intervention and control groups was 34 (95% CI: 28–39) seconds and 42 (95% CI: 36–51) seconds, respectively (difference = 9 [95% CI: 4–14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing <span class="hlt">errors</span>; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing <span class="hlt">errors</span> (difference = 39%, 95% CI: 13–61%). Conclusions A novel color-coded, prefilled syringe decreased <span class="hlt">time</span> to medication administration and significantly reduced critical dosing <span class="hlt">errors</span> by paramedics during simulated prehospital pediatric resuscitations. 
PMID:26247145</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=effect&pg=3&id=EJ1099263','ERIC'); return false;" href="http://eric.ed.gov/?q=effect&pg=3&id=EJ1099263"><span id="translatedtitle">Stimulus Probability Effects in <span class="hlt">Absolute</span> Identification</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Kent, Christopher; Lamberts, Koen</p> <p>2016-01-01</p> <p>This study investigated the effect of stimulus presentation probability on accuracy and response <span class="hlt">times</span> in an <span class="hlt">absolute</span> identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=serial+AND+position+AND+effect&id=EJ735377','ERIC'); return false;" href="http://eric.ed.gov/?q=serial+AND+position+AND+effect&id=EJ735377"><span id="translatedtitle"><span class="hlt">Absolute</span> Identification by Relative Judgment</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Stewart, Neil; Brown, Gordon D. A.; Chater, Nick</p> <p>2005-01-01</p> <p>In unidimensional <span class="hlt">absolute</span> identification tasks, participants identify stimuli that vary along a single dimension. Performance is surprisingly poor compared with discrimination of the same stimuli. Existing models assume that identification is achieved using long-term representations of <span class="hlt">absolute</span> magnitudes. 
The authors propose an alternative…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=VALUE+AND+ABSOLUTE&id=EJ765743','ERIC'); return false;" href="http://eric.ed.gov/?q=VALUE+AND+ABSOLUTE&id=EJ765743"><span id="translatedtitle">Be Resolute about <span class="hlt">Absolute</span> Value</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Kidd, Margaret L.</p> <p>2007-01-01</p> <p>This article explores how conceptualization of <span class="hlt">absolute</span> value can start long before it is introduced. The manner in which <span class="hlt">absolute</span> value is introduced to students in middle school has far-reaching consequences for their future mathematical understanding. It begins to lay the foundation for students' understanding of algebra, which can change…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.usgs.gov/pp/1774/','USGSPUBS'); return false;" href="https://pubs.usgs.gov/pp/1774/"><span id="translatedtitle">Field evaluation of the <span class="hlt">error</span> arising from inadequate <span class="hlt">time</span> averaging in the standard use of depth-integrating suspended-sediment samplers</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.</p> <p>2011-01-01</p> <p>Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. 
Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally <span class="hlt">time</span>-averaged data. Four sources of <span class="hlt">error</span> exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate <span class="hlt">time</span> averaging. The first two of these <span class="hlt">errors</span> arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of <span class="hlt">error</span>, the least understood source of <span class="hlt">error</span> arises from the fact that depth-integrating samplers collect only minimally <span class="hlt">time</span>-averaged data. To evaluate this fourth source of <span class="hlt">error</span>, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of <span class="hlt">time</span> averaging during standard field operation of depth-integrating samplers leads to an <span class="hlt">error</span> that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random <span class="hlt">error</span> arising from inadequate <span class="hlt">time</span> averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
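The time-averaging error quantified in this study can be mimicked with a toy simulation (an editor-added sketch with assumed parameter values, not the authors' field data): because instantaneous concentration fluctuates about its true mean, an estimate averaged over only a few seconds is much noisier than one averaged over a minute, with the standard error shrinking roughly as 1/sqrt(T) for uncorrelated fluctuations.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 100.0   # hypothetical mean suspended-sediment concentration, mg/L
sigma = 30.0        # assumed instantaneous fluctuation about that mean, mg/L
n_trials = 4000     # number of simulated sampler deployments

def rms_error(avg_seconds):
    """RMS error of concentration estimates averaged over avg_seconds,
    assuming one independent fluctuation per second (a simplification)."""
    samples = true_mean + sigma * rng.standard_normal((n_trials, avg_seconds))
    estimates = samples.mean(axis=1)
    return np.sqrt(np.mean((estimates - true_mean) ** 2))

err_short = rms_error(5)    # ~ a single quick depth-integrating pass
err_long = rms_error(60)    # ~ averaging over more than a minute
print(f"RMS error, 5 s averaging: {err_short:.1f} mg/L; 60 s: {err_long:.1f} mg/L")
```

Real concentration series are autocorrelated, so the gain from longer averaging is slower than 1/sqrt(T), but the qualitative conclusion — minimal averaging leaves a large random error — is the same.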
Averaging over <span class="hlt">time</span> scales >1 minute is the likely minimum duration required</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003MmSAI..74..884C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003MmSAI..74..884C"><span id="translatedtitle">The Carina Project: <span class="hlt">Absolute</span> and Relative Calibrations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Corsi, C. E.; Bono, G.; Walker, A. R.; Brocato, E.; Buonanno, R.; Caputo, F.; Castellani, M.; Castellani, V.; Dall'Ora, M.; Marconi, M.; Monelli, M.; Nonino, M.; Pulone, L.; Ripepi, V.; Smith, H. A.</p> <p></p> <p>We discuss the reduction strategy adopted to perform the relative and the <span class="hlt">absolute</span> calibration of the Wide Field Imager (WFI) available at the 2.2m ESO/MPI telescope and of the Mosaic Camera (MC) available at the 4m CTIO Blanco telescope. To properly constrain the occurrence of deceptive systematic <span class="hlt">errors</span> in the relative calibration we observed with each chip the same set of stars. Current photometry seems to suggest that the WFI shows a positional effect when moving from the top to the bottom of individual chips. Preliminary results based on an independent data set collected with the MC suggest that this camera is only marginally affected by the same problem. To perform the <span class="hlt">absolute</span> calibration we observed with each chip the same set of standard stars. The sample covers a wide color range and the accuracy both in the B and in the V-band appears to be of the order of a few hundredths of magnitude. 
Finally, we briefly outline the observing strategy to improve both relative and <span class="hlt">absolute</span> calibrations of mosaic CCD cameras.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014SPIE.9206E..0ER','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014SPIE.9206E..0ER"><span id="translatedtitle">Experimental results for <span class="hlt">absolute</span> cylindrical wavefront testing</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Reardon, Patrick J.; Alatawi, Ayshah</p> <p>2014-09-01</p> <p>Applications for Cylindrical and near-cylindrical surfaces are ever-increasing. However, fabrication of high quality cylindrical surfaces is limited by the difficulty of accurate and affordable metrology. <span class="hlt">Absolute</span> testing of such surfaces represents a challenge to the optical testing community as cylindrical reference wavefronts are difficult to produce. In this paper, preliminary results for a new method of <span class="hlt">absolute</span> testing of cylindrical wavefronts are presented. The method is based on the merging of the random ball test method with the fiber optic reference test. The random ball test assumes a large number of interferograms of a good quality sphere with <span class="hlt">errors</span> that are statistically distributed such that the average of the <span class="hlt">errors</span> goes to zero. The fiber optic reference test utilizes a specially processed optical fiber to provide a clean high quality reference wave from an incident line focus from the cylindrical wave under test. By taking measurements at different rotation and translations of the fiber, an analogous procedure can be employed to determine the quality of the converging cylindrical wavefront with high accuracy. 
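The statistical idea behind the random ball test can be sketched numerically (a schematic editor-added illustration, not the authors' procedure): each interferogram combines the fixed systematic error of the test setup with an independent, zero-mean artifact error, so averaging N randomized measurements suppresses the random part roughly as 1/sqrt(N) and leaves the systematic error.

```python
import numpy as np

rng = np.random.default_rng(2)
npx = 64       # pixels across a simulated wavefront map
n_meas = 100   # number of randomized artifact positions

# Fixed systematic error of the test setup (what the averaging recovers),
# in waves; the astigmatism-like shape is an arbitrary assumption.
x = np.linspace(-1.0, 1.0, npx)
reference_error = 0.05 * np.outer(x, x)

maps = []
for _ in range(n_meas):
    # Each measurement adds a random, zero-mean patch error (0.02 waves RMS).
    artifact_error = 0.02 * rng.standard_normal((npx, npx))
    maps.append(reference_error + artifact_error)

estimate = np.mean(maps, axis=0)      # random contributions average toward zero
residual = estimate - reference_error
print(f"residual RMS after averaging {n_meas} maps: {residual.std():.4f} waves")
```

With 100 maps the residual random contribution is about a tenth of its single-measurement value, which is why the method assumes statistically distributed artifact errors whose average tends to zero.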
This paper presents and discusses the results of recent tests of this method using a null optic formed by a COTS cylindrical lens and a free-form polished corrector element.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1997A%26A...319..881G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1997A%26A...319..881G"><span id="translatedtitle"><span class="hlt">Absolute</span> magnitudes and kinematics of barium stars.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gomez, A. E.; Luri, X.; Grenier, S.; Prevot, L.; Mennessier, M.
O.; Figueras, F.; Torra, J.</p> <p>1997-03-01</p> <p>The <span class="hlt">absolute</span> magnitude of barium stars has been obtained from kinematical data using a new algorithm based on the maximum-likelihood principle. The method allows a sample to be separated into groups characterized by different mean <span class="hlt">absolute</span> magnitudes, kinematics and z-scale heights. It also takes into account, simultaneously, the censorship in the sample and the <span class="hlt">errors</span> on the observables. The method has been applied to a sample of 318 barium stars. Four groups have been detected. Three of them show a kinematical behaviour corresponding to disk population stars. The fourth group contains stars with halo kinematics. The luminosities of the disk population groups span a large range. The intrinsically brightest one (M_v_=-1.5mag, σ_M_=0.5mag) seems to be an inhomogeneous group containing barium binaries as well as AGB single stars. The most numerous group (about 150 stars) has a mean <span class="hlt">absolute</span> magnitude corresponding to stars in the red giant branch (M_v_=0.9mag, σ_M_=0.8mag). The third group contains barium dwarfs; the obtained mean <span class="hlt">absolute</span> magnitude is characteristic of stars on the main sequence or on the subgiant branch (M_v_=3.3mag, σ_M_=0.5mag). The obtained mean luminosities as well as the kinematical results are compatible with an evolutionary link between barium dwarfs and classical barium giants. The highly luminous group is not linked with these last two groups.
More high-resolution spectroscopic data will be necessary in order to better discriminate between barium and non-barium stars.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19830054935&hterms=VALUE+ABSOLUTE&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3DVALUE%2BABSOLUTE','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19830054935&hterms=VALUE+ABSOLUTE&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3DVALUE%2BABSOLUTE"><span id="translatedtitle">The solar <span class="hlt">absolute</span> spectral irradiance 1150-3173 A - May 17, 1982</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Mount, G. H.; Rottman, G. J.</p> <p>1983-01-01</p> <p>The full-disk solar spectral irradiance in the spectral range 1150-3173 A was obtained from a rocket observation above White Sands Missile Range, NM, on May 17, 1982, half way in <span class="hlt">time</span> between solar maximum and solar minimum. Comparison with measurements made during solar maximum in 1980 indicate a large decrease in the <span class="hlt">absolute</span> solar irradiance at wavelengths below 1900 A to approximately solar minimum values. No change above 1900 A from solar maximum to this flight was observed to within the <span class="hlt">errors</span> of the measurements. Irradiance values lower than the Broadfoot results in the 2100-2500 A spectral range are found, but excellent agreement with Broadfoot between 2500 and 3173 A is found. 
The <span class="hlt">absolute</span> calibration of the instruments for this flight was accomplished at the National Bureau of Standards Synchrotron Radiation Facility which significantly improves calibration of solar measurements made in this spectral region.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19820006962&hterms=1606&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3D%2526%25231606','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19820006962&hterms=1606&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3D%2526%25231606"><span id="translatedtitle">Software <span class="hlt">error</span> detection</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Buechler, W.; Tucker, A. G.</p> <p>1981-01-01</p> <p>Several methods were employed to detect both the occurrence and source of <span class="hlt">errors</span> in the operational software of the AN/SLQ-32, a large embedded real <span class="hlt">time</span> electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size <span class="hlt">errors</span>. Finally, data are saved to provide information about the status of the system when an <span class="hlt">error</span> is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various <span class="hlt">errors</span> detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed.
These <span class="hlt">error</span> detection techniques were a major factor in the success of finding the primary cause of <span class="hlt">error</span> in 98% of over 500 system dumps.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3906118','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3906118"><span id="translatedtitle">Accounting for Sampling <span class="hlt">Error</span> When Inferring Population Synchrony from <span class="hlt">Time</span>-Series Data: A Bayesian State-Space Modelling Approach with Applications</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique</p> <p>2014-01-01</p> <p>Background Data collected to inform <span class="hlt">time</span> variations in natural population size are tainted by sampling <span class="hlt">error</span>. Ignoring sampling <span class="hlt">error</span> in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling <span class="hlt">errors</span> are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling <span class="hlt">error</span> is neglected. 
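The downward bias in the zero-lag correlation described above is easy to reproduce (an editor-added toy demonstration with assumed parameter values, not the authors' model): adding independent observation noise to two correlated population series attenuates the estimated synchrony below its true value.

```python
import numpy as np

rng = np.random.default_rng(3)
n_years = 5000  # long series, so sampling noise in the correlations is small

# Two populations driven by a shared environmental signal (a Moran effect).
shared = rng.standard_normal(n_years)
pop1 = shared + 0.5 * rng.standard_normal(n_years)
pop2 = shared + 0.5 * rng.standard_normal(n_years)
true_sync = np.corrcoef(pop1, pop2)[0, 1]   # true synchrony, ~0.8 here

# Field estimates: each census adds independent sampling error.
obs1 = pop1 + 1.0 * rng.standard_normal(n_years)
obs2 = pop2 + 1.0 * rng.standard_normal(n_years)
naive_sync = np.corrcoef(obs1, obs2)[0, 1]  # classical estimator, biased toward zero

print(f"true synchrony {true_sync:.2f}, naive estimate {naive_sync:.2f}")
```

The observation noise inflates the variances in the denominator of the correlation without contributing to the covariance, which is exactly why a state-space model that separates process and sampling variance recovers a less biased synchrony estimate.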
Second, we presented a state-space modelling approach that explicitly accounts for sampling <span class="hlt">error</span> when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging few replicates of population size estimates poorly performed at decreasing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.fda.gov/Drugs/DrugSafety/MedicationErrors/default.htm','NIH-MEDLINEPLUS'); return false;" href="http://www.fda.gov/Drugs/DrugSafety/MedicationErrors/default.htm"><span id="translatedtitle">Medication <span class="hlt">Errors</span></span></a></p> <p><a target="_blank" href="http://medlineplus.gov/">MedlinePlus</a></p> <p></p> <p></p> <p>... to reduce the risk of medication <span class="hlt">errors</span> to industry and others at FDA. Additionally, DMEPA prospectively reviews
List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://medlineplus.gov/medicationerrors.html','NIH-MEDLINEPLUS'); return false;" href="https://medlineplus.gov/medicationerrors.html"><span id="translatedtitle">Medication <span class="hlt">Errors</span></span></a></p> <p><a target="_blank" href="http://medlineplus.gov/">MedlinePlus</a></p> <p></p> <p></p> <p>Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent <span class="hlt">errors</span> by Knowing your medicines. Keep a list of the names of your ...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19720059878&hterms=rights+author&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Drights%2Bauthor','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19720059878&hterms=rights+author&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Drights%2Bauthor"><span id="translatedtitle">Singular perturbation of <span class="hlt">absolute</span> stability.</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Siljak, D. D.</p> <p>1972-01-01</p> <p>It was previously shown (author, 1969) that the regions of <span class="hlt">absolute</span> stability in the parameter space can be determined when the parameters appear on the right-hand side of the system equations, i.e., the regular case. Here, the effect on <span class="hlt">absolute</span> stability of a small parameter attached to higher derivatives in the equations (the singular case) is studied. 
The Lur'e-Postnikov class of nonlinear systems is considered.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/26737821','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/26737821"><span id="translatedtitle">Estimation of the reaction <span class="hlt">times</span> in tasks of varying difficulty from the phase coherence of the auditory steady-state response using the least <span class="hlt">absolute</span> shrinkage and selection operator analysis.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yokota, Yusuke; Igarashi, Yasuhiko; Okada, Masato; Naruse, Yasushi</p> <p>2015-01-01</p> <p>Quantitative estimation of the workload in the brain is an important factor for helping to predict the behavior of humans. The reaction <span class="hlt">time</span> when performing a difficult task is longer than that when performing an easy task. Thus, the reaction <span class="hlt">time</span> reflects the workload in the brain. In this study, we employed an N-back task in order to regulate the degree of difficulty of the tasks, and then estimated the reaction <span class="hlt">times</span> from the brain activity. The brain activity that we used to estimate the reaction <span class="hlt">time</span> was the auditory steady-state response (ASSR) evoked by a 40-Hz click sound. Fifteen healthy participants participated in the present study and magnetoencephalogram (MEG) responses were recorded using a 148-channel magnetometer system. The least <span class="hlt">absolute</span> shrinkage and selection operator (LASSO), which is a type of sparse modeling, was employed to estimate the reaction <span class="hlt">times</span> from the ASSR recorded by MEG. The LASSO showed higher estimation accuracy than the least squares method. This result indicates that LASSO overcame the over-fitting to the learning data. 
Furthermore, the LASSO selected channels not only in the parietal region, but also in the frontal and occipital regions. Since the ASSR is evoked by auditory stimuli, it is usually large in the parietal region. However, since LASSO also selected channels in regions outside the parietal region, this suggests that workload-related neural activity occurs in many brain regions. In the real world, it is more practical to use a wearable electroencephalography device with a limited number of channels than to use MEG. Therefore, determining which brain areas should be measured is essential. The channels selected by the sparse modeling method are informative for determining which brain areas to measure. PMID:26737821</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..1612126K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..1612126K"><span id="translatedtitle">Using residual stacking to mitigate site-specific <span class="hlt">errors</span> in order to improve the quality of GNSS-based coordinate <span class="hlt">time</span> series of CORS</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Knöpfler, Andreas; Mayer, Michael; Heck, Bernhard</p> <p>2014-05-01</p> <p>Within the last decades, positioning using GNSS (Global Navigation Satellite Systems; e.g., GPS) has become a standard tool in many (geo-) sciences. The positioning methods Precise Point Positioning and differential point positioning based on carrier phase observations have been developed for a broad variety of applications with different demands, for example on accuracy.
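The LASSO channel selection described in the Yokota et al. record above can be sketched with a plain cyclic coordinate-descent implementation. This is a minimal sketch under illustrative assumptions: the data shapes, the regularisation strength, and the helper name `lasso_cd` are not taken from the study.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO regression via cyclic coordinate descent with soft-thresholding.
    Columns of X (e.g. per-channel features) whose weight shrinks to exactly
    zero are effectively deselected."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # per-column squared norms
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: remove column j's current contribution
            r_j = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r_j
            # soft-threshold the univariate least-squares solution
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

With phase-coherence features as columns of X and reaction times as y, the nonzero entries of the returned weight vector indicate the selected channels; in practice the regularisation strength would be chosen by cross-validation.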
In high-precision applications, considerable effort has been invested to mitigate different <span class="hlt">error</span> sources: the products for satellite orbits and satellite clocks were improved; deviations of satellite and receiver antennas from an ideal antenna are modelled by <span class="hlt">absolute</span> calibration values; and the modelling of the ionosphere and the troposphere is updated year by year. Therefore, when data of CORS (continuously operating reference sites) equipped with geodetic hardware are processed with a sophisticated strategy, the latest products and models nowadays enable positioning accuracies at the low-mm level. Despite the considerable improvements achieved within GNSS data processing, a generally valid multipath model is still lacking, so site-specific multipath still represents a major <span class="hlt">error</span> source in precise GNSS positioning. Furthermore, the calibration information of receiving GNSS antennas, derived for instance by a robot or chamber calibration, is, strictly speaking, valid only for the location of the calibration; the calibrated antenna can show a slightly different behaviour at the CORS due to near-field multipath effects. One very promising strategy to mitigate multipath effects as well as imperfectly calibrated receiver antennas is to stack observation residuals over several days; the multipath-loaded residuals are then analysed, for example with respect to signal direction, to find and reduce systematic constituents. This presentation will give a short overview of existing stacking approaches.
In addition, first results of the stacking approach</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFMGC32C..03C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFMGC32C..03C"><span id="translatedtitle">Prospects for the Moon as an SI-Traceable <span class="hlt">Absolute</span> Spectroradiometric Standard for Satellite Remote Sensing</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cramer, C. E.; Stone, T. C.; Lykke, K.; Woodward, J. T.</p> <p>2015-12-01</p> <p>The Earth's Moon has many physical properties that make it suitable for use as a reference light source for radiometric calibration of remote sensing satellite instruments. Lunar calibration has been successfully applied to many imagers in orbit, including both MODIS instruments and NPP-VIIRS, using the USGS ROLO model to predict the reference exoatmospheric lunar irradiance. Sensor response trending was developed for SeaWIFS with a relative accuracy better than 0.1 % per year with lunar calibration techniques. However, the Moon rarely is used as an <span class="hlt">absolute</span> reference for on-orbit calibration, primarily due to uncertainties in the ROLO model <span class="hlt">absolute</span> scale of 5%-10%. But this limitation lies only with the models - the Moon itself is radiometrically stable, and development of a high-accuracy <span class="hlt">absolute</span> lunar reference is inherently feasible. A program has been undertaken by NIST to collect <span class="hlt">absolute</span> measurements of the lunar spectral irradiance with <span class="hlt">absolute</span> accuracy <1 % (k=2), traceable to SI radiometric units. Initial Moon observations were acquired from the Whipple Observatory on Mt. Hopkins, Arizona, elevation 2367 meters, with continuous spectral coverage from 380 nm to 1040 nm at ~3 nm resolution. 
The lunar spectrometer acquired calibration measurements several <span class="hlt">times</span> each observing night by pointing to a calibrated integrating sphere source. The lunar spectral irradiance at the top of the atmosphere was derived from a <span class="hlt">time</span> series of ground-based measurements by a Langley analysis that incorporated measured atmospheric conditions and ROLO model predictions for the change in irradiance resulting from the changing Sun-Moon-Observer geometry throughout each night. Two nights were selected for further study. An extensive <span class="hlt">error</span> analysis, which includes instrument calibration and atmospheric correction terms, shows a combined standard uncertainty under 1 % over most of the spectral range. Comparison of these two nights' spectral irradiance measurements with predictions</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JPCM...27u4016L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JPCM...27u4016L"><span id="translatedtitle">Mechanical temporal fluctuation induced distance and force systematic <span class="hlt">errors</span> in Casimir force experiments</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lamoreaux, Steve; Wong, Douglas</p> <p>2015-06-01</p> <p>The basic theory of temporal mechanical fluctuation induced systematic <span class="hlt">errors</span> in Casimir force experiments is developed and applications of this theory to several experiments is reviewed. 
This class of systematic <span class="hlt">error</span> enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness for which an exact result requires an electromagnetic mode analysis, <span class="hlt">time</span> dependent fluctuations can be treated exactly, assuming the fluctuation <span class="hlt">times</span> are much longer than the zero point and thermal fluctuation correlation <span class="hlt">times</span> of the electromagnetic field between the plates. An experimental method for measuring <span class="hlt">absolute</span> distance with high bandwidth is also described and measurement data presented.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1988ITIM...37..315K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1988ITIM...37..315K"><span id="translatedtitle">Determination of short-term <span class="hlt">error</span> caused by the reference clock in precision <span class="hlt">time</span>-interval measurement and generation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kalisz, Jozef</p> <p>1988-06-01</p> <p>A simple analysis based on the randomized clock cycle T(o) yields a useful formula on its variance in terms of the Allan variance. The short-term uncertainty of the measured or generated <span class="hlt">time</span> interval t is expressed by the standard deviation in an approximate form as a function of the Allan variance.
The estimates obtained are useful for determining the measurement uncertainty of <span class="hlt">time</span> intervals within the approximate range of 10 ms-100 s.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1989Metro..26...81S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1989Metro..26...81S"><span id="translatedtitle"><span class="hlt">Absolute</span> Radiometer for Reproducing the Solar Irradiance Unit</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sapritskii, V. I.; Pavlovich, M. N.</p> <p>1989-01-01</p> <p>A high-precision <span class="hlt">absolute</span> radiometer with a thermally stabilized cavity as receiving element has been designed for use in solar irradiance measurements. The State Special Standard of the Solar Irradiance Unit has been built on the basis of the developed <span class="hlt">absolute</span> radiometer. The Standard also includes the sun tracking system and the system for automatic thermal stabilization and information processing, comprising a built-in microcalculator which calculates the irradiance according to the input program. During metrological certification of the Standard, main <span class="hlt">error</span> sources have been analysed and the non-excluded systematic and accidental <span class="hlt">errors</span> of the irradiance-unit realization have been determined. The total <span class="hlt">error</span> of the Standard does not exceed 0.3%. Beginning in 1984 the Standard has been taking part in a comparison with the Å 212 pyrheliometer and other Soviet and foreign standards. In 1986 it took part in the international comparison of <span class="hlt">absolute</span> radiometers and standard pyrheliometers of socialist countries. 
The results of the comparisons proved the high metrological quality of this Standard based on an <span class="hlt">absolute</span> radiometer.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/15007544','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/15007544"><span id="translatedtitle">Early-<span class="hlt">time</span> observations of gamma-ray burst <span class="hlt">error</span> boxes with the Livermore optical transient imaging system</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Williams, G G</p> <p>2000-08-01</p> <p>Despite the enormous wealth of gamma-ray burst (GRB) data collected over the past several years the physical mechanism which causes these extremely powerful phenomena is still unknown. Simultaneous and early <span class="hlt">time</span> optical observations of GRBs will likely make a great contribution to our understanding. LOTIS is a robotic wide field-of-view telescope dedicated to the search for prompt and early-<span class="hlt">time</span> optical afterglows from gamma-ray bursts. LOTIS began routine operations in October 1996 and since that <span class="hlt">time</span> has responded to over 145 gamma-ray burst triggers. Although LOTIS has not yet detected prompt optical emission from a GRB its upper limits have provided constraints on the theoretical emission mechanisms. Super-LOTIS, also a robotic wide field-of-view telescope, can detect emission 100 <span class="hlt">times</span> fainter than LOTIS is capable of detecting. Routine observations from Steward Observatory's Kitt Peak Station will begin in the immediate future. During engineering test runs under bright skies from the grounds of Lawrence Livermore National Laboratory Super-LOTIS provided its first upper limits on the early-<span class="hlt">time</span> optical afterglow of GRBs.
This dissertation provides a summary of the results from LOTIS and Super-LOTIS through the <span class="hlt">time</span> of writing. Plans for future studies with both systems are also presented.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/10116719','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/10116719"><span id="translatedtitle">Developing control charts to review and monitor medication <span class="hlt">errors</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ciminera, J L; Lease, M P</p> <p>1992-03-01</p> <p>There is a need to monitor reported medication <span class="hlt">errors</span> in a hospital setting. Because the quantity of <span class="hlt">errors</span> varies due to external reporting, quantifying the data is extremely difficult. Typically, these <span class="hlt">errors</span> are reviewed using classification systems that often have wide variations in the numbers per class per month. The authors recommend the use of control charts to review historical data and to monitor future data. The procedure they have adopted is a modification of schemes using <span class="hlt">absolute</span> (i.e., positive) values of successive differences to estimate the standard deviation when only single incidence values are available in <span class="hlt">time</span> rather than sample averages, and when many successive differences may be zero.
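The Ciminera and Lease record above describes estimating the process standard deviation from absolute successive differences (moving ranges) of single monthly values. A minimal sketch of the standard individuals/moving-range calculation follows; the authors' specific modification for runs of zero differences is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def moving_range_limits(counts):
    """Individuals control chart: estimate sigma from the mean absolute
    successive difference (moving range) and return (LCL, center, UCL)."""
    x = np.asarray(counts, dtype=float)
    mr = np.abs(np.diff(x))        # absolute successive differences
    sigma_hat = mr.mean() / 1.128  # d2 constant for subgroups of size 2
    center = x.mean()
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat
```

Monthly error counts falling outside (LCL, UCL) would then flag months worth reviewing rather than routine reporting noise.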
PMID:10116719</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26095906','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26095906"><span id="translatedtitle">Computing reward-prediction <span class="hlt">error</span>: an integrated account of cortical <span class="hlt">timing</span> and basal-ganglia pathways for appetitive and aversive learning.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Morita, Kenji; Kawaguchi, Yasuo</p> <p>2015-08-01</p> <p>There are two prevailing notions regarding the involvement of the corticobasal ganglia system in value-based learning: (i) the direct and indirect pathways of the basal ganglia are crucial for appetitive and aversive learning, respectively, and (ii) the activity of midbrain dopamine neurons represents reward-prediction <span class="hlt">error</span>. Although (ii) constitutes a critical assumption of (i), it remains elusive how (ii) holds given (i), with the basal-ganglia influence on the dopamine neurons. Here we present a computational neural-circuit model that potentially resolves this issue. Based on the latest analyses of the heterogeneous corticostriatal neurons and connections, our model posits that the direct and indirect pathways, respectively, represent the values of upcoming and previous actions, and up-regulate and down-regulate the dopamine neurons via the basal-ganglia output nuclei. This explains how the difference between the upcoming and previous values, which constitutes the core of reward-prediction <span class="hlt">error</span>, is calculated. Simultaneously, it predicts that blockade of the direct/indirect pathway causes a negative/positive shift of reward-prediction <span class="hlt">error</span> and thereby impairs learning from positive/negative <span class="hlt">error</span>, i.e. appetitive/aversive learning. 
Through simulation of reward-reversal learning and punishment-avoidance learning, we show that our model could indeed account for the experimentally observed features that are suggested to support notion (i) and could also provide predictions on neural activity. We also present a behavioral prediction of our model, through simulation of inter-temporal choice, on how the balance between the two pathways relates to the subject's <span class="hlt">time</span> preference. These results indicate that our model, incorporating the heterogeneity of the cortical influence on the basal ganglia, is expected to provide a closed-circuit mechanistic understanding of appetitive/aversive learning. PMID:26095906</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000PhDT.......169W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000PhDT.......169W"><span id="translatedtitle">Early-<span class="hlt">time</span> Observations of Gamma-ray Burst <span class="hlt">Error</span> Boxes with the Livermore Optical Transient Imaging System</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Williams, George Grant</p> <p>2000-08-01</p> <p>Approximately three <span class="hlt">times</span> per day a bright flash of high energy radiation from the depths of the universe encounters the Earth. These gamma-ray bursts (GRBs) were discovered circa 1970 yet their origin remains a mystery. Traditional astronomical observations of GRBs are hindered by their transient nature. They have durations of only a few seconds and occur at random <span class="hlt">times</span> from unpredictable directions. In recent years, precise GRB localizations and rapid coordinate dissemination have permitted sensitive follow-up observations. These observations resulted in the identification of long wavelength counterparts within distant galaxies. Despite the wealth of data now available the physical mechanism which produces these extremely energetic phenomena is still unknown. In the near future, simultaneous and early-<span class="hlt">time</span> optical observations of GRBs will aid in constraining the theoretical models. The Livermore Optical Transient Imaging System (LOTIS) is an automated robotic wide field-of-view telescope dedicated to the search for prompt and early-<span class="hlt">time</span> optical emission from GRBs. Since routine operations began in October 1996 LOTIS has responded to over 145 GRB triggers.
LOTIS has not yet detected optical emission from a GRB but upper limits provided by the telescope constrain the theoretical emission mechanisms. Super-LOTIS, also a robotic wide field-of-view telescope, is 100 <span class="hlt">times</span> more sensitive than LOTIS. Routine observations from Steward Observatory's Kitt Peak Station will begin in the immediate future. During engineering test runs Super-LOTIS obtained its first upper limit on the early-<span class="hlt">time</span> optical afterglow of GRBs. An overview of the history and current state of GRBs is presented. Theoretical models are reviewed briefly. The LOTIS and Super-LOTIS hardware and operating procedures are discussed. A summary of the results from both LOTIS and Super-LOTIS and an interpretation of those results is presented. Plans for future studies with both systems are briefly stated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=20070032798&hterms=radiometric+calibration&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dradiometric%2Bcalibration','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=20070032798&hterms=radiometric+calibration&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dradiometric%2Bcalibration"><span id="translatedtitle"><span class="hlt">Absolute</span> Radiometric Calibration of EUNIS-06</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Thomas, R. J.; Rabin, D. M.; Kent, B. J.; Paustian, W.</p> <p>2007-01-01</p> <p>The Extreme-Ultraviolet Normal-Incidence Spectrometer (EUNIS) is a soundingrocket payload that obtains imaged high-resolution spectra of individual solar features, providing information about the Sun's corona and upper transition region. 
Shortly after its successful initial flight last year, a complete end-to-end calibration was carried out to determine the instrument's <span class="hlt">absolute</span> radiometric response over its longwave bandpass of 300-370 Å. The measurements were done at the Rutherford-Appleton Laboratory (RAL) in England, using the same vacuum facility and EUV radiation source used in the pre-flight calibrations of both SOHO/CDS and Hinode/EIS, as well as in three post-flight calibrations of our SERTS sounding rocket payload, the precursor to EUNIS. The unique radiation source provided by the Physikalisch-Technische Bundesanstalt (PTB) had been calibrated to an <span class="hlt">absolute</span> accuracy of 7% (1-sigma) at 12 wavelengths covering our bandpass directly against the Berlin electron storage ring BESSY, which is itself a primary radiometric source standard. Scans of the EUNIS aperture were made to determine the instrument's <span class="hlt">absolute</span> spectral sensitivity to ±25%, considering all sources of <span class="hlt">error</span>, and demonstrate that EUNIS-06 was the most sensitive solar EUV spectrometer yet flown. The results will be matched against prior calibrations which relied on combining measurements of individual optical components, and on comparisons with theoretically predicted 'insensitive' line ratios. Coordinated observations were made during the EUNIS-06 flight by SOHO/CDS and EIT that will allow re-calibrations of those instruments as well.
In addition, future EUNIS flights will provide similar calibration updates for TRACE, Hinode/EIS, and STEREO/SECCHI/EUVI.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ars.usda.gov/research/publications/publication/?seqNo115=300758','TEKTRAN'); return false;" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=300758"><span id="translatedtitle">An integrated <span class="hlt">error</span> estimation and lag-aware data assimilation scheme for real-<span class="hlt">time</span> flood forecasting</span></a></p> <p><a target="_blank" href="http://www.ars.usda.gov/services/TekTran.htm">Technology Transfer Automated Retrieval System (TEKTRAN)</a></p> <p>The performance of conventional filtering methods can be degraded by ignoring the <span class="hlt">time</span> lag between soil moisture and discharge response when discharge observations are assimilated into streamflow modelling. This has led to the ongoing development of more optimal ways to implement sequential data ass...</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/7023362','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/7023362"><span id="translatedtitle"><span class="hlt">Absolute</span> flux scale for radioastronomy</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Ivanov, V.P.; Stankevich, K.S.</p> <p>1986-07-01</p> <p>The authors propose and provide support for a new <span class="hlt">absolute</span> flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on <span class="hlt">absolute</span> measurements by the ''artificial moon'' method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards.
The major results which have emerged from this review of <span class="hlt">absolute</span> scales in radio astronomy are summarized.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24692025','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24692025"><span id="translatedtitle">Equilibrating <span class="hlt">errors</span>: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti</p> <p>2014-06-01</p> <p>Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of <span class="hlt">error</span>: <span class="hlt">time</span> delay bias <span class="hlt">error</span> and random <span class="hlt">error</span>. These <span class="hlt">errors</span> are particularly important for systems with relatively large <span class="hlt">time</span> delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays cause changes in both the delay bias and random <span class="hlt">errors</span>, with possibly strong effect on the estimates of system properties. Here, we investigated the properties of these <span class="hlt">errors</span> using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. 
We show that the effect of random <span class="hlt">error</span> on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the <span class="hlt">time</span> delay bias <span class="hlt">error</span>: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of <span class="hlt">time</span> delay bias <span class="hlt">error</span> and random <span class="hlt">error</span>, based on discovering, and then using, the window size at which the <span class="hlt">absolute</span> values of these <span class="hlt">errors</span> are equal and opposite, thus cancelling each other, allowing minimally biased measurement of neural coding.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1996AAS...188.0803D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1996AAS...188.0803D"><span id="translatedtitle"><span class="hlt">Absolute</span> Proper Motions of Southern Globular Clusters</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dinescu, D. I.; Girard, T. M.; van Altena, W. F.</p> <p>1996-05-01</p> <p>Our program involves the determination of <span class="hlt">absolute</span> proper motions with respect to galaxies for a sample of globular clusters situated in the southern sky. The plates cover a 6(deg) x 6(deg) area and are taken with the 51-cm double astrograph at Cesco Observatory in El Leoncito, Argentina. We have developed special methods to deal with the modelling <span class="hlt">error</span> of the plate transformation and we correct for magnitude equation using the cluster stars.
This careful astrometric treatment leads to accuracies of 0.5 to 1.0 mas/yr for the <span class="hlt">absolute</span> proper motion of each cluster, depending primarily on the number of measurable cluster stars, which in turn is related to the cluster's distance. Space velocities are then derived which, in association with metallicities, provide key information for the formation scenario of the Galaxy, i.e. accretion and/or dissipational collapse. Here we present results for NGC 1851, NGC 6752, NGC 6584, NGC 6362 and NGC 288.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFM.H24F..01S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFM.H24F..01S"><span id="translatedtitle"><span class="hlt">Absolute</span> Humidity and the Seasonality of Influenza (Invited)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.</p> <p>2010-12-01</p> <p>Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that <span class="hlt">absolute</span> humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low <span class="hlt">absolute</span> humidity levels during the prior weeks. We then use an epidemiological model, in which observed <span class="hlt">absolute</span> humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality.
The model results indicate that direct modulation of influenza transmissibility by <span class="hlt">absolute</span> humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that <span class="hlt">absolute</span> humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in <span class="hlt">absolute</span> humidity, are consistent with the general <span class="hlt">timing</span> of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, <span class="hlt">absolute</span> humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the <span class="hlt">timing</span> of pandemic influenza outbreaks is controlled by a combination of <span class="hlt">absolute</span> humidity conditions, levels of susceptibility and changes in population mixing and contact rates.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25273506','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25273506"><span id="translatedtitle">Preventing <span class="hlt">errors</span> in laterality.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie</p> <p>2015-04-01</p> <p>An <span class="hlt">error</span> in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. 
While different medical and surgical specialties have implemented protocols to help prevent such <span class="hlt">errors</span>, very few studies have been published that describe these <span class="hlt">errors</span> in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to correlate all detected laterality terms of the report with the images open in PACS and correct them before the report is finalized. The system was monitored each <span class="hlt">time</span> an <span class="hlt">error</span> in laterality was detected. The system detected 32 <span class="hlt">errors</span> in laterality over a 7-month period (rate of 0.0007 %), with CT having the highest <span class="hlt">error</span> detection rate of all modalities. Significantly more <span class="hlt">errors</span> were detected in male patients than in female patients.
In conclusion, our study demonstrated that with our system, laterality <span class="hlt">errors</span> can be detected and corrected prior to finalizing reports.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4556681','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4556681"><span id="translatedtitle">LEMming: A Linear <span class="hlt">Error</span> Model to Normalize Parallel Quantitative Real-<span class="hlt">Time</span> PCR (qPCR) Data as an Alternative to Reference Gene Based Methods</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Feuer, Ronny; Vlaic, Sebastian; Arlt, Janine; Sawodny, Oliver; Dahmen, Uta; Zanger, Ulrich M.; Thomas, Maria</p> <p>2015-01-01</p> <p>Background Gene expression analysis is an essential part of biological and medical investigations. Quantitative real-<span class="hlt">time</span> PCR (qPCR) is characterized by excellent sensitivity, dynamic range and reproducibility, and is still regarded as the gold standard for quantifying transcript abundance. Parallelization of qPCR, such as with the microfluidic Taqman Fluidigm Biomark Platform, enables evaluation of multiple transcripts in samples treated under various conditions. Despite advanced technologies, correct evaluation of the measurements remains challenging. The most widely used methods for evaluating or calculating gene expression data are geNorm and ΔΔCt. They rely on one or several stable reference genes (RGs) for normalization, thus potentially causing biased results. We therefore applied multivariable regression with a tailored <span class="hlt">error</span> model to overcome the necessity of stable RGs.
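The core assumption named in the LEMming abstract above — that mean Ct values within samples of similarly treated groups are equal — can be illustrated with a simple mean-centering normalization. This is only a loose sketch of that assumption, not the authors' actual multivariable-regression error model; the function name and interface are hypothetical.

```python
import numpy as np

def normalize_equal_group_means(ct, groups):
    """Shift each sample's Ct values so that per-sample mean Ct values
    agree within each treatment group (illustrative sketch only).

    ct:     2D array-like, shape (n_samples, n_genes), raw Ct values
    groups: sequence of group labels, one per sample
    """
    ct = np.asarray(ct, dtype=float)
    out = ct.copy()
    for g in set(groups):
        idx = [i for i, lbl in enumerate(groups) if lbl == g]
        sample_means = ct[idx].mean(axis=1)   # mean Ct of each sample
        target = sample_means.mean()          # common group-level mean
        # remove each sample's offset from the group mean; gene-to-gene
        # differences within a sample are preserved
        out[idx] -= (sample_means - target)[:, None]
    return out
```

After this step, all samples in a group share the same mean Ct, so remaining per-gene differences reflect expression rather than sample-wide offsets — the property the equal-means assumption is meant to deliver.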
Results We developed an RG-independent data normalization approach based on a tailored linear <span class="hlt">error</span> model for parallel qPCR data, called LEMming. It uses the assumption that the mean Ct values within samples of similarly treated groups are equal. Performance of LEMming was evaluated in three data sets with different stability patterns of RGs and compared to the results of geNorm normalization. Data set 1 showed that both methods gave similar results if stable RGs are available. Data set 2 included RGs which are stable according to geNorm criteria, but became differentially expressed in normalized data evaluated by a t-test. geNorm-normalized data showed an effect of a shifted mean per gene per condition whereas LEMming-normalized data did not. Comparing the decrease in standard deviation from the raw data achieved by geNorm and by LEMming, the latter was superior. In data set 3, stable RGs were available according to geNorm's calculated average expression stability and pairwise variation, but t-tests of raw data contradicted this. Normalization with RGs resulted in distorted data contradicting</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26183038','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26183038"><span id="translatedtitle"><span class="hlt">Errors</span> in neuroradiology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca</p> <p>2015-09-01</p> <p>Approximately 4 % of radiologic interpretations in daily practice contain <span class="hlt">errors</span>, and discrepancies are reported in 2-20 % of reports.
Fortunately, most of them are minor <span class="hlt">errors</span> or, if serious, are found and corrected promptly; diagnostic <span class="hlt">errors</span> become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. <span class="hlt">Errors</span> can be summarized into four main categories: observer <span class="hlt">errors</span>, <span class="hlt">errors</span> in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a <span class="hlt">timely</span> and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and early in the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies.
In order to minimize the possibility of <span class="hlt">error</span>, it is important to be aware of various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, communicate significant abnormal findings appropriately and in a <span class="hlt">timely</span> fashion directly with the treatment team.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014PhyA..407...15O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014PhyA..407...15O"><span id="translatedtitle">An <span class="hlt">absolute</span> measure for a key currency</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Oya, Shunsuke; Aihara, Kazuyuki; Hirata, Yoshito</p> <p></p> <p>It is generally considered that the US dollar and the euro are the key currencies in the world and in Europe, respectively. However, there is no <span class="hlt">absolute</span> general measure for a key currency. Here, we investigate the 24-hour periodicity of foreign exchange markets using a recurrence plot, and define an <span class="hlt">absolute</span> measure for a key currency based on the strength of the periodicity. Moreover, we analyze the <span class="hlt">time</span> evolution of this measure. 
The results show that the credibility of the US dollar has not decreased significantly since the Lehman shock, when Lehman Brothers went bankrupt and shook the economic markets, and has even increased relative to that of the euro and the Japanese yen.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=Absolutism&pg=3&id=EJ265369','ERIC'); return false;" href="http://eric.ed.gov/?q=Absolutism&pg=3&id=EJ265369"><span id="translatedtitle">Relativistic <span class="hlt">Absolutism</span> in Moral Education.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Vogt, W. Paul</p> <p>1982-01-01</p> <p>Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess <span class="hlt">absolute</span> rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19720027445&hterms=Phosphorus&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3DPhosphorus','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19720027445&hterms=Phosphorus&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3DPhosphorus"><span id="translatedtitle"><span class="hlt">Absolute</span> transition probabilities of phosphorus.</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Miller, M. H.; Roig, R. A.; Bengtson, R. D.</p> <p>1971-01-01</p> <p>Use of a gas-driven shock tube to measure the <span class="hlt">absolute</span> strengths of 21 P I lines and 126 P II lines (from 3300 to 6900 A).
Accuracy for prominent, isolated neutral and ionic lines is estimated to be 28 to 40% and 18 to 30%, respectively. The data and the corresponding theoretical predictions are examined for conformity with the sum rules.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/21385585','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/21385585"><span id="translatedtitle">Measurement of <span class="hlt">absolute</span> T cell receptor rearrangement diversity.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Baum, Paul D; Young, Jennifer J; McCune, Joseph M</p> <p>2011-05-31</p> <p>T cell receptor (TCR) diversity is critical for adaptive immunity. Existing methods for measuring such diversity are qualitative, expensive, and/or of uncertain accuracy. Here, we describe a method and associated reagents for estimating the <span class="hlt">absolute</span> number of unique TCR Vβ rearrangements present in a given number of cells or volume of blood. Compared to next generation sequencing, this method is rapid, reproducible, and affordable. Diversity of a sample is calculated based on three independent measurements of one Vβ-Jβ family of TCR rearrangements at a <span class="hlt">time</span>. The percentage of receptors using the given Vβ gene is determined by flow cytometric analysis of T cells stained with anti-Vβ family antibodies. The percentage of receptors using the Vβ gene in combination with the chosen Jβ gene is determined by quantitative PCR. Finally, the <span class="hlt">absolute</span> clonal diversity of the Vβ-Jβ family is determined with the AmpliCot method of DNA hybridization kinetics, by interpolation relative to PCR standards of known sequence diversity. These three component measurements are reproducible and linear.
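The three measurements described above combine into a total-diversity estimate by extrapolating from the one Vβ-Jβ family to the whole repertoire. The sketch below is a back-of-envelope version of that combination, not the published protocol: it treats the qPCR value as the fraction of Vβ receptors that also use the chosen Jβ, and assumes diversity scales proportionally with family size — both labeled assumptions, with all names hypothetical.

```python
def total_tcr_diversity(pct_vb_flow, pct_jb_given_vb_qpcr, family_diversity):
    """Extrapolate total TCR Vβ diversity from one Vβ-Jβ family.

    pct_vb_flow:          fraction of T cells using the Vβ gene (flow cytometry)
    pct_jb_given_vb_qpcr: fraction of those Vβ receptors also using the Jβ gene
                          (qPCR; treated as a conditional fraction -- an assumption)
    family_diversity:     unique rearrangements in the Vβ-Jβ family (AmpliCot)

    Assumes unique rearrangements are spread in proportion to receptor usage,
    a simplification of the published method.
    """
    family_fraction = pct_vb_flow * pct_jb_given_vb_qpcr  # family's share of all receptors
    return family_diversity / family_fraction
```

For example, a family holding 0.5% of receptors (5% Vβ usage × 10% Jβ usage) with 500 unique rearrangements would extrapolate to roughly 100,000 rearrangements repertoire-wide under these assumptions.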
Using titrations of known numbers of input cells, we show that the TCR diversity estimates obtained by this approach approximate expected values within a two-fold <span class="hlt">error</span>, have a coefficient of variation of 20%, and yield similar results when different Vβ-Jβ pairs are chosen. The ability to obtain accurate measurements of the total number of different TCR gene rearrangements in a cell sample should be useful for basic studies of the adaptive immune system as well as in clinical studies of conditions such as HIV disease, transplantation, aging, and congenital immunodeficiencies. PMID:21385585</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23214826','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23214826"><span id="translatedtitle">Tracking <span class="hlt">time</span>-varying causality and directionality of information flow using an <span class="hlt">error</span> reduction ratio test with applications to electroencephalography data.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhao, Yifan; Billings, Steve A; Wei, Hualiang; Sarrigiannis, Ptolemaios G</p> <p>2012-11-01</p> <p>This paper introduces an <span class="hlt">error</span> reduction ratio-causality (ERR-causality) test that can be used to detect and track causal relationships between two signals. In comparison to the traditional Granger method, one significant advantage of the new ERR-causality test is that it can effectively detect the <span class="hlt">time</span>-varying direction of linear or nonlinear causality between two signals without fitting a complete model. Another important advantage is that the ERR-causality test can detect both the direction of interactions and estimate the relative <span class="hlt">time</span> shift between the two signals. 
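The ERR-causality abstract above notes that the test also estimates the relative time shift between two signals. As a generic stand-in (plain cross-correlation, not the authors' error-reduction-ratio machinery; function name and interface are hypothetical), the simplest lag estimate looks like:

```python
import numpy as np

def estimate_time_shift(x, y, max_lag):
    """Return the lag (in samples) at which y best aligns with x,
    by maximizing the cross-correlation over candidate lags.
    Positive lag means y trails x. Generic sketch, not the ERR test."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)

    def corr_at(lag):
        # correlation of x[t] with y[t + lag]
        if lag >= 0:
            a, b = x[: len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[: len(y) + lag]
        return float(np.dot(a, b))

    return max(range(-max_lag, max_lag + 1), key=corr_at)
```

Unlike this stationary estimate, the ERR-causality test tracks a time-varying direction of interaction, which is what makes it suitable for nonstationary signals such as EEG during a seizure.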
Numerical examples are provided to illustrate the effectiveness of the new method together with the determination of the causality between electroencephalograph signals from different cortical sites for patients during an epileptic seizure.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19840057943&hterms=VALUE+ABSOLUTE&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DVALUE%2BABSOLUTE','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19840057943&hterms=VALUE+ABSOLUTE&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DVALUE%2BABSOLUTE"><span id="translatedtitle"><span class="hlt">Absolute</span> measurement of the extreme UV solar flux</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.</p> <p>1984-01-01</p> <p>A windowless rare-gas ionization chamber has been developed to measure the <span class="hlt">absolute</span> value of the solar extreme UV flux in the 50-575-A region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable <span class="hlt">absolute</span> detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net <span class="hlt">error</span> of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument, other <span class="hlt">errors</span> such as multiple ionization, photoelectron collection, and extrapolation to the zero atmospheric optical depth being small in comparison. For the day of the flight, Aug.
10, 1982, the solar irradiance (50-575 A), normalized to unit solar distance, was found to be (5.71 ± 0.42) × 10¹⁰ photons cm⁻² s⁻¹.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007AGUFM.V41E..01S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007AGUFM.V41E..01S"><span id="translatedtitle">A Methodology for <span class="hlt">Absolute</span> Isotope Composition Measurement</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, J. J.; Lee, D.; Liang, W.</p> <p>2007-12-01</p> <p>The double spike technique is a well-defined method for isotope composition measurement by TIMS of samples that have a natural mass fractionation effect, but it remains a problem to define the isotope composition of the double spike itself. In this study, we modified the old double spike technique and found that we could use the modified technique to solve the "true" isotope composition of the double spike itself. According to the true isotope composition of the double spike, we can measure the <span class="hlt">absolute</span> isotope composition if the sample has a natural fractionation effect. A new vector analytical method has been developed in order to obtain the true isotopic composition of a 42Ca-48Ca double spike, and this is achieved by using two different sample-spike mixtures combined with the double spike and the natural Ca data. Because the natural sample, the two mixtures, and the spike should all lie on a single mixing line, we are able to constrain the true isotopic composition of our double spike using this new approach. This method can be used not only in the Ca system but also in the Ti, Cr, Fe, Ni, Zn, Mo, Ba and Pb systems. The <span class="hlt">absolute</span> double spike isotopic ratio is important, as it can save a lot of <span class="hlt">time</span> when checking different reference standards.
Especially for Pb, a radiogenic isotope system, the decay schemes embodied in three of its four naturally occurring isotopes make it difficult to obtain true isotopic ratios for <span class="hlt">absolute</span> dating.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4729390','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4729390"><span id="translatedtitle">SEMIPARAMETRIC <span class="hlt">TIME</span> TO EVENT MODELS IN THE PRESENCE OF <span class="hlt">ERROR</span>-PRONE, SELF-REPORTED OUTCOMES—WITH APPLICATION TO THE WOMEN’S HEALTH INITIATIVE</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gu, Xiangdong; Ma, Yunsheng; Balasubramanian, Raji</p> <p>2016-01-01</p> <p>The onset of several silent, chronic diseases such as diabetes can be detected only through diagnostic tests. Due to cost considerations, self-reported outcomes are routinely collected in lieu of expensive diagnostic tests in large-scale prospective investigations such as the Women’s Health Initiative. However, self-reported outcomes are subject to imperfect sensitivity and specificity. Using a semiparametric likelihood-based approach, we present <span class="hlt">time</span> to event models to estimate the association of one or more covariates with an <span class="hlt">error</span>-prone, self-reported outcome. We present simulation studies to assess the effect of <span class="hlt">error</span> in self-reported outcomes with regard to bias in the estimation of the regression parameter of interest. We apply the proposed methods to prospective data from 152,830 women enrolled in the Women’s Health Initiative to evaluate the association of statin use with the risk of incident diabetes mellitus among postmenopausal women.
The current analysis is based on follow-up through 2010, with a median duration of follow-up of 12.1 years. The methods proposed in this paper are readily implemented using our freely available R software package icensmis, which is available at the Comprehensive R Archive Network (CRAN) website. PMID:26834908</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PMB....57.5841S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PMB....57.5841S"><span id="translatedtitle">Regional <span class="hlt">absolute</span> conductivity reconstruction using projected current density in MREIT</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sajib, Saurav Z. K.; Kim, Hyung Joong; In Kwon, Oh; Woo, Eung Je</p> <p>2012-09-01</p> <p>Magnetic resonance electrical impedance tomography (MREIT) is a non-invasive technique for imaging the internal conductivity distribution in tissue within an MRI scanner, utilizing the magnetic flux density, which is introduced when a current is injected into the tissue from external electrodes. This magnetic flux alters the MRI signal, so that appropriate reconstruction can provide a map of the additional z-component of the magnetic field (Bz) as well as the internal current density distribution that created it. To extract the internal electrical properties of the subject, including the conductivity and/or the current density distribution, MREIT techniques use the relationship between the external injection current and the z-component of the magnetic flux density B = (Bx, By, Bz). 
The tissue studied typically contains defective regions, regions with a low MRI signal and/or low MRI signal-to-noise-ratio, due to the low density of nuclear magnetic resonance spins, short T2 or T*2 relaxation <span class="hlt">times</span>, as well as regions with very low electrical conductivity, through which very little current traverses. These defective regions provide noisy Bz data, which can severely degrade the overall reconstructed conductivity distribution. Injecting two independent currents through surface electrodes, this paper proposes a new direct method to reconstruct a regional <span class="hlt">absolute</span> isotropic conductivity distribution in a region of interest (ROI) while avoiding the defective regions. First, the proposed method reconstructs the contrast of conductivity using the transversal J-substitution algorithm, which blocks the propagation of severe accumulated noise from the defective region to the ROI. Second, the proposed method reconstructs the regional projected current density using the relationships between the internal current density, which stems from a current injection on the surface, and the measured Bz data. Combining the contrast conductivity distribution in the entire imaging slice and</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JPhCS.633a2080F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JPhCS.633a2080F"><span id="translatedtitle">Mathematical Model for <span class="hlt">Absolute</span> Magnetic Measuring Systems in Industrial Applications</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fügenschuh, Armin; Fügenschuh, Marzena; Ludszuweit, Marina; Mojsic, Aleksandar; Sokół, Joanna</p> <p>2015-09-01</p> <p>Scales for measuring systems are either based on incremental or <span class="hlt">absolute</span> measuring methods. 
Incremental scales need to initialize a measurement cycle at a reference point. From there, the position is computed by counting increments of a periodic graduation. <span class="hlt">Absolute</span> methods do not need reference points, since the position can be read directly from the scale. The positions on the complete scales are encoded using two incremental tracks with different graduation. We present a new method for <span class="hlt">absolute</span> measurement using only one track for position encoding, down to the micrometre range. Instead of the common perpendicular magnetic areas, we use a pattern of trapezoidal magnetic areas to store more complex information. For positioning, we use the magnetic field, in which every position is characterized by a set of values measured by a Hall sensor array. We implement a method for reconstruction of <span class="hlt">absolute</span> positions from the set of unique measured values. We compare two patterns with respect to uniqueness, accuracy, stability and robustness of positioning. We discuss how stability and robustness are influenced by different <span class="hlt">errors</span> during the measurement in real applications and how those <span class="hlt">errors</span> can be compensated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2016PhRvA..94a3808D&link_type=ABSTRACT','NASAADS'); return false;" href="http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2016PhRvA..94a3808D&link_type=ABSTRACT"><span id="translatedtitle">Optomechanics for <span class="hlt">absolute</span> rotation detection</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Davuluri, Sankar</p> <p>2016-07-01</p> <p>In this article, we present an application of an optomechanical cavity for <span class="hlt">absolute</span> rotation detection.
The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the optomechanical cavity. The change in the cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to the frequency shift in the cavity mode is measured at the interferometer output to estimate the angular velocity of <span class="hlt">absolute</span> rotation. We derived an analytic expression to estimate the minimum detectable rotation rate in our scheme for a given optomechanical cavity. Temperature dependence of the rotation detection sensitivity is studied.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11262641','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11262641"><span id="translatedtitle">Moral <span class="hlt">absolutism</span> and ectopic pregnancy.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kaczor, C</p> <p>2001-02-01</p> <p>If one accepts a version of <span class="hlt">absolutism</span> that excludes the intentional killing of any innocent human person from conception to natural death, ectopic pregnancy poses vexing difficulties. Given that the embryonic life almost certainly will die anyway, how can one retain one's moral principle and yet adequately respond to a situation that gravely threatens the life of the mother and her future fertility? The four options of treatment most often discussed in the literature are non-intervention, salpingectomy (removal of tube with embryo), salpingostomy (removal of embryo alone), and use of methotrexate (MXT). In this essay, I review these four options and introduce a fifth (the milking technique). 
In order to assess these options in terms of the <span class="hlt">absolutism</span> mentioned, it will also be necessary to discuss various accounts of the intention/foresight distinction. I conclude that salpingectomy, salpingostomy, and the milking technique are compatible with absolutist presuppositions, but not the use of methotrexate.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li class="active"><span>19</span></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=20100014902&hterms=asp&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dasp','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=20100014902&hterms=asp&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dasp"><span id="translatedtitle">The <span class="hlt">Absolute</span> Spectrum Polarimeter (ASP)</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kogut, A.
J.</p> <p>2010-01-01</p> <p>The <span class="hlt">Absolute</span> Spectrum Polarimeter (ASP) is an Explorer-class mission to map the <span class="hlt">absolute</span> intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10^(-3) and Compton distortion y < 10^(-6). We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15831074','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15831074"><span id="translatedtitle">Classification images predict <span class="hlt">absolute</span> efficiency.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Murray, Richard F; Bennett, Patrick J; Sekuler, Allison B</p> <p>2005-02-24</p> <p>How well do classification images characterize human observers' strategies in perceptual tasks? We show mathematically that from the classification image of a noisy linear observer, it is possible to recover the observer's <span class="hlt">absolute</span> efficiency. If we could similarly predict human observers' performance from their classification images, this would suggest that the linear model that underlies use of the classification image method is adequate over the small range of stimuli typically encountered in a classification image experiment, and that a classification image captures most important aspects of human observers' performance over this range.
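For the simplest case behind the claim above — a linear observer with no internal noise discriminating two known signals in white Gaussian noise — absolute efficiency reduces to the squared cosine of the angle between the observer's template and the ideal one. A minimal sketch under that assumption (function names are ours, not from the paper):

```python
import numpy as np

def absolute_efficiency(template, ideal_template):
    """Absolute efficiency (d'_observer / d'_ideal)^2 of an internal-noise-free
    linear observer in white Gaussian noise, which reduces to the squared
    cosine of the angle between the observer's template and the ideal one."""
    w = np.asarray(template, dtype=float).ravel()
    s = np.asarray(ideal_template, dtype=float).ravel()
    cos = np.dot(w, s) / (np.linalg.norm(w) * np.linalg.norm(s))
    return cos ** 2

ideal = np.array([1.0, 2.0, -1.0, 0.5])
print(absolute_efficiency(ideal, ideal))                      # matched template: 1.0
print(absolute_efficiency(ideal + np.array([1, -1, 1, -1.0]), ideal))
```

Human data adding nonlinearities (e.g. the phase uncertainty the authors discuss) would depart from this idealization.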
In a contrast discrimination task and in a shape discrimination task, we found that observers' <span class="hlt">absolute</span> efficiencies were generally well predicted by their classification images, although consistently slightly (approximately 13%) higher than predicted. We consider whether a number of plausible nonlinearities can account for the slight underprediction, and of these we find that only a form of phase uncertainty can account for the discrepancy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3432865','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3432865"><span id="translatedtitle">Experimental Quantum <span class="hlt">Error</span> Detection</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi</p> <p>2012-01-01</p> <p>Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of <span class="hlt">time</span>-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum <span class="hlt">error</span> detection, an economical approach to reliably protecting a qubit against bit-flip <span class="hlt">errors</span>. Arbitrary unknown polarization states of single photons and entangled photons are converted into <span class="hlt">time</span> bins deterministically via a modified Franson interferometer.
Noise arising in both 10 m and 0.8 km fiber, which induces associated <span class="hlt">errors</span> on the reference frame of <span class="hlt">time</span> bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007AGUSM.A53C..02H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007AGUSM.A53C..02H"><span id="translatedtitle">Accurate real-<span class="hlt">time</span> ionospheric corrections as the key to extend the centimeter-<span class="hlt">error</span>-level GNSS navigation at continental scale (WARTK)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hernandez-Pajares, M.; Juan, J.; Sanz, J.; Aragon-Angel, A.</p> <p>2007-05-01</p> <p>The main focus of this presentation is to show the recent improvements in real-<span class="hlt">time</span> GNSS ionospheric determination extending the service area of the so-called "Wide Area Real <span class="hlt">Time</span> Kinematic" technique (WARTK), which allows centimeter-<span class="hlt">error</span>-level navigation up to hundreds of kilometers from the nearest GNSS reference site. Real-<span class="hlt">time</span> GNSS navigation with centimeters of <span class="hlt">error</span> has been feasible since the nineties thanks to the so-called "Real-<span class="hlt">Time</span> Kinematic" technique (RTK), by exactly solving the integer values of the double-differenced carrier phase ambiguities. This was possible thanks to dual-frequency carrier phase data acquired simultaneously with data from a close (less than 10-20 km) reference GNSS site, under the assumption of common atmospheric effects on the satellite signal.
This technique has been improved by different authors with the consideration of a network of reference sites. However, the differential ionospheric refraction has remained the main limiting factor in extending the distance over which the technique is applicable relative to the reference site. In this context the authors have been developing the Wide Area RTK technique (WARTK) in different works and projects since 1998, overcoming the limitations mentioned above. In this way RTK becomes applicable with the existing sparse (Wide Area) networks of reference GPS stations, separated by hundreds of kilometers. Such networks are presently deployed in the context of other projects, such as SBAS support, over Europe and North America (EGNOS and WAAS respectively) among other regions. In particular, WARTK is based on computing very accurate differential ionospheric corrections from a Wide Area network of permanent GNSS receivers, and providing them in real-<span class="hlt">time</span> to the users.
The key points addressed by the technique are accurate real-<span class="hlt">time</span> ionospheric modeling, combined with the corresponding geodetic model, by means of: a) a tomographic voxel model of the ionosphere</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3945107','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3945107"><span id="translatedtitle">Temporal Dynamics of Microbial Rhodopsin Fluorescence Reports <span class="hlt">Absolute</span> Membrane Voltage</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hou, Jennifer H.; Venkatachalam, Veena; Cohen, Adam E.</p> <p>2014-01-01</p> <p>Plasma membrane voltage is a fundamentally important property of a living cell; its value is tightly coupled to membrane transport, the dynamics of transmembrane proteins, and to intercellular communication. Accurate measurement of the membrane voltage could elucidate subtle changes in cellular physiology, but existing genetically encoded fluorescent voltage reporters are better at reporting relative changes than <span class="hlt">absolute</span> numbers. We developed an Archaerhodopsin-based fluorescent voltage sensor whose <span class="hlt">time</span>-domain response to a stepwise change in illumination encodes the <span class="hlt">absolute</span> membrane voltage. We validated this sensor in human embryonic kidney cells. Measurements were robust to variation in imaging parameters and in gene expression levels, and reported voltage with an <span class="hlt">absolute</span> accuracy of 10 mV.
With further improvements in membrane trafficking and signal amplitude, <span class="hlt">time</span>-domain encoding of <span class="hlt">absolute</span> voltage could be applied to investigate many important and previously intractable bioelectric phenomena. PMID:24507604</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://pubs.er.usgs.gov/publication/70023251','USGSPUBS'); return false;" href="http://pubs.er.usgs.gov/publication/70023251"><span id="translatedtitle">Using <span class="hlt">absolute</span> gravimeter data to determine vertical gravity gradients</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Robertson, D.S.</p> <p>2001-01-01</p> <p>The position versus <span class="hlt">time</span> data from a free-fall <span class="hlt">absolute</span> gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" <span class="hlt">errors</span> of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. 
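Stripped of the vertical-gradient and system-response terms described above, the per-drop least-squares core of a free-fall gravimeter reduction can be sketched as follows (a toy illustration with synthetic data, not the paper's combined multi-drop solver):

```python
import numpy as np

def fit_drop(t, x):
    """Least-squares fit of free-fall position data to
    x(t) = x0 + v0*t + 0.5*g*t^2, returning (x0, v0, g).
    The full solution described in the abstract additionally estimates a
    vertical-gradient term and exponentially decaying sinusoidal
    system-response coefficients, combining all drops of a site
    occupation into a single (iterative, nonlinear) solution."""
    t = np.asarray(t, dtype=float)
    design = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
    (x0, v0, g), *_ = np.linalg.lstsq(design, np.asarray(x, dtype=float), rcond=None)
    return x0, v0, g

# Synthetic noise-free 0.2 s drop with g = 9.81 m/s^2
t = np.linspace(0.0, 0.2, 200)
x = 0.001 + 0.01 * t + 0.5 * 9.81 * t**2
print(fit_drop(t, x))
```

With the gradient and system-response terms included, the equations become nonlinear in the parameters and must be solved iteratively, which is where the convergence difficulties noted above arise.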
Sparse matrix techniques are used to make the least-squares problem computationally tractable.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2009acse.book..955S&link_type=ABSTRACT','NASAADS'); return false;" href="http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2009acse.book..955S&link_type=ABSTRACT"><span id="translatedtitle"><span class="hlt">Absolute</span> Priority for a Vehicle in VANET</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shirani, Rostam; Hendessi, Faramarz; Montazeri, Mohammad Ali; Sheikh Zefreh, Mohammad</p> <p></p> <p>In today's world, traffic jams waste hundreds of hours of our life. This causes many researchers try to resolve the problem with the idea of Intelligent Transportation System. For some applications like a travelling ambulance, it is important to reduce delay even for a second. In this paper, we propose a completely infrastructure-less approach for finding shortest path and controlling traffic light to provide <span class="hlt">absolute</span> priority for an emergency vehicle. We use the idea of vehicular ad-hoc networking to reduce the imposed travelling <span class="hlt">time</span>. 
Then, we simulate our proposed protocol and compare it with a centrally controlled traffic light system.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/27189174','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/27189174"><span id="translatedtitle">Study design for non-recurring, <span class="hlt">time</span>-to-event outcomes in the presence of <span class="hlt">error</span>-prone diagnostic tests or self-reports.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gu, Xiangdong; Balasubramanian, Raji</p> <p>2016-09-30</p> <p>Sequentially administered, laboratory-based diagnostic tests or self-reported questionnaires are often used to determine the occurrence of a silent event. In this paper, we consider issues relevant in design of studies aimed at estimating the association of one or more covariates with a non-recurring, <span class="hlt">time</span>-to-event outcome that is observed using a repeatedly administered, <span class="hlt">error</span>-prone diagnostic procedure. The problem is motivated by the Women's Health Initiative, in which diabetes incidence among the approximately 160,000 women is obtained from annually collected self-reported data. For settings of imperfect diagnostic tests or self-reports with known sensitivity and specificity, we evaluate the effects of various factors on resulting power and sample size calculations and compare the relative efficiency of different study designs. The methods illustrated in this paper are readily implemented using our freely available R software package icensmis, which is available at the Comprehensive R Archive Network website. An important special case is that when diagnostic procedures are perfect, they result in interval-censored, <span class="hlt">time</span>-to-event outcomes. 
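As a minimal illustration of how known sensitivity and specificity enter such designs, the likelihood of an observed self-report sequence given a hypothesized event time can be written directly. This is a sketch in the spirit of the setup described, not code from the icensmis package:

```python
def sequence_likelihood(results, visit_times, event_time, sens, spec):
    """P(observed 0/1 test sequence | true event time) for a non-recurring
    event observed through an error-prone test: a visit at or after the
    event is positive with probability `sens`; a visit before the event
    is positive with probability 1 - `spec` (a false positive)."""
    p = 1.0
    for r, t in zip(results, visit_times):
        p_pos = sens if t >= event_time else 1.0 - spec
        p *= p_pos if r == 1 else 1.0 - p_pos
    return p

# With perfect tests (sens = spec = 1) the likelihood is 1 only for event
# times consistent with the interval censoring, and 0 otherwise.
print(sequence_likelihood([0, 0, 1], [1, 2, 3], 2.5, 1.0, 1.0))  # 1.0
print(sequence_likelihood([0, 0, 1], [1, 2, 3], 1.5, 1.0, 1.0))  # 0.0
```

Summing such likelihoods over candidate event intervals is what degrades information, and hence power, as sensitivity and specificity fall below one.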
The proposed methods are applicable for the design of studies in which a <span class="hlt">time</span>-to-event outcome is interval censored. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27189174</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013qec..book.....L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013qec..book.....L"><span id="translatedtitle">Quantum <span class="hlt">Error</span> Correction</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lidar, Daniel A.; Brun, Todd A.</p> <p>2013-09-01</p> <p>Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum <span class="hlt">error</span> correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum <span class="hlt">Error</span> Correction: 6. Operator quantum <span class="hlt">error</span> correction David Kribs and David Poulin; 7. Entanglement-assisted quantum <span class="hlt">error</span>-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-<span class="hlt">time</span> quantum <span class="hlt">error</span> correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum <span class="hlt">error</span> correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. 
Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum <span class="hlt">error</span> correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. <span class="hlt">Error</span> correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19790013324','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19790013324"><span id="translatedtitle">The AFGL <span class="hlt">absolute</span> gravity program</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hammond, J. A.; Iliff, R. L.</p> <p>1978-01-01</p> <p>A brief discussion of the AFGL's (Air Force Geophysics Laboratory) program in <span class="hlt">absolute</span> gravity is presented. Support of outside work and in-house studies relating to gravity instrumentation are discussed. A description of the current transportable system is included and the latest results are presented. These results show good agreement with measurements at the AFGL site by an Italian system. 
The accuracy obtained by the transportable apparatus is better than 0.1 μm/s² (10 μGal) and agreement with previous measurements is within the combined uncertainties of the measurements.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1287535','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1287535"><span id="translatedtitle">Familial Aggregation of <span class="hlt">Absolute</span> Pitch</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Baharloo, Siamak; Service, Susan K.; Risch, Neil; Gitschier, Jane; Freimer, Nelson B.</p> <p>2000-01-01</p> <p><span class="hlt">Absolute</span> pitch (AP) is a behavioral trait that is defined as the ability to identify the pitch of tones in the absence of a reference pitch. AP is an ideal phenotype for investigation of gene and environment interactions in the development of complex human behaviors. Individuals who score exceptionally well on formalized auditory tests of pitch perception are designated as “AP-1.” As described in this report, auditory testing of siblings of AP-1 probands and of a control sample indicates that AP-1 aggregates in families. The implications of this finding for the mapping of loci for AP-1 predisposition are discussed.
PMID:10924408</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090032008','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090032008"><span id="translatedtitle">Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination <span class="hlt">Error</span> Analysis</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl</p> <p>2009-01-01</p> <p>The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited <span class="hlt">time</span> period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers.
<span class="hlt">Absolute</span> and relative position and velocity <span class="hlt">error</span> histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise <span class="hlt">error</span> contributions over the definitive and predictive arcs and at discrete <span class="hlt">times</span> including the maneuver planning and execution <span class="hlt">times</span>. Details of the methodology, orbital characteristics, maneuver timeline, <span class="hlt">error</span> models, and <span class="hlt">error</span> sensitivities are provided.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4190128','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4190128"><span id="translatedtitle">Real-<span class="hlt">Time</span> Correction of Rigid-Body-Motion-Induced Phase <span class="hlt">Errors</span> for Diffusion-Weighted Steady State Free Precession Imaging</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>O’Halloran, R; Aksoy, M; Aboussouan, E; Peterson, E; Van, A; Bammer, R</p> <p>2014-01-01</p> <p>Purpose Diffusion contrast in diffusion-weighted steady state free precession MRI is generated through the constructive addition of signal from many coherence pathways. Motion-induced phase causes destructive interference which results in loss of signal magnitude and diffusion contrast. In this work, a 3D navigator-based real-<span class="hlt">time</span> correction of the rigid-body-motion-induced phase <span class="hlt">errors</span> is developed for diffusion-weighted steady state free precession MRI.
Methods The efficacy of the real-<span class="hlt">time</span> prospective correction method in preserving phase coherence of the steady-state is tested in 3D phantom experiments and 3D scans of healthy human subjects. Results In nearly all experiments, the signal magnitude in images obtained with proposed prospective correction was higher than the signal magnitude in images obtained with no correction. In the human subjects the mean magnitude signal in the data was up to 30 percent higher with prospective motion correction than without. Prospective correction never resulted in a decrease in mean signal magnitude in either the data or in the images. Conclusions The proposed prospective motion correction method is shown to preserve the phase coherence of the steady state in diffusion-weighted steady state free precession MRI, thus mitigating signal magnitude losses that would confound the desired diffusion contrast. PMID:24715414</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19770040149&hterms=Astronomers&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DAstronomers','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19770040149&hterms=Astronomers&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DAstronomers"><span id="translatedtitle">Analyses of atmospheric extinction data obtained by astronomers. I - A <span class="hlt">time</span>-trend analysis of data with internal accidental <span class="hlt">errors</span> obtained at four observatories</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Taylor, B. J.; Lucke, P. B.; Laulainen, N. S.</p> <p>1977-01-01</p> <p>Long-term <span class="hlt">time</span>-trend analysis was performed on astronomical atmospheric extinction data in wideband UBV and various narrow-band systems recorded at Cerro Tololo, Kitt Peak, Lick, and McDonald observatories. 
All of the data had to be transformed into uniform monochromatic extinction data before trend analysis could be performed. The paper describes the various reduction techniques employed. The <span class="hlt">time</span>-trend analysis was then carried out by the method of least squares. A special technique, called 'histogram shaping', was employed to adjust for the fact that the <span class="hlt">errors</span> of the reduced monochromatic extinction data were not essentially Gaussian. On the assumption that there are no compensatory background and local extinction changes, the best values obtained for extinction trends due to background aerosol changes during the years 1960 to 1972 are 0.006 + or - 0.013 (rms) and 0.009 + or - 0.009 (rms) stellar magnitudes per air mass per decade in the blue and yellow wavelength regions, respectively.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2635625','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2635625"><span id="translatedtitle">Upscaled CTAB-Based DNA Extraction and Real-<span class="hlt">Time</span> PCR Assays for Fusarium culmorum and F. graminearum DNA in Plant Material with Reduced Sampling <span class="hlt">Error</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Brandfass, Christoph; Karlovsky, Petr</p> <p>2008-01-01</p> <p>Fusarium graminearum Schwabe (Gibberella zeae Schwein. Petch.) and F. culmorum W.G. Smith are major mycotoxin producers in small-grain cereals afflicted with Fusarium head blight (FHB). Real-<span class="hlt">time</span> PCR (qPCR) is the method of choice for species-specific, quantitative estimation of fungal biomass in plant tissue. 
We demonstrated that increasing the amount of plant material used for DNA extraction to 0.5–1.0 g considerably reduced sampling <span class="hlt">error</span> and improved the reproducibility of DNA yield. The costs of DNA extraction at different scales and with different methods (commercial kits versus cetyltrimethylammonium bromide-based protocol) and qPCR systems (doubly labeled hybridization probes versus SYBR Green) were compared. A cost-effective protocol for the quantification of F. graminearum and F. culmorum DNA in wheat grain and maize stalk debris based on DNA extraction from 0.5–1.0 g material and real-<span class="hlt">time</span> PCR with SYBR Green fluorescence detection was developed. PMID:19330077</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19740020571','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19740020571"><span id="translatedtitle">An <span class="hlt">error</span> criterion for determining sampling rates in closed-loop control systems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Brecher, S. M.</p> <p>1972-01-01</p> <p>The determination of an <span class="hlt">error</span> criterion which will give a sampling rate for adequate performance of linear, <span class="hlt">time</span>-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the <span class="hlt">error</span> behavior, and the determination of an <span class="hlt">absolute</span> <span class="hlt">error</span> definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative <span class="hlt">error</span> criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. 
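The idea of an error criterion tied to the holding device can be illustrated with a toy zero-order-hold simulation: measure the worst-case reconstruction error of a held sinusoid as a function of the sampling interval, then pick the largest interval that meets an error bound. This is our construction for illustration, not the report's actual criterion:

```python
import numpy as np

def zoh_max_error(freq, T, t_end=1.0, n=20000):
    """Maximum absolute error when a zero-order hold reconstructs
    sin(2*pi*freq*t) from samples taken every T seconds (a simple
    stand-in for an absolute-error measure of the holding device)."""
    t = np.linspace(0.0, t_end, n, endpoint=False)
    held = np.sin(2 * np.pi * freq * (np.floor(t / T) * T))
    return float(np.max(np.abs(np.sin(2 * np.pi * freq * t) - held)))

def largest_interval(freq, max_err, candidates):
    """Largest candidate sampling interval whose hold error meets the bound:
    the 'first choice' of sampling rate for a given error criterion."""
    for T in sorted(candidates, reverse=True):
        if zoh_max_error(freq, T) <= max_err:
            return T
    return None

print(largest_interval(1.0, 0.1, [0.05, 0.02, 0.01, 0.005]))  # 0.01
```

For a first-order hold the same scan would be run with a linear extrapolation of the last two samples in place of the constant hold.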
The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23350305','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23350305"><span id="translatedtitle">Relational versus <span class="hlt">absolute</span> representation in categorization.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Edwards, Darren J; Pothos, Emmanuel M; Perlman, Amotz</p> <p>2012-01-01</p> <p>This study explores relational-like and <span class="hlt">absolute</span>-like representations in categorization. Although there is much evidence that categorization processes can involve information about both the particular physical properties of studied instances and abstract (relational) properties, there has been little work on the factors that lead to one kind of representation as opposed to the other. We tested 370 participants in 6 experiments, in which participants had to classify new items into predefined artificial categories. In 4 experiments, we observed a predominantly relational-like mode of classification, and in 2 experiments we observed a shift toward an <span class="hlt">absolute</span>-like mode of classification. These results suggest 3 factors that promote a relational-like mode of classification: fewer items per group, more training groups, and the presence of a <span class="hlt">time</span> delay. 
Overall, we propose that less information about the distributional properties of a category or weaker memory traces for the category exemplars (induced, e.g., by having smaller categories or a <span class="hlt">time</span> delay) can encourage relational-like categorization.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/27581485','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/27581485"><span id="translatedtitle">Transient <span class="hlt">absolute</span> robustness in stochastic biochemical networks.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Enciso, German A</p> <p>2016-08-01</p> <p><span class="hlt">Absolute</span> robustness allows biochemical networks to sustain a consistent steady-state output in the face of protein concentration variability from cell to cell. This property is structural and can be determined from the topology of the network alone regardless of rate parameters. An important question regarding these systems is the effect of discrete biochemical noise in the dynamical behaviour. In this paper, a variable freezing technique is developed to show that under mild hypotheses the corresponding stochastic system has a transiently robust behaviour. Specifically, after finite <span class="hlt">time</span> the distribution of the output approximates a Poisson distribution, centred around the deterministic mean. The approximation becomes increasingly accurate, and it holds for increasingly long finite <span class="hlt">times</span>, as the total protein concentrations grow to infinity. In particular, the stochastic system retains a transient, <span class="hlt">absolutely</span> robust behaviour corresponding to the deterministic case. 
This result contrasts with the long-term dynamics of the stochastic system, which eventually must undergo an extinction event that eliminates robustness and is completely different from the deterministic dynamics. The transiently robust behaviour may be sufficient to carry out many forms of robust signal transduction and cellular decision-making in cellular organisms. PMID:27581485</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20020012984','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20020012984"><span id="translatedtitle"><span class="hlt">Absolute</span> Density Calibration Cell for Laser Induced Fluorescence Erosion Rate Measurements</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Domonkos, Matthew T.; Stevens, Richard E.</p> <p>2001-01-01</p> <p>Flight qualification of ion thrusters typically requires testing on the order of 10,000 hours. Extensive knowledge of wear mechanisms and rates is necessary to establish design confidence prior to long duration tests. Consequently, real-<span class="hlt">time</span> erosion rate measurements offer the potential both to reduce development costs and to enhance knowledge of the dependency of component wear on operating conditions. Several previous studies have used laser-induced fluorescence (LIF) to measure real-<span class="hlt">time</span>, in situ erosion rates of ion thruster accelerator grids. Those studies provided only relative measurements of the erosion rate. In the present investigation, a molybdenum tube was resistively heated such that the evaporation rate yielded densities within the tube on the order of those expected from accelerator grid erosion. 
This work examines the suitability of the density cell as an <span class="hlt">absolute</span> calibration source for LIF measurements and evaluates its intrinsic <span class="hlt">error</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940010515','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940010515"><span id="translatedtitle">Compact disk <span class="hlt">error</span> measurements</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Howe, D.; Harriman, K.; Tehranchi, B.</p> <p>1993-01-01</p> <p>The objectives of this project are as follows: provide hardware and software that will perform simple, real-<span class="hlt">time</span>, high resolution (single-byte) measurement of the <span class="hlt">error</span> burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit <span class="hlt">error</span> flags) and soft decision (i.e., 2-bit <span class="hlt">error</span> flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output <span class="hlt">error</span> rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) <span class="hlt">error</span> statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs.
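The burst-and-gap measurement described in this record is, at its core, run-length encoding of a per-byte error-flag stream; a minimal sketch (the flag array below is synthetic, not photoCD data):

```python
def burst_gap_stats(flags):
    """Split a 0/1 per-byte error-flag stream into error-burst lengths
    (runs of 1s) and good-data gap lengths (runs of 0s)."""
    bursts, gaps = [], []
    cur, run = None, 0
    for f in flags:
        if f == cur:
            run += 1
            continue
        if cur == 1:
            bursts.append(run)
        elif cur == 0:
            gaps.append(run)
        cur, run = f, 1
    # flush the final run
    if cur == 1:
        bursts.append(run)
    elif cur == 0:
        gaps.append(run)
    return bursts, gaps

# e.g. a 2-byte burst, a 3-byte good-data gap, then a single-byte error
bursts, gaps = burst_gap_stats([1, 1, 0, 0, 0, 1])
```

Histograms of these run lengths are exactly the burst/gap statistics such a read-channel monitor would accumulate.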
If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PPN....46..157A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PPN....46..157A"><span id="translatedtitle">Sources of the systematic <span class="hlt">errors</span> in measurements of 214Po decay half-life <span class="hlt">time</span> variations at the Baksan deep underground experiments</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alexeyev, E. N.; Gavrilyuk, Yu. M.; Gangapshev, A. M.; Kazalov, V. V.; Kuzminov, V. V.; Panasenko, S.
I.; Ratkevich, S. S.</p> <p>2015-03-01</p> <p>Design changes to the Baksan low-background TAU-1 and TAU-2 set-ups, which improved the sensitivity of 214Po half-life (τ) measurements to 2.5 × 10⁻⁴, are described. Different possible sources of systematic <span class="hlt">errors</span> influencing the τ-value are studied. An annual variation of the 214Po half-life <span class="hlt">time</span> measurements, with an amplitude of A = (6.9 ± 3) × 10⁻⁴ and a phase of φ = 93 ± 10 days, was found in a sequence of week-collected τ-values obtained from the TAU-2 data sample with a total duration of 480 days. A 24-hour variation of the τ-value measurements, with an amplitude of A = (10.0 ± 2.6) × 10⁻⁴ and a phase of φ = 1 ± 0.5 hours, was found in a 1-hour-step τ-value sequence over the solar day formed from the same data sample. The 214Po half-life averaged over the 480 days is 163.45 ± 0.04 μs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2016JCAP...08..060V&link_type=ABSTRACT','NASAADS'); return false;" href="http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2016JCAP...08..060V&link_type=ABSTRACT"><span id="translatedtitle">Cosmology with negative <span class="hlt">absolute</span> temperatures</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony</p> <p>2016-08-01</p> <p>Negative <span class="hlt">absolute</span> temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950s (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] has rekindled interest in negative temperatures and hinted at a possibility of using NAT systems in the lab as dark energy analogues.
This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < ‑1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19730021662','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19730021662"><span id="translatedtitle">Apparatus for <span class="hlt">absolute</span> pressure measurement</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hecht, R. (Inventor)</p> <p>1969-01-01</p> <p>An <span class="hlt">absolute</span> pressure sensor (e.g., the diaphragm of a capacitance manometer) was subjected to a superimposed potential to effectively reduce the mechanical stiffness of the sensor. This substantially increases the sensitivity of the sensor and is particularly useful in vacuum gauges. An oscillating component of the superimposed potential induced vibrations of the sensor. The phase of these vibrations with respect to that of the oscillating component was monitored, and served to initiate an automatic adjustment of the static component of the superimposed potential, so as to bring the sensor into resonance at the frequency of the oscillating component. 
This establishes a selected sensitivity for the sensor, since a definite relationship exists between resonant frequency and sensitivity.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.T13A2965T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.T13A2965T"><span id="translatedtitle">The Application of Optimisation Methods to Constrain <span class="hlt">Absolute</span> Plate Motions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tetley, M. G.; Williams, S.; Hardy, S.; Müller, D.</p> <p>2015-12-01</p> <p>Plate tectonic reconstructions are an excellent tool for understanding the configuration and behaviour of continents through <span class="hlt">time</span> on both global and regional scales, and are relatively well understood back to ~200 Ma. However, many of these models represent only relative motions between continents, providing little information of <span class="hlt">absolute</span> tectonic motions and their relationship with the deep Earth. Significant issues exist in solving this problem, including how to combine constraints from multiple, diverse data into a unified model of <span class="hlt">absolute</span> plate motions; and how to address uncertainties both in the available data, and in the assumptions involved in this process (e.g. hotspot motion, true polar wander). In deep <span class="hlt">time</span> (pre-Pangea breakup), plate reconstructions rely more heavily on paleomagnetism, but these data often imply plate velocities much larger than those observed since the breakup of the supercontinent Pangea where plate velocities are constrained by the seafloor spreading record.
Here we present two complementary techniques to address these issues, applying parallelized numerical methods to quantitatively investigate <span class="hlt">absolute</span> plate motions through <span class="hlt">time</span>. Firstly, we develop a data-fit optimized global <span class="hlt">absolute</span> reference frame constrained by kinematic reconstruction data, hotspot-trail observations, and trench migration statistics. Secondly, we calculate optimized paleomagnetic data-derived apparent polar wander paths (APWPs) for both the Phanerozoic and Precambrian. Paths are generated from raw pole data with optimal spatial and temporal pole configurations calculated using all known uncertainties and quality criteria to produce velocity-optimized <span class="hlt">absolute</span> motion paths through deep <span class="hlt">time</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/7900888','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/7900888"><span id="translatedtitle">An ultrasonic system for measurement of <span class="hlt">absolute</span> myocardial thickness using a single transducer.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pitsillides, K F; Longhurst, J C</p> <p>1995-03-01</p> <p>We have developed an ultrasonic instrument that can measure <span class="hlt">absolute</span> regional myocardial wall motion throughout the cardiac cycle using a single epicardial piezoelectric transducer. The methods currently in place that utilize ultrasound to measure myocardial wall thickness are the transit-<span class="hlt">time</span> sonomicrometer (TTS) and, more recently, the Doppler echo displacement method. Both methods have inherent disadvantages.
To address the need for an instrument that can measure <span class="hlt">absolute</span> dimensions of the myocardial wall at any depth, an ultrasonic single-crystal sonomicrometer (SCS) system was developed. This system can identify and track the boundary of the endocardial muscle-blood interface. With this instrument, it is possible to obtain, from a single epicardial transducer, a measurement of myocardial wall motion that is calibrated in <span class="hlt">absolute</span> dimensional units. The operating principles of the proposed myocardial dimension measurement system are as follows. A short-duration ultrasonic burst with a frequency of 10 MHz is transmitted from the piezoelectric transducer. Reflected echoes are sampled at two distinct <span class="hlt">time</span> intervals to generate reference and interface sample volumes. During steady state, the two sample volumes are adjusted so that the reference volume remains entirely within the myocardium, whereas half of the interface sample volume is located within the myocardium. After amplification and filtering, the true root-mean-square values of both signals are compared and an <span class="hlt">error</span> signal is generated. A closed-loop circuit uses the integrated <span class="hlt">error</span> signal to continuously adjust the position of the two sample volumes. We have compared our system in vitro against a known signal and in vivo against the two-crystal TTS system during control, suppression (ischemia), and enhancement (isoproterenol) of myocardial function.
Results were obtained in vitro for accuracy (> 99%), signal linearity (r = 0.99), and frequency response to heart rates > 450 beats/min, and in vivo data were</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..17.7431K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..17.7431K"><span id="translatedtitle"><span class="hlt">Absolute</span> Plate Velocities from Seismic Anisotropy</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kreemer, Corné; Zheng, Lin; Gordon, Richard</p> <p>2015-04-01</p> <p>The orientation of seismic anisotropy inferred beneath plate interiors may provide a means to estimate the motions of the plate relative to the sub-asthenospheric mantle. Here we analyze two global sets of shear-wave splitting data, that of Kreemer [2009] and an updated and expanded data set, to estimate plate motions and to better understand the dispersion of the data, correlations in the <span class="hlt">errors</span>, and their relation to plate speed. We also explore the effect of using geologically current plate velocities (i.e., the MORVEL set of angular velocities [DeMets et al. 2010]) compared with geodetically current plate velocities (i.e., the GSRM v1.2 angular velocities [Kreemer et al. 2014]). We demonstrate that the <span class="hlt">errors</span> in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are correlated with the <span class="hlt">errors</span> of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. 
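The within-plate dispersions (σ) of splitting azimuths quoted in this record are circular statistics on axial data (an azimuth is defined only modulo 180°). One standard estimator, shown as an illustrative sketch rather than the authors' exact procedure:

```python
import math

def axial_dispersion_deg(azimuths_deg):
    """Mardia-style circular standard deviation for axial data:
    double the angles, average the unit vectors, halve the result."""
    doubled = [math.radians(2.0 * a) for a in azimuths_deg]
    c = sum(math.cos(t) for t in doubled) / len(doubled)
    s = sum(math.sin(t) for t in doubled) / len(doubled)
    r = min(math.hypot(c, s), 1.0)        # clamp floating-point rounding
    sd_doubled = math.sqrt(-2.0 * math.log(r)) if r > 0 else float("inf")
    return math.degrees(sd_doubled) / 2.0

# 170° and 10° are only 20° apart axially; the estimator handles the wrap
spread = axial_dispersion_deg([170.0, 0.0, 10.0])
```

Angle doubling is what lets the estimator treat 170° and 10° as near-parallel fast axes instead of nearly opposite directions.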
The SKS-MORVEL <span class="hlt">absolute</span> plate angular velocities (based on the Kreemer [2009] data set) are determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2° ) differs insignificantly from that for continental lithosphere (σ=21.6° ). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4° ) than for continental</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2016APS..APRB16007K&link_type=ABSTRACT','NASAADS'); return false;" href="http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2016APS..APRB16007K&link_type=ABSTRACT"><span id="translatedtitle"><span class="hlt">Absolute</span> Electron Extraction Efficiency of Liquid Xenon</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kamdin, Katayun; Mizrachi, Eli; Morad, James; Sorensen, Peter</p> <p>2016-03-01</p> <p>Dual phase liquid/gas xenon <span class="hlt">time</span> projection chambers (TPCs) currently set the world's most sensitive limits on weakly interacting massive particles (WIMPs), a favored dark matter candidate. These detectors rely on extracting electrons from liquid xenon into gaseous xenon, where they produce proportional scintillation. The proportional scintillation from the extracted electrons serves to internally amplify the WIMP signal; even a single extracted electron is detectable. 
Credible dark matter searches can proceed with electron extraction efficiency (EEE) lower than 100%. However, electrons systematically left at the liquid/gas boundary are a concern. Possible effects include spontaneous single or multi-electron proportional scintillation signals in the gas, or charging of the liquid/gas interface or detector materials. Understanding EEE is consequently a serious concern for this class of rare event search detectors. Previous EEE measurements have mostly been relative, not <span class="hlt">absolute</span>, assuming efficiency plateaus at 100%. I will present an <span class="hlt">absolute</span> EEE measurement with a small liquid/gas xenon TPC test bed located at Lawrence Berkeley National Laboratory.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..1714045W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..1714045W"><span id="translatedtitle">Global <span class="hlt">absolut</span> gravity reference system as replacement of IGSN 71</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wilmes, Herbert; Wziontek, Hartmut; Falk, Reinhard</p> <p>2015-04-01</p> <p>The determination of precise gravity field parameters is of great importance in a period in which earth sciences are achieving the necessary accuracy to monitor and document global change processes. This is the reason why experts from geodesy and metrology joined in a successful cooperation to make <span class="hlt">absolute</span> gravity observations traceable to SI quantities, to improve the metrological kilogram definition and to monitor mass movements and smallest height changes for geodetic and geophysical applications. The international gravity datum is still defined by the International Gravity Standardization Net adopted in 1971 (IGSN 71). 
The network is based upon pendulum and spring gravimeter observations taken in the 1950s and 60s, supported by the early free-fall <span class="hlt">absolute</span> gravimeters. Its gravity values agreed in every case to better than 0.1 mGal. Today, more than 100 <span class="hlt">absolute</span> gravimeters are in use worldwide. The series of repeated international comparisons confirms the traceability of <span class="hlt">absolute</span> gravity measurements to SI quantities and confirms the degree of equivalence of the gravimeters on the order of a few µGal. For applications in geosciences where, e.g., gravity changes over <span class="hlt">time</span> need to be analyzed, the temporal stability of an <span class="hlt">absolute</span> gravimeter is most important. Therefore, it is proposed to replace the IGSN 71 with an up-to-date gravity reference system based upon repeated <span class="hlt">absolute</span> gravimeter comparisons and a global network of well-controlled gravity reference stations.
The first effect is here called the standard-position effect (SPE); the latter is known as the <span class="hlt">time</span>-order <span class="hlt">error</span>. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström's sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St-Co, Co-St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St-Co than for Co-St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. 
Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26649954','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26649954"><span id="translatedtitle">[Diagnostic <span class="hlt">Errors</span> in Medicine].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Buser, Claudia; Bankova, Andriyana</p> <p>2015-12-01</p> <p>The recognition of diagnostic <span class="hlt">errors</span> in everyday practice can help improve patient safety. The most common diagnostic <span class="hlt">errors</span> are the cognitive <span class="hlt">errors</span>, followed by system-related <span class="hlt">errors</span> and no fault <span class="hlt">errors</span>. The cognitive <span class="hlt">errors</span> often result from mental shortcuts, known as heuristics. The rate of cognitive <span class="hlt">errors</span> can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic <span class="hlt">errors</span>. Diagnostic <span class="hlt">errors</span> occur more often in primary care in comparison to hospital settings. 
On the other hand, the inpatient <span class="hlt">errors</span> are more severe than the outpatient <span class="hlt">errors</span>. PMID:26649954</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19900043517&hterms=VALUE+ABSOLUTE&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DVALUE%2BABSOLUTE','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19900043517&hterms=VALUE+ABSOLUTE&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DVALUE%2BABSOLUTE"><span id="translatedtitle">Sounding rocket measurement of the <span class="hlt">absolute</span> solar EUV flux utilizing a silicon photodiode</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ogawa, H. S.; Mcmullin, D.; Judge, D. L.; Canfield, L. R.</p> <p>1990-01-01</p> <p>A newly developed stable and high quantum efficiency silicon photodiode was used to obtain an accurate measurement of the integrated <span class="hlt">absolute</span> magnitude of the solar extreme UV photon flux in the spectral region between 50 and 800 A. The adjusted daily 10.7-cm solar radio flux and sunspot number were 168.4 and 121, respectively. The unattenuated <span class="hlt">absolute</span> value of the solar EUV flux at 1 AU in the specified wavelength region was 6.81 x 10 to the 10th photons/sq cm per s.
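This record goes on to combine a 7 percent detector-efficiency error with instrumentation and analysis errors into a total of about 14 percent. Assuming the usual quadrature (root-sum-square) rule for independent relative errors, and taking a hypothetical 12 percent for the instrumentation term purely to reproduce the quoted total:

```python
import math

def rss(*relative_errors):
    """Root-sum-square combination of independent relative errors."""
    return math.sqrt(sum(e * e for e in relative_errors))

# 7% detector efficiency + hypothetical 12% instrumentation/analysis
total = rss(0.07, 0.12)   # roughly 0.14, i.e. the ~14 percent quoted
```

The quadrature assumption is ours; the abstract states only the component and total figures.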
Based on a nominal probable <span class="hlt">error</span> of 7 percent for National Institute of Standards and Technology detector efficiency measurements in the 50- to 500-A region (5 percent on longer wavelength measurements between 500 and 1216 A), and based on experimental <span class="hlt">errors</span> associated with the present rocket instrumentation and analysis, a conservative total <span class="hlt">error</span> estimate of about 14 percent is assigned to the <span class="hlt">absolute</span> integral solar flux obtained.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/18770842','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/18770842"><span id="translatedtitle">Flow rate calibration for <span class="hlt">absolute</span> cell counting rationale and design.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Walker, Clare; Barnett, David</p> <p>2006-05-01</p> <p>There is a need for <span class="hlt">absolute</span> leukocyte enumeration in the clinical setting, and accurate, reliable (and affordable) technology to determine <span class="hlt">absolute</span> leukocyte counts has been developed. Such technology includes single platform and dual platform approaches. Derivations of these counts commonly incorporate the addition of a known number of latex microsphere beads to a blood sample, although it has been suggested that the addition of beads to a sample may only be required to act as an internal quality control procedure for assessing the pipetting <span class="hlt">error</span>. This unit provides the technical details for undertaking flow rate calibration that obviates the need to add reference beads to each sample. It is envisaged that this report will provide the basis for subsequent clinical evaluations of this novel approach. 
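The bead-free rationale of the flow-rate calibration record above reduces to volumetric arithmetic: once the cytometer's volumetric flow rate is calibrated, the absolute concentration is counted events divided by analysed volume. A sketch with hypothetical numbers:

```python
def absolute_count_per_ul(events, flow_ul_per_min, acq_min):
    """Cells/µL from a flow-rate-calibrated instrument: counted events
    divided by the volume analysed during the acquisition."""
    analysed_volume_ul = flow_ul_per_min * acq_min
    return events / analysed_volume_ul

# 5200 gated events at 10 µL/min for 1 min of acquisition
cd4 = absolute_count_per_ul(5200, 10.0, 1.0)   # 520 cells/µL
```

Reference beads, if added at all, then serve only as an internal quality control on pipetting, as the record notes.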
PMID:18770842</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19910042706&hterms=censorship&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dcensorship','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19910042706&hterms=censorship&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dcensorship"><span id="translatedtitle"><span class="hlt">Absolute</span> magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ratnatunga, Kavan U.; Casertano, Stefano</p> <p>1991-01-01</p> <p>A new numerical algorithm is used to calibrate the <span class="hlt">absolute</span> magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic <span class="hlt">absolute</span> magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate <span class="hlt">error</span> estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. 
The procedure is described in general and applied to both real and simulated data.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4408737','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4408737"><span id="translatedtitle">Estimating the Population Distribution of Usual 24-Hour Sodium Excretion from <span class="hlt">Timed</span> Urine Void Specimens Using a Statistical Approach Accounting for Correlated Measurement <span class="hlt">Errors</span>1234</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wang, Chia-Yih; Carriquiry, Alicia L; Chen, Te-Ching; Loria, Catherine M; Pfeiffer, Christine M; Liu, Kiang; Sempos, Christopher T; Perrine, Cria G; Cogswell, Mary E</p> <p>2015-01-01</p> <p>Background: High US sodium intake and national reduction efforts necessitate developing a feasible and valid monitoring method across the distribution of low-to-high sodium intake. Objective: We examined a statistical approach using <span class="hlt">timed</span> urine voids to estimate the population distribution of usual 24-h sodium excretion. Methods: A sample of 407 adults, aged 18–39 y (54% female, 48% black), collected each void in a separate container for 24 h; 133 repeated the procedure 4–11 d later. Four <span class="hlt">timed</span> voids (morning, afternoon, evening, overnight) were selected from each 24-h collection. We developed gender-specific equations to calibrate total sodium excreted in each of the one-void (e.g., morning) and combined two-void (e.g., morning + afternoon) urines to 24-h sodium excretion. The calibrated sodium excretions were used to estimate the population distribution of usual 24-h sodium excretion. 
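The calibration equations described above map timed-void sodium to 24-h excretion; in their simplest form this is a least-squares regression. A sketch on entirely synthetic data (the intercept 400 mg and slope 2.5 are invented generating parameters, not the study's coefficients):

```python
import numpy as np

def fit_calibration(void_na, total_na):
    """Least-squares fit of total ≈ a + b * void; returns (a, b)."""
    A = np.column_stack([np.ones_like(void_na), void_na])
    (a, b), *_ = np.linalg.lstsq(A, total_na, rcond=None)
    return a, b

rng = np.random.default_rng(0)
void_na = rng.uniform(500.0, 1500.0, 200)   # timed-void sodium, mg (synthetic)
total_na = 400.0 + 2.5 * void_na + rng.normal(0.0, 100.0, 200)  # synthetic 24-h totals
a, b = fit_calibration(void_na, total_na)
```

The study's approach additionally models day-to-day variation and correlated measurement errors; this sketch shows only the calibration step.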
Participants were then randomly assigned to modeling (n = 160) or validation (n = 247) groups to examine the bias in estimated population percentiles. Results: Median bias in predicting selected percentiles (5th, 25th, 50th, 75th, 95th) of usual 24-h sodium excretion with one-void urines ranged from −367 to 284 mg (−7.7 to 12.2% of the observed usual excretions) for men and −604 to 486 mg (−14.6 to 23.7%) for women, and with two-void urines from −338 to 263 mg (−6.9 to 10.4%) and −166 to 153 mg (−4.1 to 8.1%), respectively. Four of the 6 two-void urine combinations produced no significant bias in predicting selected percentiles. Conclusions: Our approach to estimate the population usual 24-h sodium excretion, which uses calibrated <span class="hlt">timed</span>-void sodium to account for day-to-day variation and covariance between measurement <span class="hlt">errors</span>, produced percentile estimates with relatively low biases across low-to-high sodium excretions. This may provide a low-burden, low-cost alternative to 24-h collections in monitoring population sodium intake among healthy young adults and</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016OptEn..55f6115D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016OptEn..55f6115D"><span id="translatedtitle">Assessment of <span class="hlt">absolute</span> added correlative coding in optical intensity modulation and direct detection channels</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dong-Nhat, Nguyen; Elsherif, Mohamed A.; Malekmohammadi, Amin</p> <p>2016-06-01</p> <p>The performance of <span class="hlt">absolute</span> added correlative coding (AACC) modulation format with direct detection has been numerically and analytically reported, targeting metro data center interconnects. 
The focus lies on bit-<span class="hlt">error</span>-rate performance, noise contributions, spectral efficiency, and chromatic dispersion tolerance. The signal space model of AACC, where the average electrical and optical power expressions are derived for the first <span class="hlt">time</span>, is also delineated. The proposed modulation format was also compared with other well-known signaling formats, such as on-off keying (OOK) and four-level pulse-amplitude modulation, at the same bit rate in a directly modulated vertical-cavity surface-emitting laser-based transmission system. The comparison results show a clear advantage of AACC in achieving longer fiber delivery distances due to its higher dispersion tolerance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016RScI...87kE509B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016RScI...87kE509B"><span id="translatedtitle"><span class="hlt">Absolute</span> wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Baltzer, M. M.; Craig, D.; Den Hartog, D. J.; Nishizawa, T.; Nornberg, M. D.</p> <p>2016-11-01</p> <p>An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. <span class="hlt">Absolutely</span> calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic <span class="hlt">errors</span> in data analysis were assessed.
The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method, which used a <span class="hlt">time</span>-averaged measurement along a chord believed to have zero net Doppler shift.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3007289','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3007289"><span id="translatedtitle"><span class="hlt">Absolute</span> configuration of isovouacapenol C</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Fun, Hoong-Kun; Yodsaoue, Orapun; Karalai, Chatchanok; Chantrapromma, Suchada</p> <p>2010-01-01</p> <p>The title compound, C27H34O5 {systematic name: (4aR,5R,6R,6aS,7R,11aS,11bR)-4a,6-dihydroxy-4,4,7,11b-tetramethyl-1,2,3,4,4a,5,6,6a,7,11,11a,11b-dodecahydrophenanthro[3,2-b]furan-5-yl benzoate}, is a cassane furanoditerpene, which was isolated from the roots of Caesalpinia pulcherrima. The three cyclohexane rings are trans fused: two of these are in chair conformations with the third in a twisted half-chair conformation, whereas the furan ring is almost planar (r.m.s. deviation = 0.003 Å). An intramolecular C—H⋯O interaction generates an S(6) ring. The <span class="hlt">absolute</span> configurations of the stereogenic centres at positions 4a, 5, 6, 6a, 7, 11a and 11b are R, R, R, S, R, S and R, respectively. In the crystal, molecules are linked into infinite chains along [010] by O—H⋯O hydrogen bonds. C⋯O [3.306 (2)–3.347 (2) Å] short contacts and C—H⋯π interactions also occur.
PMID:21588364</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=20010125144&hterms=gps+deformation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dgps%2Bdeformation','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=20010125144&hterms=gps+deformation&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dgps%2Bdeformation"><span id="translatedtitle">Measuring Postglacial Rebound with GPS and <span class="hlt">Absolute</span> Gravity</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Larson, Kristine M.; vanDam, Tonie</p> <p>2000-01-01</p> <p>We compare vertical rates of deformation derived from continuous Global Positioning System (GPS) observations and episodic measurements of <span class="hlt">absolute</span> gravity. We concentrate on four sites in a region of North America experiencing postglacial rebound. The rates of uplift from gravity and GPS agree within one standard deviation for all sites. The GPS vertical deformation rates are significantly more precise than the gravity rates, primarily because of the denser temporal spacing provided by continuous GPS tracking. 
We conclude that continuous GPS observations are more cost efficient and provide more precise estimates of vertical deformation rates than campaign style gravity observations where systematic <span class="hlt">errors</span> are difficult to quantify.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/19213452','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/19213452"><span id="translatedtitle">[Medical device use <span class="hlt">errors</span>].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Friesdorf, Wolfgang; Marsolek, Ingo</p> <p>2008-01-01</p> <p>Medical devices define our everyday patient treatment processes.
But despite their beneficial effects, every use can also lead to harm. Use <span class="hlt">errors</span> are thus often attributed to human failure. Yet human <span class="hlt">errors</span> can never be eliminated entirely, especially in work processes as complex as those in medicine, which often involve <span class="hlt">time</span> pressure. We therefore need <span class="hlt">error</span>-tolerant work systems in which potential problems are identified and resolved as early as possible. In this context, human-factors engineering applies the TOP principle: technological before organisational and then person-related solutions. In everyday medical work, however, we find that <span class="hlt">error</span>-prone usability concepts can often only be counterbalanced by organisational or person-related measures, so human failure is effectively built in. In addition, many medical workplaces are a somewhat chaotic accumulation of individual devices with entirely different user-interaction concepts. What is lacking are not only holistic workplace concepts but holistic process and system concepts as well. These can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analysing and iteratively optimising the underlying treatment processes from both a technological and an organisational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model: a key to a safe and efficient healthcare system of the future.
PMID:19213452</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19720007018','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19720007018"><span id="translatedtitle">Sun compass <span class="hlt">error</span> model</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Blucker, T. J.; Ferry, W. W.</p> <p>1971-01-01</p> <p>An <span class="hlt">error</span> model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The <span class="hlt">errors</span> reported include a random <span class="hlt">error</span> resulting from tilt in leveling the sun compass, a random <span class="hlt">error</span> because of observer sighting inaccuracies, a bias <span class="hlt">error</span> because of mean tilt in compass leveling, a bias <span class="hlt">error</span> in the sun compass itself, and a bias <span class="hlt">error</span> because the device is leveled to the local terrain slope.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16729864','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16729864"><span id="translatedtitle"><span class="hlt">Errors</span> in clinical laboratories or <span class="hlt">errors</span> in laboratory medicine?</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Plebani, Mario</p> <p>2006-01-01</p> <p>Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. 
However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on <span class="hlt">errors</span> in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most <span class="hlt">errors</span> are due to pre-analytical factors (46-68.2% of total <span class="hlt">errors</span>), while a high <span class="hlt">error</span> rate (18.5-47% of total <span class="hlt">errors</span>) has also been found in the post-analytical phase. <span class="hlt">Errors</span> due to analytical problems have been significantly reduced over <span class="hlt">time</span>, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical <span class="hlt">errors</span> and advice on practical steps for measuring and reducing the risk of <span class="hlt">errors</span> are therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory <span class="hlt">errors</span>", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory <span class="hlt">error</span>" and a classification of <span class="hlt">errors</span> according to different criteria.
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of <span class="hlt">errors</span> and mistakes</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1981PhDT........23Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1981PhDT........23Z"><span id="translatedtitle">A Portable Apparatus for <span class="hlt">Absolute</span> Measurements of the Earth's Gravity.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zumberge, Mark Andrew</p> <p></p> <p>We have developed a new, portable apparatus for making <span class="hlt">absolute</span> measurements of the acceleration due to the earth's gravity. We use the method of interferometrically determining the acceleration of a freely falling corner-cube prism. The falling object is surrounded by a chamber which is driven vertically inside a fixed vacuum chamber. This falling chamber is servoed to track the falling corner-cube to shield it from drag due to background gas. In addition, the drag-free falling chamber removes the need for a magnetic release, shields the falling object from electrostatic forces, and provides a means of both gently arresting the falling object and quickly returning it to its start position, to allow rapid acquisition of data. A synthesized long-period isolation device reduces the noise due to seismic oscillations. A new type of Zeeman laser is used as the light source in the interferometer, and is compared with the wavelength of an iodine-stabilized laser. The <span class="hlt">times</span> of occurrence of 45 interference fringes are measured to within 0.2 nsec over a 20 cm drop and are fit to a quadratic by an on-line minicomputer. 150 drops can be made in ten minutes, resulting in a value of g having a precision of 3 to 6 parts in 10<sup>9</sup>.
Systematic <span class="hlt">errors</span> have been determined to be less than 5 parts in 10<sup>9</sup> through extensive tests. Three months of gravity data have been obtained with a reproducibility ranging from 5 to 10 parts in 10<sup>9</sup>. The apparatus has been designed to be easily portable. Field measurements are planned for the immediate future. An accuracy of 6 parts in 10<sup>9</sup> corresponds to a height sensitivity of 2 cm. Vertical motions in the earth's crust and tectonic density changes that may precede earthquakes are to be investigated using this apparatus.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2577482','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2577482"><span id="translatedtitle">Unforced <span class="hlt">errors</span> and <span class="hlt">error</span> reduction in tennis</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Brody, H</p> <p>2006-01-01</p> <p>Only at the highest level of tennis is the number of winners comparable to the number of unforced <span class="hlt">errors</span>. As the average player loses many more points due to unforced <span class="hlt">errors</span> than due to winners by an opponent, if the rate of unforced <span class="hlt">errors</span> can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced <span class="hlt">errors</span>.
PMID:16632568</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11675313','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11675313"><span id="translatedtitle"><span class="hlt">Error</span> in radiology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J</p> <p>2001-10-01</p> <p>The level of <span class="hlt">error</span> in radiology has been tabulated from articles on <span class="hlt">error</span> and on "double reporting" or "double reading". The level of <span class="hlt">error</span> varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major <span class="hlt">error</span>. The greatest reduction in <span class="hlt">error</span> rates will come from changes in systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1995navy.reptV....S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1995navy.reptV....S"><span id="translatedtitle">Instantaneous bit-<span class="hlt">error</span>-rate meter</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Slack, Robert A.</p> <p>1995-06-01</p> <p>An instantaneous bit <span class="hlt">error</span> rate meter provides a real-<span class="hlt">time</span> reading of the bit <span class="hlt">error</span> rate for digital communications data. Bit <span class="hlt">error</span> pulses are input into the meter and are first filtered in a buffer stage to provide input impedance matching and desensitization to pulse variations in amplitude, rise <span class="hlt">time</span> and pulse width. The bit <span class="hlt">error</span> pulses are transformed into trigger signals for a <span class="hlt">timing</span> pulse generator.
The <span class="hlt">timing</span> pulse generator generates <span class="hlt">timing</span> pulses for each transformed bit <span class="hlt">error</span> pulse, and is calibrated to generate <span class="hlt">timing</span> pulses having a preselected pulse width corresponding to the baud rate of the communications data. An integrator generates a voltage from the <span class="hlt">timing</span> pulses that is representative of the bit <span class="hlt">error</span> rate as a function of the data transmission rate. The integrated voltage is then displayed on a meter to indicate the bit <span class="hlt">error</span> rate.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://pubs.er.usgs.gov/publication/70025069','USGSPUBS'); return false;" href="http://pubs.er.usgs.gov/publication/70025069"><span id="translatedtitle"><span class="hlt">Absolute</span> irradiance of the Moon for on-orbit calibration</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Stone, T.C.; Kieffer, H.H.</p> <p>2002-01-01</p> <p>The recognized need for on-orbit calibration of remote sensing imaging instruments drives the ROLO project effort to characterize the Moon for use as an <span class="hlt">absolute</span> radiance source. For over 5 years the ground-based ROLO telescopes have acquired spatially-resolved lunar images in 23 VNIR (Moon diameter ≈500 pixels) and 9 SWIR (≈250 pixels) passbands at phase angles within ±90 degrees. A numerical model for lunar irradiance has been developed which fits hundreds of ROLO images in each band, corrected for atmospheric extinction and calibrated to <span class="hlt">absolute</span> radiance, then integrated to irradiance. The band-coupled extinction algorithm uses absorption spectra of several gases and aerosols derived from MODTRAN to fit <span class="hlt">time</span>-dependent component abundances to nightly observations of standard stars.
The <span class="hlt">absolute</span> radiance scale is based upon independent telescopic measurements of the star Vega. The fitting process yields uncertainties in lunar relative irradiance over small ranges of phase angle and the full range of lunar libration well under 0.5%. A larger source of uncertainty enters in the <span class="hlt">absolute</span> solar spectral irradiance, especially in the SWIR, where solar models disagree by up to 6%. Results of ROLO model direct comparisons to spacecraft observations demonstrate the ability of the technique to track sensor responsivity drifts to sub-percent precision. Intercomparisons among instruments provide key insights into both calibration issues and the <span class="hlt">absolute</span> scale for lunar irradiance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19860059673&hterms=ram&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dram','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19860059673&hterms=ram&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dram"><span id="translatedtitle">STS-9 Shuttle grow - Ram angle effect and <span class="hlt">absolute</span> intensities</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Swenson, G. R.; Mende, S. B.; Clifton, K. S.</p> <p>1986-01-01</p> <p>Visible imagery from Space Shuttle mission STS-9 (Spacelab 1) has been analyzed for the ram angle effect and the <span class="hlt">absolute</span> intensity of glow. The data are compared with earlier measurements and the anomalous high intensities at large ram angles are confirmed. <span class="hlt">Absolute</span> intensities of the ram glow on the shuttle tile, at 6563 A, are observed to be about 20 <span class="hlt">times</span> more intense than those measured on the AE-E spacecraft. 
Implications of these observations for an existing theory of glow involving NO2 are presented.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11067442','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11067442"><span id="translatedtitle">Improving medication administration <span class="hlt">error</span> reporting systems. Why do <span class="hlt">errors</span> occur?</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wakefield, B J; Wakefield, D S; Uden-Holman, T</p> <p>2000-01-01</p> <p>Monitoring medication administration <span class="hlt">errors</span> (MAE) is often included as part of the hospital's risk management program. While observation of actual medication administration is the most accurate way to identify <span class="hlt">errors</span>, hospitals typically rely on voluntary incident reporting processes. Although incident reporting systems are more economical than other methods of <span class="hlt">error</span> detection, incident reporting can also be a <span class="hlt">time</span>-consuming process depending on the complexity or "user-friendliness" of the reporting system. 
Accurate incident reporting systems are also dependent on the ability of the practitioner to: 1) recognize an <span class="hlt">error</span> has actually occurred; 2) believe the <span class="hlt">error</span> is significant enough to warrant reporting; and 3) overcome the embarrassment of having committed a MAE and the fear of punishment for reporting a mistake (either one's own or another's mistake).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ars.usda.gov/research/publications/publication/?seqNo115=318935','TEKTRAN'); return false;" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=318935"><span id="translatedtitle"><span class="hlt">Absolute</span> configurations of zingiberenols isolated from ginger (Zingiber officinale) rhizomes</span></a></p> <p><a target="_blank" href="http://www.ars.usda.gov/services/TekTran.htm">Technology Transfer Automated Retrieval System (TEKTRAN)</a></p> <p></p> <p></p> <p>The sesquiterpene alcohol zingiberenol, or 1,10-bisaboladien-3-ol, was isolated some <span class="hlt">time</span> ago from ginger, Zingiber officinale, rhizomes, but its <span class="hlt">absolute</span> configuration had not been determined. With three chiral centers present in the molecule, zingiberenol can exist in eight stereoisomeric forms. ...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=20060030502&hterms=Herring&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DHerring','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=20060030502&hterms=Herring&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DHerring"><span id="translatedtitle">Urey: to measure the <span class="hlt">absolute</span> age of Mars</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Randolph, J. 
E.; Plescia, J.; Bar-Cohen, Y.; Bartlett, P.; Bickler, D.; Carlson, R.; Carr, G.; Fong, M.; Gronroos, H.; Guske, P. J.; Herring, M.; Javadi, H.; Johnson, D. W.; Larson, T.; Malaviarachchi, K.; Sherrit, S.; Stride, S.; Trebi-Ollennu, A.; Warwick, R.</p> <p>2003-01-01</p> <p>UREY, a proposed NASA Mars Scout mission, will, for the first <span class="hlt">time</span>, measure the <span class="hlt">absolute</span> age of an identified igneous rock formation on Mars. By extension to relatively older and younger rock formations dated by remote sensing, these results will enable a new and better understanding of Martian geologic history.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=Happiness&pg=7&id=EJ804151','ERIC'); return false;" href="http://eric.ed.gov/?q=Happiness&pg=7&id=EJ804151"><span id="translatedtitle"><span class="hlt">Absolute</span> Income, Relative Income, and Happiness</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ball, Richard; Chernova, Kateryna</p> <p>2008-01-01</p> <p>This paper uses data from the World Values Survey to investigate how an individual's self-reported happiness is related to (i) the level of her income in <span class="hlt">absolute</span> terms, and (ii) the level of her income relative to other people in her country.
The main findings are that (i) both <span class="hlt">absolute</span> and relative income are positively and significantly…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ853800.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ853800.pdf"><span id="translatedtitle">Investigating <span class="hlt">Absolute</span> Value: A Real World Application</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Kidd, Margaret; Pagni, David</p> <p>2009-01-01</p> <p>Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of <span class="hlt">absolute</span> values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that <span class="hlt">absolute</span> value simply…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=RHS+AND+15&pg=4&id=EJ249107','ERIC'); return false;" href="http://eric.ed.gov/?q=RHS+AND+15&pg=4&id=EJ249107"><span id="translatedtitle">Preschoolers' Success at Coding <span class="hlt">Absolute</span> Size Values.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Russell, James</p> <p>1980-01-01</p> <p>Forty-five 2-year-old and forty-five 3-year-old children coded relative and <span class="hlt">absolute</span> sizes using 1.5-inch, 6-inch, and 18-inch cardboard squares. Results indicate that <span class="hlt">absolute</span> coding is possible for children of this age. 
(Author/RH)</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=standard+AND+deviation&pg=2&id=EJ1050985','ERIC'); return false;" href="http://eric.ed.gov/?q=standard+AND+deviation&pg=2&id=EJ1050985"><span id="translatedtitle">Introducing the Mean <span class="hlt">Absolute</span> Deviation "Effect" Size</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Gorard, Stephen</p> <p>2015-01-01</p> <p>This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean <span class="hlt">absolute</span> deviation as a measure of variation, as opposed to the more complex standard deviation. The mean <span class="hlt">absolute</span> deviation is easier to use and understand, and more tolerant of extreme…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1261596','DOE-PATENT-XML'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1261596"><span id="translatedtitle">Monolithically integrated <span class="hlt">absolute</span> frequency comb laser system</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Wanke, Michael C.</p> <p>2016-07-12</p> <p>Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an <span class="hlt">absolute</span> frequency without the need of a frequency-comb synthesizer. 
The monolithic, <span class="hlt">absolute</span> frequency comb can provide a THz frequency reference and a tool for high-resolution broadband spectroscopy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4490812','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4490812"><span id="translatedtitle">Estimating the <span class="hlt">absolute</span> wealth of households</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gerkey, Drew; Hadley, Craig</p> <p>2015-01-01</p> <p>Abstract Objective To estimate the <span class="hlt">absolute</span> wealth of households using data from demographic and health surveys. Methods We developed a new metric, the <span class="hlt">absolute</span> wealth estimate, based on the rank of each surveyed household according to its material assets and the assumed shape of the distribution of wealth among surveyed households. Using data from 156 demographic and health surveys in 66 countries, we calculated <span class="hlt">absolute</span> wealth estimates for households. We validated the method by comparing the proportion of households defined as poor using our estimates with published World Bank poverty headcounts. We also compared the accuracy of <span class="hlt">absolute</span> versus relative wealth estimates for the prediction of anthropometric measures. Findings The median <span class="hlt">absolute</span> wealth estimate of 1 403 186 households was 2056 international dollars per capita (interquartile range: 723–6103). The proportion of poor households based on <span class="hlt">absolute</span> wealth estimates was strongly correlated with World Bank estimates of populations living on less than 2.00 United States dollars per capita per day (R<sup>2</sup> = 0.84).
<span class="hlt">Absolute</span> wealth estimates were better predictors of anthropometric measures than relative wealth indexes. Conclusion <span class="hlt">Absolute</span> wealth estimates provide new opportunities for comparative research to assess the effects of economic resources on health and human capital, as well as the long-term health consequences of economic change and inequality. PMID:26170506</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=20060044287&hterms=metrology&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dmetrology','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=20060044287&hterms=metrology&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dmetrology"><span id="translatedtitle"><span class="hlt">Absolute</span> optical metrology : nanometers to kilometers</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.</p> <p>2005-01-01</p> <p>We provide an overview of the developments in the field of high-accuracy <span class="hlt">absolute</span> optical metrology with emphasis on space-based applications. 
Specific work on the Modulation Sideband Technology for <span class="hlt">Absolute</span> Ranging (MSTAR) sensor is described along with novel applications of the sensor.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19880033615&hterms=VALUE+ABSOLUTE&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3DVALUE%2BABSOLUTE','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19880033615&hterms=VALUE+ABSOLUTE&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3DVALUE%2BABSOLUTE"><span id="translatedtitle"><span class="hlt">Absolute</span> instability of the Gaussian wake profile</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hultgren, Lennart S.; Aggarwal, Arun K.</p> <p>1987-01-01</p> <p>Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local <span class="hlt">absolute</span> instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or <span class="hlt">absolute</span>, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. <span class="hlt">Absolute</span> instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of <span class="hlt">absolute</span> instability with decreasing wake Reynolds number. 
If backflow is not allowed, <span class="hlt">absolute</span> instability does not occur for wake Reynolds numbers smaller than about 38.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016Metro..53...27K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016Metro..53...27K"><span id="translatedtitle">On the effect of distortion and dispersion in fringe signal of the FG5 <span class="hlt">absolute</span> gravimeters</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Křen, Petr; Pálinkáš, Vojtech; Mašika, Pavel</p> <p>2016-02-01</p> <p>The knowledge of <span class="hlt">absolute</span> gravity acceleration at the level
of 1 × 10<sup>-9</sup> is needed in geosciences (e.g. for monitoring crustal deformations and mass transports) and in metrology for watt balance experiments related to the new SI definition of the unit of kilogram. The gravity reference, which results from the international comparisons held with the participation of numerous <span class="hlt">absolute</span> gravimeters, is significantly affected by qualities of instruments prevailing in the comparisons (i.e. at present, FG5 gravimeters). Therefore, it is necessary to thoroughly investigate all instrumental (particularly systematic) <span class="hlt">errors</span>. This paper deals with systematic <span class="hlt">errors</span> of the FG5#215 coming from the distorted fringe signal and from the electronic dispersion at several electronic components including cables. In order to investigate these effects, we developed a new experimental system for acquiring and analysing the data parallel to the FG5 built-in system. The new system based on the analogue-to-digital converter with digital waveform processing using the FFT swept band pass filter is developed and tested on the FG5#215 gravimeter equipped with a new fast analogue output. The system is characterized by a low <span class="hlt">timing</span> jitter, digital handling of the distorted swept signal with determination of zero-crossings for the fundamental frequency sweep and also for its harmonics and can be used for any gravimeter based on the laser interferometry. Comparison of the original FG5 system and the experimental systems is provided on g-values, residuals and additional measurements/models. 
Moreover, an advanced approach to the solution of the free-fall motion is presented, which takes into account a non-linear gravity change with height.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26682606','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26682606"><span id="translatedtitle">CONTROLLING <span class="hlt">ABSOLUTE</span> FREQUENCY OF FEEDBACK IN A SELF-CONTROLLED SITUATION ENHANCES MOTOR LEARNING.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tsai, Min-Jen; Jwo, Hank</p> <p>2015-12-01</p> <p>The guidance hypothesis suggested that excessive extrinsic feedback facilitates motor performance but blocks the processing of intrinsic information. The present study tested the tenet of the guidance hypothesis in self-controlled feedback by controlling the feedback frequency. The motor learning effect of limiting <span class="hlt">absolute</span> feedback frequency was examined. Thirty-six participants (25 men, 11 women; M age=25.1 yr., SD=2.2) practiced a hand-grip force control task on a dynamometer with the non-dominant hand with varying amounts of feedback. They were randomly assigned to: (a) Self-controlled, (b) Yoked with self-controlled, and (c) Limited self-controlled conditions. In acquisition, two-way analysis of variance indicated significantly lower <span class="hlt">absolute</span> <span class="hlt">error</span> in both the yoked and limited self-controlled groups than the self-controlled group. The effect size of <span class="hlt">absolute</span> <span class="hlt">error</span> between trials with feedback and without feedback in the limited self-controlled condition was larger than that of the self-controlled condition. 
In the retention and transfer tests, the Limited self-controlled feedback group had significantly lower <span class="hlt">absolute</span> <span class="hlt">error</span> than the other two groups. The results indicated an increased motor learning effect of limiting <span class="hlt">absolute</span> frequency of feedback in the self-controlled condition.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17049472','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17049472"><span id="translatedtitle">Human <span class="hlt">error</span> in recreational boating.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>McKnight, A James; Becker, Wayne W; Pettit, Anthony J; McKnight, A Scott</p> <p>2007-03-01</p> <p>Each year over 600 people die and more than 4000 are reported injured in recreational boating accidents. As with most other accidents, human <span class="hlt">error</span> is the major contributor. U.S. Coast Guard reports of 3358 accidents were analyzed to identify <span class="hlt">errors</span> in each of the boat types by which statistics are compiled: auxiliary (motor) sailboats, cabin motorboats, canoes and kayaks, house boats, personal watercraft, open motorboats, pontoon boats, row boats, sail-only boats. The individual <span class="hlt">errors</span> were grouped into categories on the basis of similarities in the behavior involved. Those presented here are the categories accounting for at least 5% of all <span class="hlt">errors</span> when summed across boat types. The most revealing and significant finding is the extent to which the <span class="hlt">errors</span> vary across types. 
Since boating is carried out with one or two types of boats for long periods of <span class="hlt">time</span>, effective accident prevention measures, including safety instruction, need to be geared to individual boat types.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/25284902','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/25284902"><span id="translatedtitle">Conditional Density Estimation in Measurement <span class="hlt">Error</span> Problems.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Xiao-Feng; Ye, Deping</p> <p>2015-01-01</p> <p>This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with <span class="hlt">error</span>. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive <span class="hlt">error</span> model, when the <span class="hlt">error</span> distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean <span class="hlt">absolute</span> <span class="hlt">error</span> from a "double asymptotic" view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. 
PMID:25284902</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AGUFM.A52D..02B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AGUFM.A52D..02B"><span id="translatedtitle">Tropical <span class="hlt">errors</span> and convection</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bechtold, P.; Bauer, P.; Engelen, R. J.</p> <p>2012-12-01</p> <p>Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical <span class="hlt">errors</span> and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these <span class="hlt">errors</span> is particularly difficult over tropical oceans due to sparse in situ upper-air data, more weight compared to the middle latitudes is given in the analysis to the underlying forecast model. Therefore, special attention is paid to available near-surface observations and to comparison with analysis from other Centers. There is a systematic lack of low-level wind convergence in the Inner Tropical Convergence Zone (ITCZ) in the IFS, leading to a spindown of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind <span class="hlt">errors</span> are the Indian Ocean with large interannual variations in forecast <span class="hlt">errors</span>, and the East Pacific with persistent systematic <span class="hlt">errors</span> that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation <span class="hlt">errors</span> inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). 
The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represents the convection sensitivity to environmental moisture, and the large-scale forcing due to the use of strong entrainment and a variable adjustment <span class="hlt">time</span>-scale. There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian Monsoon bias and for too</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20070022530','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20070022530"><span id="translatedtitle">Human <span class="hlt">Error</span>: A Concept Analysis</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hansen, Frederick D.</p> <p>2007-01-01</p> <p>Human <span class="hlt">error</span> is the subject of research in almost every industry and profession of our <span class="hlt">times</span>. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human <span class="hlt">error</span> is the same. For example, human <span class="hlt">error</span> is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human <span class="hlt">error</span>. 
The purpose of this article is to explore the specific concept of human <span class="hlt">error</span> using Concept Analysis as described by Walker and Avant (1995). The concept of human <span class="hlt">error</span> is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human <span class="hlt">error</span> are also discussed and a definition of human <span class="hlt">error</span> is offered.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26125394','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26125394"><span id="translatedtitle">Measurement of <span class="hlt">absolute</span> optical thickness of mask glass by wavelength-tuning Fourier analysis.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kim, Yangjin; Hbino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru</p> <p>2015-07-01</p> <p>Optical thickness is a fundamental characteristic of an optical component. A measurement method combining discrete Fourier-transform (DFT) analysis and a phase-shifting technique gives an appropriate value for the <span class="hlt">absolute</span> optical thickness of a transparent plate. However, there is a systematic <span class="hlt">error</span> caused by the nonlinearity of the phase-shifting technique. In this research the <span class="hlt">absolute</span> optical-thickness distribution of mask blank glass was measured using DFT and wavelength-tuning Fizeau interferometry without using sensitive phase-shifting techniques. The <span class="hlt">error</span> occurring during the DFT analysis was compensated for by using the unwrapping correlation. 
The experimental results indicated that the <span class="hlt">absolute</span> optical thickness of mask glass was measured with an accuracy of 5 nm.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27140578','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27140578"><span id="translatedtitle"><span class="hlt">Absolute</span> flatness testing of skip-flat interferometry by matrix analysis in polar coordinates.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Han, Zhi-Gang; Yin, Lu; Chen, Lei; Zhu, Ri-Hong</p> <p>2016-03-20</p> <p>A new method utilizing matrix analysis in polar coordinates has been presented for <span class="hlt">absolute</span> testing of skip-flat interferometry. The retrieval of the <span class="hlt">absolute</span> profile mainly includes three steps: (1) transform the wavefront maps of the two cavity measurements into data in polar coordinates; (2) retrieve the profile of the reflective flat in polar coordinates by matrix analysis; and (3) transform the profile of the reflective flat back into data in Cartesian coordinates and retrieve the profile of the sample. Simulation of synthetic surface data has been provided, showing the capability of the approach to achieve an accuracy of the order of 0.01 nm RMS. 
The <span class="hlt">absolute</span> profile can be retrieved by a set of closed mathematical formulas without polynomial fitting of wavefront maps or the iterative evaluation of an <span class="hlt">error</span> function, making the new method more efficient for <span class="hlt">absolute</span> testing.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19870024427&hterms=percent+error&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dpercent%2Berror','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19870024427&hterms=percent+error&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dpercent%2Berror"><span id="translatedtitle"><span class="hlt">Error</span> growth in operational ECMWF forecasts</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kalnay, E.; Dalcher, A.</p> <p>1985-01-01</p> <p>A parameterization scheme used at the European Centre for Medium Range Forecasting to model the average growth of the difference between forecasts on consecutive days was extended by including the effect of <span class="hlt">error</span> growth on forecast model deficiencies. <span class="hlt">Error</span> was defined as the difference between the forecast and analysis fields during the verification <span class="hlt">time</span>. Systematic and random <span class="hlt">errors</span> were considered separately in calculating the <span class="hlt">error</span> variance for a 10 day operational forecast. A good fit was obtained with measured forecast <span class="hlt">errors</span> and a satisfactory trend was achieved in the difference between forecasts. Fitting six parameters to forecast <span class="hlt">errors</span> and differences, performed separately for each wavenumber, revealed that the <span class="hlt">error</span> growth rate grew with wavenumber. 
The saturation <span class="hlt">error</span> decreased with the total wavenumber, and the limit of predictability, i.e., the time at which <span class="hlt">error</span> variance reaches 95 percent of saturation, decreased monotonically with the total wavenumber.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19910000565&hterms=covariance&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dcovariance','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19910000565&hterms=covariance&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dcovariance"><span id="translatedtitle">Relative-<span class="hlt">Error</span>-Covariance Algorithms</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bierman, Gerald J.; Wolff, Peter J.</p> <p>1991-01-01</p> <p>Two algorithms compute <span class="hlt">error</span> covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. 
Relative-<span class="hlt">error</span>-covariance concept applied to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct real-<span class="hlt">time</span> test of consistency of state estimates based upon recently acquired data.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=organic+AND+carbon&pg=5&id=EJ288694','ERIC'); return false;" href="http://eric.ed.gov/?q=organic+AND+carbon&pg=5&id=EJ288694"><span id="translatedtitle">A New Gimmick for Assigning <span class="hlt">Absolute</span> Configuration.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ayorinde, F. O.</p> <p>1983-01-01</p> <p>A five-step procedure is provided to help students in making the assignment of <span class="hlt">absolute</span> configuration less bothersome. Examples for both single (2-butanol) and multi-chiral carbon (3-chloro-2-butanol) molecules are included. (JN)</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=husserl&pg=3&id=EJ118696','ERIC'); return false;" href="http://eric.ed.gov/?q=husserl&pg=3&id=EJ118696"><span id="translatedtitle">The Simplicity Argument and <span class="hlt">Absolute</span> Morality</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Mijuskovic, Ben</p> <p>1975-01-01</p> <p>In this paper the author has maintained that there is a similarity of thought to be found in the writings of Cudworth, Emerson, and Husserl in his investigation of an <span class="hlt">absolute</span> system of morality. 
(Author/RK)</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19790004361','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19790004361"><span id="translatedtitle">Determination and <span class="hlt">error</span> analysis of emittance and spectral emittance measurements by remote sensing</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dejesusparada, N. (Principal Investigator); Kumar, R.</p> <p>1977-01-01</p> <p>The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation of the upper bound of <span class="hlt">absolute</span> <span class="hlt">error</span> of emittance was determined. It showed that the <span class="hlt">absolute</span> <span class="hlt">error</span> decreased with an increase in contact temperature, whereas, it increased with an increase in environmental integrated radiant flux density. Change in emittance had little effect on the <span class="hlt">absolute</span> <span class="hlt">error</span>. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals: 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/20718314','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/20718314"><span id="translatedtitle">Continuous quantum <span class="hlt">error</span> correction by cooling</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Sarovar, Mohan; Milburn, G.J.</p> <p>2005-07-15</p> <p>We describe an implementation of quantum <span class="hlt">error</span> correction that operates continuously in <span class="hlt">time</span> and requires no active interventions such as measurements or gates. 
The mechanism for carrying away the entropy introduced by <span class="hlt">errors</span> is a cooling procedure. We evaluate the effectiveness of the scheme by simulation, and remark on its connections to some recently proposed <span class="hlt">error</span> prevention procedures.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/22364212','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/22364212"><span id="translatedtitle">TYPE Ia SUPERNOVA DISTANCE MODULUS BIAS AND DISPERSION FROM K-CORRECTION <span class="hlt">ERRORS</span>: A DIRECT MEASUREMENT USING LIGHT CURVE FITS TO OBSERVED SPECTRAL <span class="hlt">TIME</span> SERIES</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Saunders, C.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Kim, A. G.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Chotard, N.; Copin, Y.; Gangler, E.; and others</p> <p>2015-02-10</p> <p>We estimate systematic <span class="hlt">errors</span> due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. <span class="hlt">Errors</span> due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. 
We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001ESASP.464..355T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001ESASP.464..355T"><span id="translatedtitle">On the <span class="hlt">absolute</span> alignment of GONG images</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Toner, C. G.</p> <p>2001-01-01</p> <p>In order to combine data from the six instruments in the GONG network the alignment of all of the images must be known to a fairly high precision (~0°.1 for GONG Classic and ~0°.01 for GONG+). The relative orientation is obtained using the angular cross-correlation method described by (Toner & Harvey, 1998). To obtain the <span class="hlt">absolute</span> orientation the Project periodically records a day of drift scans, where the image of the Sun is allowed to drift across the CCD repeatedly throughout the day. These data are then analyzed to deduce the direction of Terrestrial East-West as a function of hour angle (i.e., <span class="hlt">time</span>) for that instrument. The transit of Mercury on Nov. 
15, 1999, which was recorded by three of the GONG instruments, provided an independent check on the current alignment procedures. Here we present a comparison of the alignment of GONG images as deduced from both drift scans and the Mercury transit for two GONG sites: Tucson (GONG+ camera) and Mauna Loa (GONG Classic camera). The agreement is within ~0°.01 for both cameras; however, the scatter is substantially larger for GONG Classic: ~0°.03 compared to ~0°.01 for GONG+.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://pubs.er.usgs.gov/publication/70023929','USGSPUBS'); return false;" href="http://pubs.er.usgs.gov/publication/70023929"><span id="translatedtitle">Landsat-7 ETM+ radiometric stability and <span class="hlt">absolute</span> calibration</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, P.</p> <p>2002-01-01</p> <p>Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band <span class="hlt">absolute</span> calibration to better than ±5%, and thermal band <span class="hlt">absolute</span> calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% (95% C.I.) change per year in the ETM+ gain (band 4). 
However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. Also, ETM+ stability is indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset <span class="hlt">error</span> of +0.31 W/m<sup>2</sup> sr µm soon after launch. This offset was corrected within the U. S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset <span class="hlt">error</span> with an RMS <span class="hlt">error</span> of ±0.6 K. The stability and <span class="hlt">absolute</span> calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibrating to other land remote sensing satellite systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003SPIE.4881..308M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003SPIE.4881..308M"><span id="translatedtitle">Landsat-7 ETM+ radiometric stability and <span class="hlt">absolute</span> calibration</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Markham, Brian L.; Barker, John L.; Barsi, Julia A.; Kaita, Ed; Thome, Kurtis J.; Helder, Dennis L.; Palluconi, Frank D.; Schott, John R.; Scaramuzza, Pat</p> <p>2003-04-01</p> <p>Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. 
The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band <span class="hlt">absolute</span> calibration to better than ±5%, and thermal band <span class="hlt">absolute</span> calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% (95% C.I.) per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. Also, ETM+ stability is indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset <span class="hlt">error</span> of +0.31 W/m² sr µm soon after launch. This offset was corrected within the U.S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset <span class="hlt">error</span> with an RMS <span class="hlt">error</span> of ±0.6 K.
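The thermal-band numbers above can be made concrete. A minimal sketch of how a radiance bias propagates into brightness temperature; the K1/K2 constants are the commonly published ETM+ band 6 calibration values (verify against the current handbook before use), and the radiance value is invented:

```python
import math

# ETM+ band 6 calibration constants (commonly published values; assumption here)
K1 = 666.09    # W/(m^2 sr um)
K2 = 1282.71   # K

def brightness_temp(radiance: float) -> float:
    """Invert the Planck-form calibration: T = K2 / ln(K1/L + 1)."""
    return K2 / math.log(K1 / radiance + 1.0)

uncorrected = 10.0                 # W/(m^2 sr um), illustrative scene radiance
corrected = uncorrected - 0.31     # removing an assumed +0.31 radiance bias
# The ~0.3 W/(m^2 sr um) offset shifts the retrieved temperature by a couple of K
print(brightness_temp(uncorrected) - brightness_temp(corrected))
```

This is only a sketch of why a small radiance offset matters for the ±0.6 K thermal calibration budget, not the EROS processing chain itself.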
The stability and <span class="hlt">absolute</span> calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibrating to other land remote sensing satellite systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011Metro..48..231N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011Metro..48..231N"><span id="translatedtitle">Correction due to the finite speed of light in <span class="hlt">absolute</span> gravimeters</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nagornyi, V. D.; Zanimonskiy, Y. M.; Zanimonskiy, Y. Y.</p> <p>2011-06-01</p> <p>Equations (45) and (47) in our paper [1] in this issue have an incorrect sign and should read \tilde T_i=T_i+{b\mp S_i\over c}, \tilde T_i=T_i\mp {S_i\over c}. The <span class="hlt">error</span> traces back to our formula (3), inherited from the paper [2]. According to the technical documentation [3, 4], the formula (3) is implemented by several commercially available instruments. An incorrect sign would cause a bias of about 20 µGal that has not been observed for these instruments, which probably indicates that the documentation incorrectly reflects the implemented measurement equation. Our attention to the <span class="hlt">error</span> was drawn by the paper [5], also in this issue, where the sign is mentioned correctly. References [1] Nagornyi V D, Zanimonskiy Y M and Zanimonskiy Y Y 2011 Correction due to the finite speed of light in <span class="hlt">absolute</span> gravimeters Metrologia 48 101-13 [2] Niebauer T M, Sasagawa G S, Faller J E, Hilt R and Klopping F 1995 A new generation of <span class="hlt">absolute</span> gravimeters Metrologia 32 159-80 [3] Micro-g LaCoste, Inc.
2006 FG5 <span class="hlt">Absolute</span> Gravimeter Users Manual [4] Micro-g LaCoste, Inc. 2007 g7 Users Manual [5] Niebauer T M, Billson R, Ellis B, Mason B, van Westrum D and Klopping F 2011 Simultaneous gravity and gradient measurements from a recoil-compensated <span class="hlt">absolute</span> gravimeter Metrologia 48 154-63</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120009261','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120009261"><span id="translatedtitle"><span class="hlt">Absolute</span> Position of Targets Measured Through a Chamber Window Using Lidar Metrology Systems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kubalak, David; Hadjimichael, Theodore; Ohl, Raymond; Slotwinski, Anthony; Telfer, Randal; Hayden, Joseph</p> <p>2012-01-01</p> <p>Lidar is a useful tool for taking metrology measurements without the need for physical contact with the parts under test. Lidar instruments are aimed at a target using azimuth and elevation stages, then focus a beam of coherent, frequency-modulated laser energy onto the target, such as the surface of a mechanical structure. Energy from the reflected beam is mixed with an optical reference signal that travels in a fiber path internal to the instrument, and the range to the target is calculated based on the difference in the frequency of the returned and reference signals. In cases when the parts are in extreme environments, additional steps need to be taken to separate the operator and lidar from that environment. A model has been developed that accurately reduces the lidar data to an <span class="hlt">absolute</span> position and accounts for the three media in the testbed (air, fused silica, and vacuum), but the approach can be adapted for any environment or material.
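The frequency-difference ranging principle described in this abstract can be sketched numerically. A minimal illustration, assuming an ideal linear chirp; the chirp rate and target distance below are invented, not values from the report:

```python
# FMCW lidar ranging sketch: range from the beat frequency between the
# returned beam and the internal fiber reference path.
C = 299_792_458.0  # speed of light in vacuum, m/s

def fmcw_range(beat_freq_hz: float, chirp_rate_hz_per_s: float,
               n_medium: float = 1.0) -> float:
    """Range = c * f_beat / (2 * chirp_rate), scaled by the medium's index."""
    return C * beat_freq_hz / (2.0 * chirp_rate_hz_per_s * n_medium)

# Assumed chirp: 100 GHz over 1 ms; beat frequency synthesized for a 10 m target
chirp = 100e9 / 1e-3            # Hz per second
beat = 2 * 10.0 * chirp / C     # beat frequency a 10 m target would produce
print(fmcw_range(beat, chirp))  # one-way range in metres (~10 for these inputs)
```

The `n_medium` factor is the hook where the air/glass/vacuum corrections discussed next would enter.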
The accuracy of laser metrology measurements depends upon knowing the parameters of the media through which the measurement beam travels. Under normal conditions, this means knowledge of the temperature, pressure, and humidity of the air in the measurement volume. In the past, chamber windows have been used to separate the measuring device from the extreme environment within the chamber and still permit optical measurement, but, so far, only relative changes have been diagnosed. The ability to make accurate measurements through a window presents a challenge as there are a number of factors to consider. In the case of the lidar, the window will increase the <span class="hlt">time</span>-of-flight of the laser beam causing a ranging <span class="hlt">error</span>, and refract the direction of the beam causing angular positioning <span class="hlt">errors</span>. In addition, differences in pressure, temperature, and humidity on each side of the window will cause slight atmospheric index changes and induce deformation and a refractive index gradient within the window. 
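The two window effects named above, extra time-of-flight and refraction of the beam direction, can be approximated for a plane-parallel window. A rough sketch; the window thickness and refractive index are assumed values, and only the normal-incidence path error is modeled:

```python
import math

def window_range_error(thickness_m: float, n_window: float,
                       n_outside: float = 1.0) -> float:
    """Apparent extra range at normal incidence: the beam accumulates optical
    path thickness * n_window instead of thickness * n_outside."""
    return thickness_m * (n_window - n_outside)

def refraction_angle(theta_in_rad: float, n1: float, n2: float) -> float:
    """Snell's law: the direction change entering the window, which maps into
    an angular pointing error for the lidar."""
    return math.asin(n1 * math.sin(theta_in_rad) / n2)

# Assumed numbers: 30 mm fused-silica window, n ~ 1.46
print(window_range_error(0.030, 1.46))   # ~0.0138 m of apparent extra range
print(math.degrees(refraction_angle(math.radians(10), 1.0, 1.46)))
```

A real reduction model would also track the pressure/temperature-dependent air index on each side and the index gradient inside the stressed window, as the abstract notes.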
Also, since the window is a</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011WRR....47.7524B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011WRR....47.7524B"><span id="translatedtitle">Multiscale <span class="hlt">error</span> analysis, correction, and predictive uncertainty estimation in a flood forecasting system</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bogner, K.; Pappenberger, F.</p> <p>2011-07-01</p> <p>River discharge predictions often show <span class="hlt">errors</span> that degrade the quality of forecasts.
Three different methods of <span class="hlt">error</span> correction are compared, namely, an autoregressive model with and without exogenous input (ARX and AR, respectively), and a method based on wavelet transforms. For the wavelet method, a Vector-Autoregressive model with exogenous input (VARX) is simultaneously fitted for the different levels of wavelet decomposition; after predicting the next <span class="hlt">time</span> steps for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original <span class="hlt">time</span> domain. The <span class="hlt">error</span> correction methods are combined with the Hydrological Uncertainty Processor (HUP) in order to estimate the predictive conditional distribution. For three stations along the Danube catchment, and using output from the European Flood Alert System (EFAS), we demonstrate that the method based on wavelets outperforms simpler methods and uncorrected predictions with respect to mean <span class="hlt">absolute</span> <span class="hlt">error</span>, Nash-Sutcliffe efficiency coefficient (and its decomposed performance criteria), informativeness score, and in particular forecast reliability. 
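The verification metrics named above, mean absolute error and the Nash-Sutcliffe efficiency coefficient, are straightforward to compute. A small sketch with invented discharge series:

```python
def mean_absolute_error(obs, sim):
    """Average magnitude of forecast errors, in the units of the series."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2).
    1 is a perfect forecast; 0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Invented discharge series (m^3/s), purely for illustration
obs = [100.0, 150.0, 200.0, 180.0, 120.0]
sim = [110.0, 140.0, 190.0, 185.0, 125.0]
print(mean_absolute_error(obs, sim))  # → 8.0
print(nash_sutcliffe(obs, sim))
```

These are the scores against which the paper compares the AR, ARX, and wavelet/VARX corrections; the decomposed criteria and informativeness score it also uses are not shown here.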
The wavelet approach efficiently accounts for forecast <span class="hlt">errors</span> with scale properties of unknown source and statistical structure.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19940024091&hterms=beers+law&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dbeers%2Blaw','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19940024091&hterms=beers+law&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dbeers%2Blaw"><span id="translatedtitle"><span class="hlt">Absolute</span> determination of local tropospheric OH concentrations</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Armerding, Wolfgang; Comes, Franz-Josef</p> <p>1994-01-01</p> <p>Long path absorption (LPA) according to Lambert-Beer's law is a method to determine <span class="hlt">absolute</span> concentrations of trace gases such as tropospheric OH. We have developed an LPA instrument based on rapid tuning of the light source, a frequency-doubled dye laser. The laser is tuned across two or three OH absorption features around 308 nm with a scanning speed of 0.07 cm⁻¹ per microsecond and a repetition rate of 1.3 kHz. This high scanning speed greatly reduces the fluctuation of the light intensity caused by the atmosphere. To obtain the required high sensitivity, the laser output power is additionally made constant and stabilized by an electro-optical modulator. The present sensitivity is of the order of a few <span class="hlt">times</span> 10⁵ OH per cm³ for an acquisition <span class="hlt">time</span> of a minute and an absorption path length of only 1200 meters, so that a folding of the optical path in a multireflection cell was possible, leading to a lateral dimension of the cell of a few meters. This allows local measurements to be made.
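The Lambert-Beer retrieval underlying LPA can be sketched directly. The cross-section below is an assumed illustrative value, not one from the paper; only the 1200 m path length is taken from the abstract:

```python
import math

def number_density(I0: float, I: float, cross_section_cm2: float,
                   path_cm: float) -> float:
    """Beer-Lambert: I = I0 * exp(-sigma * N * L)  =>  N = ln(I0/I) / (sigma * L)."""
    return math.log(I0 / I) / (cross_section_cm2 * path_cm)

# Illustrative numbers only: an effective OH cross-section of 1e-16 cm^2
# near 308 nm (assumption) and the 1200 m (1.2e5 cm) folded path.
sigma = 1.0e-16
L = 1.2e5
absorbed_fraction = 1.2e-5          # tiny fractional absorption of the probe beam
N = number_density(1.0, 1.0 - absorbed_fraction, sigma, L)
print(f"{N:.2e} OH cm^-3")          # of order 1e6 for these inputs
```

The sketch shows why a ~10^-4 noise floor per scan and a kilometre-scale path are needed to reach the quoted few-times-10^5 cm^-3 sensitivity.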
Tropospheric measurements were carried out in 1991, resulting in the determination of the OH diurnal variation on specific days in late summer. Comparisons with model calculations have been made. Interferences are mainly due to SO2 absorption. The problem of OH self-generation in the multireflection cell is minor, as could be shown using different experimental methods. The minimum-maximum signal-to-noise ratio is about 8 × 10⁻⁴ for a single scan. Due to the small size of the absorption cell, the realization of an open-air laboratory is possible, in which, by use of an additional UV light source or additional fluxes of trace gases, the chemistry can be changed under controlled conditions, allowing kinetic studies of tropospheric photochemistry to be made in open air.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19831037','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19831037"><span id="translatedtitle">Jasminum flexile flower <span class="hlt">absolute</span> from India--a detailed comparison with three other jasmine <span class="hlt">absolutes</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef</p> <p>2009-09-01</p> <p>Jasminum flexile flower <span class="hlt">absolute</span> from the south of India and the corresponding vacuum headspace (VHS) sample of the <span class="hlt">absolute</span> were analyzed using GC and GC-MS. Three other commercially available Indian jasmine <span class="hlt">absolutes</span> from the species J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J.
flexile flower <span class="hlt">absolute</span>, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/3806343','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/3806343"><span id="translatedtitle">Accepting <span class="hlt">error</span> to make less <span class="hlt">error</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Einhorn, H J</p> <p>1986-01-01</p> <p>In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random <span class="hlt">error</span> and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts <span class="hlt">error</span> as inevitable and in so doing makes less <span class="hlt">error</span> in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the <span class="hlt">errors</span> that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. 
Each approach represents a gamble with corresponding risks and benefits.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://medlineplus.gov/ency/article/002438.htm','NIH-MEDLINEPLUS'); return false;" href="https://medlineplus.gov/ency/article/002438.htm"><span id="translatedtitle">Inborn <span class="hlt">errors</span> of metabolism</span></a></p> <p><a target="_blank" href="http://medlineplus.gov/">MedlinePlus</a></p> <p></p> <p></p> <p>Metabolism - inborn <span class="hlt">errors</span> of ... Bodamer OA. Approach to inborn <span class="hlt">errors</span> of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/18541951','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/18541951"><span id="translatedtitle">[Paradigm <span class="hlt">errors</span> in the old biomedical science].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Skurvydas, Albertas</p> <p>2008-01-01</p> <p>The aim of this article was to review the basic drawbacks of the deterministic and reductionistic thinking in biomedical science and to provide ways for dealing with them. The present paradigm of research in biomedical science has not got rid of the <span class="hlt">errors</span> of the old science yet, i.e. the <span class="hlt">errors</span> of <span class="hlt">absolute</span> determinism and reductionism. These <span class="hlt">errors</span> restrict the view and thinking of scholars engaged in the studies of complex and dynamic phenomena and mechanisms. Recently, discussions on science paradigm aimed at spreading the new science paradigm that of complex dynamic systems as well as chaos theory are in progress all over the world. 
The near future will show which of the two, the old or the new science, will prevail. Our main conclusion is that deterministic and reductionistic thinking, applied improperly, can cause substantial damage rather than provide benefits for biomedical science. PMID:18541951</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016ufm..conf..509K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016ufm..conf..509K"><span id="translatedtitle">Universal Cosmic <span class="hlt">Absolute</span> and Modern Science</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kostro, Ludwik</p> <p></p> <p>The official sciences, especially all natural sciences, respect in their research the principle of methodic naturalism, i.e., they consider all phenomena as entirely natural and therefore never adduce or cite supernatural entities and forces in their scientific explanations. The purpose of this paper is to show that Modern Science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e., the entire Nature as a Whole, that justifies the scientific methodic naturalism. Since this Natural All-in Being is one and only, It should be considered the scientifically justified Natural <span class="hlt">Absolute</span> of Science and should be called, in my opinion, the Universal Cosmic <span class="hlt">Absolute</span> of Modern Science. It will also be shown that the Universal Cosmic <span class="hlt">Absolute</span> is ontologically enormously stratified and is, in its ultimate, i.e., most fundamental, stratum, trans-reistic and trans-personal. This means that, in its basic stratum,
It is neither a Thing nor a Person, although It contains in Itself all things and persons, with all other sentient and conscious individuals as well. At the turn of the 20th century, science began to look for a theory of everything, a final theory, a master theory. In my opinion, the natural Universal Cosmic <span class="hlt">Absolute</span> will constitute in such a theory the radical, all-penetrating Ultimate Basic Reality and will, step by step, supplant the traditional supernatural personal <span class="hlt">Absolute</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AGUFM.G51B0368C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AGUFM.G51B0368C"><span id="translatedtitle"><span class="hlt">Absolute</span> Gravity Datum in the Age of Cold Atom Gravimeters</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Childers, V. A.; Eckl, M. C.</p> <p>2014-12-01</p> <p>The international gravity datum is defined today by the International Gravity Standardization Net of 1971 (IGSN-71). The data supporting this network were measured in the 1950s and 60s using pendulum and spring-based gravimeter ties (plus some new ballistic <span class="hlt">absolute</span> meters) to replace the prior protocol of referencing all gravity values to the earlier Potsdam value. Since this <span class="hlt">time</span>, gravimeter technology has advanced significantly with the development and refinement of the FG-5 (the current standard of the industry) and again with the soon-to-be-available cold atom interferometric <span class="hlt">absolute</span> gravimeters. This latest development is anticipated to provide improvement in the range of two orders of magnitude as compared to the measurement accuracy of technology utilized to develop IGSN-71.
In this presentation, we will explore how the IGSN-71 might best be "modernized" given today's requirements and available instruments and resources. The National Geodetic Survey (NGS), along with other relevant US Government agencies, is concerned about establishing gravity control to establish and maintain high order geodetic networks as part of the nation's essential infrastructure. The need to modernize the nation's geodetic infrastructure was highlighted in "Precise Geodetic Infrastructure, National Requirements for a Shared Resource" National Academy of Science, 2010. The NGS mission, as dictated by Congress, is to establish and maintain the National Spatial Reference System, which includes gravity measurements. <span class="hlt">Absolute</span> gravimeters measure the total gravity field directly and do not involve ties to other measurements. Periodic "intercomparisons" of multiple <span class="hlt">absolute</span> gravimeters at reference gravity sites are used to constrain the behavior of the instruments to ensure that each would yield reasonably similar measurements of the same location (i.e. yield a sufficiently consistent datum when measured in disparate locales). New atomic interferometric gravimeters promise a significant</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2900085','PMC'); return false;" href="http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2900085"><span id="translatedtitle">Drug <span class="hlt">Errors</span> in Anaesthesiology</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Jain, Rajnish Kumar; Katiyar, Sarika</p> <p>2009-01-01</p> <p>Summary Medication <span class="hlt">errors</span> are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug <span class="hlt">errors</span> during anaesthesia is not certain. 
They impose a considerable financial burden on health care systems, apart from the losses to patients. Common causes of these <span class="hlt">errors</span> and their prevention are discussed. PMID:20640103</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/20863842','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/20863842"><span id="translatedtitle">Measurement of the <span class="hlt">absolute</span> differential cross section for np elastic scattering at 194 MeV</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Sarsour, M.; Peterson, T.; Planinic, M.; Vigdor, S. E.; Allgower, C.; Hossbach, T.; Jacobs, W. W.; Klyachko, A. V.; Rinckel, T.; Stephenson, E. J.; Wissink, S. W.; Zhou, Y.; Bergenwall, B.; Blomgren, J.; Johansson, C.; Klug, J.; Nadel-Turonski, P.; Nilsson, L.; Olsson, N.; Pomp, S.</p> <p>2006-10-15</p> <p>A tagged medium-energy neutron beam was used in a precise measurement of the <span class="hlt">absolute</span> differential cross section for np backscattering. The results resolve significant discrepancies within the np database concerning the angular dependence in this regime. The experiment has determined the <span class="hlt">absolute</span> normalization with ±1.5% uncertainty, suitable to verify constraints of supposedly comparable precision that arise from the rest of the database in partial wave analyses.
The analysis procedures, especially those associated with the evaluation of systematic <span class="hlt">errors</span> in the experiment, are described in detail so that systematic uncertainties may be included in a reasonable way in subsequent partial wave analysis fits incorporating the present results.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014RMxAC..44Q.191C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014RMxAC..44Q.191C"><span id="translatedtitle">Morphology and <span class="hlt">Absolute</span> Magnitudes of the SDSS DR7 QSOs</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Coelho, B.; Andrei, A. H.; Antón, S.</p> <p>2014-10-01</p> <p>The ESA mission Gaia will furnish a complete census of the Milky Way, delivering astrometric, dynamical, and astrophysical information for 1 billion stars. Operating in all-sky repeated survey mode, Gaia will also provide measurements of extra-galactic objects. Among the latter there will be at least 500,000 QSOs that will be used to build the reference frame upon which the several independent observations will be combined and interpreted. Not all the QSOs are equally suited to fulfill this role of fundamental, fiducial grid-points. Brightness, morphology, and variability define the astrometric <span class="hlt">error</span> budget for each object. We made use of three morphological parameters based on the PSF sharpness, circularity, and Gaussianity, which enable us to distinguish the "real point-like" QSOs. These parameters are being explored on the spectroscopically certified QSOs of the SDSS DR7, to compare the performance against other morphology classification schemes, as well as to derive properties of the host galaxy. We present a new method, based on the Gaia quasar database, to derive <span class="hlt">absolute</span> magnitudes in the SDSS filter domain.
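The standard route from apparent to absolute magnitude is the distance modulus under an assumed cosmology; the K-correction and extinction terms the authors apply are omitted here. A sketch with illustrative flat ΛCDM parameters, not values from the paper:

```python
import math

# Assumed flat LambdaCDM parameters (illustrative only)
H0 = 70.0          # km/s/Mpc
OM, OL = 0.3, 0.7
C_KM_S = 299792.458

def luminosity_distance_mpc(z: float, steps: int = 10_000) -> float:
    """d_L = (1+z) * (c/H0) * Integral_0^z dz'/E(z'), E = sqrt(Om(1+z)^3 + OL),
    with the integral done by simple midpoint quadrature."""
    dz = z / steps
    integral = sum(dz / math.sqrt(OM * (1 + (i + 0.5) * dz) ** 3 + OL)
                   for i in range(steps))
    return (1 + z) * (C_KM_S / H0) * integral

def absolute_magnitude(apparent_mag: float, z: float) -> float:
    """M = m - 5*log10(d_L / 10 pc); K-correction and extinction omitted."""
    d_pc = luminosity_distance_mpc(z) * 1e6
    return apparent_mag - 5.0 * math.log10(d_pc / 10.0)

print(absolute_magnitude(19.0, 1.0))  # an m=19 QSO at z=1 comes out near -25
```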
The method can be extrapolated all over the optical window, including the Gaia filters. We discuss colors derived from SDSS apparent magnitudes and colors based on <span class="hlt">absolute</span> magnitudes that we obtained taking into account corrections for dust extinction, either intergalactic or from the QSO host, and for the Lyman α forest. In the future we want to further discuss properties of the host galaxies, comparing, e.g., the obtained morphological classification with the color, the apparent and <span class="hlt">absolute</span> magnitudes, and the redshift distributions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19930030955&hterms=global+positioning+satellites&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dglobal%2Bpositioning%2Bsatellites','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19930030955&hterms=global+positioning+satellites&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dglobal%2Bpositioning%2Bsatellites"><span id="translatedtitle"><span class="hlt">Absolute</span> positioning using DORIS tracking of the SPOT-2 satellite</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Watkins, M. M.; Ries, J. C.; Davis, G. W.</p> <p>1992-01-01</p> <p>The ability of the French DORIS system operating on the SPOT-2 satellite to provide <span class="hlt">absolute</span> site positioning at the 20-30-centimeter level using 80 d of data is demonstrated. The accuracy of the vertical component is comparable to that of the horizontal components, indicating that residual troposphere <span class="hlt">error</span> is not a limiting factor. The translation parameters indicate that the DORIS network realizes a geocentric frame to about 50 nm in each component.
The considerable amount of data provided by the nearly global, all-weather DORIS network allowed the complex parameterization required to reduce the unmodeled forces acting on the low-earth satellite. Site velocities with accuracies better than 10 mm/yr should certainly be possible using the multiyear span of the SPOT series and Topex/Poseidon missions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/1045846','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/1045846"><span id="translatedtitle">Full field imaging based instantaneous hyperspectral <span class="hlt">absolute</span> refractive index measurement</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Baba, Justin S; Boudreaux, Philip R</p> <p>2012-01-01</p> <p>Multispectral refractometers typically measure refractive index (RI) at discrete monochromatic wavelengths via a serial process. We report on the demonstration of a white light full field imaging based refractometer capable of instantaneous multispectral measurement of <span class="hlt">absolute</span> RI of clear liquid/gel samples across the entire visible light spectrum. The broad optical bandwidth refractometer is capable of hyperspectral measurement of RI in the range 1.30 to 1.70 between 400 nm and 700 nm with a maximum <span class="hlt">error</span> of 0.0036 units (0.24% of actual) at 414 nm for an RI = 1.50 sample. We present system design and calibration method details as well as results from a system validation sample.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/5392217','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/5392217"><span id="translatedtitle">Energy expenditures in four men estimated by the D₂¹⁸O method at two <span class="hlt">times</span>. III.
Calculation methods and sources of <span class="hlt">error</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Seale, J.L.; Miles, C.W.; Conway, J.M.; Bulman, S.D.; Brooks, B.H.; Prather, E.S.; Bodwell, C.E.</p> <p>1986-03-01</p> <p>Three different methods have been used to calculate energy expenditures (EE) from D and ¹⁸O elimination data collected on four men as previously described. The three methods included the two-point method, regression analysis, and the Cambridge integration method. Estimates of body composition (initial, final, and/or across 21 days) were obtained by use of total body impedance analysis, D₂O or D₂¹⁸O dilution, underwater weighing, and skinfold measurements, and from the intercepts of D or ¹⁸O disappearance rates (log plots). Initial evaluations suggest that the different calculation methods yield different results with some data; these differences are larger for some sets of data compared to others.
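The two-point method mentioned above can be sketched in toy form, assuming single-exponential isotope elimination and a simplified Lifson-style relation for CO2 production; fractionation and intake corrections are omitted, and all numbers are invented:

```python
import math

def elimination_rate(e1: float, e2: float, days: float) -> float:
    """Two-point method: k = ln(E1/E2) / dt for single-exponential decay
    of isotope enrichment above background."""
    return math.log(e1 / e2) / days

# Invented enrichments (arbitrary units above background) over 21 days
kD = elimination_rate(120.0, 14.0, 21.0)   # deuterium leaves via water only
kO = elimination_rate(150.0, 12.0, 21.0)   # 18O leaves via water and CO2
TBW_MOL = 2500.0                            # total body water in mol (assumed)

# Simplified Lifson-style relation (fractionation terms omitted; the 2.078
# factor follows the standard doubly-labeled-water treatment):
rco2 = (TBW_MOL / 2.078) * (kO - kD)        # mol CO2 per day
print(rco2)
```

The point of the sketch is the sensitivity the abstract describes: `rco2` depends on the small difference `kO - kD` and directly on the total-body-water estimate, so modest body-composition errors propagate strongly into EE.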
Variability in estimated total body water (derived from body composition estimates) has significant effects on calculated EE; EE estimates from EI are very sensitive to small changes in body composition (e.g., % body fat), including changes that are probably within the <span class="hlt">error</span> of measurement for most body composition methods.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhRvA..92f2125A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhRvA..92f2125A"><span id="translatedtitle">Quantum theory allows for <span class="hlt">absolute</span> maximal contextuality</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán</p> <p>2015-12-01</p> <p>Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for <span class="hlt">absolute</span> maximal contextuality if it has scenarios in which ϑ/α approaches n. Here we show that quantum theory allows for <span class="hlt">absolute</span> maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios.
Nevertheless, we identify scenarios in which quantum theory allows for almost-<span class="hlt">absolute</span>-maximal contextuality.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/6063303','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/6063303"><span id="translatedtitle"><span class="hlt">Absolute</span> calibration in vivo measurement systems</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Kruchten, D.A.; Hickman, D.P.</p> <p>1991-02-01</p> <p>Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining <span class="hlt">absolute</span> calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. <span class="hlt">Absolute</span> calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The <span class="hlt">absolute</span> calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 
8 refs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24117660','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24117660"><span id="translatedtitle">Quantitative standards for <span class="hlt">absolute</span> linguistic universals.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Piantadosi, Steven T; Gibson, Edward</p> <p>2014-01-01</p> <p><span class="hlt">Absolute</span> linguistic universals are often justified by cross-linguistic analysis: If all observed languages exhibit a property, the property is taken to be a likely universal, perhaps specified in the cognitive or linguistic systems of language learners and users. In many cases, these patterns are then taken to motivate linguistic theory. Here, we show that cross-linguistic analysis will very rarely be able to statistically justify <span class="hlt">absolute</span>, inviolable patterns in language. We formalize two statistical methods--frequentist and Bayesian--and show that in both it is possible to find strict linguistic universals, but that the number of independent languages necessary to do so is generally unachievable.
This suggests that methods other than typological statistics are necessary to establish <span class="hlt">absolute</span> properties of human language, and thus that many of the purported universals in linguistics have not received sufficient empirical justification.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24322224','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24322224"><span id="translatedtitle"><span class="hlt">Absolute</span> photoacoustic thermometry in deep tissue.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yao, Junjie; Ke, Haixin; Tai, Stephen; Zhou, Yong; Wang, Lihong V</p> <p>2013-12-15</p> <p>Photoacoustic thermography is a promising tool for temperature measurement in deep tissue. Here we propose an <span class="hlt">absolute</span> temperature measurement method based on the dual temperature dependences of the Grüneisen parameter and the speed of sound in tissue. By taking ratiometric measurements at two adjacent temperatures, we can eliminate the factors that are temperature irrelevant but difficult to correct for in deep tissue. To validate our method, <span class="hlt">absolute</span> temperatures of blood-filled tubes embedded ~9 mm deep in chicken tissue were measured in a biologically relevant range from 28°C to 46°C. The temperature measurement accuracy was ~0.6°C. The results suggest that our method can be potentially used for <span class="hlt">absolute</span> temperature monitoring in deep tissue during thermotherapy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/371207','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/371207"><span id="translatedtitle">Molecular iodine <span class="hlt">absolute</span> frequencies. 
Final report</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Sansonetti, C.J.</p> <p>1990-06-25</p> <p>Fifty specified lines of {sup 127}I{sub 2} were studied by Doppler-free frequency modulation spectroscopy. For each line the classification of the molecular transition was determined, hyperfine components were identified, and one well-resolved component was selected for precise determination of its <span class="hlt">absolute</span> frequency. In 3 cases, a nearby alternate line was selected for measurement because no well-resolved component was found for the specified line. <span class="hlt">Absolute</span> frequency determinations were made with an estimated uncertainty of 1.1 MHz by locking a dye laser to the selected hyperfine component and measuring its wave number with a high-precision Fabry-Perot wavemeter. For each line results of the <span class="hlt">absolute</span> measurement, the line classification, and a Doppler-free spectrum are given.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140001056','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140001056"><span id="translatedtitle">Evaluation of the <span class="hlt">Absolute</span> Regional Temperature Potential</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shindell, D. T.</p> <p>2012-01-01</p> <p>The <span class="hlt">Absolute</span> Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the <span class="hlt">time</span>-dependent temperature response in four latitude bands (90-28degS, 28degS-28degN, 28-60degN and 60-90degN) as a function of emissions based on the forcing in those bands caused by the emissions. 
It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within +/-20% of the actual responses, though there are some exceptions for 90-28degS and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the +/-20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39-45% and 9-39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well.
Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li class="active"><span>23</span></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li class="active"><span>24</span></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120012063','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120012063"><span id="translatedtitle">Orion <span class="hlt">Absolute</span> Navigation System Progress and Challenge</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Holt, Greg N.; D'Souza, Christopher</p> <p>2012-01-01</p> <p>The <span class="hlt">absolute</span> navigation design of NASA's Orion vehicle is described. 
It has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a <span class="hlt">timely</span> and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future exploration missions the use of star-tracker and optical navigation sources needs to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using multi-rate architecture.
The details of the rate groups and the data flow between the elements are discussed and evaluated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9903E..1OF','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9903E..1OF"><span id="translatedtitle">Precision evaluation of calibration factor of a superconducting gravimeter using an <span class="hlt">absolute</span> gravimeter</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Feng, Jin-yang; Wu, Shu-qing; Li, Chun-jian; Su, Duo-wu; Xu, Jin-yi; Yu, Mei</p> <p>2016-01-01</p> <p>The precision of the calibration factor of a superconducting gravimeter (SG) using an <span class="hlt">absolute</span> gravimeter (AG) is analyzed based on linear least-squares fitting and <span class="hlt">error</span> propagation theory, and factors affecting the accuracy are discussed. Accuracy can be improved by choosing an observation period in which the solid tide changes significantly or by increasing the calibration <span class="hlt">time</span>. Simulation is carried out based on synthetic gravity tides calculated with T-soft at the observed site from Aug. 14th to Sept. 2nd in 2014. The result indicates that the highest precision using half a day's observation data is below 0.28% and that the precision increases exponentially with the peak-to-peak gravity change. The comparison of results obtained from the same observation <span class="hlt">time</span> indicates that properly selected observation data are more beneficial to the precision. Finally, the calibration experiment of the SG iGrav-012 is introduced and the calibration factor is determined for the first <span class="hlt">time</span> using AG FG5X-249.
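The least-squares calibration step described in this abstract can be sketched in a few lines. The tide model, noise level, and slope value below are synthetic stand-ins, not the actual iGrav-012 or FG5X-249 records, so only the procedure (not the numbers) is meaningful:

```python
import numpy as np

# Illustrative sketch of SG calibration against an AG (synthetic data):
# the superconducting gravimeter outputs a voltage V tracking the tidal
# gravity signal g measured by the absolute gravimeter; the calibration
# factor is the slope of the linear least-squares fit of g on V.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.5, 720)                    # 2.5 days of observation
g_tide = 80.0 * np.sin(2 * np.pi * t / 0.5175)    # toy semidiurnal tide, uGal
true_factor = -92.5                               # assumed slope, uGal/V
v_sg = g_tide / true_factor + rng.normal(0, 0.001, t.size)  # SG voltage + noise

# Linear least squares: g = factor * V + offset
factor, offset = np.polyfit(v_sg, g_tide, 1)

# Standard error of the slope from the fit residuals
resid = g_tide - (factor * v_sg + offset)
se = np.sqrt(resid.var(ddof=2) / np.sum((v_sg - v_sg.mean()) ** 2))
print(f"calibration factor = {factor:.2f} +/- {se:.2f} uGal/V")
```

The fitted slope plays the role of the quoted calibration factor in μGal/V, and its standard error the role of the quoted +/- uncertainty; a larger peak-to-peak tidal swing spreads the voltages out and shrinks that standard error, matching the abstract's observation.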
With 2.5 days' data properly selected from a solid-tide period with large tidal amplitude, the determined calibration factor of iGrav-012 is (-92.54423 +/- 0.13616) μGal/V (1 μGal = 10⁻⁸ m/s²), with a relative accuracy of about 0.15%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/26662613','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/26662613"><span id="translatedtitle">Medical <span class="hlt">Error</span> and Moral Luck.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hubbeling, Dieneke</p> <p>2016-09-01</p> <p>This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical <span class="hlt">error</span>, especially an <span class="hlt">error</span> of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this <span class="hlt">error</span> may have previously occurred many <span class="hlt">times</span> with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical <span class="hlt">errors</span>, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future <span class="hlt">errors</span>.
The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical <span class="hlt">error</span>: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9817E..08C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9817E..08C"><span id="translatedtitle"><span class="hlt">Error</span> image aware content restoration</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee</p> <p>2015-12-01</p> <p>As the resolution of TV significantly increased, content consumers have become increasingly sensitive to the subtlest defect in TV contents. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces <span class="hlt">errors</span> such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such <span class="hlt">errors</span> require a substantial amount of <span class="hlt">time</span> and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated <span class="hlt">error</span> restoration algorithm which can be applied to different types of classic <span class="hlt">errors</span> by utilizing adjacent images while preserving the undamaged parts of an <span class="hlt">error</span> image as much as possible. 
We tested our method on <span class="hlt">error</span> images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality-control agents.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PASJ...67...55U','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PASJ...67...55U"><span id="translatedtitle">Variable selection for modeling the <span class="hlt">absolute</span> magnitude at maximum of Type Ia supernovae</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Uemura, Makoto; Kawabata, Koji S.; Ikeda, Shiro; Maeda, Keiichi</p> <p>2015-06-01</p> <p>We discuss what is an appropriate set of explanatory variables in order to predict the <span class="hlt">absolute</span> magnitude at the maximum of Type Ia supernovae. In order to have a good prediction, the <span class="hlt">error</span> for future data, which is called the "generalization <span class="hlt">error</span>," should be small. We use cross-validation in order to control the generalization <span class="hlt">error</span> and a LASSO-type estimator in order to choose the set of variables. This approach can be used even in the case that the number of samples is smaller than the number of candidate variables. We studied the Berkeley supernova database with our approach. Candidates for the explanatory variables include normalized spectral data, variables about lines, and previously proposed flux ratios, as well as the color and light-curve widths. As a result, we confirmed the past understanding about Type Ia supernovae: (i) The <span class="hlt">absolute</span> magnitude at maximum depends on the color and light-curve width. (ii) The light-curve width depends on the strength of Si II.
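The LASSO-with-cross-validation selection described above can be sketched as follows. The data here are synthetic stand-ins for the Berkeley sample, with two informative columns playing the role of color and light-curve width; scikit-learn's `LassoCV` is one standard implementation of the technique:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Sketch of LASSO-type variable selection with cross-validation on
# synthetic data. The L1 penalty drives coefficients of uninformative
# predictors to zero, and cross-validation picks the penalty that
# minimizes the estimated generalization error.
rng = np.random.default_rng(1)
n, p = 60, 20                        # modest sample, many candidate variables
X = rng.normal(size=(n, p))
# Only columns 0 and 1 carry signal (stand-ins for color and LC width).
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.1, n)

model = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("selected predictors:", selected)
```

The point mirrors the abstract's conclusion: when extra candidate variables do not reduce the cross-validated error, the L1 path simply leaves their coefficients at zero rather than adding them to the model.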
Recent studies have suggested adding more variables in order to explain the <span class="hlt">absolute</span> magnitude. However, our analysis does not support adding any other variables in order to have a better generalization <span class="hlt">error</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014ApWS....4..425P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014ApWS....4..425P"><span id="translatedtitle">Water quality management using statistical analysis and <span class="hlt">time</span>-series prediction model</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Parmar, Kulwinder Singh; Bhardwaj, Rashmi</p> <p>2014-12-01</p> <p>This paper deals with water quality management using statistical analysis and a <span class="hlt">time</span>-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square <span class="hlt">error</span>, mean <span class="hlt">absolute</span> percentage <span class="hlt">error</span>, maximum <span class="hlt">absolute</span> percentage <span class="hlt">error</span>, mean <span class="hlt">absolute</span> <span class="hlt">error</span>, maximum <span class="hlt">absolute</span> <span class="hlt">error</span>, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future water quality parameter values have been estimated.
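The validation metrics listed above are straightforward to compute directly from an observed/predicted pair. The values below are made-up illustrative numbers, not the Yamuna River series:

```python
import numpy as np

# The forecast-error metrics named in the abstract, for a toy
# observed/predicted pair (illustrative values only).
observed  = np.array([7.2, 7.4, 7.1, 7.6, 7.3])    # e.g. monthly pH readings
predicted = np.array([7.1, 7.5, 7.0, 7.5, 7.4])

err  = predicted - observed
rmse = np.sqrt(np.mean(err ** 2))                  # root mean square error
mae  = np.mean(np.abs(err))                        # mean absolute error
max_ae = np.max(np.abs(err))                       # maximum absolute error
mape = 100 * np.mean(np.abs(err / observed))       # mean absolute % error
max_ape = 100 * np.max(np.abs(err / observed))     # maximum absolute % error

print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  MAPE={mape:.2f}%")
```

RMSE penalizes occasional large misses more heavily than MAE, while the percentage variants (MAPE, maximum absolute percentage error) normalize by the observed level, which is why they are preferred when comparing parameters measured on different scales.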
It is observed that the predictive model is useful at the 95% confidence limit and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4276538','PMC'); return false;" href="http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4276538"><span id="translatedtitle">Novel isotopic N, N-dimethyl leucine (iDiLeu) reagents enable <span class="hlt">absolute</span> quantification of peptides and proteins using a standard curve approach</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun</p> <p>2014-01-01</p> <p><span class="hlt">Absolute</span> quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for <span class="hlt">absolute</span> quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive due to the cost of synthesizing stable isotope peptide standards.
While the chemical modification approach using Mass Differential Tags for Relative and <span class="hlt">Absolute</span> Quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine reactive group, triazine ester, are cost effective due to their synthetic simplicity, and have increased throughput compared to previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention <span class="hlt">time</span> shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median <span class="hlt">errors</span> <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% <span class="hlt">error</span>) while the second enables standard curve creation and analyte quantification in one run (<8% <span class="hlt">error</span>). 
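The one-run standard-curve idea described above can be sketched numerically. The spiked amounts and peak areas below are hypothetical, standing in for four labeled standard channels plus one sample channel:

```python
import numpy as np

# Minimal sketch of standard-curve quantification (hypothetical numbers):
# several channels carry known amounts of the standard peptide, and the
# unknown amount is read off the fitted line of peak area vs spiked amount.
std_amount = np.array([10.0, 50.0, 100.0, 200.0])     # fmol spiked per channel
std_area   = np.array([1.1e4, 5.2e4, 9.9e4, 2.02e5])  # measured peak areas

slope, intercept = np.polyfit(std_amount, std_area, 1)  # linear standard curve

sample_area = 7.6e4
sample_amount = (sample_area - intercept) / slope       # invert the curve
print(f"estimated amount: {sample_amount:.1f} fmol")
```

Building the curve and the sample measurement into a single LC-MS run, as the abstract describes, removes run-to-run intensity drift from the comparison, which is the main appeal over acquiring each standard concentration separately.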
PMID:25377360</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4922566','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4922566"><span id="translatedtitle"><span class="hlt">Absolute</span> Cerebral Blood Flow Infarction Threshold for 3-Hour Ischemia <span class="hlt">Time</span> Determined with CT Perfusion and 18F-FFMZ-PET Imaging in a Porcine Model of Cerebral Ischemia</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Cockburn, Neil; Kovacs, Michael</p> <p>2016-01-01</p> <p>CT Perfusion (CTP) derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or DWI lesion reversibility. In this study a model is proposed for determining infarction CBF thresholds at 3hr ischemia <span class="hlt">time</span> by comparing contemporaneously acquired CTP derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection and PET images were acquired for 25 minutes starting ~155–180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. 
The median CBF map was co-registered to blood volume maps for vessel exclusion, an average CT image for grey/white matter segmentation, and 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed an infarct. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min⁻¹·100 g⁻¹ for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion. PMID:27347877</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/27347877','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/27347877"><span id="translatedtitle"><span class="hlt">Absolute</span> Cerebral Blood Flow Infarction Threshold for 3-Hour Ischemia <span class="hlt">Time</span> Determined with CT Perfusion and 18F-FFMZ-PET Imaging in a Porcine Model of Cerebral Ischemia.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wright, Eric A; d'Esterre, Christopher D; Morrison, Laura B; Cockburn, Neil; Kovacs, Michael; Lee, Ting-Yim</p> <p>2016-01-01</p> <p>CT Perfusion (CTP) derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or DWI lesion reversibility.
In this study a model is proposed for determining infarction CBF thresholds at 3 hr ischemia <span class="hlt">time</span> by comparing contemporaneously acquired CTP-derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection and PET images were acquired for 25 minutes starting ~155-180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. The median CBF map was co-registered to blood volume maps for vessel exclusion, an average CT image for grey/white matter segmentation, and 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed an infarct. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min⁻¹·100 g⁻¹ for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion.
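The logistic-regression/ROC threshold derivation described above can be sketched with synthetic pixel values (not the porcine data). The "optimal operating point" is taken here as the threshold maximizing Youden's J statistic, one common choice; scikit-learn is assumed available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

# Sketch of per-animal threshold derivation on synthetic pixel CBF values:
# infarcted pixels cluster at low CBF, non-infarcted pixels at higher CBF.
rng = np.random.default_rng(2)
cbf_infarct = rng.normal(10.0, 4.0, 500)    # mL/min/100g, infarcted pixels
cbf_healthy = rng.normal(40.0, 12.0, 500)   # non-infarcted pixels
cbf = np.concatenate([cbf_infarct, cbf_healthy])
infarcted = np.concatenate([np.ones(500), np.zeros(500)])

clf = LogisticRegression().fit(cbf.reshape(-1, 1), infarcted)
prob = clf.predict_proba(cbf.reshape(-1, 1))[:, 1]

fpr, tpr, thresholds = roc_curve(infarcted, prob)
best = np.argmax(tpr - fpr)                 # Youden's J = TPR - FPR
# Map the probability threshold back to a CBF value via the fitted logit
b0, b1 = clf.intercept_[0], clf.coef_[0, 0]
p_star = thresholds[best]
cbf_threshold = (np.log(p_star / (1 - p_star)) - b0) / b1
print(f"CBF infarction threshold ~ {cbf_threshold:.1f} mL/min/100g")
```

Repeating this per animal and averaging the resulting thresholds mirrors the reported mean-and-spread form of the study's result (a mean threshold with a ± standard deviation across animals).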
PMID:27347877</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JNEng..13b6008Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JNEng..13b6008Z"><span id="translatedtitle">Partially supervised P300 speller adaptation for eventual stimulus <span class="hlt">timing</span> optimization: target confidence is superior to <span class="hlt">error</span>-related potential score as an uncertain label</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom</p> <p>2016-04-01</p> <p>Objective. <span class="hlt">Error</span>-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. 
Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19840013049','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19840013049"><span id="translatedtitle">Beta systems <span class="hlt">error</span> analysis</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p></p> <p>1984-01-01</p> <p>The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a <span class="hlt">time</span> appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed.
The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other <span class="hlt">errors</span> in the results of the two methods are examined.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/6526329','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/6526329"><span id="translatedtitle">[Medical <span class="hlt">errors</span> in obstetrics].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Marek, Z</p> <p>1984-08-01</p> <p><span class="hlt">Errors</span> in medicine may fall into 3 main categories: 1) medical <span class="hlt">errors</span> made only by physicians, 2) technical <span class="hlt">errors</span> made by physicians and other health care specialists, and 3) organizational <span class="hlt">errors</span> associated with mismanagement of medical facilities. This classification of medical <span class="hlt">errors</span>, as well as the definition and treatment of them, fully applies to obstetrics. However, the difference between obstetrics and other fields of medicine stems from the fact that an obstetrician usually deals with healthy women. Conversely, professional risk in obstetrics is very high, as <span class="hlt">errors</span> and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical <span class="hlt">errors</span> occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. 
Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27228765','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27228765"><span id="translatedtitle">[<span class="hlt">Errors</span> Analysis and Correction in Atmospheric Methane Retrieval Based on Greenhouse Gases Observing Satellite Data].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bu, Ting-ting; Wang, Xian-hua; Ye, Han-han; Jiang, Xin-hua</p> <p>2016-01-01</p> <p>High precision retrieval of atmospheric CH4 is influenced by a variety of factors. The uncertainties of ground properties and atmospheric conditions are important factors, such as surface reflectance, temperature profile, humidity profile and pressure profile. Surface reflectance is affected by many factors, so its precise value is difficult to obtain. The uncertainty of surface reflectance will introduce a large <span class="hlt">error</span> into the retrieval result. The uncertainties of temperature profile, humidity profile and pressure profile are also important sources of retrieval <span class="hlt">error</span> and they will cause unavoidable systematic <span class="hlt">error</span>. This <span class="hlt">error</span> is hard to eliminate using the CH4 band alone. In this paper, a ratio spectrometry method and a CO2 band correction method are proposed to reduce the <span class="hlt">error</span> caused by these factors. The ratio spectrometry method decreases the effect of surface reflectance in CH4 retrieval by converting the <span class="hlt">absolute</span> radiance spectrum into a ratio spectrum. 
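The idea behind the ratio spectrometry step can be illustrated with a toy calculation: if the unknown surface reflectance acts, to first order, as a multiplicative factor on the measured radiance, then dividing each spectrum by a reference value of the same spectrum cancels that factor. A minimal sketch under that simplifying assumption (all numbers invented; the paper's actual normalization and band selection may differ):

```python
import numpy as np

# Two observations of the same absorption feature that differ only in
# (unknown) surface reflectance, modeled as a multiplicative factor.
wavelengths = np.linspace(1.63, 1.67, 5)           # CH4 band, micrometers
true_signal = np.array([1.0, 0.8, 0.6, 0.8, 1.0])  # shape of the feature
radiance_a = 0.30 * true_signal                    # reflectance 0.30
radiance_b = 0.55 * true_signal                    # reflectance 0.55

# Ratio spectrometry: normalize each spectrum by its own reference value
# (here, its first spectral point) so the reflectance factor divides out.
ratio_a = radiance_a / radiance_a[0]
ratio_b = radiance_b / radiance_b[0]

print(np.allclose(ratio_a, ratio_b))  # True: reflectance no longer matters
```

The absolute radiances disagree by nearly a factor of two, yet the ratio spectra coincide, which is why the retrieval becomes insensitive to the reflectance uncertainty.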
CO2 band correction method converts column amounts of CH4 into column averaged mixing ratio by using CO2 1.61 μm band and it can correct the systematic <span class="hlt">error</span> caused by temperature profile, humidity profile and pressure profile. The combination of these two correction methods will decrease the effect caused by surface reflectance, temperature profile, humidity profile and pressure profile at the same <span class="hlt">time</span> and reduce the retrieval <span class="hlt">error</span>. GOSAT data were used to retrieve atmospheric CH4 to test and validate the two correction methods. The results showed that CH4 column averaged mixing ratio retrieved after correction was close to GOSAT Level2 product and the retrieval precision was up to -0.24%. The studies suggest that the <span class="hlt">error</span> of CH4 retrieval caused by the uncertainties of ground properties and atmospheric conditions can be significantly reduced and the retrieval precision can be highly improved by using ratio spectrometry method and CO2 band correction method.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26729134','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26729134"><span id="translatedtitle">Bio-Inspired Stretchable <span class="hlt">Absolute</span> Pressure Sensor Network.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guo, Yue; Li, Yu-Hung; Guo, Zhiqiang; Kim, Kyunglok; Chang, Fu-Kuo; Wang, Shan X</p> <p>2016-01-02</p> <p>A bio-inspired <span class="hlt">absolute</span> pressure sensor network has been developed. <span class="hlt">Absolute</span> pressure sensors, distributed on multiple silicon islands, are connected as a network by stretchable polyimide wires. This sensor network, made on a 4'' wafer, has 77 nodes and can be mounted on various curved surfaces to cover an area up to 0.64 m × 0.64 m, which is 100 <span class="hlt">times</span> larger than its original size. Due to Micro Electro-Mechanical system (MEMS) surface micromachining technology, ultrathin sensing nodes can be realized with thicknesses of less than 100 µm. Additionally, good linearity and high sensitivity (~14 mV/V/bar) have been achieved. Since the MEMS sensor process has also been well integrated with a flexible polymer substrate process, the entire sensor network can be fabricated in a <span class="hlt">time</span>-efficient and cost-effective manner. Moreover, an accurate pressure contour can be obtained from the sensor network. 
Therefore, this <span class="hlt">absolute</span> pressure sensor network holds significant promise for smart vehicle applications, especially for unmanned aerial vehicles.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24951433','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24951433"><span id="translatedtitle">Correlated measurement <span class="hlt">error</span> hampers association network inference.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B</p> <p>2014-09-01</p> <p>Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement <span class="hlt">error</span> structure is complex: apart from the usual (random) instrumental <span class="hlt">error</span> there is also correlated measurement <span class="hlt">error</span>. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement <span class="hlt">errors</span> on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement <span class="hlt">error</span>, correlated measurement <span class="hlt">error</span> and biological variation defines this impact. Using chromatography-based <span class="hlt">time</span>-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement <span class="hlt">error</span>. 
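The effect is easy to reproduce in simulation: adding the same per-sample "preparation" error to every measured variable creates spurious partial correlations between variables that are in fact unrelated. A minimal sketch (our own construction, not the paper's data or method):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Three unrelated "metabolites": no biological association at all.
x, y, z = rng.normal(size=(3, n))

def partial_corr_xz(data):
    """Partial correlation of columns 0 and 2 given column 1,
    read off the precision (inverse covariance) matrix."""
    p = np.linalg.inv(np.cov(data, rowvar=False))
    return -p[0, 2] / np.sqrt(p[0, 0] * p[2, 2])

clean = np.column_stack([x, y, z])

# Correlated measurement error: the same per-sample preparation error e
# is added to every metabolite measured in that sample.
e = rng.normal(size=n)
noisy = clean + e[:, None]

print(abs(partial_corr_xz(clean)) < 0.05)  # True: no association
print(partial_corr_xz(noisy) > 0.2)        # True: spurious edge (~1/3 here)
```

With unit-variance shared error the spurious partial correlation converges to 1/3, so a network built from the noisy data would draw an edge that rests solely on the error structure.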
We show how the effect of correlated measurement <span class="hlt">error</span> on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement <span class="hlt">error</span> usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement <span class="hlt">error</span>, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement <span class="hlt">errors</span> and their influence on association networks. <span class="hlt">Time</span> series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement <span class="hlt">error</span> from a biological response. Underestimating the phenomenon of correlated measurement <span class="hlt">error</span> will result in the suggestion of biologically meaningful results that in reality rest solely on complicated <span class="hlt">error</span> structures. Using proper experimental designs that allow</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JMagR.261..121M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JMagR.261..121M"><span id="translatedtitle"><span class="hlt">Absolute</span> phase effects on CPMG-type pulse sequences</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mandal, Soumyajit; Oh, Sangwon; Hürlimann, Martin D.</p> <p>2015-12-01</p> <p>We describe and analyze the effects of transients within radio-frequency (RF) pulses on multiple-pulse NMR measurements such as the well-known Carr-Purcell-Meiboom-Gill (CPMG) sequence. 
These transients are functions of the <span class="hlt">absolute</span> RF phases at the beginning and end of the pulse, and are thus affected by the <span class="hlt">timing</span> of the pulse sequence with respect to the period of the RF waveform. Changes in transients between refocusing pulses in CPMG-type sequences can result in signal decay, persistent oscillations, changes in echo shape, and other effects. We have explored such effects by performing experiments in two different low-frequency NMR systems. The first uses a conventional tuned-and-matched probe circuit, while the second uses an ultra-broadband un-tuned or non-resonant probe circuit. We show that there are distinct differences between the <span class="hlt">absolute</span> phase effects in these two systems, and present simple models that explain these differences.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120016375','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120016375"><span id="translatedtitle">Aircraft system modeling <span class="hlt">error</span> and control <span class="hlt">error</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)</p> <p>2012-01-01</p> <p>A method for modeling <span class="hlt">error</span>-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking <span class="hlt">error</span> e(k) to drive the component to a normal reference value according to an asymptote curve. 
Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the <span class="hlt">error</span> component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li class="active"><span>24</span></li> <li><a href="#" onclick='return showDiv("page_25");'>25</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li class="active"><span>25</span></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=cell+AND+size&pg=4&id=EJ753903','ERIC'); return false;" 
href="http://eric.ed.gov/?q=cell+AND+size&pg=4&id=EJ753903"><span id="translatedtitle"><span class="hlt">Absolute</span> Points for Multiple Assignment Problems</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Adlakha, V.; Kowalski, K.</p> <p>2006-01-01</p> <p>An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group <span class="hlt">absolute</span> points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/biblio/927741','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/biblio/927741"><span id="translatedtitle"><span class="hlt">Absolute</span> partial photoionization cross sections of ozone.</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Berkowitz, J.; Chemistry</p> <p>2008-04-01</p> <p>Despite the current concerns about ozone, <span class="hlt">absolute</span> partial photoionization cross sections for this molecule in the vacuum ultraviolet (valence) region have been unavailable. 
By eclectic re-evaluation of old/new data and plausible assumptions, such cross sections have been assembled to fill this void.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=mathematics+AND+inequalities&pg=5&id=EJ945042','ERIC'); return false;" href="http://eric.ed.gov/?q=mathematics+AND+inequalities&pg=5&id=EJ945042"><span id="translatedtitle">Teaching <span class="hlt">Absolute</span> Value Inequalities to Mature Students</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Sierpinska, Anna; Bobos, Georgeana; Pruncut, Andreea</p> <p>2011-01-01</p> <p>This paper gives an account of a teaching experiment on <span class="hlt">absolute</span> value inequalities, whose aim was to identify characteristics of an approach that would realize the potential of the topic to develop theoretical thinking in students enrolled in prerequisite mathematics courses at a large, urban North American university. The potential is…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=VALUE+AND+ABSOLUTE&pg=2&id=EJ726176','ERIC'); return false;" href="http://eric.ed.gov/?q=VALUE+AND+ABSOLUTE&pg=2&id=EJ726176"><span id="translatedtitle">Solving <span class="hlt">Absolute</span> Value Equations Algebraically and Geometrically</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Shiyuan, Wei</p> <p>2005-01-01</p> <p>The way in which students can improve their comprehension by understanding the geometrical meaning of algebraic equations or solving algebraic equations geometrically is described. 
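As a concrete instance of the algebra–geometry link this entry describes, |x − a| = b can be solved either by case analysis (x − a = ±b) or read geometrically as "the points on the number line at distance b from a". A small sketch (our own worked example, not taken from the article):

```python
def solve_abs_eq(a, b):
    """Solve |x - a| = b.

    Algebraically: x - a = b or x - a = -b.
    Geometrically: the points on the number line at distance b from a.
    """
    if b < 0:
        return []        # |x - a| is never negative: no solution
    if b == 0:
        return [a]       # only the point a itself
    return [a - b, a + b]

print(solve_abs_eq(3, 5))   # [-2, 8]: the two points at distance 5 from 3
```

Both views give the same solution set, which is the overall understanding the article aims for.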
Students can experiment with the conditions of the <span class="hlt">absolute</span> value equation presented, for an interesting way to form an overall understanding of the concept.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=limit+AND+articles&pg=2&id=EJ933808','ERIC'); return false;" href="http://eric.ed.gov/?q=limit+AND+articles&pg=2&id=EJ933808"><span id="translatedtitle">Increasing Capacity: Practice Effects in <span class="hlt">Absolute</span> Identification</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Dodds, Pennie; Donkin, Christopher; Brown, Scott D.; Heathcote, Andrew</p> <p>2011-01-01</p> <p>In most of the long history of the study of <span class="hlt">absolute</span> identification--since Miller's (1956) seminal article--a severe limit on performance has been observed, and this limit has resisted improvement even by extensive practice. In a startling result, Rouder, Morey, Cowan, and Pfaltz (2004) found substantially improved performance with practice in the…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1986SPIE..660....2S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1986SPIE..660....2S"><span id="translatedtitle"><span class="hlt">Absolute</span> Radiometric Calibration Of The Thematic Mapper</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Slater, P. N.; Biggar, S. F.; Holm, R. G.; Jackson, R. D.; Mao, Y.; Moran, M. S.; Palmer, J. M.; Yuan, B.</p> <p>1986-11-01</p> <p>The results are presented of five in-flight <span class="hlt">absolute</span> radiometric calibrations, made in the period July 1984 to November 1985, at White Sands, New Mexico, of the solar reflective bands of the Landsat-5 Thematic Mapper (TM) . 
The 23 band calibrations made on the five dates show a ± 2.8% RMS variation from the mean as a percentage of the mean.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=mathematics+AND+education&pg=5&id=EJ1070973','ERIC'); return false;" href="http://eric.ed.gov/?q=mathematics+AND+education&pg=5&id=EJ1070973"><span id="translatedtitle">On Relative and <span class="hlt">Absolute</span> Conviction in Mathematics</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Weber, Keith; Mejia-Ramos, Juan Pablo</p> <p>2015-01-01</p> <p>Conviction is a central construct in mathematics education research on justification and proof. In this paper, we claim that it is important to distinguish between <span class="hlt">absolute</span> conviction and relative conviction. We argue that researchers in mathematics education frequently have not done so and this has led to researchers making unwarranted claims…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ncbi.nlm.nih.gov/pubmed/27074005','PUBMED'); return false;" href="http://www.ncbi.nlm.nih.gov/pubmed/27074005"><span id="translatedtitle">Picoliter Well Array Chip-Based Digital Recombinase Polymerase Amplification for <span class="hlt">Absolute</span> Quantification of Nucleic Acids.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Zhao; Liu, Yong; Wei, Qingquan; Liu, Yuanjie; Liu, Wenwen; Zhang, Xuelian; Yu, Yude</p> <p>2016-01-01</p> <p><span class="hlt">Absolute</span>, precise quantification methods expand the scope of nucleic acids research and have many practical applications. Digital polymerase chain reaction (dPCR) is a powerful method for nucleic acid detection and <span class="hlt">absolute</span> quantification. 
However, it requires thermal cycling and accurate temperature control, which are difficult in resource-limited conditions. Accordingly, isothermal methods, such as recombinase polymerase amplification (RPA), are more attractive. We developed a picoliter well array (PWA) chip with 27,000 consistently sized picoliter reactions (314 pL) for isothermal DNA quantification using digital RPA (dRPA) at 39°C. Sample loading using a scraping liquid blade was simple, fast, and required small reagent volumes (i.e., <20 μL). Passivating the chip surface using a methoxy-PEG-silane agent effectively eliminated cross-contamination during dRPA. Our creative optical design enabled wide-field fluorescence imaging in situ and both end-point and real-<span class="hlt">time</span> analyses of picoliter wells in a 6-cm(2) area. It was not necessary to use scan shooting and stitch serial small images together. Using this method, we quantified serial dilutions of a Listeria monocytogenes gDNA stock solution from 9 × 10(-1) to 4 × 10(-3) copies per well with an average <span class="hlt">error</span> of less than 11% (N = 15). Overall dRPA-on-chip processing required less than 30 min, which was a 4-fold decrease compared to dPCR, requiring approximately 2 h. dRPA on the PWA chip provides a simple and highly sensitive method to quantify nucleic acids without thermal cycling or precise micropump/microvalve control. It has applications in fast field analysis and critical clinical diagnostics under resource-limited settings. 
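End-point digital assays like this one convert the fraction of positive wells into an absolute copy number through the standard Poisson correction, since a well may receive more than one template copy. A sketch of that correction (the well count and per-well volume come from the abstract; the positive-well count below is invented for illustration):

```python
import math

N_WELLS = 27000        # wells on the chip (from the abstract)
WELL_VOL_PL = 314.0    # picoliters per well (from the abstract)

def copies_per_well(n_positive, n_wells=N_WELLS):
    """Poisson-corrected mean template copies per well.

    If copies land in wells at random, the fraction of negative wells is
    exp(-lambda), so lambda = -ln(1 - positive_fraction).
    """
    p = n_positive / n_wells
    return -math.log(1.0 - p)

# Hypothetical readout: a third of the wells fluoresce at end point.
lam = copies_per_well(9000)
conc_per_ul = lam / (WELL_VOL_PL * 1e-6)   # 1 microliter = 1e6 picoliters

print(round(lam, 3))   # 0.405 copies per well on average
```

Multiplying back by the dilution factor would then give the concentration of the original stock, which is how serial dilutions like those in the abstract are quantified absolutely.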
PMID:27074005</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27074005','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27074005"><span id="translatedtitle">Picoliter Well Array Chip-Based Digital Recombinase Polymerase Amplification for <span class="hlt">Absolute</span> Quantification of Nucleic Acids.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Zhao; Liu, Yong; Wei, Qingquan; Liu, Yuanjie; Liu, Wenwen; Zhang, Xuelian; Yu, Yude</p> <p>2016-01-01</p> <p><span class="hlt">Absolute</span>, precise quantification methods expand the scope of nucleic acids research and have many practical applications. Digital polymerase chain reaction (dPCR) is a powerful method for nucleic acid detection and <span class="hlt">absolute</span> quantification. However, it requires thermal cycling and accurate temperature control, which are difficult in resource-limited conditions. Accordingly, isothermal methods, such as recombinase polymerase amplification (RPA), are more attractive. We developed a picoliter well array (PWA) chip with 27,000 consistently sized picoliter reactions (314 pL) for isothermal DNA quantification using digital RPA (dRPA) at 39°C. Sample loading using a scraping liquid blade was simple, fast, and required small reagent volumes (i.e., <20 μL). Passivating the chip surface using a methoxy-PEG-silane agent effectively eliminated cross-contamination during dRPA. Our creative optical design enabled wide-field fluorescence imaging in situ and both end-point and real-<span class="hlt">time</span> analyses of picoliter wells in a 6-cm(2) area. It was not necessary to use scan shooting and stitch serial small images together. 
Using this method, we quantified serial dilutions of a Listeria monocytogenes gDNA stock solution from 9 × 10⁻¹ to 4 × 10⁻³ copies per well with an average <span class="hlt">error</span> of less than 11% (N = 15). Overall dRPA-on-chip processing required less than 30 min, a 4-fold decrease compared to dPCR, which requires approximately 2 h. dRPA on the PWA chip provides a simple and highly sensitive method to quantify nucleic acids without thermal cycling or precise micropump/microvalve control. It has applications in fast field analysis and critical clinical diagnostics under resource-limited settings.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://ntrs.nasa.gov/search.jsp?R=19920035043&hterms=iyer&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Diyer','NASA-TRS'); return false;" href="http://ntrs.nasa.gov/search.jsp?R=19920035043&hterms=iyer&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Diyer"><span id="translatedtitle"><span class="hlt">Error</span> latency measurements in symbolic architectures</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Young, L. T.; Iyer, R. K.</p> <p>1991-01-01</p> <p><span class="hlt">Error</span> latency, the <span class="hlt">time</span> that elapses between the occurrence of an <span class="hlt">error</span> and its detection, has a significant effect on reliability.
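The quantity being measured can be sketched directly: given logged fault-occurrence times and detection times, each fault's latency is the gap to the first detection at or after it (a simplified illustration with made-up timestamps, not the paper's hybrid monitor):

```python
def error_latencies(fault_times, detection_times):
    """For each fault, latency = (first detection at or after the
    fault) - (fault time); faults that are never detected are skipped."""
    detections = sorted(detection_times)
    latencies = []
    for t in sorted(fault_times):
        later = [d for d in detections if d >= t]
        if later:
            latencies.append(later[0] - t)
    return latencies

# faults at t=1.0 and t=4.0, detections at t=2.5 and t=6.0
print(error_latencies([1.0, 4.0], [2.5, 6.0]))  # -> [1.5, 2.0]
```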
In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent <span class="hlt">errors</span>. A hybrid monitoring environment is developed to measure the <span class="hlt">error</span> latency distribution of <span class="hlt">errors</span> occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-<span class="hlt">time</span> application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise <span class="hlt">times</span> of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of <span class="hlt">error</span> latency.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19660000479','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19660000479"><span id="translatedtitle">Modified McLeod pressure gage eliminates measurement <span class="hlt">errors</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kells, M. C.</p> <p>1966-01-01</p> <p>Modification of a McLeod gage eliminates <span class="hlt">errors</span> in measuring <span class="hlt">absolute</span> pressure of gases in the vacuum range. 
A magnetically actuated valve, internal to the gage, is positioned between the mercury reservoir and the sample gas chamber.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26502162','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26502162"><span id="translatedtitle">Correcting electrode modelling <span class="hlt">errors</span> in EIT on realistic 3D head models.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo</p> <p>2015-12-01</p> <p>Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., that small measurement or modelling <span class="hlt">errors</span> can result in large image artefacts. It is therefore important that reconstruction algorithms are improved with regard to stability to modelling <span class="hlt">errors</span>. We identify that wrongly modelled electrode positions constitute one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries in a realistic three-dimensional head model is investigated, in order to reconstruct electrode movements simultaneously with conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstructions in <span class="hlt">time</span>-difference and <span class="hlt">absolute</span> imaging for simulated and experimental voltages.
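One standard way to pose such a joint reconstruction (a sketch of the general linearised approach, not necessarily the authors' exact algorithm) is to stack the conductivity Jacobian and the electrode-position Jacobian and solve a Tikhonov-regularised least-squares problem:

```python
import numpy as np

def joint_update(J_sigma, J_elec, dv, alpha=1e-2):
    """Solve min_x ||[J_sigma J_elec] x - dv||^2 + alpha^2 ||x||^2
    and split the update into conductivity and electrode parts."""
    J = np.hstack([J_sigma, J_elec])             # augmented Jacobian
    A = J.T @ J + alpha**2 * np.eye(J.shape[1])  # regularised normal equations
    x = np.linalg.solve(A, J.T @ dv)
    n = J_sigma.shape[1]
    return x[:n], x[n:]                          # (d_conductivity, d_positions)
```

With a small alpha and a well-conditioned Jacobian this reduces to ordinary least squares; in practice alpha trades artefact suppression against resolution.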
Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19850006210','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19850006210"><span id="translatedtitle">Mean and Random <span class="hlt">Errors</span> of Visual Roll Rate Perception from Central and Peripheral Visual Displays</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Vandervaart, J. C.; Hosman, R. J. A. W.</p> <p>1984-01-01</p> <p>A large number of roll rate stimuli, covering rates from zero to plus or minus 25 deg/sec, were presented to subjects in random order at 2 sec intervals. Subjects were to make estimates of magnitude of perceived roll rate stimuli presented on either a central display, on displays in the peripheral field of vision, or on all displays simultaneously. Response was by way of a digital keyboard device; stimulus exposition <span class="hlt">times</span> were varied. The present experiment differs from earlier perception tasks by the same authors in that mean rate perception <span class="hlt">error</span> (and standard deviation) was obtained as a function of rate stimulus magnitude, whereas the earlier experiments only yielded mean <span class="hlt">absolute</span> <span class="hlt">error</span> magnitude. Moreover, in the present experiment, all stimulus rates had an equal probability of occurrence, whereas the earlier tests featured a Gaussian stimulus probability density function.
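Computing a mean signed perception error and its standard deviation per stimulus magnitude, as in this experiment, amounts to grouping signed errors by stimulus (illustrative data, not the study's):

```python
from collections import defaultdict
from statistics import mean, stdev

def perception_error_stats(trials):
    """trials: (stimulus_rate, perceived_rate) pairs in deg/sec.
    Returns {stimulus: (mean signed error, standard deviation)}."""
    errors = defaultdict(list)
    for stimulus, perceived in trials:
        errors[stimulus].append(perceived - stimulus)
    return {s: (mean(e), stdev(e) if len(e) > 1 else 0.0)
            for s, e in errors.items()}

stats = perception_error_stats([(10, 9), (10, 11), (25, 20), (25, 22)])
# stats[25][0] == -4.0: the larger rate is underestimated on average
```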
Results yield a good illustration of the nonlinear functions relating rate presented to rate perceived by human observers or operators.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.ars.usda.gov/research/publications/publication/?seqNo115=272220','TEKTRAN'); return false;" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=272220"><span id="translatedtitle">Sensitivity of disease management decision aids to temperature input <span class="hlt">errors</span> associated with out-of-canopy and reduced <span class="hlt">time</span>-resolution measurements</span></a></p> <p><a target="_blank" href="http://www.ars.usda.gov/services/TekTran.htm">Technology Transfer Automated Retrieval System (TEKTRAN)</a></p> <p></p> <p></p> <p>Plant disease management decision aids typically require inputs of weather elements such as air temperature. Whereas many disease models are created based on weather elements at the crop canopy, and with relatively fine <span class="hlt">time</span> resolution, the decision aids commonly are implemented with hourly weather...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3569517','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3569517"><span id="translatedtitle">A Simplified Confinement Method (SCM) for Calculating <span class="hlt">Absolute</span> Free Energies and Free Energy and Entropy Differences</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ovchinnikov, Victor; Cecchini, Marco; Karplus, Martin</p> <p>2013-01-01</p> <p>A simple and robust formulation of the path-independent confinement method for the calculation of free energies is presented.
The simplified confinement method (SCM) does not require matrix diagonalization or switching off the molecular force field, and has a simple convergence criterion. The method can be readily implemented in molecular dynamics programs with minimal or no code modifications. Because the confinement method is a special case of thermodynamic integration, it is trivially parallel over the integration variable. The accuracy of the method is demonstrated using a model diatomic molecule, for which exact results can be computed analytically. The method is then applied to the alanine dipeptide in vacuum, and to the α-helix ↔ β-sheet transition in a sixteen-residue peptide modeled in implicit solvent. The SCM requires less effort for the calculation of free energy differences than previous formulations because it does not require computing normal modes. The SCM has a diminished advantage for determining <span class="hlt">absolute</span> free energy values, because it requires decreasing the MD integration step to obtain accurate results. An approximate confinement procedure is introduced, which can be used to estimate directly the configurational entropy difference between two macrostates, without the need for additional computation of the difference in the free energy or enthalpy. The approximation has convergence properties similar to those of the standard confinement method for the calculation of free energies. The use of the approximation requires about five <span class="hlt">times</span> less wall-clock simulation <span class="hlt">time</span> than that needed to compute enthalpy differences to similar precision from an MD trajectory. For the biomolecular systems considered in this study, the <span class="hlt">errors</span> in the entropy approximation are under 10%.
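Since the confinement method is a special case of thermodynamic integration, the free-energy difference is a one-dimensional quadrature of ⟨dU/dλ⟩, with each λ point coming from an independent simulation (hence trivially parallel). A minimal trapezoidal sketch:

```python
def ti_free_energy(lambdas, dudl_means):
    """Trapezoidal estimate of dF = int_0^1 <dU/dlambda> dlambda.
    lambdas must be sorted; each mean comes from its own simulation,
    so all lambda windows can run concurrently."""
    dF = 0.0
    for i in range(len(lambdas) - 1):
        width = lambdas[i + 1] - lambdas[i]
        dF += 0.5 * width * (dudl_means[i] + dudl_means[i + 1])
    return dF

# a linear integrand is integrated exactly: int_0^1 2*lambda dlambda = 1
print(ti_free_energy([0.0, 0.5, 1.0], [0.0, 1.0, 2.0]))  # -> 1.0
```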
The approximation will therefore be most useful for cases in which the dominant source of <span class="hlt">error</span> is insufficient sampling in the estimation of enthalpies, as arises in simulations of large</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19790022108','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19790022108"><span id="translatedtitle">Radar <span class="hlt">error</span> statistics for the space shuttle</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lear, W. M.</p> <p>1979-01-01</p> <p>Radar <span class="hlt">error</span> statistics for C-band and S-band radars, recommended for use with the ground-tracking programs that process space shuttle tracking data, are presented. The statistics are divided into two parts: bias <span class="hlt">error</span> statistics, using the subscript B, and high-frequency <span class="hlt">error</span> statistics, using the subscript q. Bias <span class="hlt">errors</span> may be slowly varying or constant. High-frequency random <span class="hlt">errors</span> (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias <span class="hlt">errors</span> were mainly due to hardware defects and to <span class="hlt">errors</span> in correction for atmospheric refraction effects. High-frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight.
This was the first <span class="hlt">time</span> that horizontal and line of sight scintillations were identified.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1084198','DOE-PATENT-XML'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1084198"><span id="translatedtitle"><span class="hlt">Error</span> detection method</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Olson, Eric J.</p> <p>2013-06-11</p> <p>An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware <span class="hlt">error</span> as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware <span class="hlt">error</span> from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware <span class="hlt">errors</span> to manifest, and the hardware <span class="hlt">error</span> is observable in the algorithm output. 
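The idea can be illustrated with a deterministic, CPU-intensive workload whose output is compared against a known-good run (a toy workload of our own, not the patented algorithm):

```python
import hashlib

def stress_workload(iterations: int = 50_000) -> str:
    """Deterministic arithmetic-heavy loop; a single bit flipped by
    faulty hardware anywhere in the run changes the final digest."""
    h = hashlib.sha256()
    acc = 1
    for i in range(iterations):
        acc = (acc * 1103515245 + i) % (2 ** 31)
        h.update(acc.to_bytes(4, "little"))
    return h.hexdigest()

def hardware_error_detected(reference_digest: str) -> bool:
    """Re-run the workload and compare the end-to-end output."""
    return stress_workload() != reference_digest

reference = stress_workload()
print(hardware_error_detected(reference))  # False on healthy hardware
```

Because only the final outputs are compared, a transient fault anywhere during the run is caught, which is the point the patent abstract makes about traditional methodologies missing such faults.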
As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware <span class="hlt">errors</span>, and the output of the algorithm may be compared at the end of the run to detect a hardware <span class="hlt">error</span> that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3902048','PMC'); return false;" href="http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3902048"><span id="translatedtitle">The <span class="hlt">Error</span> in Total <span class="hlt">Error</span> Reduction</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.</p> <p>2013-01-01</p> <p>Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total <span class="hlt">error</span> across a stimulus compound). This total <span class="hlt">error</span> reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total <span class="hlt">error</span> signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. 
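The total-error (TER) assumption can be made concrete by contrasting a Rescorla–Wagner-style update, where every present cue learns from the discrepancy with the summed prediction, against a local-error alternative where each cue learns from its own prediction alone (a schematic sketch with made-up cue names):

```python
def ter_update(weights, cues, outcome, lr=0.1):
    """Total-error rule (Rescorla-Wagner style): every present cue
    learns from the outcome minus the *summed* prediction of all cues."""
    error = outcome - sum(weights[c] for c in cues)
    for c in cues:
        weights[c] += lr * error
    return weights

def ler_update(weights, cues, outcome, lr=0.1):
    """Local-error rule: each cue learns from the outcome minus its
    *own* prediction, ignoring the other cues present."""
    for c in cues:
        weights[c] += lr * (outcome - weights[c])
    return weights

# With two cues already summing to the outcome, TER stops learning...
w = ter_update({"light": 0.5, "tone": 0.5}, ["light", "tone"], 1.0)
# ...whereas LER keeps pushing each cue toward the outcome on its own.
v = ler_update({"light": 0.5, "tone": 0.5}, ["light", "tone"], 1.0)
```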
Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local <span class="hlt">error</span> reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1998BAAS...30Q1055K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1998BAAS...30Q1055K"><span id="translatedtitle">An Investigation of Mars NIR Spectral Features using <span class="hlt">Absolutely</span> Calibrated Images</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Klassen, D. R.; Bell, J. F., III</p> <p>1998-09-01</p> <p>We used the NSFCAM 256x256 InSb array camera at the NASA Infrared Telescope Facility to gather near-infrared (NIR) spectral image sets of Mars through the 1995 opposition. In previous studies with these data [1-6] we noted several interesting spectral features, some of which are diagnostic volatile absorption bands that allow the discrimination between CO₂ and H₂O ices. Band depth maps of these regions show polar and morning and evening limb ices composed of water and some indication of polar CO₂ ices. Other features, near 3.33 and 3.4 μm, appear to be confined to particular geographic regions; specifically Syrtis Major. However, the images used in these previous studies were calibrated to either the disk average or only to a rough scaled reflectance by simple division by solar-type star data gathered at the same <span class="hlt">time</span> as the images.
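A band depth map of the kind described compares in-band reflectance to a linear continuum interpolated between the band shoulders, D = 1 − R_band/R_continuum (the generic remote-sensing formulation; the shoulder wavelengths below are made up for illustration):

```python
def band_depth(r_left, r_band, r_right, wl_left, wl_band, wl_right):
    """Depth of an absorption band below a linear continuum fitted
    between two shoulder wavelengths: D = 1 - R_band / R_continuum."""
    t = (wl_band - wl_left) / (wl_right - wl_left)
    r_continuum = (1.0 - t) * r_left + t * r_right
    return 1.0 - r_band / r_continuum

# a flat spectrum has no band; a reflectance dip to half gives D = 0.5
print(band_depth(0.4, 0.4, 0.4, 3.0, 3.25, 3.5))  # -> 0.0
print(band_depth(0.4, 0.2, 0.4, 3.0, 3.25, 3.5))  # -> 0.5
```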
This only allowed determinations of spectral features either relative to some global average of the feature, or to some unit not directly comparable to other published data. For at least three of our observation nights the conditions and data are sufficient to <span class="hlt">absolutely</span> calibrate the images to radiance factors. For this work we reinvestigate the spectra and band depth mapping results using these <span class="hlt">absolutely</span> calibrated images. In general we find that bright regions have peak radiance factors of 0.5 to 0.6 at 2.25 μm and 0.3 to 0.4 at 3.5 μm; dark regions have radiance factors of 0.2 to 0.25 at 2.25 μm and 0.1 to 0.15 at 3.5 μm. Overall, precision <span class="hlt">errors</span> are about 0.025 in radiance factor and <span class="hlt">absolute</span> <span class="hlt">errors</span> are at the 10-15% level. These results are consistent with previous studies that found radiance factors of 0.35 in Tharsis, 0.47 in Elysium, and 0.26 in dark regions at 2.25 μm [7,8] and 0.3 in bright regions and 0.1 in dark regions at 3.5 μm [8]. These <span class="hlt">absolute</span> flux values will allow direct comparison of these results to radiative transfer models of the behavior of the surface and</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li class="active"><span>25</span></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> <center> <div class="footer-extlink text-muted"><small>Some links on this page may take you to non-federal websites.
Their policies may differ from this site.</small> </div> </center> <div id="footer-wrapper"> <div class="footer-content"> <div id="footerOSTI" class=""> <div class="row"> <div class="col-md-4 text-center col-md-push-4 footer-content-center"><small><a href="http://www.science.gov/disclaimer.html">Privacy and Security</a></small> <div class="visible-sm visible-xs push_footer"></div> </div> <div class="col-md-4 text-center col-md-pull-4 footer-content-left"> <img src="https://www.osti.gov/images/DOE_SC31.png" alt="U.S. Department of Energy" usemap="#doe" height="31" width="177"><map style="display:none;" name="doe" id="doe"><area shape="rect" coords="1,3,107,30" href="http://www.energy.gov" alt="U.S. Deparment of Energy"><area shape="rect" coords="114,3,165,30" href="http://www.science.energy.gov" alt="Office of Science"></map> <a ref="http://www.osti.gov" style="margin-left: 15px;"><img src="https://www.osti.gov/images/footerimages/ostigov53.png" alt="Office of Scientific and Technical Information" height="31" width="53"></a> <div class="visible-sm visible-xs push_footer"></div> </div> <div class="col-md-4 text-center footer-content-right"> <a href="http://www.osti.gov/nle"><img src="https://www.osti.gov/images/footerimages/NLElogo31.png" alt="National Library of Energy" height="31" width="79"></a> <a href="http://www.science.gov"><img src="https://www.osti.gov/images/footerimages/scigov77.png" alt="science.gov" height="31" width="98"></a> <a href="http://worldwidescience.org"><img src="https://www.osti.gov/images/footerimages/wws82.png" alt="WorldWideScience.org" height="31" width="90"></a> </div> </div> </div> </div> </div> <p><br></p> </div><!-- container --> </body> </html>