Relative errors can cue absolute visuomotor mappings.
van Dam, Loes C J; Ernst, Marc O
2015-12-01
When repeatedly switching between two visuomotor mappings, e.g. in a reaching or pointing task, adaptation tends to speed up over time. That is, when the error in the feedback corresponds to a mapping switch, fast adaptation occurs. Yet, what is learned, the relative error or the absolute mappings? When switching between mappings, errors with a size corresponding to the relative difference between the mappings will occur more often than other large errors. Thus, we could learn to correct more for errors with this familiar size (Error Learning). On the other hand, it has been shown that the human visuomotor system can store several absolute visuomotor mappings (Mapping Learning) and can use associated contextual cues to retrieve them. Thus, when contextual information is present, no error feedback is needed to switch between mappings. Using a rapid pointing task, we investigated how these two types of learning may each contribute when repeatedly switching between mappings in the absence of task-irrelevant contextual cues. After training, we examined how participants changed their behaviour when a single error probe indicated either the often-experienced error (Error Learning) or one of the previously experienced absolute mappings (Mapping Learning). Results were consistent with Mapping Learning despite the relative nature of the error information in the feedback. This shows that errors in the feedback can have a double role in visuomotor behaviour: they drive the general adaptation process by making corrections possible on subsequent movements, as well as serve as contextual cues that can signal a learned absolute mapping. PMID:26280315
Clock time is absolute and universal
NASA Astrophysics Data System (ADS)
Shen, Xinhang
2015-09-01
A critical error is found in the Special Theory of Relativity (STR): mixing up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation in physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time; it can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the product of the abstract time and the frequency of oscillation. After a Lorentz Transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, so the resulting product, i.e. the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of the Lorentz Transformation. The Lorentz invariance of the displayed times of clocks can further prove, within the framework of STR, that our earth-based standard physical time is absolute, universal and independent of inertial reference frames, as confirmed both by the physical fact of the universal synchronization of clocks on the GPS satellites and clocks on the earth, and by the theoretical existence of the absolute and universal Galilean time in STR, which has proved that time dilation and space contraction are pure illusions of STR. The existence of the absolute and universal time in STR directly denies that the reference-frame-dependent abstract time of STR is the physical time; therefore, STR is wrong and all its predictions can never happen in the physical world.
NASA Astrophysics Data System (ADS)
Myers, S.; Johannesson, G.
2012-12-01
Arrival time measurements based on waveform cross correlation are becoming more common as advanced signal processing methods are applied to seismic data archives and real-time data streams. Waveform correlation can precisely measure the time difference between the arrival of two phases, and differential time data can be used to constrain the relative location of events. Absolute locations are needed for many applications, which generally requires the use of absolute time data. Current methods for measuring absolute time data are approximately two orders of magnitude less precise than differential time measurements. To exploit the strengths of both absolute and differential time data, we extend our multiple-event location method Bayesloc, which previously used absolute time data only, to include the use of differential time measurements that are based on waveform cross correlation. Fundamentally, Bayesloc is a formulation of the joint probability over all parameters comprising the multiple-event location system. The Markov-Chain Monte Carlo method is used to sample from the joint probability distribution given arrival data sets. The differential time component of Bayesloc includes scaling a stochastic estimate of differential time measurement precision based on the waveform correlation coefficient for each datum. For a regional-distance synthetic data set with absolute and differential time measurement errors of 0.25 seconds and 0.01 second, respectively, epicenter location accuracy is improved from an average of 1.05 km when solely absolute time data are used to 0.28 km when absolute and differential time data are used jointly (73% improvement). The improvement in absolute location accuracy is the result of conditionally limiting absolute location probability regions based on the precise relative position with respect to neighboring events. Bayesloc estimates of data precision are found to be accurate for the synthetic test, with absolute and differential time measurement
On the Error Sources in Absolute Individual Antenna Calibrations
NASA Astrophysics Data System (ADS)
Aerts, Wim; Baire, Quentin; Bilich, Andria; Bruyninx, Carine; Legrand, Juliette
2013-04-01
field) multipath errors, both during calibration and later on at the station, absolute sub-millimeter positioning with GPS is not (yet) possible. References [1] G. Wübbena, M. Schmitz, G. Boettcher, C. Schumann, "Absolute GNSS Antenna Calibration with a Robot: Repeatability of Phase Variations, Calibration of GLONASS and Determination of Carrier-to-Noise Pattern", International GNSS Service: Analysis Center workshop, 8-12 May 2006, Darmstadt, Germany. [2] P. Zeimetz, H. Kuhlmann, "On the Accuracy of Absolute GNSS Antenna Calibration and the Conception of a New Anechoic Chamber", FIG Working Week 2008, 14-19 June 2008, Stockholm, Sweden. [3] P. Zeimetz, H. Kuhlmann, L. Wanninger, V. Frevert, S. Schön and K. Strauch, "Ringversuch 2009", 7th GNSS-Antennen-Workshop, 19-20 March 2009, Dresden, Germany.
Absolute vs. relative error characterization of electromagnetic tracking accuracy
NASA Astrophysics Data System (ADS)
Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet
2010-02-01
Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the
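The relative-accuracy assessment described above reduces to comparing measured sensor positions against the calibrated phantom geometry and flagging displacements that exceed a threshold. A minimal sketch; the sensor coordinates and the threshold value are invented for illustration:

```python
import math

def displacement_errors(measured, calibrated):
    """Distance between each measured sensor position and its calibrated
    ('clean'-environment) position, in the same units as the inputs."""
    return [math.dist(m, c) for m, c in zip(measured, calibrated)]

# Hypothetical phantom: three sensors at known relative positions (mm)
calibrated = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
measured   = [(0.1, 0.0, 0.0), (10.0, 0.2, 0.0), (0.0, 10.0, 0.3)]

errors = displacement_errors(measured, calibrated)
flagged = [e > 0.25 for e in errors]  # mark sensors in "distorted" field
```

In a near-real-time display, the flagged list would color-code regions of the operating field by measurement confidence.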
Absolute Plate Velocities from Seismic Anisotropy: Importance of Correlated Errors
NASA Astrophysics Data System (ADS)
Gordon, R. G.; Zheng, L.; Kreemer, C.
2014-12-01
The orientation of seismic anisotropy inferred beneath the interiors of plates may provide a means to estimate the motions of the plate relative to the deeper mantle. Here we analyze a global set of shear-wave splitting data to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. The errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11º Ma-1 (95% confidence limits) right-handed about 57.1ºS, 68.6ºE. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2°) differs insignificantly from that for continental lithosphere (σ=21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4°) than for continental lithosphere (σ=14.7°). Two of the slowest-moving plates, Antarctica (vRMS=4 mm a-1, σ=29°) and Eurasia (vRMS=3 mm a-1, σ=33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈5 mm a-1 to result in seismic anisotropy useful for estimating plate motion.
NASA Astrophysics Data System (ADS)
Gao, J.
2014-12-01
Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
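The additive squared-error decomposition E[(y − ŷ)²] = bias² + variance + noise can be checked numerically. The bias, model-variance, and noise values below are assumptions chosen purely for illustration:

```python
import random

random.seed(0)
truth, bias, model_sd, noise_sd = 2.0, 0.3, 0.2, 0.5
n = 200_000

# Each prediction carries a systematic bias plus random model variability;
# each observation carries irreducible noise around the truth.
preds = [truth + bias + random.gauss(0.0, model_sd) for _ in range(n)]
obs = [truth + random.gauss(0.0, noise_sd) for _ in range(n)]

mse = sum((o - p) ** 2 for o, p in zip(obs, preds)) / n
analytic = bias ** 2 + model_sd ** 2 + noise_sd ** 2  # additive for SQ error
```

For ABS error no such clean additive split exists, and the abstract's analytic derivation shows that some components enter with subtractive terms, which is the core of its argument.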
Absolute plate velocities from seismic anisotropy: Importance of correlated errors
NASA Astrophysics Data System (ADS)
Zheng, Lin; Gordon, Richard G.; Kreemer, Corné
2014-09-01
The errors in plate motion azimuths inferred from shear wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma-1 (95% confidence limits) right handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ = 19.2°) differs insignificantly from that for continental lithosphere (σ = 21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ = 7.4°) than for continental lithosphere (σ = 14.7°). Two of the slowest-moving plates, Antarctica (vRMS = 4 mm a-1, σ = 29°) and Eurasia (vRMS = 3 mm a-1, σ = 33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈ 5 mm a-1 to result in seismic anisotropy useful for estimating plate motion. The tendency of observed azimuths on the Arabia plate to be counterclockwise of plate motion may provide information about the direction and amplitude of superposed asthenospheric flow or about anisotropy in the lithospheric mantle.
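To relate an angular velocity such as the quoted 0.25° Ma⁻¹ net rotation to surface speeds, one can use v = ωR sin Δ, where Δ is the angular distance from the rotation pole. A sketch, assuming the mean Earth radius:

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius (assumed constant)

def surface_speed_mm_per_a(omega_deg_per_Ma, dist_from_pole_deg):
    """Linear surface speed at a point dist_from_pole_deg away from the
    rotation pole. Note km/Ma is numerically identical to mm/a."""
    omega_rad_per_Ma = math.radians(omega_deg_per_Ma)
    return omega_rad_per_Ma * R_EARTH_KM * math.sin(math.radians(dist_from_pole_deg))

# The 0.25 deg/Ma net lithosphere rotation peaks at roughly 28 mm/a
# along the great circle 90 degrees from the pole.
```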
Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles
ERIC Educational Resources Information Center
Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner
2016-01-01
This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…
Absolute Timing of the Crab Pulsar with RXTE
NASA Technical Reports Server (NTRS)
Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.
2004-01-01
We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.
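The phase-to-time conversion underlying the quoted numbers is simply lag = phase × period. With a Crab spin period of roughly 33.6 ms (an approximate value assumed here), the 0.01025 period lead reproduces the ~344 µs figure:

```python
CRAB_PERIOD_S = 0.0336  # approximate Crab pulsar spin period (assumption)

def phase_to_time_us(phase_fraction, period_s=CRAB_PERIOD_S):
    """Convert a lag expressed as a fraction of a pulse period
    to microseconds."""
    return phase_fraction * period_s * 1e6

lead_us = phase_to_time_us(0.01025)         # ~344 microseconds
uncertainty_us = phase_to_time_us(0.00120)  # ~40 microseconds
```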
Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error
ERIC Educational Resources Information Center
Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam
2009-01-01
Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…
The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
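A minimal single-grid version of the five-point scheme shows how absolute and relative error against a known solution are measured. The grid size, sweep count, and manufactured solution u = sin(πx)sin(πy) are choices for illustration only, not the article's three-grid algorithm:

```python
import math

n = 17                      # grid points per side; h = 1/(n-1)
h = 1.0 / (n - 1)
u = [[0.0] * n for _ in range(n)]  # zero Dirichlet boundary conditions
# Forcing chosen so the exact solution is sin(pi x) sin(pi y):
f = [[-2.0 * math.pi ** 2 * math.sin(math.pi * i * h) * math.sin(math.pi * j * h)
      for j in range(n)] for i in range(n)]

for _ in range(2000):       # Gauss-Seidel sweeps of the five-point stencil
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                              + u[i][j - 1] + u[i][j + 1] - h * h * f[i][j])

exact = lambda i, j: math.sin(math.pi * i * h) * math.sin(math.pi * j * h)
abs_err = max(abs(u[i][j] - exact(i, j)) for i in range(n) for j in range(n))
rel_err = abs_err / 1.0     # max |u| is 1 for this manufactured solution
```

The remaining error here is dominated by the O(h²) discretization error of the five-point stencil, which is exactly the quantity a multi-resolution error-control scheme estimates by comparing grids.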
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2013-01-01
A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it remains to be demonstrated that these advances can be transferred from NIST to NASA and/or instrument-vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.
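An error budget of the kind described is typically combined by root-sum-square over independent components. The component names and the 1-σ percentages below are invented placeholders, not CLARREO's actual numbers:

```python
import math

components_pct = {  # hypothetical 1-sigma uncertainty contributions (%)
    "solar_vs_earth_view_geometry": 0.10,
    "attenuator_knowledge": 0.15,
    "detector_linearity": 0.10,
    "detector_noise": 0.05,
}

# Independent error terms combine in quadrature
total_pct = math.sqrt(sum(v ** 2 for v in components_pct.values()))
```

The quadrature sum makes the largest single component dominate, which is why attenuator knowledge for direct solar views gets particular attention in the budget.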
Time-resolved Absolute Velocity Quantification with Projections
Langham, Michael C.; Jain, Varsha; Magland, Jeremy F.; Wehrli, Felix W.
2010-01-01
Quantitative information on time-resolved blood velocity along the femoral/popliteal artery can provide clinical information on peripheral arterial disease and complement MR angiography since not all stenoses are hemodynamically significant. The key disadvantages of the most widely used approach to time-resolve pulsatile blood flow by cardiac-gated velocity-encoded gradient-echo imaging are gating errors and long acquisition time. Here we demonstrate a rapid non-triggered method that quantifies absolute velocity on the basis of phase difference between successive velocity-encoded projections after selectively removing the background static tissue signal via a reference image. The tissue signal from the reference image’s center k-space line is isolated by masking out the vessels in the image domain. The performance of the technique, in terms of reproducibility and agreement with results obtained with conventional phase contrast (PC)-MRI, was evaluated at 3T field strength with a variable-flow-rate phantom and in vivo on the triphasic velocity waveforms at several segments along the femoral and popliteal arteries. Additionally, time-resolved flow velocity was quantified in five healthy subjects and compared against gated PC-MRI results. To illustrate clinical feasibility, the proposed method was shown to be able to identify hemodynamic abnormalities and impaired reactivity in a diseased femoral artery. For both phantom and in vivo studies, velocity measurements were within 1.5 cm/s and the coefficient of variation was less than 5% in an in vivo reproducibility study. In five healthy subjects, the average differences in mean peak velocities and their temporal locations were within 1 cm/s and 10 ms compared to gated PC-MRI. In conclusion, the proposed method provides temporally-resolved arterial velocity with a temporal resolution of 20 ms with minimal post-processing. PMID:20677235
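Phase-based velocity quantification of this kind rests on the standard phase-contrast relation v = Δφ·VENC/π, which maps a ±π phase difference between the two velocity encodings to ±VENC. A sketch; the VENC value in the example is an assumption:

```python
import math

def velocity_from_phase(delta_phi_rad, venc_cm_s):
    """Standard phase-contrast mapping: a phase difference of +/-pi
    between the two velocity encodings corresponds to +/-VENC."""
    return delta_phi_rad / math.pi * venc_cm_s

# A quarter-cycle phase difference at VENC = 100 cm/s gives 50 cm/s
v = velocity_from_phase(math.pi / 2, 100.0)
```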
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2016-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.
NASA Astrophysics Data System (ADS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2015-09-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a testbed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.
NASA Technical Reports Server (NTRS)
Thome, Kurtis; Gubbels, Timothy; Barnes, Robert
2011-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to those made while viewing the sun, relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration, including a link to absolute solar irradiance measurements. The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those in the IPCC Report. A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI-traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables, including: 1) Surface temperature and atmospheric temperature profile 2) Atmospheric water vapor profile 3) Far infrared water vapor greenhouse 4) Aerosol properties and anthropogenic aerosol direct radiative forcing 5) Total and spectral solar
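The ratio-based retrieval sketched above can be written as the earth-view signal divided by the transmittance-corrected sun-view signal; calibration coefficients common to both views cancel in the ratio. All names and numeric values below are illustrative assumptions, not CLARREO's design parameters:

```python
def at_sensor_reflectance(dn_earth, dn_sun, transmittance, geom=1.0):
    """Sketch of a ratio retrieval: earth-view signal over the full-sun
    equivalent signal (sun-view signal divided by the attenuator
    transmittance), times a geometry factor folding in solid-angle and
    distance terms, all assumed known."""
    return geom * dn_earth * transmittance / dn_sun

# Illustrative numbers only: a bright scene against a heavily
# attenuated direct solar view
rho = at_sensor_reflectance(dn_earth=6.0e6, dn_sun=2000.0, transmittance=1e-4)
```

This is why "knowledge of attenuator behavior when viewing the sun" dominates the error budget: any error in the assumed transmittance propagates directly into the retrieved reflectance.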
Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
Henry More and the development of absolute time.
Thomas, Emily
2015-12-01
This paper explores the nature, development and influence of the first English account of absolute time, put forward in the mid-seventeenth century by the 'Cambridge Platonist' Henry More. Against claims in the literature that More does not have an account of time, this paper sets out More's evolving account and shows that it reveals the lasting influence of Plotinus. Further, this paper argues that More developed his views on time in response to his adoption of Descartes' vortex cosmology and cosmogony, providing new evidence of More's wider project to absorb Cartesian natural philosophy into his Platonic metaphysics. Finally, this paper argues that More should be added to the list of sources that later English thinkers - including Newton and Samuel Clarke - drew on in constructing their absolute accounts of time. PMID:26568082
Interactions of timing and prediction error learning.
Kirkpatrick, Kimberly
2014-01-01
Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. PMID:23962670
Effective connectivity associated with auditory error detection in musicians with absolute pitch
Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.
2014-01-01
It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced pitch identification abilities without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model the effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), musicians with AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left-to-right STG connections is important in the identification of self-voice error and sensorimotor integration in AP musicians. We also identified reduced connectivity of left hemisphere PM to STG connections in the AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings also suggest that individuals with AP are more adept at using pitch-related feedback from the right hemisphere. PMID:24634644
Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification
Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...
An analysis of spacecraft data time tagging errors
NASA Technical Reports Server (NTRS)
Fang, A. C.
1975-01-01
An in-depth examination of the timing and telemetry in just one spacecraft points out the genesis of various types of timing errors and serves as a guide in the design of future timing/telemetry systems. The principal sources of timing errors are examined carefully and described in detail. Estimates of these errors are also made and presented. It is found that the timing errors within the telemetry system are larger than the total timing errors resulting from all other sources.
Gustafson, William I.; Yu, Shaocai
2012-10-23
Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations of these metrics are only valid for datasets with positive means. This paper presents a methodology to use and interpret the metrics with datasets that have negative means. The updated formulations give identical results compared to the original formulations for the case of positive means, so researchers are encouraged to use the updated formulations going forward without introducing ambiguity.
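The metrics discussed above can be sketched in Python. The piecewise definitions below follow the commonly cited formulations of the normalized mean bias factor and normalized mean absolute error factor; the absolute values in the denominators stand in for the paper's negative-mean extension, and the function names are ours:

```python
def nmbf(model, obs):
    """Normalized mean bias factor: symmetric for over- and underestimation.
    Piecewise form as commonly cited; the absolute values in the
    denominators reflect the updated formulation for negative means."""
    m = sum(model) / len(model)
    o = sum(obs) / len(obs)
    if abs(m) >= abs(o):
        return (m - o) / abs(o)
    return (m - o) / abs(m)

def nmaef(model, obs):
    """Normalized mean absolute error factor, same normalization rule."""
    m = sum(model) / len(model)
    o = sum(obs) / len(obs)
    scale = abs(o) if abs(m) >= abs(o) else abs(m)
    return sum(abs(a - b) for a, b in zip(model, obs)) / (scale * len(obs))

# A factor-of-two overestimate and its mirror-image underestimate score
# symmetrically, which is the point of these metrics:
print(nmbf([2.0, 2.0], [1.0, 1.0]))   # 1.0
print(nmbf([1.0, 1.0], [2.0, 2.0]))   # -1.0
```

The symmetric ±1.0 scores for the two mirrored cases contrast with the ordinary normalized mean bias, which would give +1.0 and −0.5 for the same pair.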
75 FR 15371 - Time Error Correction Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-29
... Energy Regulatory Commission 18 CFR Part 40 Time Error Correction Reliability Standard March 18, 2010... section 215 of the Federal Power Act, the Commission proposes to remand the proposed revised Time Error... Commission proposes to remand the Time Error Correction Reliability Standard (BAL-004-1) developed by...
System Measures Errors Between Time-Code Signals
NASA Technical Reports Server (NTRS)
Cree, David; Venkatesh, C. N.
1993-01-01
System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.
Alterations in Error-Related Brain Activity and Post-Error Behavior over Time
ERIC Educational Resources Information Center
Themanson, Jason R.; Rosen, Peter J.; Pontifex, Matthew B.; Hillman, Charles H.; McAuley, Edward
2012-01-01
This study examines the relation between the error-related negativity (ERN) and post-error behavior over time in healthy young adults (N = 61). Event-related brain potentials were collected during two sessions of an identical flanker task. Results indicated changes in ERN and post-error accuracy were related across task sessions, with more…
NASA Astrophysics Data System (ADS)
Dykowski, Przemyslaw; Krynski, Jan
2015-04-01
The establishment of modern gravity control using exclusively absolute gravity determinations has significant advantages (e.g. accuracy, time efficiency) over control established mostly with relative gravity measurements. The newly modernized gravity control in Poland consists of 28 fundamental stations (laboratory) and 168 base stations (PBOG14 - located in the field). Gravity at the fundamental stations was surveyed with the FG5-230 gravimeter of the Warsaw University of Technology, and at the base stations with the A10-020 gravimeter of the Institute of Geodesy and Cartography, Warsaw. This work concerns absolute gravity determinations at the base stations. Although free of common relative measurement errors (e.g. instrumental drift) and effects of network adjustment, absolute gravity determinations for the establishment of gravity control require advanced corrections for time dependent factors, i.e. tidal and ocean loading corrections, atmospheric corrections and hydrological corrections, which were not taken into account when establishing the previous gravity control in Poland. Currently available services and software make it possible to determine high-accuracy, high-temporal-resolution corrections for atmospheric (based on digital weather models, e.g. ECMWF) and hydrological (based on hydrological models, e.g. GLDAS/Noah) gravitational and loading effects. These corrections are mostly used for processing observations with Superconducting Gravimeters in the Global Geodynamics Project. For the area of Poland the atmospheric correction based on weather models can differ from the standard atmospheric correction by as much as ±2 µGal. The hydrological model shows an annual variability of ±8 µGal. In addition, the standard tidal correction may differ from the one obtained from the local tidal model (based on tidal observations); at Borowa Gora Observatory this difference reaches the level of ±1.5 µGal. Overall the sum of atmospheric and
NASA Astrophysics Data System (ADS)
Long, Jiale; Xi, Jiangtao; Zhang, Jianmin; Zhu, Ming; Cheng, Wenqing; Li, Zhongwei; Shi, Yusheng
2016-09-01
In a recently published work, we proposed a technique to recover the absolute phase maps of fringe patterns with two selected fringe wavelengths. To achieve higher anti-error capability, the proposed method requires fringe patterns with longer wavelengths; however, longer wavelengths may degrade the signal-to-noise ratio (SNR) of the surface measurement. In this paper, we propose a new approach to unwrap the phase maps from their wrapped versions based on fringes with three different wavelengths, which is characterized by improved anti-error capability and SNR. While the previous method works on the two phase maps obtained from six-step phase-shifting profilometry (PSP) (thus 12 fringe patterns are needed), the proposed technique performs very well on three phase maps from three-step PSP, requiring only nine fringe patterns, and hence is more efficient. Moreover, the advantages of the two-wavelength method, namely simple implementation and flexibility in the use of fringe patterns, are also preserved. Theoretical analysis and experimental results are presented to confirm the effectiveness of the proposed method.
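The three-wavelength scheme builds on two-wavelength heterodyne unwrapping. Below is a noise-free Python sketch of that underlying step, assuming the measured position lies within one beat (equivalent) wavelength; the wavelengths and the position are made-up numbers, and the three-wavelength method applies a further stage of the same idea:

```python
import math

TWO_PI = 2.0 * math.pi

def wrap(phase):
    """Wrap a phase into [0, 2*pi), as a phase-shifting algorithm returns it."""
    return phase % TWO_PI

def unwrap_two_wavelength(phi1, phi2, lam1, lam2):
    """Recover the absolute phase of the lam1 fringe from two wrapped
    phases (lam2 > lam1), assuming the position lies within one period
    of the beat wavelength and the data are noise-free."""
    lam_beat = lam1 * lam2 / (lam2 - lam1)   # equivalent (beat) wavelength
    phi_beat = wrap(phi1 - phi2)             # already-unwrapped beat phase
    # Scale the beat phase to lam1 units and round to the fringe order.
    order = round((lam_beat / lam1 * phi_beat - phi1) / TWO_PI)
    return phi1 + TWO_PI * order

# Simulated point at x = 37.3 units, fringe wavelengths 9 and 10
# (beat wavelength 90, so x is within one beat period):
lam1, lam2, x = 9.0, 10.0, 37.3
phi1 = wrap(TWO_PI * x / lam1)
phi2 = wrap(TWO_PI * x / lam2)
x_recovered = unwrap_two_wavelength(phi1, phi2, lam1, lam2) * lam1 / TWO_PI
print(x_recovered)  # ~37.3
```

With noise, the rounding step is exactly where the anti-error capability discussed in the abstract matters: a large beat wavelength tolerates more phase noise before the fringe order is rounded incorrectly.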
Multi-channel data acquisition system with absolute time synchronization
NASA Astrophysics Data System (ADS)
Włodarczyk, Przemysław; Pustelny, Szymon; Budker, Dmitry; Lipiński, Marcin
2014-11-01
We present a low-cost, stand-alone, global-time-synchronized data acquisition system. Our prototype records up to four analog signals with 16-bit resolution in variable ranges at a maximum sampling rate of 1000 S/s. The system simultaneously acquires readouts of external sensors, e.g. a magnetometer or thermometer. A complete data set, including a header with a timestamp, is stored on a Secure Digital (SD) card or transmitted to a computer over Universal Serial Bus (USB). The estimated time accuracy of the data acquisition is better than ±200 ns. The device is intended for use in a global network of optical magnetometers (the Global Network of Optical Magnetometers for Exotic physics - GNOME), which searches for signals heralding physics beyond the Standard Model, such as those generated by ordinary spin coupling to exotic particles or by anomalous spin interactions.
Absolute GPS Time Event Generation and Capture for Remote Locations
NASA Astrophysics Data System (ADS)
HIRES Collaboration
The HiRes experiment operates fixed-location and portable lasers at remote desert locations to generate calibration events. One physics goal of HiRes is to search for unusual showers. These may appear similar to upward or horizontally pointing laser tracks used for atmospheric calibration. It is therefore necessary to remove all of these calibration events from the HiRes detector data stream in a physics-blind manner. A robust and convenient "tagging" method is to generate the calibration events at precisely known times. To facilitate this tagging method we have developed the GPSY (Global Positioning System YAG) module. It uses a GPS receiver, an embedded processor and additional timing logic to generate laser triggers at arbitrary programmed times and frequencies with better than 100 ns accuracy. The GPSY module has two trigger outputs (one microsecond resolution) to trigger the laser flash-lamp and Q-switch and one event capture input (25 ns resolution). The GPSY module can be programmed either by a front panel menu-based interface or by a host computer via an RS232 serial interface. The latter also allows for computer logging of generated and captured event times. Details of the design and the implementation of these devices will be presented. Motivation: Air showers represent a small fraction, much less than a percent, of the total High Resolution Fly's Eye data sample. The bulk of the sample is calibration data. Most of this calibration data is generated by two types of systems that use lasers. One type sends light directly to the detectors via optical fibers to monitor detector gains (Girard 2001). The other sends a beam of light into the sky and the scattered light that reaches the detectors is used to monitor atmospheric effects (Wiencke 1998). It is important that these calibration events be cleanly separated from the rest of the sample both to provide a complete set of monitoring information, and more
Vieira, Márcio M; Ugrinowitsch, Herbert; Oliveira, Fernanda S; Gallo, Lívia G; Benda, Rodolfo N
2012-10-01
The interaction between the amount of practice and frequency of Knowledge of Results (KR) was investigated in a timing skill. In the acquisition phase the task involved 90 trials of releasing a knob and transporting three tennis balls from three near recipients to three far ones in a specific sequence and target time. The retention test performed 24 hr. later had the same sequence of transport but a new target time was required. In both phases, absolute error and standard deviation plus constant error was measured. The five groups differed in relation to frequency of KR and amount of practice. The results showed that intermediate frequencies as well as higher frequencies of KR elicited better performance during the retention test. PMID:23265002
A Mechanism for Error Detection in Speeded Response Time Tasks
ERIC Educational Resources Information Center
Holroyd, Clay B.; Yeung, Nick; Coles, Michael G. H.; Cohen, Jonathan D.
2005-01-01
The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors…
Overproduction timing errors in expert dancers.
Minvielle-Moncla, Joëlle; Audiffren, Michel; Macar, Françoise; Vallet, Cécile
2008-07-01
The authors investigated how expert dancers achieve accurate timing under various conditions. They designed the conditions to interfere with the dancers' attention to time and to test the explanation of the interference effect provided in the attentional model of time processing. Participants were 17 expert contemporary dancers who performed a freely chosen duration while walking and executing a bilateral cyclic arm movement over a given distance. The dancers reproduced that duration in different situations of interference. The process yielded temporal overproductions, validating the attentional model and extending its application to expert populations engaged in complex motor situations. The finding that the greatest overproduction occurred in the transfer-with-improvisation condition suggests that improvisation within a time deadline requires specific training.
Overcoming time-integration errors in SINDA's FWDBCK solution routine
NASA Technical Reports Server (NTRS)
Skladany, J. T.; Costello, F. A.
1984-01-01
The FWDBCK time step, which is usually chosen intuitively to achieve adequate accuracy at reasonable computational costs, can in fact lead to large errors. NASA observed such errors in solving cryogenic problems on the COBE spacecraft, but a similar error is also demonstrated for a single node radiating to space. An algorithm has been developed for selecting the time step during the course of the simulation. The error incurred when the time derivative is replaced by the FWDBCK time difference can be estimated from the Taylor-Series expression for the temperature. The algorithm selects the time step to keep this error small. The efficacy of the method is demonstrated on the COBE and single-node problems.
Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.
2014-01-01
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs, and greater in AP than in RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545
Modeling error analysis of stationary linear discrete-time filters
NASA Technical Reports Server (NTRS)
Patel, R.; Toda, M.
1977-01-01
The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for those matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.
Error Representation in Time For Compressible Flow Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2010-01-01
Time plays an essential role in most real world fluid mechanics problems, e.g. turbulence, combustion, acoustic noise, moving geometries, blast waves, etc. Time dependent calculations now dominate the computational landscape at the various NASA Research Centers, but the accuracy of these computations is often not well understood. In this presentation, we investigate error representation (and error control) for time-periodic problems as a prelude to investigating the feasibility of error control for stationary statistics and space-time averages. These statistics and averages (e.g. time-averaged lift and drag forces) are often the output quantities sought by engineers. For systems such as the Navier-Stokes equations, pointwise error estimates deteriorate rapidly with increasing Reynolds number, while statistics and averages may remain well behaved.
Disentangling timing and amplitude errors in streamflow simulations
NASA Astrophysics Data System (ADS)
Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin
2016-09-01
This article introduces an improvement in the Series Distance (SD) approach for the improved discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow and periods of rise and recession in hydrological events. Within these periods, it determines the distance of two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs, a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart, and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude of errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. This suggests that the combined use of time and magnitude errors to
A mechanism for error detection in speeded response time tasks.
Holroyd, Clay B; Yeung, Nick; Coles, Michael G H; Cohen, Jonathan D
2005-05-01
The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors by identifying states of the system associated with negative value. The mechanism is formalized in a computational model based on a recent theoretical framework for understanding error processing in humans (C. B. Holroyd & M. G. H. Coles, 2002). The model is used to simulate behavioral and event-related brain potential data in a speeded response time task, and the results of the simulation are compared with empirical data.
Sources of error in picture naming under time pressure.
Lloyd-Jones, Toby J; Nettlemill, Mandy
2007-06-01
We used a deadline procedure to investigate how time pressure may influence the processes involved in picture naming. The deadline exaggerated the errors found in naming without a deadline. There were also category differences in performance between living and nonliving things and, in particular, between animals and fruit and vegetables. The majority of errors were visually and semantically related to the target (e.g., celery-asparagus), and a greater proportion of these errors was made to living things. Importantly, there were also more visual-semantic errors to animals than to fruit and vegetables. In addition, there was a smaller number of pure semantic errors (e.g., nut-bolt), which were made predominantly to nonliving things. The different kinds of error were correlated with different variables. Overall, visual-semantic errors were associated with visual complexity and visual similarity, whereas pure semantic errors were associated with imageability and age of acquisition. For animals, however, visual-semantic errors were associated with visual complexity, whereas for fruit and vegetables they were associated with visual similarity. We discuss these findings in terms of theories of category-specific semantic impairment and models of picture naming. PMID:17848037
On the Time Step Error of the DSMC
NASA Astrophysics Data System (ADS)
Hokazono, Tomokuni; Kobayashi, Seijiro; Ohsawa, Tomoki; Ohwada, Taku
2003-05-01
The time step truncation error of the DSMC is examined numerically. Contrary to the claim of [S.V. Bogomolov, U.S.S.R. Comput. Math. Math. Phys., Vol. 28, 79 (1988)] and in agreement with that of [T. Ohwada, J. Comput. Phys., Vol. 139, 1 (1998)], it is demonstrated that the error of the conventional DSMC per time step Δt is not O(Δt³) but O(Δt²). Further, it is shown that the error of the DSMC is reduced to O(Δt³) by applying Strang's splitting for partial differential equations to the Boltzmann equation. The error resulting from the boundary condition, which is not studied in the abovementioned theoretical studies, is also discussed.
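The ordering result can be illustrated outside the DSMC context with a minimal non-commuting linear system: Lie (first-order) splitting has O(Δt²) per-step error and Strang splitting O(Δt³), which show up as first- versus second-order global convergence. This is a toy analogue, not the Boltzmann-equation calculation itself:

```python
import math

# Non-commuting operators A = [[0,1],[0,0]] and B = [[0,0],[1,0]] acting
# on a 2-vector; both are nilpotent, so exp(t*A) = I + t*A exactly.
def exp_a(dt, v):
    return [v[0] + dt * v[1], v[1]]

def exp_b(dt, v):
    return [v[0], v[1] + dt * v[0]]

def exact(t, v):
    # exp(t*(A+B)) = cosh(t)*I + sinh(t)*(A+B)
    c, s = math.cosh(t), math.sinh(t)
    return [c * v[0] + s * v[1], s * v[0] + c * v[1]]

def integrate(n, strang):
    """Error at t=1 of n split steps against the exact solution."""
    dt, v = 1.0 / n, [1.0, 0.0]
    for _ in range(n):
        if strang:
            v = exp_a(dt / 2, exp_b(dt, exp_a(dt / 2, v)))  # Strang
        else:
            v = exp_a(dt, exp_b(dt, v))                     # Lie
    ref = exact(1.0, [1.0, 0.0])
    return math.hypot(v[0] - ref[0], v[1] - ref[1])

def observed_order(strang):
    """Convergence order estimated from halving the step size."""
    return math.log2(integrate(64, strang) / integrate(128, strang))

print(round(observed_order(False)))  # 1: Lie splitting is first order
print(round(observed_order(True)))   # 2: Strang splitting is second order
```

Halving Δt halves the Lie error but quarters the Strang error, mirroring the O(Δt²) versus O(Δt³) per-step behavior the abstract reports for the DSMC.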
Absolute frequency measurement at the 10⁻¹⁶ level based on the international atomic time
NASA Astrophysics Data System (ADS)
Hachisu, H.; Fujieda, M.; Kumagai, M.; Ido, T.
2016-06-01
Referring to International Atomic Time (TAI), we measured the absolute frequency of the ⁸⁷Sr lattice clock with an uncertainty of 1.1 × 10⁻¹⁵. Unless an optical clock is operated continuously over the five days of the TAI grid, the dead time uncertainty must be evaluated in order to use the available five-day average of the local frequency reference. We distributed intermittent measurements homogeneously over the five-day grid of TAI, by which the dead time uncertainty was reduced to the low 10⁻¹⁶ level. Three campaigns of five- (or four-)day consecutive measurements have resulted in an absolute frequency of the ⁸⁷Sr clock transition of 429 228 004 229 872.85 (47) Hz, where the systematic uncertainty of the ⁸⁷Sr optical frequency standard amounts to 8.6 × 10⁻¹⁷.
Perturbative approach to continuous-time quantum error correction
NASA Astrophysics Data System (ADS)
Ippoliti, Matteo; Mazza, Leonardo; Rizzi, Matteo; Giovannetti, Vittorio
2015-04-01
We present a discussion of the continuous-time quantum error correction introduced by J. P. Paz and W. H. Zurek [Proc. R. Soc. A 454, 355 (1998), 10.1098/rspa.1998.0165]. We study the general Lindbladian which describes the effects of both noise and error correction in the weak-noise (or strong-correction) regime through a perturbative expansion. We use this tool to derive quantitative aspects of the continuous-time dynamics both in general and through two illustrative examples: the three-qubit and five-qubit stabilizer codes, which can be independently solved by analytical and numerical methods and then used as benchmarks for the perturbative approach. The perturbatively accessible time frame features a short initial transient in which error correction is ineffective, followed by a slow decay of the information content consistent with the known facts about discrete-time error correction in the limit of fast operations. This behavior is explained in the two case studies through a geometric description of the continuous transformation of the state space induced by the combined action of noise and error correction.
Absolute value optimization to estimate phase properties of stochastic time series
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1977-01-01
Most existing deconvolution techniques are incapable of determining phase properties of wavelets from time series data; to assure a unique solution, minimum phase is usually assumed. It is demonstrated, for moving average processes of order one, that deconvolution filtering using the absolute value norm provides an estimate of the wavelet shape that has the correct phase character when the random driving process is nonnormal. Numerical tests show that this result probably applies to more general processes.
Characterizing Complex Time Series from the Scaling of Prediction Error.
NASA Astrophysics Data System (ADS)
Hinrichs, Brant Eric
This thesis concerns characterizing complex time series from the scaling of prediction error. We use the global modeling technique of radial basis function approximation to build models from a state-space reconstruction of a time series that otherwise appears complicated or random (i.e. aperiodic, irregular). Prediction error as a function of prediction horizon is obtained from the model using the direct method. The relationship between the underlying dynamics of the time series and the logarithmic scaling of prediction error as a function of prediction horizon is investigated. We use this relationship to characterize the dynamics of both a model chaotic system and physical data from the optic tectum of an attentive pigeon exhibiting the important phenomena of nonstationary neuronal oscillations in response to visual stimuli.
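A crude stand-in for this kind of analysis, with a nearest-neighbor predictor in place of the thesis's radial-basis-function models, shows the logarithmic scaling of prediction error with horizon on the logistic map, whose Lyapunov exponent ln 2 ≈ 0.69 sets the expected slope. All parameters below are illustrative:

```python
import math

def logistic_series(n, x0=0.4, r=4.0):
    """Generate n points of the chaotic logistic map x -> r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        xs.append(x)
        x = r * x * (1.0 - x)
    return xs

series = logistic_series(2000)

def mean_error(h, n_train=1400):
    """Mean absolute h-step-ahead error of a nearest-neighbor predictor
    (the neighbor in the training segment is iterated forward h steps)."""
    errs = []
    for i in range(1500, 1900, 7):
        j = min(range(n_train), key=lambda k: abs(series[k] - series[i]))
        errs.append(abs(series[j + h] - series[i + h]))
    return sum(errs) / len(errs)

errors = [mean_error(h) for h in range(1, 6)]
# log(error) grows roughly linearly in the horizon h; the slope is an
# estimate of the Lyapunov exponent (ln 2 for this map).
slope = (math.log(errors[-1]) - math.log(errors[0])) / 4.0
```

For a chaotic series the log-error grows linearly with horizon until it saturates at the attractor size; for a noisy but non-chaotic series it stays flat, which is the distinction this characterization exploits.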
The Impact of Medical Interpretation Method on Time and Errors
Kapelusznik, Luciano; Prakash, Kavitha; Gonzalez, Javier; Orta, Lurmag Y.; Tseng, Chi-Hong; Changrani, Jyotsna
2007-01-01
Background Twenty-two million Americans have limited English proficiency. Interpreting for limited English proficient patients is intended to enhance communication and delivery of quality medical care. Objective Little is known about the impact of various interpreting methods on interpreting speed and errors. This investigation addresses this important gap. Design Four scripted clinical encounters were used to enable the comparison of equivalent clinical content. These scripts were run across four interpreting methods, including remote simultaneous, remote consecutive, proximate consecutive, and proximate ad hoc interpreting. The first 3 methods utilized professional, trained interpreters, whereas the ad hoc method utilized untrained staff. Measurements Audiotaped transcripts of the encounters were coded, using a prespecified algorithm to determine medical error and linguistic error, by coders blinded to the interpreting method. Encounters were also timed. Results Remote simultaneous medical interpreting (RSMI) encounters averaged 12.72 vs 18.24 minutes for the next fastest mode (proximate ad hoc) (p = 0.002). There were 12 times more medical errors of moderate or greater clinical significance among utterances in non-RSMI encounters compared to RSMI encounters (p = 0.0002). Conclusions Whereas limited by the small number of interpreters involved, our study found that RSMI resulted in fewer medical errors and was faster than non-RSMI methods of interpreting. PMID:17957418
Heat conduction errors and time lag in cryogenic thermometer installations
NASA Technical Reports Server (NTRS)
Warshawsky, I.
1973-01-01
Installation practices are recommended that will increase rate of heat exchange between the thermometric sensing element and the cryogenic fluid and that will reduce the rate of undesired heat transfer to higher-temperature objects. Formulas and numerical data are given that help to estimate the magnitude of heat-conduction errors and of time lag in response.
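As an illustration of the kind of estimate such formulas provide, here is the textbook fin solution for stem-conduction error and a first-order time constant. The numbers are made up for illustration and are not the memorandum's data:

```python
import math

def conduction_error(T_mount, T_fluid, h, k, wall, L):
    """Tip-temperature error of a thin-walled thermometer well from the
    classical fin solution: error = (T_mount - T_fluid) / cosh(m*L),
    with m = sqrt(h / (k * wall)).  Illustrative only."""
    m = math.sqrt(h / (k * wall))
    return (T_mount - T_fluid) / math.cosh(m * L)

def time_constant(rho, c, volume, h, area):
    """First-order sensor time lag: tau = rho*c*V / (h*A)."""
    return rho * c * volume / (h * area)

# Mount at 300 K, cryogen at 20 K; deeper immersion cuts the conduction
# error roughly exponentially (SI units throughout):
shallow = conduction_error(300.0, 20.0, h=100.0, k=15.0, wall=1e-3, L=0.02)
deep = conduction_error(300.0, 20.0, h=100.0, k=15.0, wall=1e-3, L=0.10)
tau = time_constant(rho=8000.0, c=500.0, volume=1e-6, h=100.0, area=1e-3)
```

The exponential dependence on immersion depth is why the recommended practices emphasize maximizing heat exchange with the fluid and minimizing the conduction path to warm structure.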
Real-Time Minimization of Tracking Error for Aircraft Systems
NASA Technical Reports Server (NTRS)
Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John
2013-01-01
This technology presents a novel, stable, discrete-time adaptive law for flight control in a direct adaptive control (DAC) framework. When errors are not present, the original control design has been tuned for optimal performance. Adaptive control works toward achieving nominal performance whenever the design has modeling uncertainties/errors or when the vehicle undergoes a substantial flight configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to a dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion and the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) modeling of aircraft operation may be changed.
NASA Technical Reports Server (NTRS)
Beck, S. M.
1975-01-01
A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons that are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated as the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
Comparison of different standards for real-time PCR-based absolute quantification.
Dhanasekaran, S; Doherty, T Mark; Kenneth, John
2010-03-31
Quantitative real-time PCR (qPCR) is a powerful tool used for both research and diagnostics, which has the advantage, compared to relative quantification, of providing an absolute copy number for a particular target. However, reliable standards are essential for qPCR. In this study, we have compared four types of commonly used standards--PCR products (with and without purification) and cloned target sequences (circular and linear plasmid)--for their stability during storage, using the percentage of variance in copy numbers, PCR efficiency and the regression curve correlation coefficient (R^2), with hydrolysis probe (TaqMan) chemistry. Results, expressed as copy numbers/μl, are presented from a sample human system in which absolute levels of HuPO (reference gene) and the cytokine gene IFN-gamma were measured. To ensure the suitability and stability of the four standards, the experiments were performed at 0, 7 and 14 day intervals and repeated 6 times. We found that the copy numbers vary (due to degradation of standards) over time during storage at 4 °C and -20 °C, which affected PCR efficiency significantly. The cloned target sequences were noticeably more stable than the PCR products, which could lead to substantial variance in results using standards constructed by different routes. Standard quality and stability should be routinely tested for assays using qPCR.
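The standard-curve arithmetic that underlies absolute qPCR quantification can be sketched as follows. This is an editorial illustration, not code from the paper; the Cq values and copy numbers are hypothetical serial-dilution data, and the slope/efficiency relation (E = 10^(-1/slope) - 1) is the standard textbook form.

```python
import numpy as np

# Hypothetical standard curve: 10-fold serial dilutions of a plasmid standard.
copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])       # known copy numbers/ul
cq = np.array([15.0, 18.32, 21.64, 24.97, 28.29])  # measured quantification cycles

# Linear fit: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), cq, 1)

# Amplification efficiency; a slope near -3.32 corresponds to ~100% efficiency.
efficiency = 10 ** (-1.0 / slope) - 1.0

def absolute_copies(cq_unknown):
    """Interpolate an unknown sample's copy number from the standard curve."""
    return 10 ** ((cq_unknown - intercept) / slope)

print(f"slope={slope:.3f}, efficiency={efficiency:.2%}")
print(f"unknown at Cq=20 -> {absolute_copies(20.0):.3g} copies/ul")
```

Degradation of a stored standard shifts its effective copy number, which shifts the intercept of this curve and hence every estimate derived from it, which is why the paper stresses standard stability.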
Evaluation of absolute quantitation by nonlinear regression in probe-based real-time PCR
Goll, Rasmus; Olsen, Trine; Cui, Guanglin; Florholmen, Jon
2006-01-01
Background: In real-time PCR data analysis, the cycle threshold (CT) method is currently the gold standard. This method is based on an assumption of equal PCR efficiency in all reactions, and precision may suffer if this condition is not met. Nonlinear regression analysis (NLR), or curve fitting, has therefore been suggested as an alternative to the cycle threshold method for absolute quantitation. The advantages of NLR are that the individual sample efficiency is simulated by the model and that absolute quantitation is possible without a standard curve, releasing reaction wells for unknown samples. However, the calculation method has not been evaluated systematically and has not previously been applied to a TaqMan platform. Aim: To develop and evaluate an automated NLR algorithm capable of generating batch production regression analysis. Results: Total RNA samples extracted from human gastric mucosa were reverse transcribed and analysed for TNFA, IL18 and ACTB by TaqMan real-time PCR. Fluorescence data were analysed by the regular CT method with a standard curve, and by NLR with a positive control for conversion of fluorescence intensity to copy number; for this purpose, an automated algorithm was written in SPSS syntax. Eleven separate regression models were tested, and the output data were subjected to Altman-Bland analysis. The Altman-Bland analysis showed that the best regression model yielded quantitative data with an intra-assay variation of 58% vs. 24% for the CT-derived copy numbers, and with a mean inter-method deviation of ×0.8. Conclusion: NLR can be automated for batch production analysis, but the CT method is more precise for absolute quantitation in the present setting. The observed inter-method deviation is an indication that assessment of the fluorescence conversion factor used in the regression method can be improved. However, the versatility depends on the level of precision required, and in some settings the increased cost effectiveness of NLR
NASA Astrophysics Data System (ADS)
Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun
2013-03-01
A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (˜10-year) environmental planning and decision making.
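The deterministic half of such a DSC decomposition (polynomial trend plus Fourier seasonal terms, fitted by least squares) can be sketched in a few lines; the SARIMA modeling of the residual is omitted here. The data below are synthetic and the model order (linear trend, one annual harmonic) is an assumption for illustration, not the paper's fitted model.

```python
import numpy as np

# Synthetic monthly "temperature" record: linear trend + annual cycle + noise.
rng = np.random.default_rng(0)
t = np.arange(240) / 12.0                        # 20 years of monthly samples, in years
y = 14.0 + 0.02 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size)

# Deterministic design matrix: polynomial trend (degree 1) + first Fourier harmonic.
X = np.column_stack([
    np.ones_like(t), t,                           # trend terms
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)  # annual cycle
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = y - X @ coef       # what remains would be handed to a SARIMA model

print(coef)                   # ~[14.0, 0.02, 3.0, 0.0]
```

In the DSC framework the stochastic (SARIMA) stage then absorbs whatever autocorrelated structure survives in `residual`.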
Boswell, Paul G.; Abate-Pella, Daniel; Hewitt, Joshua T.
2015-01-01
Compound identification by liquid chromatography-mass spectrometry (LC-MS) is a tedious process, mainly because authentic standards must be run on a user’s system to be able to confidently reject a potential identity from its retention time and mass spectral properties. Instead, it would be preferable to use shared retention time/index data to narrow down the identity, but shared data cannot be used to reject candidates with an absolute level of confidence because the data are strongly affected by differences between HPLC systems and experimental conditions. However, a technique called “retention projection” was recently shown to account for many of the differences. In this manuscript, we discuss an approach to calculate appropriate retention time tolerance windows for projected retention times, potentially making it possible to exclude candidates with an absolute level of confidence, without needing to have authentic standards of each candidate on hand. In a range of multi-segment gradients and flow rates run among seven different labs, the new approach calculated tolerance windows that were significantly more appropriate for each retention projection than global tolerance windows calculated for retention projections or linear retention indices. Though there were still some small differences between the labs that evidently were not taken into account, the calculated tolerance windows only needed to be relaxed by 50% to make them appropriate for all labs. Even then, 42% of the tolerance windows calculated in this study without standards were narrower than those required by WADA for positive identification, where standards must be run contemporaneously. PMID:26292624
Real-Time Parameter Estimation Using Output Error
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
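The defining feature of output error, as opposed to equation error, is that the residual is formed between the measured output and the output *simulated* from the candidate parameters. A minimal sketch, assuming a toy one-state discrete model y[k+1] = a·y[k] + b·u[k] (not the short-period aircraft model of the paper):

```python
import numpy as np
from scipy.optimize import least_squares

# True first-order dynamics y[k+1] = a*y[k] + b*u[k]; measurements z = y + noise.
a_true, b_true = 0.9, 0.5
rng = np.random.default_rng(1)
u = rng.standard_normal(200)

def simulate(a, b, u):
    """Propagate the model through the whole input record from y[0] = 0."""
    y = np.zeros(u.size + 1)
    for k in range(u.size):
        y[k + 1] = a * y[k] + b * u[k]
    return y[1:]

z = simulate(a_true, b_true, u) + rng.normal(0, 0.05, u.size)

# Output error: residual between measured and model-simulated output.
def residuals(theta):
    return z - simulate(theta[0], theta[1], u)

theta_hat = least_squares(residuals, x0=[0.5, 0.1]).x
print(theta_hat)   # close to (0.9, 0.5)
```

A real-time variant, as the abstract describes, re-solves this minimization on a growing or sliding data window every few seconds rather than once after the flight.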
Quack, Martin
2001-03-21
The questions of the absolute directions of space and time, or the “observability” of absolute time direction, as well as absolute handedness (left or right), are related to the fundamental symmetries of physics C, P, T, as well as their combinations, in particular CPT, and their violations, such as parity violation. At the same time there is a relation to certain still open questions in chemistry concerning the fundamental physical-chemical principles of molecular chirality and in biochemistry concerning the selection of homochirality in evolution. In the lecture we shall introduce the concepts and then report new theoretical results from our work on parity violation in chiral molecules, showing order-of-magnitude increases with respect to previously accepted values. We discuss as well our current experimental efforts. We shall briefly mention the construction of an absolute molecular clock.
Relationship between Brazilian airline pilot errors and time of day.
de Mello, M T; Esteves, A M; Pires, M L N; Santos, D C; Bittencourt, L R A; Silva, R S; Tufik, S
2008-12-01
Flight safety is one of the most important and frequently discussed issues in aviation. Recent accident inquiries have raised questions as to how the work of flight crews is organized and the extent to which these conditions may have been contributing factors to accidents. Fatigue is based on physiologic limitations, which are reflected in performance deficits. The purpose of the present study was to provide an analysis of the periods of the day in which pilots working for a commercial airline presented major errors. Errors made by 515 captains and 472 co-pilots were analyzed using data from flight operation quality assurance systems. To analyze the times of day (shifts) during which incidents occurred, we divided the 24-h light-dark cycle into four periods: morning, afternoon, night, and early morning. The differences of risk during the day were reported as the ratio of morning to afternoon, morning to night and morning to early morning error rates. For the purposes of this research, level 3 events alone were taken into account, since these were the most serious in which company operational limits were exceeded or when established procedures were not followed. According to airline flight schedules, 35% of flights take place in the morning period, 32% in the afternoon, 26% at night, and 7% in the early morning. Data showed that the risk of errors increased by almost 50% in the early morning relative to the morning period (ratio of 1:1.46). For the period of the afternoon, the ratio was 1:1.04 and for the night a ratio of 1:1.05 was found. These results showed that the period of the early morning represented a greater risk of attention problems and fatigue.
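The risk ratios above are exposure-adjusted rates: errors per period divided by that period's share of flights, normalized to the morning baseline. A sketch of the arithmetic, using the paper's flight shares but entirely hypothetical error counts (the abstract does not report the raw counts):

```python
# Flight shares per period are from the abstract; error counts are made up
# solely to illustrate the rate-ratio computation.
flight_share = {"morning": 0.35, "afternoon": 0.32, "night": 0.26, "early": 0.07}
errors = {"morning": 35, "afternoon": 35, "night": 27, "early": 10}

total_flights = 10000   # hypothetical fleet total
rates = {p: errors[p] / (flight_share[p] * total_flights) for p in errors}

# Relative risk of each period versus the morning baseline.
risk_ratio = {p: rates[p] / rates["morning"] for p in rates}
print(risk_ratio)
```

With these hypothetical counts the early-morning ratio comes out around 1.4, i.e. the same qualitative excess risk the study reports (1:1.46).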
Alignment between seafloor spreading directions and absolute plate motions through time
NASA Astrophysics Data System (ADS)
Williams, Simon E.; Flament, Nicolas; Müller, R. Dietmar
2016-02-01
The history of seafloor spreading in the ocean basins provides a detailed record of relative motions between Earth's tectonic plates since Pangea breakup. Determining how tectonic plates have moved relative to the Earth's deep interior is more challenging. Recent studies of contemporary plate motions have demonstrated links between relative plate motion and absolute plate motion (APM), and with seismic anisotropy in the upper mantle. Here we explore the link between spreading directions and APM since the Early Cretaceous. We find a significant alignment between APM and spreading directions at mid-ocean ridges; however, the degree of alignment is influenced by geodynamic setting, and is strongest for mid-Atlantic spreading ridges between plates that are not directly influenced by time-varying slab pull. In the Pacific, significant mismatches between spreading and APM direction may relate to a major plate-mantle reorganization. We conclude that spreading fabric can be used to improve models of APM.
Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.
2010-01-01
We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP/VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE direction and 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.
NASA Astrophysics Data System (ADS)
Johnston, Mark D.; Oliver, Bryan V.; Droemer, Darryl W.; Frogget, Brent; Crain, Marlon D.; Maron, Yitzhak
2012-08-01
This paper describes a convenient and accurate method to calibrate fast (<1 ns resolution) streaked, fiber optic light collection, spectroscopy systems. Such systems are inherently difficult to calibrate due to the lack of sufficiently intense, calibrated light sources. Such a system is used to collect spectral data on plasmas generated in electron beam diodes fielded on the RITS-6 accelerator (8-12 MV, 140-200 kA) at Sandia National Laboratories. On RITS, plasma light is collected through a small diameter (200 μm) optical fiber and recorded on a fast streak camera at the output of a 1 meter Czerny-Turner monochromator. For this paper, a 300 W xenon short arc lamp (Oriel Model 6258) was used as the calibration source. Since the radiance of the xenon arc varies from cathode to anode, just the area around the tip of the cathode ("hotspot") was imaged onto the fiber, to produce the highest intensity output. To compensate for chromatic aberrations, the signal was optimized at each wavelength measured. Output power was measured using 10 nm bandpass interference filters and a calibrated photodetector. These measurements give power at discrete wavelengths across the spectrum, and when linearly interpolated, provide a calibration curve for the lamp. The shape of the spectrum is determined by the collective response of the optics, monochromator, and streak tube across the spectral region of interest. The ratio of the spectral curve to the measured bandpass filter curve at each wavelength produces a correction factor (Q) curve. This curve is then applied to the experimental data and the resultant spectra are given in absolute intensity units (photons/s/cm^2/steradian/nm). Error analysis shows this method to be accurate to within ±20%, which represents a high level of accuracy for this type of measurement.
An Integrated Model of Choices and Response Times in Absolute Identification
ERIC Educational Resources Information Center
Brown, Scott D.; Marley, A. A. J.; Donkin, Christopher; Heathcote, Andrew
2008-01-01
Recent theoretical developments in the field of absolute identification have stressed differences between relative and absolute processes, that is, whether stimulus magnitudes are judged relative to a shorter term context provided by recently presented stimuli or a longer term context provided by the entire set of stimuli. The authors developed a…
Frequency-domain analysis of absolute gravimeters
NASA Astrophysics Data System (ADS)
Svitlov, S.
2012-12-01
An absolute gravimeter is analysed as a linear time-invariant system in the frequency domain. Frequency responses of absolute gravimeters are derived analytically based on the propagation of the complex exponential signal through their linear measurement functions. Depending on the model of motion and the number of time-distance coordinates, an absolute gravimeter is considered as a second-order (three-level scheme) or third-order (multiple-level scheme) low-pass filter. It is shown that the behaviour of an atom absolute gravimeter in the frequency domain corresponds to that of the three-level corner-cube absolute gravimeter. Theoretical results are applied for evaluation of random and systematic measurement errors and optimization of an experiment. The developed theory agrees with known results of an absolute gravimeter analysis in the time and frequency domains and can be used for measurement uncertainty analyses, building of vibration-isolation systems and synthesis of digital filtering algorithms.
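For the multiple-level (many time-distance coordinates) scheme mentioned above, g is recovered by least-squares fitting the second-order model of motion z(t) = z0 + v0·t + (g/2)·t² to the sampled trajectory. A minimal sketch with synthetic free-fall data (the noise level and sample count are illustrative assumptions, not values from the paper):

```python
import numpy as np

g_true = 9.81235                              # m/s^2, synthetic "local" gravity
t = np.linspace(0.0, 0.2, 700)                # multiple-level drop: 700 samples
z = 0.001 + 0.05 * t + 0.5 * g_true * t**2    # z0 + v0*t + (g/2)*t^2
z += np.random.default_rng(2).normal(0, 1e-8, t.size)   # interferometer noise

# Quadratic least-squares fit; coefficients come back as [g/2, v0, z0].
coef = np.polyfit(t, z, 2)
g_est = 2.0 * coef[0]
print(f"g = {g_est:.5f} m/s^2")
```

Viewed as a linear time-invariant system, this quadratic fit acts as the low-pass filter the abstract analyses: vibration components well above the drop's corner frequency are strongly attenuated in g_est.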
Easy Absolute Values? Absolutely
ERIC Educational Resources Information Center
Taylor, Sharon E.; Mittag, Kathleen Cage
2015-01-01
The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…
Supercontinent cycles and the calculation of absolute palaeolongitude in deep time.
Mitchell, Ross N; Kilian, Taylor M; Evans, David A D
2012-02-08
Traditional models of the supercontinent cycle predict that the next supercontinent--'Amasia'--will form either where Pangaea rifted (the 'introversion' model) or on the opposite side of the world (the 'extroversion' models). Here, by contrast, we develop an 'orthoversion' model whereby a succeeding supercontinent forms 90° away, within the great circle of subduction encircling its relict predecessor. A supercontinent aggregates over a mantle downwelling but then influences global-scale mantle convection to create an upwelling under the landmass. We calculate the minimum moment of inertia about which oscillatory true polar wander occurs owing to the prolate shape of the non-hydrostatic Earth. By fitting great circles to each supercontinent's true polar wander legacy, we determine that the arc distances between successive supercontinent centres (the axes of the respective minimum moments of inertia) are 88° for Nuna to Rodinia and 87° for Rodinia to Pangaea--as predicted by the orthoversion model. Supercontinent centres can be located back into Precambrian time, providing fixed points for the calculation of absolute palaeolongitude over billion-year timescales. Palaeogeographic reconstructions additionally constrained in palaeolongitude will provide increasingly accurate estimates of ancient plate motions and palaeobiogeographic affinities.
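The 88° and 87° figures above are great-circle arc distances between successive supercontinent centres. The spherical geometry involved is elementary; here is a sketch of the arc-distance computation (the paper's actual centre coordinates are not reproduced here, so the checks below use simple reference points instead):

```python
import math

def arc_distance_deg(lat1, lon1, lat2, lon2):
    """Great-circle arc between two points (degrees) via the spherical law of cosines."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    c = (math.sin(p1) * math.sin(p2) +
         math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Sanity checks: pole to equator is ~90 deg; antipodal points are ~180 deg apart.
print(arc_distance_deg(90, 0, 0, 0))
print(arc_distance_deg(0, 0, 0, 180))
```

Under the orthoversion model, applying this to successive minimum-inertia axes should yield arcs near 90°, which is what the fitted 88° and 87° values show.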
Supercontinent cycles and the calculation of absolute palaeolongitude in deep time.
Mitchell, Ross N; Kilian, Taylor M; Evans, David A D
2012-02-01
Traditional models of the supercontinent cycle predict that the next supercontinent--'Amasia'--will form either where Pangaea rifted (the 'introversion' model) or on the opposite side of the world (the 'extroversion' models). Here, by contrast, we develop an 'orthoversion' model whereby a succeeding supercontinent forms 90° away, within the great circle of subduction encircling its relict predecessor. A supercontinent aggregates over a mantle downwelling but then influences global-scale mantle convection to create an upwelling under the landmass. We calculate the minimum moment of inertia about which oscillatory true polar wander occurs owing to the prolate shape of the non-hydrostatic Earth. By fitting great circles to each supercontinent's true polar wander legacy, we determine that the arc distances between successive supercontinent centres (the axes of the respective minimum moments of inertia) are 88° for Nuna to Rodinia and 87° for Rodinia to Pangaea--as predicted by the orthoversion model. Supercontinent centres can be located back into Precambrian time, providing fixed points for the calculation of absolute palaeolongitude over billion-year timescales. Palaeogeographic reconstructions additionally constrained in palaeolongitude will provide increasingly accurate estimates of ancient plate motions and palaeobiogeographic affinities. PMID:22318605
ABSOLUTE TIMING OF THE CRAB PULSAR WITH THE INTEGRAL/SPI TELESCOPE
Molkov, S.; Jourdain, E.; Roques, J. P.
2010-01-01
We have investigated the pulse shape evolution of the Crab pulsar emission in the hard X-ray domain of the electromagnetic spectrum. In particular, we have studied the alignment of the Crab pulsar phase profiles measured in the hard X-rays and in other wavebands. To obtain the hard X-ray pulse profiles, we have used six years (2003-2009, with a total exposure of about 4 Ms) of publicly available data of the SPI telescope on-board the International Gamma-Ray Astrophysics Laboratory observatory, folded with the pulsar time solution derived from the Jodrell Bank Crab Pulsar Monthly Ephemeris. We found that the main pulse in the hard X-ray 20-100 keV energy band leads the radio one by 8.18 ± 0.46 milliperiods in phase, or 275 ± 15 μs in time. Quoted errors represent only statistical uncertainties. Our systematic error is estimated to be ≈40 μs and is mainly caused by the radio measurement uncertainties. In hard X-rays, the average distance between the main pulse and interpulse on the phase plane is 0.3989 ± 0.0009. To compare our findings in hard X-rays with the soft 2-20 keV X-ray band, we have used data of quasi-simultaneous Crab observations with the proportional counter array monitor on-board the Rossi X-Ray Timing Explorer mission. The time lag and the pulse separation values measured in the 3-20 keV band are 0.00933 ± 0.00016 (corresponding to 310 ± 6 μs) and 0.40016 ± 0.00028 parts of the cycle, respectively. While the pulse separation values measured in soft X-rays and hard X-rays agree, the time lags are statistically different. Additional analysis shows that the delay between the radio and X-ray signals varies with energy in the 2-300 keV energy range. We explain such a behavior as due to the superposition of two independent components responsible for the Crab pulsed emission in this energy band.
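The phase-to-time conversion used above is just the lag in fractions of a period multiplied by the pulsar period. As a sketch (the ~33.6 ms Crab period is a well-known approximate value, not taken from this abstract):

```python
# Crab pulsar rotation period, approximately 33.6 ms at the epoch of observation.
period_ms = 33.6
lag_milliperiods = 8.18            # main pulse leads the radio pulse by this phase

# milliperiods -> periods -> milliseconds -> microseconds
lag_us = lag_milliperiods * 1e-3 * period_ms * 1e3
print(f"{lag_us:.0f} us")          # prints 275 us, matching the quoted time lag
```

The same conversion applied to the 0.46 milliperiod statistical error gives the quoted ±15 μs.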
Rammsayer, T; Wittkowski, K M
1990-01-01
In comparison judgments of two successively presented time intervals ranging from 30 to 70 msec a time-order error (TOE) as well as a systematic effect depending on the constant position error (CPE) were demonstrated. The effects proved to be independent. Contrary to Vierordt's law, a negative TOE was found. When presenting the standard interval first, an increased hit rate resulting in a positive CPE was established. Furthermore, a test statistic is introduced that allows analysis of experiments utilizing all available information of a subject's psychometric function.
Method for quantum-jump continuous-time quantum error correction
NASA Astrophysics Data System (ADS)
Hsu, Kung-Chuan; Brun, Todd A.
2016-02-01
Continuous-time quantum error correction (CTQEC) is a technique for protecting quantum information against decoherence, where both the decoherence and error correction processes are considered continuous in time. Given any [[n ,k ,d
5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of employing agency errors; time limitations. (a) Agency's discovery of error. Upon discovery of an... it, but, in any event, the agency must act promptly in doing so. (b) Participant's discovery of error. If an agency fails to discover an error of which a participant has knowledge involving the correct...
In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...
Using Graphs for Fast Error Term Approximation of Time-varying Datasets
Nuber, C; LaMar, E C; Pascucci, V; Hamann, B; Joy, K I
2003-02-27
We present a method for the efficient computation and storage of approximations of error tables used for error estimation of a region between different time steps in time-varying datasets. The error between two time steps is defined as the distance between the data of these time steps. Error tables are used to look up the error between different time steps of a time-varying dataset, especially when run-time error computation is expensive. However, even the generation of error tables itself can be expensive. For n time steps, the exact error look-up table (which stores the error values for all pairs of time steps in a matrix) has a memory complexity and pre-processing time complexity of O(n^2), and O(1) for error retrieval. Our approximate error look-up table approach uses trees, where the leaf nodes represent original time steps, and interior nodes contain an average (or best representative) of the children nodes. The error computed on an edge of a tree describes the distance between the two nodes on that edge. Evaluating the error between two different time steps requires traversing a path between the two leaf nodes and accumulating the errors on the traversed edges. For n time steps, this scheme has a memory complexity and pre-processing time complexity of O(n log n), a significant improvement over the exact scheme; the error retrieval complexity is O(log n). As we do not need to calculate all possible n^2 error terms, our approach is a fast way to generate the approximation.
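The tree scheme described above can be sketched compactly: leaves hold the time steps, each interior node holds the average of its children, edges store the distance to the parent, and a query walks both leaves up to their common ancestor, summing edge errors. This is an editorial reconstruction from the abstract (using L2 distance and a simple pairwise bottom-up build), not the authors' implementation.

```python
import numpy as np

class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right
        self.parent = None
        self.edge_error = 0.0            # distance to parent, set when linked

def build_tree(steps):
    """Bottom-up build: interior nodes hold the average of their children."""
    nodes = [Node(s) for s in steps]
    leaves = list(nodes)
    while len(nodes) > 1:
        merged = []
        for i in range(0, len(nodes) - 1, 2):
            a, b = nodes[i], nodes[i + 1]
            parent = Node((a.data + b.data) / 2.0, a, b)
            for child in (a, b):
                child.parent = parent
                child.edge_error = float(np.linalg.norm(parent.data - child.data))
            merged.append(parent)
        if len(nodes) % 2:               # odd node is promoted unchanged
            merged.append(nodes[-1])
        nodes = merged
    return nodes[0], leaves

def approx_error(u, v):
    """Accumulate edge errors along the leaf-to-leaf path through the tree."""
    depth = lambda n: 0 if n.parent is None else 1 + depth(n.parent)
    du, dv, err = depth(u), depth(v), 0.0
    while du > dv:
        err += u.edge_error; u = u.parent; du -= 1
    while dv > du:
        err += v.edge_error; v = v.parent; dv -= 1
    while u is not v:                    # climb in lockstep to the common ancestor
        err += u.edge_error + v.edge_error
        u, v = u.parent, v.parent
    return err

steps = [np.full(4, float(i)) for i in range(8)]   # 8 toy time steps
root, leaves = build_tree(steps)
print(approx_error(leaves[0], leaves[1]))          # -> 2.0 (= exact L2 error here)
```

Storing one node and one edge error per tree vertex gives the O(n log n) memory quoted in the abstract, and each query touches O(log n) edges.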
Mapping the Origins of Time: Scalar Errors in Infant Time Estimation
ERIC Educational Resources Information Center
Addyman, Caspar; Rocha, Sinead; Mareschal, Denis
2014-01-01
Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…
Waugh, C J; Rosenberg, M J; Zylstra, A B; Frenje, J A; Séguin, F H; Petrasso, R D; Glebov, V Yu; Sangster, T C; Stoeckl, C
2015-05-01
Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3 m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and Laser Megajoule. PMID:26026524
Waugh, C. J. Zylstra, A. B.; Frenje, J. A.; Séguin, F. H.; Petrasso, R. D.; Rosenberg, M. J.; Glebov, V. Yu.; Sangster, T. C.; Stoeckl, C.
2015-05-15
Repeated quantum error correction on a continuously encoded qubit by real-time feedback
NASA Astrophysics Data System (ADS)
Cramer, J.; Kalb, N.; Rol, M. A.; Hensen, B.; Blok, M. S.; Markham, M.; Twitchen, D. J.; Hanson, R.; Taminiau, T. H.
2016-05-01
Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.
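The scheme described, a logical qubit encoded in three spins with phase errors detected by stabilizer measurements and undone by feedback, can be illustrated with an idealized statevector sketch. This is plain NumPy, not a model of the actual diamond experiment; the logical amplitudes and the injected error locations are arbitrary assumptions:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron3(a, b, c):
    """Three-qubit tensor product."""
    return np.kron(np.kron(a, b), c)

plus = np.array([1., 1.]) / np.sqrt(2)
minus = np.array([1., -1.]) / np.sqrt(2)

# Phase-flip repetition code: a|+> + b|->  ->  a|+++> + b|--->
a, b = 0.6, 0.8                      # arbitrary normalized amplitudes
psi = a * kron3(plus, plus, plus) + b * kron3(minus, minus, minus)

# Stabilizers X1X2 and X2X3; a Z error flips the sign of each one it anticommutes with
S1, S2 = kron3(X, X, I2), kron3(I2, X, X)
Zops = [kron3(Z, I2, I2), kron3(I2, Z, I2), kron3(I2, I2, Z)]
lookup = {(-1, 1): 0, (-1, -1): 1, (1, -1): 2}   # syndrome -> faulty qubit

fids = []
for k in range(3):                   # inject a phase flip on each qubit in turn
    bad = Zops[k] @ psi
    syndrome = (int(round(bad @ S1 @ bad)), int(round(bad @ S2 @ bad)))
    fixed = Zops[lookup[syndrome]] @ bad         # feedback: apply the correcting Z
    fids.append(abs(np.vdot(psi, fixed)) ** 2)
print(fids)   # each corrected state has fidelity ~1.0 with the original
```

The syndrome identifies the faulty qubit without measuring (and thus destroying) the encoded superposition, which is the essential point of the non-destructive stabilizer readout in the abstract.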
Error Correction for Foot Clearance in Real-Time Measurement
NASA Astrophysics Data System (ADS)
Wahab, Y.; Bakar, N. A.; Mazalan, M.
2014-04-01
Mobility performance level, fall-related injuries, undiagnosed disease, and stage of aging can be detected through examination of the gait pattern. The gait pattern is normally related directly to lower-limb performance, in addition to other significant factors. For that reason, the foot is the most important part of any in situ gait-analysis measurement system. This paper reviews the development of an ultrasonic system with inertial-measurement-unit error correction for real-life measurement of foot clearance. The paper begins with the related literature, where the necessity of such measurement is introduced, followed by the methodology, problem, and solution. Next, it describes the experimental setup for error correction using the proposed instrumentation, together with results and discussion. Finally, it outlines the planned future work.
Method and apparatus for detecting timing errors in a system oscillator
Gliebe, Ronald J.; Kramer, William R.
1993-01-01
A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
Jaworski, Maciej; Pludowski, Pawel
2013-01-01
Dual-energy X-ray absorptiometry (DXA) is widely used in pediatrics to study bone density and body composition. However, there is a limit to how precisely DXA can estimate bone and body-composition measures in children. The study aimed to (1) evaluate precision errors for bone mineral density, bone mass and bone area, body composition, and mechanostat parameters; (2) assess the relationships between precision errors and anthropometric parameters; and (3) calculate "least significant change" and "monitoring time interval" values for DXA measures in children over a wide age range (5-18 yr) using a GE Lunar Prodigy densitometer. Absolute precision error values differed between the thin and standard technical modes of DXA measurement and depended on age, body weight, and height. In contrast, relative precision errors expressed as percentages were similar for the thin and standard modes (except for total body bone mineral density [TBBMD]) and were not related to anthropometric variables (except TBBMD). In conclusion, because the percentage coefficient of variation is stable across a wide age range, precision error expressed as a percentage, rather than as absolute error, is the more convenient measure in the pediatric population.
Period, epoch, and prediction errors of ephemerides from continuous sets of timing measurements
NASA Astrophysics Data System (ADS)
Deeg, H. J.
2015-06-01
Space missions such as Kepler and CoRoT have produced large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of these time series is derived: σ_P = σ_T √(12/(N³ − N)), where σ_P is the period error, σ_T the timing error of a single measurement, and N the number of measurements. Compared to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemerides, in which epoch errors are quoted for the first timing measurement, are prone to overestimating the error of that prediction. This may be avoided by a correction for the duration of the time series. An alternative is the derivation of ephemerides whose reference epoch and epoch error are given for the centre of the time series. For long continuous or near-continuous time series whose acquisition is complete, such central epochs should be the preferred way of quoting linear ephemerides. While this work was motivated by the analysis of eclipse timing measurements in space-based light curves, it should be applicable to any other problem with an uninterrupted sequence of discrete timings for which a zero point, a constant period, and the associated errors must be determined.
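The quoted period-error formula can be verified with a small Monte Carlo sketch, fitting a linear ephemeris to noisy strictly periodic timings. The period, timing noise, and N below are arbitrary example values, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
P_true, sigma_T, N = 1.234, 1e-3, 500   # arbitrary example values
n = np.arange(N)                        # epoch (cycle) numbers

slope_errs = []
for _ in range(2000):
    # strictly periodic event times plus Gaussian timing noise
    t = P_true * n + rng.normal(0.0, sigma_T, N)
    P_fit = np.polyfit(n, t, 1)[0]      # slope of the linear ephemeris fit
    slope_errs.append(P_fit - P_true)

empirical = float(np.std(slope_errs))
predicted = sigma_T * np.sqrt(12.0 / (N**3 - N))   # sigma_P from the formula
print(empirical, predicted)             # the two agree to within a few percent
```

The agreement follows because the slope variance of a least-squares line over x = 0..N−1 is σ²/Σ(x − x̄)², and Σ(x − x̄)² = (N³ − N)/12.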
Space-Time Error Representation and Estimation in Navier-Stokes Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2006-01-01
The mathematical framework for a posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems, as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder, are then presented to demonstrate elements of the error representation theory for time-dependent problems.
Absolute nuclear material assay
Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.
2012-05-15
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay
Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.
2010-07-13
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Ambient Temperature Changes and the Impact to Time Measurement Error
NASA Astrophysics Data System (ADS)
Ogrizovic, V.; Gucevic, J.; Delcev, S.
2012-12-01
Measurements in geodetic astronomy are made mainly outdoors and performed at night, when the temperature often decreases very quickly. Time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt-handler routine processes the received UTC pulses in real time and calculates the clock parameters. The characteristics of the computer's quartz clock are influenced by temperature changes in the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a thermally stable environment, at temperatures near 20 °C.
Exposure measurement error in time-series studies of air pollution: concepts and consequences.
Zeger, S L; Thomas, D; Dominici, F; Samet, J M; Schwartz, J; Dockery, D; Cohen, A
2000-01-01
Misclassification of exposure is a well-recognized inherent limitation of epidemiologic studies of disease and the environment. For many agents of interest, exposures take place over time and in multiple locations; accurately estimating the relevant exposures for an individual participant in epidemiologic studies is often daunting, particularly within the limits set by feasibility, participant burden, and cost. Researchers have taken steps to deal with the consequences of measurement error by limiting the degree of error through a study's design, estimating the degree of error using a nested validation study, and adjusting for measurement error in statistical analyses. In this paper, we address measurement error in observational studies of air pollution and health. Because measurement error may have substantial implications for interpreting epidemiologic studies on air pollution, particularly the time-series analyses, we developed a systematic conceptual formulation of the problem of measurement error in epidemiologic studies of air pollution and then considered the consequences within this formulation. When possible, we used available relevant data to make simple estimates of measurement error effects. This paper provides an overview of measurement errors in linear regression, distinguishing the two extremes of a continuum, Berkson versus classical type errors, and the univariate from the multivariate predictor case. We then propose a conceptual framework for the evaluation of measurement errors in the log-linear regression used for time-series studies of particulate air pollution and mortality, and identify three main components of error. We present new simple analyses of data on exposures to particulate matter < 10 μm in aerodynamic diameter from the Particle Total Exposure Assessment Methodology Study. Finally, we summarize open questions regarding measurement error and suggest the kind of additional data necessary to address them.
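The Berkson-versus-classical distinction the authors draw can be made concrete with a toy linear-regression simulation (an illustrative sketch with arbitrary parameters): classical error in the measured exposure attenuates the fitted slope, while Berkson error leaves it unbiased but noisier.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
sig_x = sig_u = 1.0        # scales of true exposure and of the error term

# Classical error: the measured exposure w = x + u is noisier than the truth.
x = rng.normal(0, sig_x, n)
y = beta * x + rng.normal(0, 1, n)
w = x + rng.normal(0, sig_u, n)
b_classical = np.polyfit(w, y, 1)[0]      # slope attenuated toward zero

# Berkson error: an assigned (e.g. area-average) exposure z, truth x = z + u.
z = rng.normal(0, sig_x, n)
xb = z + rng.normal(0, sig_u, n)
yb = beta * xb + rng.normal(0, 1, n)
b_berkson = np.polyfit(z, yb, 1)[0]       # slope unbiased for beta

attenuation = sig_x**2 / (sig_x**2 + sig_u**2)  # expected factor: 0.5 here
print(b_classical, beta * attenuation)    # ~1.0 in both
print(b_berkson, beta)                    # ~2.0 in both
```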
Gold, Raymond; Roberts, James H.
1989-01-01
A solid state track recording type dosimeter is disclosed to measure the time dependence of the absolute fission rates of nuclides or neutron fluence over a period of time. In a primary species an inner recording drum is rotatably contained within an exterior housing drum that defines a series of collimating slit apertures overlying windows defined in the stationary drum through which radiation can enter. Film type solid state track recorders are positioned circumferentially about the surface of the internal recording drum to record such radiation or its secondary products during relative rotation of the two elements. In another species both the recording element and the aperture element assume the configuration of adjacent disks. Based on slit size of apertures and relative rotational velocity of the inner drum, radiation parameters within a test area may be measured as a function of time and spectra deduced therefrom.
Obeid, Layal; Deman, Pierre; Tessier, Alexandre; Balosso, Jacques; Estève, François; Adam, Jean-François
2014-04-01
Contrast-enhanced radiotherapy is an innovative treatment that combines the selective accumulation of heavy elements in tumors with stereotactic irradiations using medium energy X-rays. The radiation dose enhancement depends on the absolute amount of iodine reached in the tumor and its time course. Quantitative, postinfusion iodine biodistribution and associated brain perfusion parameters were studied in human brain metastasis as key parameters for treatment feasibility and quality. Twelve patients received an intravenous bolus of iodinated contrast agent (CA) (40 mL, 4 mL/s), followed by a steady-state infusion (160 mL, 0.5 mL/s) to ensure stable intratumoral amounts of iodine during the treatment. Absolute iodine concentrations and quantitative perfusion maps were derived from 40 multislice dynamic computed tomography (CT) images of the brain. The postinfusion mean intratumoral iodine concentration (over 30 minutes) reached 1.94 ± 0.12 mg/mL. Reasonable correlations were obtained between these concentrations and the permeability surface area product and the cerebral blood volume. To our knowledge, this is the first quantitative study of CA biodistribution versus time in brain metastasis. The study shows that suitable and stable amounts of iodine can be reached for contrast-enhanced radiotherapy. Moreover, the associated perfusion measurements provide useful information for the patient recruitment and management processes.
Mahler, Anna-Britt; Diner, David J; Chipman, Russell A
2011-05-10
Multiangle Spectropolarimetric Imager (MSPI) sensitivity to static and time-varying polarization errors is examined. For a system without noise, static polarization errors are accurately represented by the calibration coefficients, and therefore do not impede correct mapping of measured to input Stokes vectors. But noise is invariably introduced during the detection process, and static polarization errors reduce the system's signal-to-noise ratio (SNR) by increasing noise sensitivity. Noise sensitivity is minimized by minimizing the condition number of the system data reduction matrix [Appl. Opt. 41, 619 (2002)]. The sensitivity of condition numbers to static polarization errors is presented. The condition number of the nominal MSPI data reduction matrix is approximately 1.1 or less for all fields. The increase in the condition number above 1 results primarily from quarter-wave plate and mirror-coating retardance magnitude errors. Sensitivity of the degree of linear polarization (DoLP) error with respect to time-varying diattenuation and retardance error was used to set a time-varying diattenuation magnitude tolerance of 0.005 and a time-varying retardance magnitude tolerance of ±0.2°. A Monte Carlo simulation of the calibration and measurements using anticipated static and time-varying errors indicates that MSPI has a probability of 0.9 of meeting its 0.005 DoLP uncertainty requirement.
Correlated errors in geodetic time series: Implications for time-dependent deformation
Langbein, J.; Johnson, H.
1997-01-01
In addition, the seasonal noise can be as large as 3 mm in amplitude but typically is less than 0.5 mm. Because of the presence of random-walk noise in these time series, modeling and interpretation of the geodetic data must account for this source of error. By way of example we show that estimating the time-varying strain tensor (a form of spatial averaging) from geodetic data having both random-walk and white noise error components results in seemingly significant variations in the rate of strain accumulation; spatial averaging does reduce the size of both noise components but not their relative influence on the resulting strain accumulation model. Copyright 1997 by the American Geophysical Union.
Reward prediction error signals associated with a modified time estimation task.
Holroyd, Clay B; Krigolson, Olave E
2007-11-01
The feedback error-related negativity (fERN) is a component of the human event-related brain potential (ERP) elicited by feedback stimuli. A recent theory holds that the fERN indexes a reward prediction error signal associated with the adaptive modification of behavior. Here we present behavioral and ERP data recorded from participants engaged in a modified time estimation task. As predicted by the theory, our results indicate that fERN amplitude reflects a reward prediction error signal and that the size of this error signal is correlated across participants with changes in task performance.
Keep calm and be patient: The influence of anxiety and time on post-error adaptations.
Van der Borght, Liesbet; Braem, Senne; Stevens, Michaël; Notebaert, Wim
2016-02-01
Individual differences in anxiety and punishment sensitivity have an impact on electrophysiological markers of error processing and on the orienting of attention to threatening information. However, it remains unclear how these individual differences influence behavioral adaptations to errors. We therefore investigated the influence of anxiety and punishment sensitivity on post-error adaptations, and whether this influence depends on the time people get to adapt. We tested 99 participants using a Simon task with randomized inter-trial intervals. Significant post-error slowing (PES) was found at all time intervals, although, in line with previous research, PES decreased over time. While PES did not interact with anxiety or punishment sensitivity, the pattern of post-error accuracy depended on anxiety: there was a clear post-error accuracy decrease at the shortest interval, but individuals with low trait-anxiety scores showed a reversed effect (i.e., a post-error accuracy increase) at a longer interval. These results suggest that people have trouble disengaging attention from an error, which can be overcome with time and low anxiety.
Yang, Yana; Hua, Changchun; Guan, Xinping
2016-03-01
Owing to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems demand high performance; telesurgery, for example, needs high speed and precise control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying a barrier Lyapunov function (BLF). With constrained synchronization errors, high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error can be achieved simultaneously. Nevertheless, as in many classical control schemes, only asymptotic/exponential convergence, i.e., synchronization errors converging to zero as time goes to infinity, can be achieved with error-constrained control. Finite-time convergence is clearly more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for teleoperation systems with constrained position error. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with newly transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the non-violation of the synchronization-error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.
Sigaud, L; de Jesus, V L B; Ferreira, Natalia; Montenegro, E C
2016-08-01
In this work, the inclusion of an Einzel-like lens inside the time-of-flight drift tube of a standard mass spectrometer coupled to a gas cell-to study ionization of atoms and molecules by electron impact-is described. Both this lens and a conical collimator are responsible for further focalization of the ions and charged molecular fragments inside the spectrometer, allowing a much better resolution at the time-of-flight spectra, leading to a separation of a single mass-to-charge unit up to 100 a.m.u. The procedure to obtain the overall absolute efficiency of the spectrometer and micro-channel plate detector is also discussed. PMID:27587105
Sigaud, L; de Jesus, V L B; Ferreira, Natalia; Montenegro, E C
2016-08-01
In this work, the inclusion of an Einzel-like lens inside the time-of-flight drift tube of a standard mass spectrometer coupled to a gas cell-to study ionization of atoms and molecules by electron impact-is described. Both this lens and a conical collimator are responsible for further focalization of the ions and charged molecular fragments inside the spectrometer, allowing a much better resolution at the time-of-flight spectra, leading to a separation of a single mass-to-charge unit up to 100 a.m.u. The procedure to obtain the overall absolute efficiency of the spectrometer and micro-channel plate detector is also discussed.
NASA Astrophysics Data System (ADS)
Sigaud, L.; de Jesus, V. L. B.; Ferreira, Natalia; Montenegro, E. C.
2016-08-01
In this work, the inclusion of an Einzel-like lens inside the time-of-flight drift tube of a standard mass spectrometer coupled to a gas cell—to study ionization of atoms and molecules by electron impact—is described. Both this lens and a conical collimator are responsible for further focalization of the ions and charged molecular fragments inside the spectrometer, allowing a much better resolution at the time-of-flight spectra, leading to a separation of a single mass-to-charge unit up to 100 a.m.u. The procedure to obtain the overall absolute efficiency of the spectrometer and micro-channel plate detector is also discussed.
Automatic Time Stepping with Global Error Control for Groundwater Flow Models
Tang, Guoping
2008-09-01
An automatic time-stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate, and it is used to adapt the time step and control the global temporal error of the backward difference method. The stability factor can be estimated by solving a dual problem; it is not sensitive to the accuracy of the dual solution, so the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments demonstrate the application and performance of the automatic time-stepping scheme, whose implementation can improve the accuracy and efficiency of groundwater flow models.
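The abstract's dual-problem stability-factor estimator is specific to the dG formulation, but the general accept/reject pattern of error-controlled time stepping can be sketched with a simpler step-doubling estimate on a 1-D diffusion (groundwater-flow-like) model. Everything below (grid size, tolerance, controller constants) is an illustrative assumption, not the paper's scheme:

```python
import numpy as np

# 1-D diffusion, method of lines: dh/dt = D * A @ h  (h = hydraulic head)
nx, D = 50, 1.0
dx = 1.0 / (nx - 1)
A = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1)) / dx**2
h = np.sin(np.pi * np.linspace(0.0, 1.0, nx))      # initial head profile

def be_step(h, dt):
    """One backward-Euler step: solve (I - dt*D*A) h_new = h."""
    return np.linalg.solve(np.eye(nx) - dt * D * A, h)

t, t_end, dt, tol = 0.0, 0.01, 1e-4, 1e-6
while t < t_end:
    step = min(dt, t_end - t)
    big = be_step(h, step)                          # one full step
    small = be_step(be_step(h, step / 2), step / 2) # two half steps
    err = np.max(np.abs(big - small))               # local error estimate
    if err <= tol:                                  # accept, then grow the step
        h, t = small, t + step
        dt = step * min(2.0, 0.9 * np.sqrt(tol / max(err, 1e-30)))
    else:                                           # reject and shrink the step
        dt = step / 2
```

The square-root growth exponent matches the first-order accuracy of backward Euler; the paper's contribution is to replace this purely local estimate with a dual-weighted one that bounds the global error.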
Repeated quantum error correction by real-time feedback on continuously encoded qubits
NASA Astrophysics Data System (ADS)
Cramer, Julia; Kalb, Norbert; Rol, M. Adriaan; Hensen, Bas; Blok, Machiel S.; Markham, Matthew; Twitchen, Daniel J.; Hanson, Ronald; Taminiau, Tim H.
Because quantum information is extremely fragile, large-scale quantum information processing requires constant error correction. To be compatible with universal fault-tolerant computations, it is essential that quantum states remain encoded at all times and that errors are actively corrected. I will present such active quantum error correction in a hybrid quantum system based on the nitrogen vacancy (NV) center in diamond. We encode a logical qubit in three long-lived nuclear spins, detect errors by multiple non-destructive measurements using the optically active NV electron spin, and correct them by real-time feedback. By combining these new capabilities with recent advances in spin control, multiple cycles of error correction can be performed within the dephasing time. We investigate both coherent and incoherent errors and show that the error-corrected logical qubit can indeed store quantum states longer than the best spin used in the encoding. Furthermore, I will present our latest results on increasing the number of qubits in the encoding, as required for quantum error correction of both phase- and bit-flip errors.
Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error
NASA Astrophysics Data System (ADS)
Jung, Insung; Koo, Lockjo; Wang, Gi-Nam
2008-11-01
The objective of this paper was to design a human bio-signal prediction system that decreases prediction error using a two-states-mapping-based time-series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time-series prediction, yet a residual error remains between the real value and the prediction. We therefore designed a two-states neural network model that compensates for this residual error, which could be applied to the prevention of sudden death and of metabolic-syndrome conditions such as hypertension and obesity. Most of the simulation cases were satisfied by the two-states-mapping-based time-series prediction model; in particular, for small sample sizes of time series it was more accurate than the standard MLP model.
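The residual-compensation idea, a second model trained on the first model's prediction error, can be sketched with ordinary least-squares fits standing in for the two back-propagation networks. This illustrates only the structure of the scheme; the stand-in signal and fit degrees are arbitrary assumptions:

```python
import numpy as np
from numpy.polynomial import Chebyshev

t = np.linspace(0.0, 4.0 * np.pi, 400)
signal = np.sin(t) + 0.25 * np.sin(3.0 * t)     # stand-in "bio-signal"

# Stage 1: primary predictor (a coarse fit plays the role of the first network).
stage1 = Chebyshev.fit(t, signal, deg=5)
pred1 = stage1(t)
residual = signal - pred1                        # what stage 1 got wrong

# Stage 2: second model trained on stage 1's residual error, then added back.
stage2 = Chebyshev.fit(t, residual, deg=20)
pred2 = pred1 + stage2(t)

mse1 = float(np.mean((signal - pred1) ** 2))
mse2 = float(np.mean((signal - pred2) ** 2))
print(mse1, mse2)    # the residual-compensated prediction has the smaller error
```

Because the second fit is trained specifically on the first stage's error, the combined prediction can only match or improve on the single-stage mean squared error, which mirrors the paper's two-states compensation argument.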
Castle, Philip E.; Rodríguez, Ana C.; Burk, Robert D.; Herrero, Rolando; Hildesheim, Allan; Solomon, Diane; Sherman, Mark E.; Jeronimo, Jose; Alfaro, Mario; Morales, Jorge; Guillén, Diego; Hutchinson, Martha L.; Wacholder, Sholom; Schiffman, Mark
2009-01-01
A population sample of 10,049 women living in Guanacaste, Costa Rica was recruited into a natural history of human papillomavirus (HPV) and cervical neoplasia study in 1993–4. At the enrollment visit, we applied multiple state-of-the-art cervical cancer screening methods to detect prevalent cervical cancer and to prevent subsequent cervical cancers by the timely detection and treatment of precancerous lesions. Women were screened at enrollment with 3 kinds of cytology (often reviewed by more than one pathologist), visual inspection, and Cervicography. Any positive screening test led to colposcopic referral and biopsy and/or excisional treatment of CIN2 or worse. We retrospectively tested stored specimens with an early HPV test (Hybrid Capture Tube Test) and for >40 HPV genotypes using a research PCR assay. We followed women typically 5–7 years and some up to 11 years. Nonetheless, sixteen cases of invasive cervical cancer were diagnosed during follow-up. Six cancer cases were failures at enrollment to detect abnormalities by cytology screening; three of the six were also negative at enrollment by sensitive HPV DNA testing. Seven cancers represent failures of colposcopy to diagnose cancer or a precancerous lesion in screen-positive women. Finally, three cases arose despite attempted excisional treatment of precancerous lesions. Based on this evidence, we suggest that no current secondary cervical cancer prevention technology, applied once in a previously under-screened population, is likely to be 100% efficacious in preventing incident diagnoses of invasive cervical cancer. PMID:19569231
NASA Astrophysics Data System (ADS)
Jiang, Jia-jia; Duan, Fa-jie; Chen, Jin; Zhang, Chao; Wang, Kai; Chang, Zong-jie
2012-08-01
Time synchronization is very important in a distributed chained seismic acquisition system with a large number of data acquisition nodes (DANs). The time synchronization error has two causes. On the one hand, there is a large accumulated propagation delay when commands propagate from the analysis and control system to multiple distant DANs, which makes it impossible for different DANs to receive the same command synchronously. Unfortunately, the propagation delay of commands (PDCs) varies in different application environments. On the other hand, the phase jitter of both the master clock and the clock recovery phase-locked loop, which is designed to extract the timing signal, may also cause the time synchronization error. In this paper, in order to achieve accurate time synchronization, a novel calibration method is proposed which can align the PDCs of all of the DANs in real time and overcome the time synchronization error caused by the phase jitter. Firstly, we give a quantitative analysis of the time synchronization error caused by both the PDCs and the phase jitter. Secondly, we propose a back and forth model (BFM) and a transmission delay measurement method (TDMM) to overcome these difficulties. Furthermore, the BFM is designed as the hardware configuration to measure the PDCs and calibrate the time synchronization error. The TDMM is used to measure the PDCs accurately. Thirdly, in order to overcome the time synchronization error caused by the phase jitter, a compression and mapping algorithm (CMA) is presented. Finally, based on the proposed BFM, TDMM and CMA, a united calibration algorithm is developed to overcome the time synchronization error caused by both the PDCs and the phase jitter. The simulation experiment results show the effectiveness of the calibration method proposed in this paper.
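The back-and-forth idea can be illustrated with a round-trip timing sketch. Assuming a symmetric link and a known node processing time (both simplifying assumptions; the paper's BFM hardware and CMA are not reproduced here), the one-way propagation delay of commands follows from two timestamps:

```python
# Round-trip sketch of a transmission delay measurement: the control system
# timestamps a command on send (t_send) and its echo on receive (t_recv).
def one_way_delay(t_send, t_recv, processing_time=0.0):
    """Estimate the one-way propagation delay, assuming a symmetric link."""
    return (t_recv - t_send - processing_time) / 2.0

# Align two nodes: delay the nearer node so both act at the same instant.
delays = [one_way_delay(0.0, 2.4e-3, 0.4e-3),   # far node: 1.0 ms one way
          one_way_delay(0.0, 1.2e-3, 0.4e-3)]   # near node: 0.4 ms one way
target = max(delays)
extra_wait = [target - d for d in delays]        # per-node calibration offsets
print(delays, extra_wait)
```

Each node would then delay its action by its calibration offset so that all nodes fire at the same absolute instant, which is the effect the paper's calibration of the PDCs achieves in hardware.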
NASA Technical Reports Server (NTRS)
Davidson, J. A.; Sadowski, C. M.; Schiff, H. I.; Howard, C. J.; Schmeltekopf, A. L.; Jennings, D. A.; Streit, G. E.
1976-01-01
Absolute rate constants for the deactivation of O(1D) atoms by some atmospheric gases have been determined by observing the time-resolved emission of O(1D) at 630 nm. O(1D) atoms were produced by the dissociation of ozone via repetitive laser pulses at 266 nm. Absolute rate constants for the relaxation of O(1D) at 298 K are reported for N2, O2, CO2, O3, H2, D2, CH4, HCl, NH3, H2O, N2O, and Ne. The results obtained are compared with previous relative and absolute measurements reported in the literature.
Spectral characteristics of time-dependent orbit errors in altimeter height measurements
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1993-01-01
A mean reference surface and time-dependent orbit errors are estimated simultaneously for each exact-repeat ground track from the first two years of Geosat sea level estimates based on the Goddard Earth model (GEM)-T2 orbits. Motivated by orbit theory and empirical analysis of Geosat data, the time-dependent orbit errors are modeled as 1 cycle per revolution (cpr) sinusoids with slowly varying amplitude and phase. The method recovers the known 'bow tie effect' introduced by the existence of force model errors within the precision orbit determination (POD) procedure used to generate the GEM-T2 orbits. The bow tie pattern of 1-cpr orbit errors is characterized by small amplitudes near the middle and larger amplitudes (up to 160 cm in the 2 yr of data considered here) near the ends of each 5- to 6-day orbit arc over which the POD force model is integrated. A detailed examination of these bow tie patterns reveals the existence of daily modulations of the amplitudes of the 1-cpr sinusoid orbit errors with typical and maximum peak-to-peak ranges of about 14 cm and 30 cm, respectively. The method also identifies a daily variation in the mean orbit error with typical and maximum peak-to-peak ranges of about 6 and 30 cm, respectively, that is unrelated to the predominant 1-cpr orbit error. Application of the simultaneous solution method to the much less accurate Geosat height estimates based on the Naval Astronautics Group orbits shows that the accuracy of POD is not important for collinear altimetric studies of time-dependent mesoscale variability (wavelengths shorter than 1000 km), as long as the time-dependent orbit errors are dominated by 1-cpr variability and a long-arc (several orbital periods) orbit error estimation scheme such as that presented here is used.
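The core estimation step can be sketched as a linear least-squares fit of a 1-cpr sinusoid in a sin/cos basis; the orbital period and all amplitudes below are illustrative numbers, not values from the paper:

```python
import numpy as np

# Estimate the amplitude of a 1 cycle-per-revolution sinusoid from along-track
# height residuals: h = a*cos(wt) + b*sin(wt) + mean, solved by least squares.
rng = np.random.default_rng(1)
period = 6040.0                              # seconds, roughly one revolution
w = 2 * np.pi / period
t = np.linspace(0.0, period, 500)
h = 0.8 * np.cos(w * t) + 0.5 * np.sin(w * t) + 0.1   # "orbit error" + offset
h += 0.05 * rng.standard_normal(t.size)               # altimeter noise

A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(A, h, rcond=None)[0]
amplitude = np.hypot(a, b)                   # recovered 1-cpr amplitude
print(amplitude)
```

Allowing the amplitude and phase (a, b) to vary slowly along the arc, as the paper does, would amount to re-solving this system in overlapping windows.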
Diop, Mamadou; Verdecchia, Kyle; Lee, Ting-Yim; St Lawrence, Keith
2011-01-01
A primary focus of neurointensive care is the prevention of secondary brain injury, mainly caused by ischemia. A noninvasive bedside technique for continuous monitoring of cerebral blood flow (CBF) could improve patient management by detecting ischemia before brain injury occurs. A promising technique for this purpose is diffuse correlation spectroscopy (DCS) since it can continuously monitor relative perfusion changes in deep tissue. In this study, DCS was combined with a time-resolved near-infrared technique (TR-NIR) that can directly measure CBF using indocyanine green as a flow tracer. With this combination, the TR-NIR technique can be used to convert DCS data into absolute CBF measurements. The agreement between the two techniques was assessed by concurrent measurements of CBF changes in piglets. A strong correlation between CBF changes measured by TR-NIR and changes in the scaled diffusion coefficient measured by DCS was observed (R2 = 0.93) with a slope of 1.05 ± 0.06 and an intercept of 6.4 ± 4.3% (mean ± standard error). PMID:21750781
Ball, Hope C; Holmes, Robert K; Londraville, Richard L; Thewissen, Johannes G M; Duff, Robert Joel
2013-01-01
Leptin is the primary hormone in mammals that regulates adipose stores. Arctic adapted cetaceans maintain enormous adipose depots, suggesting possible modifications of leptin or receptor function. Determining expression of these genes is the first step to understanding the extreme physiology of these animals, and the uniqueness of these animals presents special challenges in estimating and comparing expression levels of mRNA transcripts. Here, we compare expression of two model genes, leptin and leptin-receptor gene-related product (OB-RGRP), using two quantitative real-time PCR (qPCR) methods: "relative" and "absolute". To assess the expression of leptin and OB-RGRP in cetacean tissues, we first examined how relative expression of those genes might differ when normalized to four common endogenous control genes. We performed relative expression qPCR assays measuring the amplification of these two model target genes relative to amplification of 18S ribosomal RNA (18S), ubiquitously expressed transcript (Uxt), ribosomal protein 9 (Rs9) and ribosomal protein 15 (Rs15) endogenous controls. Results demonstrated significant differences in the expression of both genes when different control genes were employed; emphasizing a limitation of relative qPCR assays, especially in studies where differences in physiology and/or a lack of knowledge regarding levels and patterns of expression of common control genes may possibly affect data interpretation. To validate the absolute quantitative qPCR methods, we evaluated the effects of plasmid structure, the purity of the plasmid standard preparation and the influence of type of qPCR "background" material on qPCR amplification efficiencies and copy number determination of both model genes, in multiple tissues from one male bowhead whale. Results indicate that linear plasmids are more reliable than circular plasmid standards, no significant differences in copy number estimation based upon background material used, and that the use of
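The control-gene sensitivity described above can be seen directly in the arithmetic of the common 2^-ΔΔCq relative-quantification formula (an assumption here; the abstract does not state which formula was used). All Cq values below are invented for illustration only:

```python
# Same target Cq values, two candidate endogenous controls: if the control
# itself shifts between samples, the computed fold change shifts with it.
def fold_change(cq_target_a, cq_ref_a, cq_target_b, cq_ref_b):
    """Relative expression of the target in sample A vs. sample B (2^-ddCq)."""
    ddcq = (cq_target_a - cq_ref_a) - (cq_target_b - cq_ref_b)
    return 2.0 ** (-ddcq)

fc_stable_control = fold_change(24.0, 12.0, 26.0, 12.0)    # control constant
fc_drifting_control = fold_change(24.0, 14.0, 26.0, 12.0)  # control shifts 2 Cq
print(fc_stable_control, fc_drifting_control)              # 4.0 vs 16.0
```

A 2-Cq drift in the reference gene changes the apparent fold change fourfold, which is why the choice of endogenous control matters so much in relative assays.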
An Error Model for High-Time Resolution Satellite Precipitation Products
NASA Astrophysics Data System (ADS)
Maggioni, V.; Sapiano, M.; Adler, R. F.; Huffman, G. J.; Tian, Y.
2013-12-01
A new error scheme (PUSH: Precipitation Uncertainties for Satellite Hydrology) is presented to provide global estimates of errors for high time resolution, merged precipitation products. Errors are estimated for the widely used Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 product at daily/0.25° resolution, using the high quality NOAA CPC-UNI gauge analysis as the benchmark. Each of the following four scenarios is explored and explicitly modeled: correct no-precipitation detection (both satellite and gauges detect no precipitation), missed precipitation (satellite records a zero, but it is incorrect), false alarm (satellite detects precipitation, but the reference is zero), and hit (both satellite and gauges detect precipitation). Results over Oklahoma show that the estimated probability distributions are able to reproduce the probability density functions of the benchmark precipitation, in terms of both expected values and quantiles. PUSH adequately captures missed precipitation and false detection uncertainties, reproduces the spatial pattern of the error, and shows a good agreement between observed and estimated errors. The resulting error estimates could be attached to the standard products for the scientific community to use. Investigation is underway to: 1) test the approach in different regions of the world; 2) verify the ability of the model to discern the systematic and random components of the error; and 3) evaluate the model performance when higher time-resolution satellite products (i.e., 3-hourly) are employed.
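The four scenarios PUSH models can be sketched as a simple classification of paired daily satellite and gauge values; the 0.1 mm wet/dry threshold is an assumption for illustration:

```python
# Classify each satellite/gauge pair into one of the four PUSH scenarios.
def classify(sat_mm, gauge_mm, thresh=0.1):
    sat_wet, gauge_wet = sat_mm >= thresh, gauge_mm >= thresh
    if sat_wet and gauge_wet:
        return "hit"
    if sat_wet:
        return "false alarm"
    if gauge_wet:
        return "missed precipitation"
    return "correct no-precipitation"

pairs = [(5.0, 4.2), (0.0, 3.1), (1.2, 0.0), (0.0, 0.0)]
labels = [classify(s, g) for s, g in pairs]
print(labels)
```

In the full scheme each category gets its own error model, e.g. a distribution for the missed amount conditioned on the satellite reporting zero.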
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
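The SIMEX mechanic itself can be sketched under strong simplifying assumptions: a linear outcome model with one error-prone covariate, known error variance, and quadratic extrapolation (the article targets MSM weights and outcome models; this only illustrates simulation-extrapolation):

```python
import numpy as np

# SIMEX: deliberately add extra measurement error at several levels lambda,
# track how the estimate degrades, then extrapolate back to lambda = -1.
rng = np.random.default_rng(2)
n, beta, sigma_u = 5000, 1.0, 0.8
x = rng.standard_normal(n)                       # true covariate
w = x + sigma_u * rng.standard_normal(n)         # error-prone measurement
y = beta * x + 0.3 * rng.standard_normal(n)

def fitted_slope(cov):
    return np.polyfit(cov, y, 1)[0]

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
avg_slopes = []
for lam in lams:                                 # simulation step
    sims = [fitted_slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(n))
            for _ in range(50)]
    avg_slopes.append(np.mean(sims))

coef = np.polyfit(lams, avg_slopes, 2)           # extrapolation step
beta_simex = np.polyval(coef, -1.0)              # lambda = -1: "no error"
beta_naive = fitted_slope(w)
print(beta_naive, beta_simex)                    # naive is attenuated toward 0
```

The quadratic extrapolant does not fully recover the truth (the attenuation curve is not polynomial), but it moves the estimate substantially back toward the error-free value, which is the practical point of SIMEX.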
Error criteria for cross validation in the context of chaotic time series prediction.
Lim, Teck Por; Puthusserypady, Sadasivan
2006-03-01
The prediction of a chaotic time series over a long horizon is commonly done by iterating one-step-ahead prediction. Prediction can be implemented using machine learning methods, such as radial basis function networks. Typically, cross validation is used to select prediction models based on mean squared error. The bias-variance dilemma dictates that there is an inevitable tradeoff between bias and variance. However, invariants of chaotic systems are unchanged by linear transformations; thus, the bias component may be irrelevant to model selection in the context of chaotic time series prediction. Hence, the use of error variance for model selection, instead of mean squared error, is examined. Clipping is introduced, as a simple way to stabilize iterated predictions. It is shown that using the error variance for model selection, in combination with clipping, may result in better models.
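Clipping, as introduced above, keeps iterated one-step predictions inside the known range of the attractor. The "model" below is the logistic map with a small invented bias: without the clip its iterates can escape [0, 1] and diverge, while the clipped iteration stays bounded:

```python
# Iterated one-step prediction of a logistic-map-like system with clipping.
def biased_model(x):
    """Imperfect one-step predictor (bias terms invented for illustration)."""
    return 3.999 * x * (1.0 - x) + 0.002

xhat = 0.3
pred = []
for _ in range(50):
    xhat = biased_model(xhat)
    xhat = min(max(xhat, 0.0), 1.0)   # clip to the valid range [0, 1]
    pred.append(xhat)
print(pred[-1])
```

Since the unclipped model slightly exceeds 1 near x = 0.5, an unconstrained iteration eventually goes negative and runs away; the clip is a one-line stabilizer that does not change the model itself.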
NASA Astrophysics Data System (ADS)
Crow, Wade T.; Koster, Randal D.; Reichle, Rolf H.; Sharif, Hatim O.
2005-12-01
Errors in remotely-sensed soil moisture retrievals originate from a combination of time-invariant and time-varying sources. For land modeling applications such as forecast initialization, some of the impact of time-invariant sources can be removed given known differences between observed and modeled soil moisture climatologies. Nevertheless, the distinction is seldom made when evaluating remotely-sensed soil moisture products. Here we describe an Observing System Simulation Experiment (OSSE) for radiometer-only soil moisture products derived from the NASA Hydrosphere States (Hydros) mission where the impact of time-invariant errors is explicitly removed via the linear rescaling of retrievals. OSSE results for the 575,000 km2 Red-Arkansas River Basin indicate that climatological rescaling may significantly reduce the perceived magnitude of Hydros soil moisture retrieval errors and expands the geographic areas over which retrievals demonstrate value for land surface modeling applications.
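The linear rescaling used to remove time-invariant error can be sketched as matching the retrieval's long-term mean and standard deviation to the model climatology; all numbers below are synthetic and illustrative:

```python
import numpy as np

# Map retrievals onto the model's soil moisture climatology by matching the
# long-term mean and standard deviation (removes bias and gain differences).
rng = np.random.default_rng(3)
model_sm = 0.25 + 0.05 * rng.standard_normal(1000)               # model values
retrieval = 0.10 + 1.8 * (model_sm - 0.25) + 0.02 * rng.standard_normal(1000)

rescaled = (retrieval - retrieval.mean()) / retrieval.std()
rescaled = rescaled * model_sm.std() + model_sm.mean()

rmse_raw = np.sqrt(np.mean((retrieval - model_sm) ** 2))
rmse_rescaled = np.sqrt(np.mean((rescaled - model_sm) ** 2))
print(rmse_raw, rmse_rescaled)   # rescaling removes the systematic component
```

Only the time-varying (random) part of the retrieval error survives the rescaling, which is why the perceived error magnitude drops so sharply in the OSSE.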
A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.
Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan
2015-01-01
Continuity, real-time capability, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on a forecasted time series is proposed by analyzing the characteristics of the periodic oscillation errors. The method obtains multiple sets of navigation solutions with different phase delays by means of a forecasted time series derived from the measurement data of the inertial measurement unit (IMU). The forecasted time series is obtained by least-squares curve fitting, while small angular motion interference is identified and removed during initial alignment. Finally, the periodic oscillation errors are suppressed using the principle that averaging a periodic signal with a half-period-delayed copy cancels the oscillation. Simulation and test results show that the method performs well in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283
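The half-wave-delay cancellation principle is easy to verify numerically: averaging a periodic error signal with a copy delayed by half its period removes that oscillation. For SINS this would target the Schuler (~84.4 min), Foucault, and Earth periods; the period below is an arbitrary illustrative value:

```python
import numpy as np

# Average a sinusoid with a half-period-delayed copy of itself: sin(x) and
# sin(x + pi) cancel exactly, so the combined signal is ~0.
t = np.linspace(0.0, 10.0, 2001)
T = 1.4                                   # oscillation period (arbitrary units)
osc = np.sin(2 * np.pi * t / T)

dt = t[1] - t[0]
shift = int(round((T / 2) / dt))          # samples in half a period
combined = 0.5 * (osc[shift:] + osc[:-shift])
print(np.max(np.abs(combined)))           # ~0: the oscillation cancels
```

With real SINS data the half-period delay is supplied by the forecasted time series rather than by exact knowledge of the signal, but the cancellation arithmetic is the same.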
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
ERIC Educational Resources Information Center
Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik
2008-01-01
The detection of errors is known to be associated with two successive neurophysiological components in EEG, with an early time-course following motor execution: the error-related negativity (ERN/Ne) and late positivity (Pe). The exact cognitive and physiological processes contributing to these two EEG components, as well as their functional…
First photoelectron timing error evaluation of a new scintillation detector model
Petrick, N.; Clinthorne, N.H.; Rogers, W.L.; Hero, A.O., III (Div. of Nuclear Medicine)
1991-04-01
In this paper, a previously developed general timing-system model for a scintillation detector is experimentally evaluated. The detector consists of a scintillator and a photodetector such as a photomultiplier tube or an avalanche photodiode. The model uses a Poisson point process to characterize the light output from the scintillator. This timing model was used to simulate a BGO scintillator with a Burle 8575 PMT using first-photoelectron timing detection. Evaluation of the model consisted of comparing the RMS error from the simulations with the error from the actual detector system. The authors find that the general model compares well with the actual error results for the BGO/8575 PMT detector. In addition, the optimal threshold is found to depend on the energy of the scintillation: in the low-energy part of the spectrum a low threshold is optimal, while for higher-energy pulses the optimal threshold increases.
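The Poisson light model can be sketched with a small Monte Carlo: photoelectron emission times follow an exponential scintillation decay, and the timing trigger fires on the earliest one. The decay constant and photoelectron counts below are illustrative, not the actual BGO/8575 parameters:

```python
import numpy as np

# First-photoelectron timing: more photoelectrons per pulse (higher energy)
# means the earliest emission time has a smaller spread, i.e. less jitter.
rng = np.random.default_rng(4)

def first_pe_time(n_photoelectrons, tau=300.0):
    """Emission times ~ Exp(tau) ns; the trigger fires on the earliest one."""
    return np.min(rng.exponential(tau, size=n_photoelectrons))

low_energy = [first_pe_time(50) for _ in range(2000)]
high_energy = [first_pe_time(500) for _ in range(2000)]
print(np.std(low_energy), np.std(high_energy))   # more light -> tighter timing
```

The minimum of n exponential times is itself exponential with mean tau/n, which is why timing resolution improves roughly in proportion to the light yield.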
Detection and absolute quantitation of Tomato torrado virus (ToTV) by real time RT-PCR.
Herrera-Vásquez, José Angel; Rubio, Luis; Alfaro-Fernández, Ana; Debreczeni, Diana Elvira; Font-San-Ambrosio, Isabel; Falk, Bryce W; Ferriol, Inmaculada
2015-09-01
Tomato torrado virus (ToTV) causes serious damage to the tomato industry and significant economic losses. A quantitative real-time reverse-transcription polymerase chain reaction (RT-qPCR) method using primers and a specific TaqMan® MGB probe for ToTV was developed for sensitive detection and quantitation of different ToTV isolates. A standard curve using RNA transcripts enabled absolute quantitation, with a dynamic range from 10^4 to 10^10 ToTV RNA copies/ng of total RNA. The specificity of the RT-qPCR was tested with twenty-three ToTV isolates from tomato (Solanum lycopersicum L.) and black nightshade (Solanum nigrum L.) collected in Spain, Australia, Hungary and France, which covered the genetic variation range of this virus. This new RT-qPCR assay enables a reproducible, sensitive and specific detection and quantitation of ToTV, which can be a valuable tool in disease management programs and epidemiological studies.
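Absolute quantitation from a standard curve works because Cq is linear in log10(copies): fit the standards, then invert the line to convert an unknown's Cq into a copy number. The slope and intercept values below are illustrative, not the assay's calibration:

```python
import numpy as np

# Standard curve: Cq = slope * log10(copies) + intercept, fitted over the
# 10^4 .. 10^10 copy standards, then inverted for unknowns.
log_copies = np.arange(4, 11, dtype=float)       # standards: 10^4 .. 10^10
cq_standards = -3.32 * log_copies + 38.0         # ~100% efficiency assumed

slope, intercept = np.polyfit(log_copies, cq_standards, 1)

def copies_from_cq(cq):
    return 10.0 ** ((cq - intercept) / slope)

print(copies_from_cq(21.4))                      # ≈ 1e5 copies
```

A slope near -3.32 corresponds to a doubling per cycle (100% amplification efficiency); deviations from it signal inhibition or pipetting problems in the standards.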
Tunçal, Tolga
2010-04-14
Although enhanced biological phosphorus removal (EBPR) processes are popular methods for nutrient control, the unstable treatment performance of full-scale systems is still not well understood. In this study, the interaction between electron acceptors present at the start of the anaerobic phase of an EBPR system and the amount of organic acids generated from simple substrate (rbsCOD) was investigated in a full-scale wastewater treatment plant. Quantification of microbial groups including phosphorus-accumulating microorganisms (PAOs), denitrifying PAOs (DPAOs), glycogen-accumulating microorganisms (GAOs) and ordinary heterotrophic microorganisms (OHOs) was based on a modified dynamic model. The intracellular phosphorus content of PAOs was also determined by executing mass balances over the biological stages of the plant. The EBPR activities observed in the plant and in batch tests (under idealized conditions) were also compared statistically. Modelling efforts indicated that using the absolute anaerobic reaction time (η1) instead of the nominal anaerobic reaction time (η) to estimate the amount of substrate available to PAOs significantly improved model accuracy. Another interesting result of the study was the difference in EBPR characteristics observed under idealized and real conditions. PMID:20480829
NASA Astrophysics Data System (ADS)
Schumann, G.; di Baldassarre, G.; Alsdorf, D.; Bates, P. D.
2009-04-01
In February 2000, the Shuttle Radar Topography Mission (SRTM) measured the elevation of most of the Earth's surface with spatially continuous sampling and an absolute vertical accuracy better than 9 m. The vertical error has been shown to vary with topographic complexity, being smaller over flat terrain. This allows water surface slopes to be measured and associated discharge volumes to be estimated for open channels in large basins, such as the Amazon. Building on these capabilities, this paper demonstrates that near real-time coarse-resolution radar imagery of a recent flood event on a 98 km reach of the River Po (Northern Italy), combined with SRTM terrain height data, leads to a water slope remarkably similar to that derived by combining the radar image with highly accurate airborne laser altimetry. Moreover, it is shown that this space-borne flood wave approximation compares well to a hydraulic model and thus allows the performance of the latter, calibrated on a previous event, to be assessed when applied to an event of different magnitude in near real-time. These results are not only of great importance to real-time flood management and flood forecasting but also support the upcoming Surface Water and Ocean Topography (SWOT) mission, which will routinely provide water levels and slopes with higher precision around the globe.
Empirical versus time stepping with embedded error control for density-driven flow in porous media
NASA Astrophysics Data System (ADS)
Younes, Anis; Ackerer, Philippe
2010-08-01
Modeling density-driven flow in porous media may require very long computational time due to the nonlinear coupling between flow and transport equations. Time stepping schemes are often used to adapt the time step size in order to reduce the computational cost of the simulation. In this work, the empirical time stepping scheme which adapts the time step size according to the performance of the iterative nonlinear solver is compared to an adaptive time stepping scheme where the time step length is controlled by the temporal truncation error. Results of the simulations of the Elder problem show that (1) the empirical time stepping scheme can lead to inaccurate results even with a small convergence criterion, (2) accurate results are obtained when the time step size selection is based on the truncation error control, (3) a non iterative scheme with proper time step management can be faster and leads to more accurate solution than the standard iterative procedure with the empirical time stepping and (4) the temporal truncation error can have a significant effect on the results and can be considered as one of the reasons for the differences observed in the Elder numerical results.
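Truncation-error-controlled time stepping can be sketched with the classic step-doubling device: compare one full step with two half steps, use the difference as the local error estimate, and let it drive the step size. The ODE, tolerance, and safety factors below are illustrative choices, not the paper's coupled flow-transport solver:

```python
import numpy as np

# Adaptive Euler with embedded error control (step doubling) for dy/dt = -y.
def f(y):
    return -y

def adaptive_euler(y0, t_end, tol=1e-4, dt=0.1):
    t, y, accepted = 0.0, y0, 0
    while t_end - t > 1e-12:
        dt = min(dt, t_end - t)
        full = y + dt * f(y)                       # one Euler step
        half = y + 0.5 * dt * f(y)
        two_half = half + 0.5 * dt * f(half)       # two half steps
        err = abs(two_half - full)                 # truncation error estimate
        if err <= tol:                             # accept the step
            t, y, accepted = t + dt, two_half, accepted + 1
        # grow or shrink the step (bounded), whether accepted or rejected
        dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-15))))
    return y, accepted

y_end, n_steps = adaptive_euler(1.0, 1.0)
print(y_end, n_steps)          # y_end ≈ exp(-1) ≈ 0.3679
```

Unlike the empirical scheme, which reacts to nonlinear-solver behaviour, this controller reacts directly to an estimate of the temporal truncation error, which is the property the paper credits for the more accurate Elder solutions.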
Structure and dating errors in the geologic time scale and periodicity in mass extinctions
NASA Technical Reports Server (NTRS)
Stothers, Richard B.
1989-01-01
Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.
Gillard, Jonathan
2015-12-01
This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects.
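The problem with ordinary least squares here is attenuation: error in the time covariate biases the fitted slope toward zero, so reference limits built from it are too flat and referral probability varies with the covariate. A minimal sketch, assuming a known error variance and synthetic numbers, shows the bias and the classical correction:

```python
import numpy as np

# OLS on an error-prone covariate attenuates the slope by the reliability
# ratio; dividing by the estimated reliability recovers the true slope.
rng = np.random.default_rng(5)
n, sigma_u = 20000, 2.0
true_time = rng.uniform(20.0, 40.0, n)           # true covariate values
obs_time = true_time + sigma_u * rng.standard_normal(n)
y = 5.0 + 0.8 * true_time + rng.standard_normal(n)

b_ols = np.polyfit(obs_time, y, 1)[0]
reliability = (np.var(obs_time) - sigma_u ** 2) / np.var(obs_time)
b_corrected = b_ols / reliability
print(b_ols, b_corrected)        # OLS slope < 0.8; corrected ≈ 0.8
```

The measurement-error model the article advocates generalizes this idea to the full reference-interval calculation rather than just the mean slope.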
Variation in reading error in P times for explosions with body-wave magnitude
NASA Astrophysics Data System (ADS)
Douglas, A.; Young, J. B.; Bowers, D.; Lewis, M.
2005-09-01
The differences between true travel-times of P and times predicted from travel-time tables (path effects) can be estimated for groups of closely spaced explosions with known hypocentres and origin times, if the onsets are observed at large signal-to-noise ratios (SNR) and read by analysts. Reading error can also be estimated and is usually assumed to be normally distributed with zero mean. Two experiments have been carried out to look at how reading error in P times from explosions varies with magnitude - taken as a measure of SNR - when read by analysts and by automatic systems. Although at low magnitudes there is some evidence of analyst readings being biased late, the largest variation in reading error with magnitude is found for automatic systems. The results show just how difficult it can be to estimate path effects free from observational bias, at least using bulletin data. The current programme to estimate path effects to improve epicentre location for verification of the Comprehensive Test Ban needs to include checks to ensure that apparent variations in path effects with location are not due to bias from systematic reading error.
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
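The regression-calibration idea can be sketched in a linear stand-in (not the authors' partial-likelihood machinery for the Cox model): replace the mismeasured mediator W with its expected value given the observation, E[M | W], before fitting the outcome model. The error variance is assumed known:

```python
import numpy as np

# Regression calibration: fit the outcome on E[M | W] instead of W itself.
rng = np.random.default_rng(6)
n, sigma_u = 20000, 1.0
m = rng.standard_normal(n)                      # true mediator
w = m + sigma_u * rng.standard_normal(n)        # observed, error-prone mediator
y = 2.0 * m + 0.5 * rng.standard_normal(n)      # outcome (linear stand-in)

var_m = np.var(w) - sigma_u ** 2
lam = var_m / np.var(w)                         # reliability ratio
m_hat = w.mean() + lam * (w - w.mean())         # E[M | W] under normality

b_naive = np.polyfit(w, y, 1)[0]                # attenuated
b_calibrated = np.polyfit(m_hat, y, 1)[0]       # ~ true coefficient 2.0
print(b_naive, b_calibrated)
```

In the failure-time setting the conditioning event complicates E[M | W], which is why the authors develop mean-variance and follow-up-time variants of this calibration.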
NASA Technical Reports Server (NTRS)
Wilkins, L. C.; Wintz, P. A.
1975-01-01
Many redundancy removal algorithms employ some sort of run length code. Blocks of timing words are coded with synchronization words inserted between blocks. The probability of incorrectly reconstructing a sample because of a channel error in the timing data is a monotonically nondecreasing function of time since the last synchronization word. In this paper we compute the 'probability that the accumulated magnitude of timing errors equal zero' as a function of time since the last synchronization word for a zero-order predictor (ZOP). The result is valid for any data source that can be modeled by a first-order Markov chain and any digital channel that can be modeled by a channel transition matrix. An example is presented.
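The timing-error accumulation described above can be sketched with a small Markov-chain computation; the five-state chain and its transition matrix below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical accumulated-timing-error states (in timing-word units).
# Index 2 corresponds to zero accumulated error.
states = [-2, -1, 0, 1, 2]

# Assumed source/channel transition matrix: each row sums to 1 and gives
# the per-sample probability of moving between accumulated-error states.
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.05, 0.90, 0.05, 0.00, 0.00],
    [0.00, 0.05, 0.90, 0.05, 0.00],
    [0.00, 0.00, 0.05, 0.90, 0.05],
    [0.00, 0.00, 0.00, 0.10, 0.90],
])

def prob_zero_error(P, start_index, n_steps):
    """Probability that the accumulated timing error is zero n_steps after
    the last synchronization word (chain starts in the zero-error state)."""
    dist = np.zeros(P.shape[0])
    dist[start_index] = 1.0
    dist = dist @ np.linalg.matrix_power(P, n_steps)
    return dist[start_index]

# As the abstract notes, the probability of correct reconstruction is a
# nonincreasing function of time since the last synchronization word.
probs = [prob_zero_error(P, 2, n) for n in (0, 10, 40)]
```

For this reversible birth-death chain the return probability decays monotonically toward its stationary value, mirroring the monotone behaviour the abstract describes.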
Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G
2014-10-01
Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, that same study, without accounting for measurement error, reports that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on the interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding.
NASA Astrophysics Data System (ADS)
Ni, Kai; Dong, Hao; Zhou, Qian; Xu, Mingfei; Li, Xinghui; Wu, Guanhao
2015-08-01
Absolute distance measurement using dual femtosecond comb lasers can achieve higher accuracy and faster measurement speed, which makes it increasingly attractive. The data processing flow consists of four steps: interference peak detection, fast Fourier transform (FFT), phase fitting, and compensation for the index of refraction. A real-time data processing system for dual-comb ranging based on a field-programmable gate array (FPGA) has been newly developed. The design and implementation of the interference peak detection algorithm in FPGA hardware using the Verilog language is introduced in this paper; it is the most complicated part of the design and an important guarantee of system precision and reliability. An adaptive sliding window is used to scan for peaks. During detection, the algorithm stores 16 sample data points as a detection unit and calculates the average of each unit. This average determines the vertical center height of the sliding window. The algorithm also estimates the noise intensity of each detection unit and averages the noise strength over 128 successive units. This noise average yields the signal-to-noise ratio of the current working environment, which is used to adjust the height of the sliding window. The adaptive sliding window helps to eliminate fake peaks caused by noise. The whole design is pipelined, which improves the real-time throughput of the peak detection module. It runs at up to 140 MHz in the FPGA, and a peak is detected within 16 clock cycles of its appearance.
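As a software sketch of the adaptive sliding-window detector: the 16-sample detection units and the 128-unit running noise average follow the abstract, while the threshold constant `k` and the use of the per-unit standard deviation as the noise estimate are assumptions.

```python
import numpy as np

UNIT = 16          # samples per detection unit (as in the paper)
NOISE_UNITS = 128  # units averaged for the running noise estimate

def detect_peaks(signal, k=4.0):
    """Software sketch of the FPGA adaptive sliding-window peak detector.
    A sample is reported as a peak when it exceeds the unit average
    (the vertical window center) by k times the running noise estimate
    (the adaptive window height)."""
    peaks = []
    noise_history = []
    for u in range(len(signal) // UNIT):
        unit = signal[u * UNIT:(u + 1) * UNIT]
        center = unit.mean()                           # window center height
        noise_history.append(unit.std())               # per-unit noise estimate
        noise = np.mean(noise_history[-NOISE_UNITS:])  # running 128-unit average
        height = k * max(noise, 1e-12)                 # adaptive window height
        for i, s in enumerate(unit):
            if s - center > height:
                peaks.append(u * UNIT + i)
    return peaks

sig = np.zeros(128)
sig[50] = 10.0
peaks = detect_peaks(sig)  # the isolated spike at index 50 is detected
```

Because the window height scales with the recent noise level, small fluctuations in noisy stretches do not register as peaks, which is the fake-peak rejection the abstract describes.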
Ball, Hope C.; Holmes, Robert K.; Londraville, Richard L.; Thewissen, Johannes G. M.; Duff, Robert Joel
2013-01-01
Leptin is the primary hormone in mammals that regulates adipose stores. Arctic adapted cetaceans maintain enormous adipose depots, suggesting possible modifications of leptin or receptor function. Determining expression of these genes is the first step to understanding the extreme physiology of these animals, and the uniqueness of these animals presents special challenges in estimating and comparing expression levels of mRNA transcripts. Here, we compare expression of two model genes, leptin and leptin-receptor gene-related product (OB-RGRP), using two quantitative real-time PCR (qPCR) methods: “relative” and “absolute”. To assess the expression of leptin and OB-RGRP in cetacean tissues, we first examined how relative expression of those genes might differ when normalized to four common endogenous control genes. We performed relative expression qPCR assays measuring the amplification of these two model target genes relative to amplification of 18S ribosomal RNA (18S), ubiquitously expressed transcript (Uxt), ribosomal protein 9 (Rs9) and ribosomal protein 15 (Rs15) endogenous controls. Results demonstrated significant differences in the expression of both genes when different control genes were employed, emphasizing a limitation of relative qPCR assays, especially in studies where differences in physiology and/or a lack of knowledge regarding levels and patterns of expression of common control genes may affect data interpretation. To validate the absolute quantitative qPCR methods, we evaluated the effects of plasmid structure, the purity of the plasmid standard preparation and the influence of type of qPCR “background” material on qPCR amplification efficiencies and copy number determination of both model genes, in multiple tissues from one male bowhead whale. Results indicate that linear plasmids are more reliable than circular plasmid standards, no significant differences in copy number estimation based upon background material used, and
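The relative-versus-absolute distinction can be made concrete with the standard Livak 2^-ΔΔCt formula and a plasmid standard curve; all Ct values and curve parameters below are hypothetical illustrations, not data from the study.

```python
def relative_expression(ct_target_s, ct_control_s, ct_target_c, ct_control_c):
    """Livak 2^-ddCt relative quantification, sample vs calibrator,
    each normalized to a control gene (assumes ~100% PCR efficiency)."""
    ddct = (ct_target_s - ct_control_s) - (ct_target_c - ct_control_c)
    return 2.0 ** (-ddct)

def absolute_copies(ct, slope, intercept):
    """Copy number from a plasmid standard curve Ct = slope*log10(N) + b."""
    return 10.0 ** ((ct - intercept) / slope)

# Illustrative Ct values: the same leptin measurement in two tissues,
# normalized to two different control genes.
leptin_blubber, leptin_liver = 24.0, 27.0
ctrl18s_blubber, ctrl18s_liver = 12.0, 12.5   # 18S rRNA control
ctrlUxt_blubber, ctrlUxt_liver = 22.0, 20.5   # Uxt control

rel_18s = relative_expression(leptin_blubber, ctrl18s_blubber,
                              leptin_liver, ctrl18s_liver)
rel_uxt = relative_expression(leptin_blubber, ctrlUxt_blubber,
                              leptin_liver, ctrlUxt_liver)
# Different control genes yield different fold changes -- the limitation
# of relative qPCR highlighted in the abstract.

# Absolute quantification instead reads copy number off a standard curve:
copies = absolute_copies(24.0, slope=-3.32, intercept=38.0)
```

The two calls to `relative_expression` disagree simply because the control genes behave differently across tissues, which is exactly why the authors turn to absolute quantification against plasmid standards.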
Gerencser, Akos A; Mookerjee, Shona A; Jastroch, Martin; Brand, Martin D
2016-01-01
The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions.
Influence of measurement errors on temperature-based death time determination.
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2011-07-01
Temperature-based methods represent essential tools in forensic death time determination. Empirical double exponential models have gained wide acceptance because they are highly flexible and simple to handle. The most established model commonly used in forensic practice was developed by Henssge. It contains three independent variables: the body mass, the environmental temperature, and the initial body core temperature. The present study investigates the influence of variations in the input data (environmental temperature, initial body core temperature, core temperature, time) on the standard deviation of the model-based estimates of the time since death. Two different approaches were used for calculating the standard deviation: the law of error propagation and the Monte Carlo method. Errors in environmental temperature measurements as well as deviations of the initial rectal temperature were identified as major sources of inaccuracies in model-based death time estimation.
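The two propagation approaches can be compared on a toy cooling model; a single-exponential Newtonian stand-in replaces Henssge's double-exponential model here, and all measurement standard deviations are assumed values for illustration only.

```python
import numpy as np

def death_time(T_r, T_a, T0=37.2, k=0.08):
    """Time since death (h) from a single-exponential cooling stand-in
    (Henssge's model is double-exponential; this is only illustrative)."""
    return -np.log((T_r - T_a) / (T0 - T_a)) / k

# Measured values and assumed measurement standard deviations
T_r, sd_r = 30.0, 0.2   # rectal (core) temperature
T_a, sd_a = 18.0, 1.0   # environmental temperature
T0, sd_0 = 37.2, 0.5    # initial body core temperature

# Monte Carlo propagation: resample inputs, look at spread of estimates
rng = np.random.default_rng(42)
n = 100_000
t_mc = death_time(rng.normal(T_r, sd_r, n),
                  rng.normal(T_a, sd_a, n),
                  rng.normal(T0, sd_0, n))
sd_mc = t_mc.std()

# Law of error propagation: numerical partial derivatives
eps = 1e-5
dt_dr = (death_time(T_r + eps, T_a, T0) - death_time(T_r, T_a, T0)) / eps
dt_da = (death_time(T_r, T_a + eps, T0) - death_time(T_r, T_a, T0)) / eps
dt_d0 = (death_time(T_r, T_a, T0 + eps) - death_time(T_r, T_a, T0)) / eps
sd_gauss = np.sqrt((dt_dr * sd_r) ** 2 + (dt_da * sd_a) ** 2 + (dt_d0 * sd_0) ** 2)
```

For small input errors the two standard deviations roughly agree; with these assumed uncertainties, the environmental-temperature term contributes the largest share, consistent with the study's conclusion.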
Roughness/error trade-offs in neural network time series models
NASA Astrophysics Data System (ADS)
Gustafson, Steven C.; Little, Gordon R.; Loomis, John S.; Tuthill, Theresa A.
1997-04-01
Radial basis function neural network models of a time series may be developed or trained using samples from the series. Each model is a continuous curve that can be used to represent the series or predict future values. Model development requires a trade-off between a measure of roughness of the curve and a measure of its error relative to the samples. For roughness defined as the root integrated squared second derivative and for error defined as the root sum squared deviation (which are among the most common definitions), an optimal trade-off conjecture is proposed and illustrated. The conjecture states that the curve that minimizes roughness subject to given error is a weighted mean of the least squares line and the natural cubic spline through the samples.
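The conjectured family of optimal curves lends itself to a direct sketch; the sample points below are made up, and `scipy` supplies the natural cubic spline.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def tradeoff_curve(x, y, w):
    """Conjectured roughness/error-optimal model: a weighted mean of the
    least-squares line (w = 1: zero roughness, maximal error) and the
    natural cubic spline through the samples (w = 0: zero error)."""
    slope, intercept = np.polyfit(x, y, 1)          # least-squares line
    spline = CubicSpline(x, y, bc_type='natural')   # natural cubic spline
    return lambda t: w * (slope * t + intercept) + (1 - w) * spline(t)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.3, 2.7, 4.2])

f_interp = tradeoff_curve(x, y, 0.0)  # reproduces the samples exactly
f_smooth = tradeoff_curve(x, y, 1.0)  # the least-squares line itself
```

Sweeping `w` from 0 to 1 traces out the conjectured roughness/error frontier between exact interpolation and the maximally smooth linear fit.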
NASA Astrophysics Data System (ADS)
Phillips, Alfred, Jr.
Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two-postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six-million-year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.
A neighbourhood analysis based technique for real-time error concealment in H.264 intra pictures
NASA Astrophysics Data System (ADS)
Beesley, Steven T. C.; Grecos, Christos; Edirisinghe, Eran
2007-02-01
H.264's extensive use of context-adaptive binary arithmetic and variable-length coding makes streams highly susceptible to channel errors, a common occurrence over networks such as those used by mobile devices. Even a single bit error will cause a decoder to discard all stream data up to the next fixed-length resynchronisation point; in the worst case an entire slice is lost. In cases where retransmission and forward error correction are not possible, a decoder should conceal any erroneous data in order to minimise the impact on the viewer. Stream errors can often be spotted early in the decode cycle of a macroblock; aborting the decode frees processor cycles, which can instead be used to conceal errors at minimal cost, even as part of a real-time system. This paper demonstrates a technique that utilises Sobel convolution kernels to quickly analyse the neighbourhood surrounding erroneous macroblocks before performing a weighted multi-directional interpolation. This generates significantly improved statistical (PSNR) and visual (IEEE structural similarity) results when compared to the commonly used weighted pixel value averaging. Furthermore, it is computationally scalable, both during analysis and concealment, achieving maximum performance from the spare processing power available.
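The neighbourhood-analysis step can be illustrated with the standard Sobel kernels; the full weighted multi-directional interpolation of the paper is omitted, and the image and pixel location below are made up.

```python
import numpy as np

# Standard Sobel convolution kernels for horizontal and vertical gradients,
# used to estimate edge strength and direction in the neighbourhood of a
# lost macroblock.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def gradient_at(img, r, c):
    """Edge strength and orientation at pixel (r, c) from its 3x3 patch."""
    patch = img[r - 1:r + 2, c - 1:c + 2]
    gx = float((patch * SOBEL_X).sum())
    gy = float((patch * SOBEL_Y).sum())
    return np.hypot(gx, gy), np.arctan2(gy, gx)

# A hard vertical edge: the horizontal kernel responds, the vertical does not.
img = np.zeros((5, 5))
img[:, 3:] = 255.0
strength, angle = gradient_at(img, 2, 2)
```

A concealment pass would evaluate such gradients along the border of the erroneous macroblock and weight the interpolation directions toward the dominant edge orientation, rather than averaging all neighbours equally.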
Mitigation of Second-Order Ionospheric Error for Real-Time PPP Users in Europe
NASA Astrophysics Data System (ADS)
Abdelazeem, Mohamed
2016-07-01
Currently, the international global navigation satellite system (GNSS) real-time service (IGS-RTS) products are used extensively for real-time precise point positioning and ionosphere modeling applications. The major challenge of dual-frequency real-time precise point positioning (RT-PPP) is that the solution requires a relatively long time to converge to centimeter-level accuracy. This long convergence time results essentially from the un-modeled high-order ionospheric errors. To overcome this challenge, a method for mitigating the second-order ionospheric delay, which represents the bulk of the high-order ionospheric errors, is proposed for RT-PPP users in Europe. A real-time regional ionospheric model (RT-RIM) over Europe is developed using the IGS-RTS precise satellite orbit and clock products. GPS observations from a regional network consisting of 60 IGS and EUREF reference stations are processed using the Bernese 5.2 software package in order to extract the real-time vertical total electron content (RT-VTEC). The proposed RT-RIM has a spatial resolution of 1°×1° and a temporal resolution of 15 minutes. In order to investigate the effect of the second-order ionospheric delay on the RT-PPP solution, new GPS data sets from other reference stations, selected to represent different latitudes, are used. The GPS observations are corrected for the second-order ionospheric error using the extracted RT-VTEC values. In addition, the IGS-RTS precise orbit and clock products are used to account for the satellite orbit and clock errors. It is shown that both the RT-PPP convergence time and the positioning accuracy improve when the second-order ionospheric delay is accounted for.
ERIC Educational Resources Information Center
Linderholm, Tracy; Zhao, Qin
2008-01-01
Working-memory capacity, strategy instruction, and timing of estimates were investigated for their effects on absolute monitoring accuracy, which is the difference between estimated and actual reading comprehension test performance. Participants read two expository texts under one of two randomly assigned reading strategy instruction conditions…
NASA Astrophysics Data System (ADS)
Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.
2011-12-01
The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which is known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ~16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.
NASA Astrophysics Data System (ADS)
Schwatke, Christian; Dettmering, Denise; Boergens, Eva
2015-04-01
compare our results with gauges and external inland altimeter databases (e.g. Hydroweb). We obtain very high correlations between absolute water level height time series from altimetry and gauges. Moreover, the comparisons of water level heights are also used for validating the error assessment. More than 200 water level time series have already been computed and made publicly available via the "Database for Hydrological Time Series of Inland Waters" (DAHITI) at http://dahiti.dgfi.tum.de .
Time that tells: critical clock-drawing errors for dementia screening
Lessig, Mary C.; Scanlan, James M.; Nazemi, Hamid; Borson, Soo
2009-01-01
Background Clock-drawing tests are popular components of dementia screens but no single scoring system has been universally accepted. We sought to identify an optimal subset of clock errors for dementia screening and compare them with three other systems representative of the existing wide variations in approach (Shulman, Mendez, Wolf-Klein), as well as with the CDT system used in the Mini-Cog, which combines clock drawing with delayed recall. Methods The clock drawings of an ethnolinguistically and educationally diverse sample (N = 536) were analyzed for the association of 24 different errors with the presence and severity of dementia defined by independent research criteria. The final sample included 364 subjects with ≥5 years of education, as preliminary examination suggested different error patterns in subjects with 0–4 years of education and inadequate numbers of normal controls for reliable analysis. Results Eleven of 24 errors were significantly associated with dementia in subjects with ≥5 years of education, and six were combined to identify dementia with 88% specificity and 71% sensitivity: inaccurate time setting, no hands, missing numbers, number substitutions or repetitions, or refusal to attempt clock drawing. Time setting was the most prevalent error at all dementia stages; refusal occurred only in moderate and severe dementia; and ethnicity and language of administration had no effect. All critical errors increased in frequency with dementia stage. This simplified scoring system had much better specificity than two other systems (88% vs 39% for Mendez's system and 63% for Shulman's) and much better sensitivity than Wolf-Klein's (71% vs 51%). Stepwise logistic regression found the simplified system to be more strongly predictive of dementia than the three other CDT systems. Substituting the new CDT algorithm for that used in the original Mini-Cog improved the Mini-Cog's specificity from 89% to 93% with minimal change in
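The simplified scoring can be sketched as a rule-based screen; the decision rule (any critical error flags the screen positive) and the error-code names are assumptions for illustration, since the abstract does not give the exact threshold.

```python
# The critical clock-drawing errors named in the abstract, as hypothetical
# error codes. The any-error decision rule below is an assumed threshold,
# not taken from the paper.
CRITICAL_ERRORS = {
    "inaccurate_time_setting",
    "no_hands",
    "missing_numbers",
    "number_substitutions",
    "number_repetitions",
    "refusal",
}

def screen_positive(observed_errors):
    """Return True if any critical clock-drawing error is present in the
    set of observed error codes for one drawing."""
    return bool(CRITICAL_ERRORS & set(observed_errors))

flagged = screen_positive({"no_hands", "extra_marks"})      # critical error present
not_flagged = screen_positive({"extra_marks"})              # non-critical only
```

A Mini-Cog-style screen would combine this clock result with the delayed-recall score before classifying the subject.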
5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Claims for correction of Board or TSP record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD CORRECTION OF ADMINISTRATIVE ERRORS Board or TSP Record Keeper Errors § 1605.22 Claims for correction of Board or...
NASA Astrophysics Data System (ADS)
Dumont, Gaël; Pilawski, Tamara; Robert, Tanguy; Hermans, Thomas; Garré, Sarah; Nguyen, Frederic
2016-04-01
Electrical resistivity tomography (ERT) is a suitable method to estimate the water content of waste material and to detect changes in water content. Various ERT profiles, both static and time-lapse, were acquired on a landfill during the Minerve project. In the literature, the relative change of resistivity (Δρ/ρ) is generally computed. For saline or heat tracer tests in the saturated zone, Δρ/ρ can easily be translated into pore water conductivity or underground temperature changes (provided that the initial salinity or temperature condition is homogeneous over the ERT panel extension). For water content changes in the vadose zone resulting from an infiltration event or injection experiment, many authors also work with Δρ/ρ or the relative change of water content Δθ/θ (linked to the change of resistivity through a single parameter: the Archie's law exponent "m"). This quantity is not influenced by the underground temperature and pore fluid conductivity (ρw) conditions, but it is influenced by the initial water content distribution. Therefore, one cannot tell whether the loss of the Δθ/θ signal marks the limit of the infiltration front or simply wetter initial conditions. Another approach to understanding the infiltration process is to assess the absolute change of water content (Δθ). This requires the direct computation of the water content of the waste from the resistivity data. For that purpose, we used petrophysical laws calibrated with laboratory experiments and our knowledge of the in situ temperature and pore fluid conductivity. We then investigated water content changes in the waste material after a rainfall event (Δθ = (Δθ/θ)·θ). This new observation is truly representative of the quantity of water that infiltrated the waste material. However, the uncertainty in the pore fluid conductivity value may influence the computed water content changes (Δθ = k·ρw^(1/m), where "m" is the Archie's law exponent
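The role of the Archie exponent can be shown with a simplified Archie-type law, rho = k * rho_w * theta^(-m); the calibration constants `k` and `m` and the resistivity values below are assumed for illustration, not the project's values.

```python
def water_content(rho, rho_w, k=1.0, m=1.5):
    """Water content from bulk resistivity via a simplified Archie-type law
    rho = k * rho_w * theta**(-m); k and m are assumed calibration values."""
    return (k * rho_w / rho) ** (1.0 / m)

def rel_theta_change(rel_rho_change, m=1.5):
    """Relative water-content change from a relative resistivity change.
    Depends only on the Archie exponent m, not on rho_w or temperature."""
    return (1.0 + rel_rho_change) ** (-1.0 / m) - 1.0

# The absolute change instead requires the pore fluid conductivity (via
# rho_w) to compute theta itself, as the abstract argues:
theta_before = water_content(rho=40.0, rho_w=2.0)  # before the rainfall
theta_after = water_content(rho=30.0, rho_w=2.0)   # after the rainfall
d_theta = theta_after - theta_before               # absolute change
```

Because `rel_theta_change` needs only `m`, it is immune to the pore-fluid uncertainty, but as the abstract notes it cannot separate a weak infiltration signal from wetter initial conditions; the absolute `d_theta` can, at the price of needing `rho_w`.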
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, accurate type I error rates, and acceptable coverage rates, regardless of the true random-effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
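The cluster-bootstrap resampling logic can be sketched with a least-squares slope standing in for the Cox coefficient (the resampling step is the same either way); the simulated data, effect sizes, and the stand-in estimator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated clustered data with a cluster-level covariate and a latent
# cluster effect that a naive (unclustered) analysis would ignore.
n_clusters, m = 40, 25
x_cl = rng.normal(size=n_clusters)       # cluster-level covariate
u = rng.normal(0.0, 1.0, n_clusters)     # latent cluster random effect
y = (0.5 * x_cl + u)[:, None] + rng.normal(size=(n_clusters, m))

def slope(xc, yc):
    """Least-squares slope over the pooled individual observations
    (stand-in for the Cox regression coefficient)."""
    xs = np.repeat(xc, yc.shape[1])
    return np.polyfit(xs, yc.ravel(), 1)[0]

def cluster_bootstrap_se(xc, yc, n_boot=500):
    """Resample whole clusters with replacement and recompute the slope;
    the SD of the replicates estimates the cluster-robust SE."""
    est = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(xc), len(xc))
        est[b] = slope(xc[idx], yc[idx])
    return est.std(ddof=1)

se_boot = cluster_bootstrap_se(x_cl, y)
```

Because entire clusters are resampled, the within-cluster correlation is preserved in every replicate, which is what protects the SE from the under-estimation the abstract describes; the two-step variant would add a second resampling of individuals inside each drawn cluster.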
Error correction in short time steps during the application of quantum gates
NASA Astrophysics Data System (ADS)
de Castro, L. A.; Napolitano, R. d. J.
2016-04-01
We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to interaction with a noisy environment during quantum gates, without modifying the encoding used for memory qubits. Using a perturbative treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps interleaved with correction procedures. A prescription for how these gates can be constructed is provided, as well as a proof that, even in cases where dividing the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.
Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics
NASA Technical Reports Server (NTRS)
Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)
2000-01-01
This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
A time dependent approach for removing the cell boundary error in elliptic homogenization problems
NASA Astrophysics Data System (ADS)
Arjmand, Doghonay; Runborg, Olof
2016-06-01
This paper concerns the cell-boundary error present in multiscale algorithms for elliptic homogenization problems. Typical multiscale methods have two essential components: a macro and a micro model. The micro model is used to upscale parameter values which are missing in the macro model. To solve the micro model, boundary conditions are required on the boundary of the microscopic domain. Imposing a naive boundary condition leads to O(ε/η) error in the computation, where ε is the size of the microscopic variations in the media and η is the size of the micro-domain. The removal of this error in modern multiscale algorithms remains an important open problem. In this paper, we present a time-dependent approach which is general in terms of dimension. We provide a theorem which shows that we have arbitrarily high order convergence rates in terms of ε/η in the periodic setting. Additionally, we present numerical evidence showing that the method improves the O(ε/η) error to O(ε) in general non-periodic media.
Tracking a Quantum Error Syndrome in Real Time: Quantum Jumps of Photon Parity
NASA Astrophysics Data System (ADS)
Schoelkopf, Robert
2015-03-01
Dramatic progress has been made in the last decade and a half towards realizing solid-state systems for quantum information processing with superconducting quantum circuits. Artificial atoms (or qubits) based on Josephson junctions have improved their coherence times more than 100,000-fold, have been entangled, and used to perform simple quantum algorithms. The next challenge for the field is demonstrating quantum error correction that actually improves the lifetimes, a necessary step for building more complex systems. I will describe recent experiments with superconducting circuits, where we store quantum information in the form of Schrodinger cat states of a microwave cavity, containing up to 100 photons. Using an ancilla qubit, we then monitor the gradual death of these cats, photon by photon, by observing the first jumps of photon number parity. This represents the first continuous observation of a quantum error syndrome, and may enable new approaches to quantum information based on photonic qubits. The performance of this error-monitoring system and the prospects for reaching "breakeven," where quantum error correction improves the lifetime of stored information, will be discussed. This work was performed with many collaborators at Yale University, and supported by the Army Research Office, the Laboratory for Physical Science, and the NSF.
NASA Astrophysics Data System (ADS)
Weaver, J. L.; Feldman, U.; Seely, J. F.; Holland, G.; Serlin, V.; Klapisch, M.; Columbant, D.; Mostovych, A.
2001-12-01
Accurate simulation of pellet implosions for direct drive inertial confinement fusion requires benchmarking the codes with experimental data. The Naval Research Laboratory (NRL) has begun to measure the absolute intensity of radiation from laser irradiated targets to provide critical information for the radiatively preheated pellet designs developed by the Nike laser group. The main diagnostics for this effort are two spectrometers incorporating three detection systems. While both spectrometers use 2500 lines/mm transmission gratings, one instrument is coupled to a soft x-ray streak camera and the other is coupled to both an absolutely calibrated Si photodiode array and a charge coupled device (CCD) camera. Absolute calibration of spectrometer components has been undertaken at the National Synchrotron Light Source at Brookhaven National Laboratory. Currently, the system has been used to measure the spatially integrated soft x-ray flux as a function of target material, laser power, and laser spot size. A comparison between measured and calculated flux for Au and CH targets shows reasonable agreement with one-dimensional modeling for two laser power densities.
Error Analysis of the IGS repro2 Station Position Time Series
NASA Astrophysics Data System (ADS)
Rebischung, P.; Ray, J.; Benoist, C.; Metivier, L.; Altamimi, Z.
2015-12-01
Eight Analysis Centers (ACs) of the International GNSS Service (IGS) have completed a second reanalysis campaign (repro2) of the GNSS data collected by the IGS global tracking network back to 1994, using the latest available models and methodology. The AC repro2 contributions include, in particular, daily terrestrial frame solutions, for the first time with sub-weekly resolution for the full IGS history. The AC solutions, comprising positions for 1848 stations with daily polar motion coordinates, were combined to form the IGS contribution to the next release of the International Terrestrial Reference Frame (ITRF2014). Inter-AC position consistency is excellent, about 1.5 mm horizontal and 4 mm vertical. The resulting daily combined frames were then stacked into a long-term cumulative frame assuming generally linear motions, which constitutes the GNSS input to the ITRF2014 inter-technique combination. A special challenge involved identifying the many position discontinuities, averaging about 1.8 per station. A stacked periodogram of the station position residual time series from this long-term solution reveals a number of unexpected spectral lines (harmonics of the GPS draconitic year, fortnightly tidal lines) on top of a white+flicker background noise and strong seasonal variations. In this study, we will present results from station- and AC-specific analyses of the noise and periodic errors present in the IGS repro2 station position time series. So as to better understand their sources, and in view of developing a spatio-temporal error model, we will focus in particular on the spatial distribution of the noise characteristics and of the periodic errors. By computing AC-specific long-term frames and analyzing the respective residual time series, we will additionally study how the characteristics of the noise and of the periodic errors depend on the adopted analysis strategy and reduction software.
SEPARABLE RESPONSES TO ERROR, AMBIGUITY, AND REACTION TIME IN CINGULO-OPERCULAR TASK CONTROL REGIONS
Neta, Maital; Schlaggar, Bradley L.; Petersen, Steven E.
2014-01-01
The dorsal anterior cingulate (dACC), along with the closely affiliated anterior insula/frontal operculum, has been demonstrated to show three types of task control signals across a wide variety of tasks. One of these signals, a transient signal that is thought to represent performance feedback, shows greater activity to error than correct trials. Other work has found similar effects for uncertainty/ambiguity or conflict, though some argue that dACC activity is, instead, modulated primarily by other processes more reflected in reaction time. Here, we demonstrate that, rather than a single explanation, multiple information processing operations are crucial to characterizing the function of these brain regions, by comparing operations within a single paradigm. Participants performed two tasks in an fMRI experimental session: (1) deciding whether or not visually presented word pairs rhyme, and (2) rating auditorily presented single words as abstract or concrete. A pilot was used to identify ambiguous stimuli for both tasks (e.g., word pair: BASS/GRACE; single word: CHANGE). We found greater cingulo-opercular activity for errors and ambiguous trials than clear/correct trials, with a robust effect of reaction time. The effects of error and ambiguity remained when reaction time was regressed out, although the differences decreased. Further stepwise regression of response consensus (agreement across participants for each stimulus; a proxy for ambiguity) decreased differences between ambiguous and clear trials, but left error-related differences almost completely intact. These observations suggest that trial-wise responses in cingulo-opercular regions monitor multiple performance indices, including accuracy, ambiguity, and reaction time. PMID:24887509
PULSAR TIMING ERRORS FROM ASYNCHRONOUS MULTI-FREQUENCY SAMPLING OF DISPERSION MEASURE VARIATIONS
Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Dolch, T.
2015-03-10
Free electrons in the interstellar medium cause frequency-dependent delays in pulse arrival times due to both scattering and dispersion. Multi-frequency measurements are used to estimate and remove dispersion delays. In this paper, we focus on the effect of any non-simultaneity of multi-frequency observations on dispersive delay estimation and removal. Interstellar density variations combined with changes in the line of sight from pulsar and observer motions cause dispersion measure (DM) variations with an approximately power-law power spectrum, augmented in some cases by linear trends. We simulate time series, estimate the magnitude and statistical properties of timing errors that result from non-simultaneous observations, and derive prescriptions for data acquisition that are needed in order to achieve a specified timing precision. For nearby, highly stable pulsars, measurements need to be simultaneous to within about one day in order for the timing error from asynchronous DM correction to be less than about 10 ns. We discuss how timing precision improves when increasing the number of dual-frequency observations used in DM estimation for a given epoch. For a Kolmogorov wavenumber spectrum, we find about a factor of two improvement in precision timing when increasing from two to three observations but diminishing returns thereafter.
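The dual-frequency correction described in this abstract rests on the cold-plasma dispersion law, in which the delay scales as DM/ν². A minimal Python sketch of that inversion (function and variable names are mine, not the paper's):

```python
# Illustrative sketch (not the paper's code): the cold-plasma dispersion
# delay scales as DM / nu^2, so a pair of arrival times at two radio
# frequencies yields a DM estimate. K_DM is the standard dispersion constant.
K_DM = 4.148808e3  # s MHz^2 / (pc cm^-3)

def dispersive_delay(dm, freq_mhz):
    """Dispersion delay in seconds at freq_mhz for dispersion measure dm."""
    return K_DM * dm / freq_mhz ** 2

def estimate_dm(t1, t2, f1_mhz, f2_mhz):
    """Infer DM from the arrival-time difference between two frequencies."""
    return (t1 - t2) / (K_DM * (1.0 / f1_mhz ** 2 - 1.0 / f2_mhz ** 2))

dm_true = 30.0  # pc cm^-3
t_low = dispersive_delay(dm_true, 400.0)    # extra delay at 400 MHz, ~0.78 s
t_high = dispersive_delay(dm_true, 1400.0)
dm_est = estimate_dm(t_low, t_high, 400.0, 1400.0)
```

Because the delay difference between two bands is linear in DM, a simultaneous pair of arrival times inverts exactly; the paper's point is that when the two frequencies are observed at different epochs, any DM variation between those epochs leaks into the estimate as a timing error.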
NASA Astrophysics Data System (ADS)
Hirose, Kenichiro; Manzawa, Yasuo; Goshima, Masahiro; Sakai, Shuichi
2008-04-01
With the continuous downscaling of transistors, process variation and power consumption have become major issues. Dynamic voltage and frequency scaling (DVFS) with in-situ timing-error monitoring is an effective method that addresses both issues. However, the conventional implementations of this method, which are mainly based on duplicated circuits, have some implementation-specific constraints. In this paper, the authors propose a delay-compensation flip-flop (DCFF) that does not use duplicated circuit components. It monitors timing errors by directly checking the transient timings of signals. The DCFF adjusts the rising-edge timings of the clock to avoid timing errors and compensates for the timing margins between successive stages. Simulations using simulation program with integrated circuit emphasis (SPICE) indicated that the DCFF can operate in a wider supply voltage range than the conventional implementation of DVFS with in-situ timing-error monitoring. A 2.5 × 2.5 mm² test chip was designed by using a 0.18 µm 5-metal process. An essential circuit component of the DCFF was implemented using semi-custom gate-array chips and its operation was verified. Although more detailed and varied simulations and actual measurements are required as future work, DCFFs can be effectively applied to process-variation tolerance and low-power computation and to optimize the design margin and resolve the false-path problem.
Real-Time Baseline Error Estimation and Correction for GNSS/Strong Motion Seismometer Integration
NASA Astrophysics Data System (ADS)
Li, C. Y. N.; Groves, P. D.; Ziebart, M. K.
2014-12-01
Accurate and rapid estimation of permanent surface displacement is required immediately after a slip event for earthquake monitoring or tsunami early warning. It is difficult to achieve the necessary accuracy and precision at high and low frequencies using GNSS or seismometry alone. GNSS and seismic sensors can be integrated to overcome the limitations of each. Kalman filter algorithms with displacement and velocity states have been developed to combine GNSS and accelerometer observations to obtain the optimal displacement solutions. However, the sawtooth-like phenomena caused by the bias or tilting of the sensor decrease the accuracy of the displacement estimates. A three-dimensional Kalman filter algorithm with an additional baseline error state has been developed. An experiment with both a GNSS receiver and a strong motion seismometer mounted on a movable platform and subjected to known displacements was carried out. The results clearly show that the additional baseline error state enables the Kalman filter to estimate the instrument's sensor bias and tilt effects and correct the state estimates in real time. Furthermore, the proposed Kalman filter algorithm has been validated with data sets from the 2010 Mw 7.2 El Mayor-Cucapah Earthquake. The results indicate that the additional baseline error state can not only eliminate the linear and quadratic drifts but also reduce the sawtooth-like effects from the displacement solutions. The conventional zero-mean baseline-corrected results cannot show the permanent displacements after an earthquake; the two-state Kalman filter can only provide stable and optimal solutions if the strong motion seismometer had not been moved or tilted by the earthquake. Yet the proposed Kalman filter can achieve precise and accurate displacements by estimating and correcting for the baseline error at each epoch. The integration filters out noise-like distortions and thus improves the real-time detection and measurement capability.
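A minimal sketch of the three-state filter idea summarized above, with states for displacement, velocity, and a baseline (accelerometer bias) error; the matrices, noise levels, and the stationary-platform scenario are illustrative assumptions, not the authors' tuning:

```python
import numpy as np

# Sketch of a Kalman filter with state [displacement, velocity, baseline],
# where "baseline" is the accelerometer bias. The measured acceleration
# drives the prediction and GNSS displacement is the measurement.
dt = 0.01
F = np.array([[1.0, dt, -0.5 * dt**2],
              [0.0, 1.0, -dt],
              [0.0, 0.0, 1.0]])
G = np.array([0.5 * dt**2, dt, 0.0])  # input matrix for measured acceleration
H = np.array([[1.0, 0.0, 0.0]])       # GNSS observes displacement only
Q = np.diag([1e-8, 1e-6, 1e-8])       # process noise (illustrative)
R = np.array([[1e-4]])                # GNSS noise, (1 cm)^2 in m^2

x = np.zeros(3)
P = np.eye(3)

bias = 0.1                            # true accelerometer baseline error, m/s^2
for _ in range(5000):
    a_meas = 0.0 + bias               # platform at rest; sensor reads its bias
    x = F @ x + G * a_meas            # predict
    P = F @ P @ F.T + Q
    z = np.array([0.0])               # true displacement is zero
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + (K @ (z - H @ x)).ravel() # update
    P = (np.eye(3) - K @ H) @ P
```

With the baseline state included, the filter attributes the drift that the bias would otherwise induce in the integrated displacement to `x[2]`, which converges towards the true bias while the displacement estimate stays near zero; without that state, the same run would show the quadratic drift the abstract describes.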
Mixed control for perception and action: timing and error correction in rhythmic ball-bouncing.
Siegler, I A; Bazile, C; Warren, W H
2013-05-01
The task of bouncing a ball on a racket was adopted as a model system for investigating the behavioral dynamics of rhythmic movement, specifically how perceptual information modulates the dynamics of action. Two experiments, with sixteen participants each, were carried out to definitively answer the following questions: How are passive stability and active stabilization combined to produce stable behavior? What informational quantities are used to actively regulate the two main components of the action: the timing of racket oscillation and the correction of errors in bounce height? We used a virtual ball-bouncing setup to simultaneously perturb gravity (g) and ball launch velocity (v_b) at impact. In Experiment 1, we tested the control of racket timing by varying the ball's upward half-period t_up while holding its peak height h_p constant. Conversely, in Experiment 2, we tested error correction by varying h_p while holding t_up constant. Participants adopted a mixed control mode in which information in the ball's trajectory is used to actively stabilize behavior on a cycle-by-cycle basis, in order to keep the system within or near the passively stable region. The results reveal how these adjustments are visually controlled: the period of racket oscillation is modulated by the half-period of the ball's upward flight, and the change in racket velocity from the previous impact (via a change in racket amplitude) is governed by the error to the target. PMID:23515627
Finite-approximation-error-based discrete-time iterative adaptive dynamic programming.
Wei, Qinglai; Wang, Fei-Yue; Liu, Derong; Yang, Xiong
2014-12-01
In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The generalized value iteration algorithm permits an arbitrary positive semi-definite function to initialize it, which overcomes the disadvantage of traditional value iteration algorithms. When the iterative control law and iterative performance index function cannot be obtained accurately in each iteration, a new "design method of the convergence criteria" for the finite-approximation-error-based generalized value iteration algorithm is established for the first time. A suitable approximation error can be designed adaptively to make the iterative performance index function converge to a finite neighborhood of the optimal performance index function. Neural networks are used to implement the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the developed method. PMID:25265640
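The generalized-initialization idea, that value iteration may start from an arbitrary positive semi-definite (here simply nonnegative) function, can be illustrated on a toy deterministic problem; this tabular stand-in is mine and is not the paper's neural-network ADP setting:

```python
# Toy deterministic control problem: states 0, 1, 2; from state s you may
# "stay" or move "right" (to min(s+1, 2)); each step from a non-terminal
# state costs 1; state 2 is terminal with cost-to-go 0. Value iteration
# started from an arbitrary nonnegative initial function still converges
# to the optimal cost-to-go V* = {0: 2, 1: 1, 2: 0}.
def step(s, a):
    return min(s + 1, 2) if a == "right" else s

V = {0: 5.0, 1: 3.0, 2: 0.0}  # arbitrary nonnegative initialization
for _ in range(50):
    V_new = {2: 0.0}
    for s in (0, 1):
        V_new[s] = min(1.0 + V[step(s, a)] for a in ("stay", "right"))
    V = V_new
```

Starting from {0: 5, 1: 3} rather than zero, the iterates still contract onto the optimal values within a few sweeps, which is the property the paper generalizes to the approximate, neural-network-implemented case.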
Effects of dating errors on nonparametric trend analyses of speleothem time series
NASA Astrophysics Data System (ADS)
Mudelsee, M.; Fohlmeister, J.; Scholz, D.
2012-10-01
A fundamental problem in paleoclimatology is to take fully into account the various error sources when examining proxy records with quantitative methods of statistical time series analysis. Records from dated climate archives such as speleothems add extra uncertainty from the age determination to the other sources that consist in measurement and proxy errors. This paper examines three stalagmite time series of oxygen isotopic composition (δ18O) from two caves in western Germany, the series AH-1 from the Atta Cave and the series Bu1 and Bu4 from the Bunker Cave. These records carry regional information about past changes in winter precipitation and temperature. U/Th and radiocarbon dating reveals that they cover the later part of the Holocene, the past 8.6 thousand years (ka). We analyse centennial- to millennial-scale climate trends by means of nonparametric Gasser-Müller kernel regression. Error bands around fitted trend curves are determined by combining (1) block bootstrap resampling to preserve noise properties (shape, autocorrelation) of the δ18O residuals and (2) timescale simulations (models StalAge and iscam). The timescale error influences on centennial- to millennial-scale trend estimation are not excessively large. We find a "mid-Holocene climate double-swing", from warm to cold to warm winter conditions (6.5 ka to 6.0 ka to 5.1 ka), with warm-cold amplitudes of around 0.5‰ δ18O; this finding is documented by all three records with high confidence. We also quantify the Medieval Warm Period (MWP), the Little Ice Age (LIA) and the current warmth. Our analyses cannot unequivocally support the conclusion that current regional winter climate is warmer than that during the MWP.
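The Gasser-Müller kernel regression used for the trend estimates can be sketched as follows; this is a textbook Gaussian-kernel version with a midpoint interval convention, and all names and the demo data are illustrative, not the paper's implementation:

```python
import math

# Sketch of a Gasser-Mueller kernel trend estimator with a Gaussian
# kernel of bandwidth h. Each observation is weighted by the kernel mass
# over its time interval, which suits unevenly spaced proxy series.
def _Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gasser_mueller(t, times, y, h):
    """Trend estimate at time t from observations (times, y), times sorted."""
    n = len(times)
    # Interval boundaries: midpoints between observation times, extended
    # half an interval beyond each end.
    s = [times[0] - (times[1] - times[0]) / 2.0]
    s += [(times[i] + times[i + 1]) / 2.0 for i in range(n - 1)]
    s += [times[-1] + (times[-1] - times[-2]) / 2.0]
    est = 0.0
    for i in range(n):
        # Kernel mass over the i-th interval, as a difference of CDFs.
        w = _Phi((t - s[i]) / h) - _Phi((t - s[i + 1]) / h)
        est += w * y[i]
    return est

# Demo: a constant series is reproduced at an interior point.
trend = gasser_mueller(5.0, list(range(11)), [2.0] * 11, 1.0)
```

The timescale-simulation step in the paper would then rerun this estimator over age-model realizations (e.g. from StalAge or iscam) and bootstrap replicates to build the error bands.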
Absolute calibration of optical flats
Sommargren, Gary E.
2005-04-05
The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
Accounting for baseline differences and measurement error in the analysis of change over time.
Braun, Julia; Held, Leonhard; Ledergerber, Bruno
2014-01-15
If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. PMID:23900718
Zhou, Hui; Kunz, Thomas; Schwartz, Howard
2011-01-01
Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit more inaccurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically and comparison results between the analytical and simulated upper bound are provided. The results show that the analytical upper bound can serve as a practical guide for system designers. PMID:21244973
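For a linear-in-parameters model, the recursive prediction-error identification mentioned in the abstract reduces to recursive least squares; a minimal sketch on synthetic noise-free data (the oscillator frequency-stability model itself is not reproduced here):

```python
import numpy as np

# Minimal recursive least-squares (RLS) sketch, a common special case of
# recursive prediction-error identification for a model y = phi . theta.
# Names and the synthetic data are illustrative.
def rls_identify(phis, ys, n_params, lam=1.0):
    theta = np.zeros(n_params)
    P = 1e3 * np.eye(n_params)        # large initial covariance
    for phi, y in zip(phis, ys):
        denom = lam + phi @ P @ phi
        k = (P @ phi) / denom         # gain vector
        theta = theta + k * (y - phi @ theta)  # prediction-error update
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -0.5])
phis = rng.standard_normal((200, 2))
ys = phis @ theta_true                # noise-free regression data
theta_hat = rls_identify(phis, ys, 2)
```

The forgetting factor `lam` (here 1.0, i.e. no forgetting) is what would let such an estimator track slow drift in oscillator parameters during locked mode.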
Delays without Mistakes: Response Time and Error Distributions in Dual-Task
Kamienkowski, Juan Esteban; Sigman, Mariano
2008-01-01
Background When two tasks are presented within a short interval, a delay in the execution of the second task has been systematically observed. Psychological theorizing has argued that while sensory and motor operations can proceed in parallel, the coordination between these modules establishes a processing bottleneck. This model predicts that the timing but not the characteristics (duration, precision, variability…) of each processing stage are affected by interference. Thus, a critical test of this hypothesis is to explore whether the quality of the decision is unaffected by a concurrent task. Methodology/Principal Findings In number comparison, as in most decision comparison tasks with a scalar measure of the evidence, the extent to which two stimuli can be discriminated is determined by their ratio, referred to as the Weber fraction. We investigated performance in a rapid succession of two non-symbolic comparison tasks (number comparison and tone discrimination) in which error rates in both tasks could be manipulated parametrically from chance to almost perfect. We observed that dual-task interference has a massive effect on RT but does not affect the error rates, or the distribution of errors as a function of the evidence. Conclusions/Significance Our results imply that while the decision process itself is delayed during multiple task execution, its workings are unaffected by task interference, providing strong evidence in favor of a sequential model of task execution. PMID:18787706
Dynamic time warping in phoneme modeling for fast pronunciation error detection.
Miodonska, Zuzanna; Bugdol, Marcin D; Krecichwost, Michal
2016-02-01
The presented paper describes a novel approach to the detection of pronunciation errors. It makes use of the modeling of well-pronounced and mispronounced phonemes by means of the Dynamic Time Warping (DTW) algorithm. Four approaches that make use of the DTW phoneme modeling were developed to detect pronunciation errors: Variations of the Word Structure (VoWS), Normalized Phoneme Distances Thresholding (NPDT), Furthest Segment Search (FSS) and Normalized Furthest Segment Search (NFSS). The performance evaluation of each module was carried out using a speech database of correctly and incorrectly pronounced words in the Polish language, with up to 10 patterns of every trained word from a set of 12 words having different phonetic structures. The performance of DTW modeling was compared to Hidden Markov Models (HMM) that were used for the same four approaches (VoWS, NPDT, FSS, NFSS). The average error rate (AER) was the lowest for DTW with NPDT (AER=0.287) and scored better than HMM with FSS (AER=0.473), which was the best result for HMM. The DTW modeling was faster than HMM for all four approaches. This technique can be used for computer-assisted pronunciation training systems that can work with a relatively small training speech corpus (less than 20 patterns per word) to support speech therapy at home. PMID:26739104
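The core DTW computation behind this approach is the standard dynamic-programming recurrence; in a pronunciation-training system the sequences would be frames of acoustic features rather than the scalar values used in this illustrative sketch:

```python
# Minimal dynamic time warping (DTW) distance between two sequences.
# D[i][j] holds the cheapest alignment cost of the first i elements of a
# with the first j elements of b.
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # advance a only
                                 D[i][j - 1],      # advance b only
                                 D[i - 1][j - 1])  # advance both
    return D[n][m]
```

Because the warping path can stretch one sequence against the other, a repeated segment aligns at zero cost (e.g. `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0.0), which is what lets a small set of stored patterns absorb timing variability in the learner's speech.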
Xue, Hongqi; Miao, Hongyu; Wu, Hulin
2010-01-01
This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge–Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^{-1/(p∧4)}, the numerical error is negligible compared to the measurement error. This result provides theoretical guidance in selection of the step size for numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as that of the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics. PMID:21132064
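The numerical solution-based NLS idea can be illustrated with a classical fourth-order Runge-Kutta solver and a simple exponential-decay ODE; a coarse grid search stands in for a proper NLS optimizer, and the model and all names are stand-ins rather than anything from the article:

```python
import math

# Sketch of numerical-solution-based NLS: dy/dt = -theta * y has no fit
# assumed in closed form; it is solved by classical 4th-order Runge-Kutta,
# and theta is chosen to minimize the sum of squared errors to the data.
def rk4(f, y0, ts, h):
    """Return y at each time in ts (ts must lie on the step grid of size h)."""
    y, t = y0, ts[0]
    out = [y0]
    for t_next in ts[1:]:
        while t < t_next - 1e-12:
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        out.append(y)
    return out

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
data = [math.exp(-0.5 * t) for t in ts]   # noise-free data, true theta = 0.5

def sse(theta, h=0.05):
    pred = rk4(lambda t, y: -theta * y, 1.0, ts, h)
    return sum((p - d) ** 2 for p, d in zip(pred, data))

theta_hat = min((0.30 + 0.01 * i for i in range(41)), key=sse)
```

The article's step-size result says, in this setting, how small `h` must be (relative to the sample size) before the RK4 discretization error stops mattering next to the measurement noise.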
Measurement error in time-series analysis: a simulation study comparing modelled and monitored data
2013-01-01
Background Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Methods Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003–2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban log_e(daily 1-hour maximum NO2). Results When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background log_e(NO2) and 38% for rural log_e(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural log_e(NO2) but more marked for urban log_e(NO2).
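The attenuation mechanism these simulations quantify is classical additive measurement error pulling a regression coefficient towards zero; a toy linear-regression illustration of that bias (deliberately simpler than the paper's Poisson time-series set-up, with all values invented):

```python
import numpy as np

# Classical measurement error in an exposure attenuates its regression
# coefficient by roughly var(x) / (var(x) + var(error)). Here the error
# variance equals the exposure variance, so the slope halves.
rng = np.random.default_rng(42)
n = 50_000
x = rng.standard_normal(n)                # true exposure
y = 2.0 * x + rng.standard_normal(n)      # outcome, true slope 2
w = x + rng.standard_normal(n)            # error-prone exposure measurement

slope_true_x = np.polyfit(x, y, 1)[0]     # close to 2.0
slope_noisy_w = np.polyfit(w, y, 1)[0]    # close to 1.0: ~50% attenuation
```

In the paper's terms, a single monitor per region (or a poorly correlated modelled grid value) plays the role of `w`, and the attenuation percentages reported above are the analogue of the halved slope here.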
Kertzscher, Gustavo; Andersen, Claus E.; Tanderup, Kari
2014-05-15
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data-driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time-efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data-driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied to two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the majority of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was
Representation of layer-counted proxy records as probability densities on error-free time axes
NASA Astrophysics Data System (ADS)
Boers, Niklas; Goswami, Bedartha; Ghil, Michael
2016-04-01
Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, such as ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties increasingly hamper a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing, and in particular aligning, specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the
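The accumulation of layer-counting errors can be sketched with a toy Monte Carlo model in which each annual layer is independently missed or double-counted with some small probability; the age uncertainty then grows roughly like the square root of the number of layers. The error rates below are illustrative, not NGRIP values:

```python
import numpy as np

def layer_age_sd(n_layers, p_miss=0.01, p_double=0.01, trials=2000, seed=0):
    """Monte Carlo sketch: each true annual layer is missed with probability
    p_miss and double-counted with probability p_double (illustrative rates,
    not NGRIP values). Returns the standard deviation of the counted age."""
    rng = np.random.default_rng(seed)
    per_layer = (1
                 - rng.binomial(1, p_miss, (trials, n_layers))
                 + rng.binomial(1, p_double, (trials, n_layers)))
    return per_layer.sum(axis=1).std()

sd_1k = layer_age_sd(1000)     # analytically ~sqrt(1000 * 2 * 0.0099) ~ 4.5 yr
sd_10k = layer_age_sd(10000)   # roughly sqrt(10) times larger
print(sd_1k, sd_10k)
```

This is the random-walk behaviour that makes aligning specific events between deep sections of different layer-counted records so delicate.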
Zhou, Yi-Hong; Raj, Vinay R; Siegel, Eric; Yu, Liping
2010-08-16
In the last decade, genome-wide gene expression data has been collected from a large number of cancer specimens. In many studies utilizing either microarray-based or knowledge-based gene expression profiling, both the validation of candidate genes and the identification and inclusion of biomarkers in prognosis modeling have employed real-time quantitative PCR on reverse-transcribed mRNA (qRT-PCR) because of its inherent sensitivity and quantitative nature. In qRT-PCR data analysis, an internal reference gene is used to normalize the variation in input sample quantity. The relative quantification method used in current real-time qRT-PCR analysis fails to ensure the data comparability that is pivotal in the identification of prognostic biomarkers. By employing an absolute qRT-PCR system that uses a single standard for marker and reference genes (SSMR) to achieve absolute quantification, we showed that the normalized gene expression data is comparable and independent of variations in the quantities of sample as well as the standard used for generating standard curves. We compared two sets of normalized gene expression data, with the same histological diagnosis of brain tumor, from two labs using relative and absolute real-time qRT-PCR. Base-10 logarithms of the gene expression ratio relative to ACTB were evaluated for statistical equivalence between tumors processed by the two different labs. The results showed approximate comparability for normalized gene expression quantified using SSMR-based qRT-PCR. Incomparable results were seen for the gene expression data using relative real-time qRT-PCR, due to inequality in the molar concentrations of the two standards for marker and reference genes. Overall, the results show that SSMR-based real-time qRT-PCR ensures the comparability of gene expression data much needed in the establishment of prognostic/predictive models for cancer patients, a process that requires large sample sizes obtained by combining independent sets of data.
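The single-standard idea can be sketched as follows: one dilution series defines a shared standard curve, both the marker and the reference gene (ACTB) are read off that curve in absolute copy numbers, and the reported quantity is the base-10 log ratio. All Ct values below are invented for illustration and assume 100% amplification efficiency (about -3.3 Ct per decade):

```python
import numpy as np

def fit_standard_curve(log10_copies, ct):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    slope, intercept = np.polyfit(log10_copies, ct, 1)
    return slope, intercept

def absolute_copies(ct, slope, intercept):
    """Invert the standard curve: Ct value -> absolute copy number."""
    return 10.0 ** ((ct - intercept) / slope)

# One shared dilution series (the single-standard idea): both genes are
# quantified against the same molar scale. Values are illustrative.
log10_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
ct_std = np.array([33.3, 30.0, 26.7, 23.4, 20.1])   # -3.3 Ct per decade

slope, intercept = fit_standard_curve(log10_copies, ct_std)
marker = absolute_copies(27.5, slope, intercept)    # marker-gene Ct
actb = absolute_copies(22.5, slope, intercept)      # reference-gene Ct

norm_log10 = np.log10(marker / actb)  # comparable across labs and standards
print(norm_log10)
```

Because both genes refer to the same standard, any error in the standard's nominal concentration cancels in the ratio, which is the comparability property the study exploits.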
NASA Astrophysics Data System (ADS)
Minamikawa, Takeo; Hayashi, Kenta; Mizuguchi, Tatsuya; Hsieh, Yi-Da; Abdelsalam, Dahi Ghareab; Mizutani, Yasuhiro; Yamamoto, Hirotsugu; Iwata, Tetsuo; Yasui, Takeshi
2016-05-01
A practical method for the absolute frequency measurement of continuous-wave terahertz (CW-THz) radiation uses a photocarrier terahertz frequency comb (PC-THz comb) because of its ability to realize real-time, precise measurement without the need for cryogenic cooling. However, the requirement for precise stabilization of the repetition frequency (f_rep) and/or use of dual femtosecond lasers hinders its practical use. In this article, based on the fact that an equal interval between PC-THz comb modes is always maintained regardless of the fluctuation in f_rep, the PC-THz comb induced by an unstabilized laser was used to determine the absolute frequency f_THz of CW-THz radiation. Using an f_rep-free-running PC-THz comb, the f_THz of the frequency-fixed or frequency-fluctuated active frequency multiplier chain CW-THz source was determined at a measurement rate of 10 Hz with a relative accuracy of 8.2 × 10^-13 and a relative precision of 8.8 × 10^-12 with respect to a rubidium frequency standard. Furthermore, f_THz was correctly determined even when fluctuating over a range of 20 GHz. The proposed method enables the use of any commercial femtosecond laser for the absolute frequency measurement of CW-THz radiation.
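The underlying relation is f_THz = m·f_rep ± f_beat, where m is the comb mode number. One common way to resolve m and the beat sign, sketched below with invented frequencies, is to read the beat at two slightly different (measured, not stabilized) repetition rates and keep the combination on which both readings agree; this illustrates the comb equation, not the article's exact procedure:

```python
def solve_comb(f_rep1, f_beat1, f_rep2, f_beat2, m_candidates):
    """Pick the mode number m and beat sign s on which both repetition-rate
    settings agree about the absolute frequency m*f_rep + s*f_beat."""
    _, m, s = min((abs((m * f_rep1 + s * f_beat1) - (m * f_rep2 + s * f_beat2)), m, s)
                  for m in m_candidates for s in (+1, -1))
    return m, s, m * f_rep1 + s * f_beat1

# Invented example: a 300.01 GHz CW-THz source against combs whose measured
# repetition rates are 100 MHz and 100.001 MHz.
f_thz = 300.01e9
f_rep1, f_rep2 = 100e6, 100.001e6
f_beat1 = abs(f_thz - round(f_thz / f_rep1) * f_rep1)   # beat with nearest mode
f_beat2 = abs(f_thz - round(f_thz / f_rep2) * f_rep2)

m, s, f_est = solve_comb(f_rep1, f_beat1, f_rep2, f_beat2, range(2900, 3100))
print(m, s, f_est)   # recovers mode 3000, sign +1, 300.01 GHz
```

The key point matching the article is that only the mode spacing must be known at readout time; the comb's absolute position may drift freely.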
Post-event human decision errors: operator action tree/time reliability correlation
Hall, R E; Fragola, J; Wreathall, J
1982-11-01
This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations.
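A time-reliability correlation of the kind described can be sketched as the survival function of a lognormal response-time distribution: the probability of a decision error falls as the time available grows. The median and spread below are illustrative, not the report's quantification:

```python
import math

def nonresponse_prob(t_minutes, median=5.0, sigma=0.8):
    """Probability that the correct decision has NOT yet been made after
    t_minutes, modeled as the survival function of a lognormal response-time
    distribution (median and sigma are illustrative, not the report's values)."""
    z = (math.log(t_minutes) - math.log(median)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(nonresponse_prob(5.0))    # 0.5 at the median decision time
print(nonresponse_prob(60.0))   # small residual error probability with ample time
```

This is the sense in which available time dominates the quantification: the curve maps time directly to an error probability that can be slotted into an event tree.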
A new global time-variable gravity mascon solution: Signal and error analysis
NASA Astrophysics Data System (ADS)
Loomis, B.; Luthcke, S. B.; Sabaka, T. J.
2014-12-01
The latest time-variable global gravity mascon solution product from the NASA Goddard Space Flight Center is described and analyzed. This most recent solution is estimated directly from the reduction of the GRACE L1B RL2 data with an optimized set of arc parameters and the full noise covariance. The mascons are estimated monthly with 1-arc-degree equal-area sampling where anisotropic spatial constraints are applied to maximize the recovery of signal while minimizing noise and signal leakage across the geographic constraint region boundaries. Analysis of the solution signals and errors is presented at global and regional scales and comparisons to the GRACE project solutions and independent models are presented. Time series of cryospheric and hydrologic regions are analyzed with the complete Ensemble Empirical Mode Decomposition (EEMD) with adaptive noise algorithm, which adaptively sifts the signal into intrinsic frequency-ordered modes. Lastly, the impact of different solution components is discussed.
A wearable device for real-time motion error detection and vibrotactile instructional cuing.
Lee, Beom-Chan; Chen, Shu; Sienko, Kathleen H
2011-08-01
We have developed a mobile instrument for motion instruction and correction (MIMIC) that enables an expert (i.e., physical therapist) to map his/her movements to a trainee (i.e., patient) in a hands-free fashion. MIMIC comprises an expert module (EM) and a trainee module (TM). Both the EM and TM are composed of six-degree-of-freedom inertial measurement units, microcontrollers, and batteries. The TM also has an array of actuators that provide the user with vibrotactile instructional cues. The expert wears the EM, and his/her relevant body position is computed by an algorithm based on an extended Kalman filter that provides asymptotic state estimation. The captured expert body motion information is transmitted wirelessly to the trainee, and based on the computed difference between the expert and trainee motion, directional instructions are displayed via vibrotactile stimulation to the skin. The trainee is instructed to move in the direction of the vibration sensation until the vibration is eliminated. Two proof-of-concept studies involving young, healthy subjects were conducted using a simplified version of the MIMIC system (pre-specified target trajectories representing ideal expert movements and only two actuators) during anterior-posterior trunk movements. The first study was designed to investigate the effects of changing the expert-trainee error thresholds (0.5°, 1.0°, and 1.5°) and varying the nature of the control signal (proportional, proportional plus derivative). Expert-subject cross-correlation values were maximized (0.99) and average position errors (0.33°) and time delays (0.2 s) were minimized when the controller used a 0.5° error threshold and proportional plus derivative feedback control signal. The second study used the best performing activation threshold and control signal determined from the first study to investigate subject performance when the motion task complexity and speed were varied. Subject performance decreased as motion
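The best-performing configuration (0.5° activation threshold with proportional-plus-derivative feedback) can be sketched as a dead-zone PD rule; the gains here are hypothetical, since the controller constants are not given in the abstract:

```python
def vibrotactile_command(error_deg, error_rate_dps, threshold_deg=0.5,
                         kp=1.0, kd=0.2):
    """Dead-zone PD cue in the spirit of the best-performing configuration
    (0.5 deg threshold, proportional plus derivative). The gains kp and kd
    are hypothetical; the abstract does not report them. Returns 0 inside
    the dead zone, otherwise a signed intensity indicating which way to move."""
    if abs(error_deg) < threshold_deg:
        return 0.0                        # below threshold: no vibration
    return kp * error_deg + kd * error_rate_dps

print(vibrotactile_command(0.3, 0.0))    # inside the dead zone -> 0.0
print(vibrotactile_command(2.0, -1.0))   # cue weakened as the error closes
```

The derivative term is what damps overshoot: a trainee moving quickly toward the target feels a weaker cue than one holding a static error of the same size.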
A Research on Errors in Two-way Satellite Time and Frequency Transfer
NASA Astrophysics Data System (ADS)
Wu, W. J.
2013-07-01
The two-way satellite time and frequency transfer (TWSTFT) is one of the most accurate means for remote clock comparison, with an uncertainty in time of less than 1 ns and a relative uncertainty in frequency of about 10^{-14} d^{-1}. The transmission paths of signals between two stations are almost symmetrical in the TWSTFT. In principle, most path delays cancel out, which guarantees the high accuracy of TWSTFT. With the development of TWSTFT and the increase in the frequency of observations, it has been shown that the diurnal variation of systematic errors is about 1-3 ns in the TWSTFT. This problem has become a hot topic of research around the world. By using the data of the Transfer Satellite Orbit Determination Net (TSODN) and international TWSTFT links, the systematic errors are studied in detail as follows: (1) The atmospheric effect. This includes ionospheric and tropospheric effects. The tropospheric effect is very small and can be ignored. The ionospheric error can be corrected by using the IGS ionosphere product. The variations of the ionospheric effect are about 0-0.05 ns and 0-0.7 ns at Ku band and C band, respectively, and show diurnal variation characteristics. (2) The equipment time delay. The equipment delay is closely related to temperature, showing a linear relation at normal temperatures. Its outdoor part exhibits a diurnal variation with the ambient temperature. Various effects related to the modem are studied, and some solutions are proposed. (3) The satellite transponder effect. This effect is studied by using the data of international TWSTFT links. The analysis shows that different satellite transponders can greatly increase the amplitude of the diurnal variation in a TWSTFT link. This is the major cause of the diurnal variation in the TWSTFT. A function-fitting method is used to largely solve this problem. (4) The satellite motion effect. The geostationary
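The symmetry argument behind TWSTFT can be shown in two lines: each station times the other's signal against its local clock, and for perfectly symmetric paths the common path delay cancels in the half-difference, leaving only the clock offset. The numbers below are illustrative:

```python
def twstft_offset_ns(timed_at_a_ns, timed_at_b_ns):
    """Two-way combination: with symmetric paths the common delay cancels in
    the half-difference, leaving the A-B clock offset."""
    return 0.5 * (timed_at_a_ns - timed_at_b_ns)

# Illustrative numbers: clocks differ by +3 ns; each one-way path through the
# geostationary satellite takes ~0.25 s (250 000 000 ns).
offset_true_ns = 3.0
path_ns = 250_000_000.0
timed_at_a = path_ns + offset_true_ns   # B's signal timed against A's clock
timed_at_b = path_ns - offset_true_ns   # A's signal timed against B's clock

print(twstft_offset_ns(timed_at_a, timed_at_b))   # 3.0 ns: path delay cancelled
```

The diurnal errors studied in the paper are precisely the residual asymmetries (ionosphere, equipment temperature, transponder, satellite motion) that do not cancel in this combination.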
van Tuinen, Marcel; Hadly, Elizabeth A
2004-08-01
The best reconstructions of the history of life will use both molecular time estimates and fossil data. Errors in molecular rate estimation typically are unaccounted for and no attempts have been made to quantify this uncertainty comprehensively. Here, focus is primarily on fossil calibration error because this error is least well understood and nearly universally disregarded. Our quantification of errors in the synapsid-diapsid calibration illustrates that although some error can derive from geological dating of sedimentary rocks, the absence of good stem fossils makes phylogenetic error the most critical. We therefore propose the use of calibration ages that are based on the first undisputed synapsid and diapsid. This approach yields minimum age estimates and standard errors of 306.1 +/- 8.5 MYR for the divergence leading to birds and mammals. Because this upper bound overlaps with the recent use of 310 MYR, we do not support the notion that several metazoan divergence times are significantly overestimated because of serious miscalibration (sensu Lee 1999). However, the propagation of relevant errors reduces the statistical significance of the pre-K-T boundary diversification of many bird lineages despite retaining similar point time estimates. Our results demand renewed investigation into suitable loci and fossil calibrations for constructing evolutionary timescales.
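When the component errors are independent, a standard way to propagate them is addition in quadrature; the split between geological and phylogenetic contributions below is hypothetical, chosen only so that the total matches the quoted ±8.5 MYR scale:

```python
import math

def combined_error(*sigmas):
    """Propagate independent error sources by addition in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical split between geological-dating and phylogenetic contributions,
# chosen only so the total matches the ~8.5 MYR scale quoted above.
sigma_geologic = 3.0   # MYR
sigma_phylo = 8.0      # MYR
print(combined_error(sigma_geologic, sigma_phylo))   # sqrt(73) ~ 8.5 MYR
```

Quadrature makes the largest component dominate, which is the paper's point: with good stem fossils missing, the phylogenetic term swamps the radiometric one.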
Sarnat, Stefanie E; Klein, Mitchel; Sarnat, Jeremy A; Flanders, W Dana; Waller, Lance A; Mulholland, James A; Russell, Armistead G; Tolbert, Paige E
2010-03-01
Relatively few studies have evaluated the effects of heterogeneous spatiotemporal pollutant distributions on health risk estimates in time-series analyses that use data from a central monitor to assign exposures. We present a method for examining the effects of exposure measurement error relating to spatiotemporal variability in ambient air pollutant concentrations on air pollution health risk estimates in a daily time-series analysis of emergency department visits in Atlanta, Georgia. We used Poisson generalized linear models to estimate associations between current-day pollutant concentrations and circulatory emergency department visits for the 1998-2004 time period. Data from monitoring sites located in different geographical regions of the study area and at different distances from several urban geographical subpopulations served as alternative measures of exposure. We observed associations for spatially heterogeneous pollutants (CO and NO2) using data from several different urban monitoring sites. These associations were not observed when using data from the most rural site, located 38 miles from the city center. In contrast, associations for spatially homogeneous pollutants (O3 and PM2.5) were similar, regardless of the monitoring site location. We found that monitoring site location and the distance of a monitoring site to a population of interest did not meaningfully affect estimated associations for any pollutant when using data from urban sites located within 20 miles from the population center under study. However, for CO and NO2, these factors were important when using data from rural sites located 30 or more miles from the population center, most likely owing to exposure measurement error. Overall, our findings lend support to the use of pollutant data from urban central sites to assess population exposures within geographically dispersed study populations in Atlanta and similar cities. PMID:19277071
Verdict: Time-Dependent Density Functional Theory "Not Guilty" of Large Errors for Cyanines.
Jacquemin, Denis; Zhao, Yan; Valero, Rosendo; Adamo, Carlo; Ciofini, Ilaria; Truhlar, Donald G
2012-04-10
We assess the accuracy of eight Minnesota density functionals (M05 through M08-SO) and two others (PBE and PBE0) for the prediction of electronic excitation energies of a family of four cyanine dyes. We find that time-dependent density functional theory (TDDFT) with the five most recent of these functionals (from M06-HF through M08-SO) is able to predict excitation energies for cyanine dyes within 0.10-0.36 eV accuracy with respect to the most accurate available Quantum Monte Carlo calculations, providing a comparable accuracy to the latest generation of CASPT2 calculations, which have errors of 0.16-0.34 eV. Therefore previous conclusions that TDDFT cannot treat cyanine dyes reasonably accurately must be revised.
GPS receivers timing data processing using neural networks: optimal estimation and errors modeling.
Mosavi, M R
2007-10-01
The Global Positioning System (GPS) is a network of satellites whose original purpose was to provide accurate navigation, guidance, and time transfer to military users. The past decade has also seen rapid concurrent growth in civilian GPS applications, including farming, mining, surveying, marine, and outdoor recreation. One of the most significant of these civilian applications is commercial aviation. A stand-alone civilian user enjoys an accuracy of 100 meters and 300 nanoseconds, or 25 meters and 200 nanoseconds, before and after Selective Availability (SA) was turned off, respectively. In some applications, high accuracy is required. In this paper, five Neural Networks (NNs) are proposed for acceptable noise reduction of GPS receiver timing data. The paper uses an actual data collection to evaluate the performance of the methods. An experimental test setup was designed and implemented for this purpose. The experimental results obtained from a Coarse Acquisition (C/A)-code single-frequency GPS receiver strongly support the potential of the methods to give highly accurate timing. The quality of the obtained results is very good: the GPS timing RMS error is reduced to less than 120 and 40 nanoseconds, with and without SA, respectively. PMID:18098370
NASA Astrophysics Data System (ADS)
Colli, Matteo; Lanza, Luca Giovanni; Rasmussen, Roy; Thériault, Julie Mireille
2014-05-01
Among the different environmental sources of error for ground-based solid precipitation measurements, wind is the main cause of a large reduction in catch performance. This is due to the aerodynamic response of the gauge, which perturbs the originally undisturbed airflow and deforms the snowflake trajectories. Composite gauge/wind-shield measuring configurations improve the collection efficiency (CE) at low wind speeds (Uw), but the performance achievable under severe airflow velocities and the role of turbulence still have to be explained. This work aims to assess the wind-induced errors of a Geonor T200B vibrating-wire gauge equipped with a single Alter shield. This is a common measuring system for solid precipitation, which constitutes the R3 reference system in the ongoing WMO Solid Precipitation InterComparison Experiment (SPICE). The analysis is carried out by adopting advanced Computational Fluid Dynamics (CFD) tools for the numerical simulation of the turbulent airflow in the proximity of the catching section of the gauge. The airflow patterns were computed by running both time-dependent (Large Eddy Simulation) and time-independent (Reynolds-Averaged Navier-Stokes) simulations on the Yellowstone high-performance computing system of the National Center for Atmospheric Research. The evaluation of CE under different Uw conditions was obtained by running a Lagrangian model that computes the snowflake trajectories from the simulated airflow patterns. Particular attention has been paid to the sensitivity of the trajectories to different snow particle sizes and water contents (corresponding to dry and wet snow). The results are illustrated in comparative form between the different methodologies adopted and existing in-field CE evaluations based on double-shield reference gauges.
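The qualitative effect of the Lagrangian trajectory model can be reproduced with a toy calculation: a flake with a linear-drag response time falls through a uniform horizontal wind, and slow-falling (dry) snow drifts much farther than fast-falling (wet) snow, which is why it is more easily deflected past the orifice. All parameter values are illustrative stand-ins for the CFD-driven model:

```python
def horizontal_drift(u_wind, v_terminal, z0=2.0, tau=0.5, dt=1e-3):
    """Toy Lagrangian trajectory: a flake with linear-drag response time tau,
    released at rest at height z0 into a uniform horizontal wind u_wind.
    Returns the horizontal distance travelled before reaching the ground.
    All parameter values are illustrative stand-ins for the CFD-driven model."""
    x, z, vx, vz = 0.0, z0, 0.0, 0.0
    while z > 0.0:
        vx += (u_wind - vx) / tau * dt        # drag pulls toward air velocity
        vz += (-v_terminal - vz) / tau * dt   # relax toward terminal fall speed
        x += vx * dt
        z += vz * dt
    return x

wet = horizontal_drift(5.0, v_terminal=2.0)   # heavy, fast-falling flake
dry = horizontal_drift(5.0, v_terminal=0.5)   # light, slow-falling flake
print(wet, dry)   # dry snow drifts several times farther in the same wind
```

In the real study the airflow field near the gauge is not uniform but comes from the LES/RANS simulations; this sketch only captures the particle-inertia sensitivity that drives the dry/wet contrast.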
Statistical modelling of forecast errors for multiple lead-times and a system of reservoirs
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin; Kolberg, Sjur
2010-05-01
Water resources management, e.g. the operation of reservoirs, is based, among other things, on inflow forecasts provided by a precipitation-runoff model. The forecasted inflow is normally given as a single value, even though it is uncertain. There is growing interest in accounting for uncertain information in decision support systems, e.g. in how to operate a hydropower reservoir to maximize the gain. One challenge is to develop decision support systems that can use uncertain information. The contribution from the hydrological modeler is to derive a forecast distribution (from which uncertainty intervals can be computed) for the inflow predictions. In this study we constructed a statistical model for the forecast errors for daily inflow into a system of four hydropower reservoirs in Ulla-Førre in Western Norway. A distributed hydrological model was applied to generate the inflow forecasts using weather forecasts provided by ECM for lead-times up to 10 days. The precipitation forecasts were corrected for systematic bias. A statistical model based on auto-regressive innovations for Box-Cox-transformed observations and forecasts was constructed for the forecast errors. The parameters of the statistical model were conditioned on climate and the internal snow state in the hydrological model. The model was evaluated according to the reliability of the forecast distribution, the width of the forecast distribution, and the efficiency of the median forecast for the 10 lead times and the four catchments. The results had to be interpreted carefully since the inflow data themselves carry large uncertainty.
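The error model can be sketched in a simplified form: transform observations and forecasts with a Box-Cox transform, describe the transformed errors with lag-1 (AR(1)-like) persistence, and sample a predictive distribution for the next forecast. The moment-based fitting shortcut, the fixed lambda, and the synthetic history below are illustrative simplifications of the paper's conditional parameterization:

```python
import numpy as np

def boxcox(x, lam):
    return (np.asarray(x, dtype=float) ** lam - 1.0) / lam

def inv_boxcox(y, lam):
    return (lam * np.asarray(y, dtype=float) + 1.0) ** (1.0 / lam)

def forecast_distribution(obs_hist, fc_hist, fc_new, lam=0.3,
                          n_samples=5000, seed=0):
    """Sample a predictive distribution for the next inflow forecast from an
    AR(1)-like model of Box-Cox-transformed forecast errors. The fixed lam
    and the moment-based fit are illustrative simplifications; the paper
    conditions the parameters on climate and the modelled snow state."""
    rng = np.random.default_rng(seed)
    err = boxcox(obs_hist, lam) - boxcox(fc_hist, lam)
    phi = np.corrcoef(err[:-1], err[1:])[0, 1]          # lag-1 persistence
    sigma = err.std() * np.sqrt(max(1.0 - phi**2, 1e-12))
    sim = phi * err[-1] + sigma * rng.standard_normal(n_samples)
    return inv_boxcox(boxcox(fc_new, lam) + sim, lam)

# Synthetic history: 200 days of forecasts of 10 m3/s with ~5% observation scatter.
hist = np.random.default_rng(1)
fc_hist = np.full(200, 10.0)
obs_hist = fc_hist * np.exp(0.05 * hist.standard_normal(200))

samples = forecast_distribution(obs_hist, fc_hist, fc_new=12.0)
lo, hi = np.quantile(samples, [0.05, 0.95])
print(lo, hi)   # a 90% forecast interval around the 12 m3/s point forecast
```

The Box-Cox step keeps the sampled inflows positive and skewed, which a Gaussian error model on the raw scale would not.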
Saarelma, Jukka; Botts, Jonathan; Hamilton, Brian; Savioja, Lauri
2016-04-01
Finite-difference time-domain (FDTD) simulation has been a popular area of research in room acoustics due to its capability to simulate wave phenomena in a wide bandwidth directly in the time domain. A downside of the method is that it introduces a direction- and frequency-dependent error to the simulated sound field due to the non-linear dispersion relation of the discrete system. In this study, the perceptual threshold of the dispersion error is measured in three-dimensional FDTD schemes as a function of simulation distance. Dispersion error is evaluated for three different explicit, non-staggered FDTD schemes using the numerical wavenumber in the direction of the worst-case error of each scheme. It is found that the thresholds for the different schemes do not vary significantly when the phase velocity error level is fixed. The thresholds are, however, found to vary significantly between the different sound samples. The measured threshold for the audibility of dispersion error at the probability level of 82% correct discrimination for three-alternative forced choice is found to be 9.1 m of propagation in a free field, which leads to a maximum group delay error of 1.8 ms at 20 kHz with the chosen phase velocity error level of 2%. PMID:27106330
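The direction- and frequency-dependent dispersion error can be computed directly from the discrete dispersion relation. The sketch below evaluates the axial (worst-case) numerical phase velocity of the standard 3D leapfrog scheme at the stability-limit Courant number 1/sqrt(3); the 2 mm grid spacing is an assumption, not the paper's setup:

```python
import math

def phase_velocity_ratio(f, c=343.0, dx=0.002):
    """Relative numerical phase velocity (numerical/ideal) of the standard
    3D leapfrog FDTD scheme along an axial direction, its worst-case
    direction, at the stability-limit Courant number 1/sqrt(3).
    The 2 mm grid spacing is illustrative, not the paper's setup."""
    lam = 1.0 / math.sqrt(3.0)
    dt = lam * dx / c
    k = 2.0 * math.pi * f / c                        # ideal wavenumber
    w_num = (2.0 / dt) * math.asin(lam * math.sin(k * dx / 2.0))
    return w_num / (2.0 * math.pi * f)

for f in (5000.0, 10000.0, 20000.0):
    err = 1.0 - phase_velocity_ratio(f)
    print(f, f"{100 * err:.2f}% slower than c")      # error grows with frequency
```

Because high frequencies lag behind low ones, the error accumulates with propagation distance as a smearing group delay, which is exactly the quantity the listening test bounds at 9.1 m.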
Killeen, P R; Taylor, T J
2000-07-01
The performance of fallible counters is investigated in the context of pacemaker-counter models of interval timing. Failure to reliably transmit signals from one stage of a counter to the next generates periodicity in mean and variance of counts registered, with means power functions of input and standard deviations approximately proportional to the means (Weber's law). The transition diagrams and matrices of the counter are self-similar: Their eigenvalues have a fractal form and closely approximate Julia sets. The distributions of counts registered and of hitting times approximate Weibull densities, which provide the foundation for a signal-detection model of discrimination. Different schemes for weighting the values of each stage may be established by conditioning. As higher order stages of a cascade come on-line the veridicality of lower order stages degrades, leading to scale-invariance in error. The capacity of a counter is more likely to be limited by fallible transmission between stages than by a paucity of stages. Probabilities of successful transmission between stages of a binary counter around 0.98 yield predictions consistent with performance in temporal discrimination and production and with channel capacities for identification of unidimensional stimuli.
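A minimal simulation of such a fallible binary counter: each carry propagates to the next stage only with probability p, so the counter undercounts, and the spread of registered counts grows with the mean in the Weber-like fashion described. The parameters (p = 0.98, ten stages) follow the abstract's example; the simulation itself is a sketch:

```python
import random

def fallible_count(n_pulses, p=0.98, stages=10, rng=None):
    """Register n_pulses on a binary ripple counter in which each carry is
    transmitted to the next stage only with probability p; a lost carry means
    the higher stages never see the pulse, so the counter undercounts."""
    rng = rng or random.Random(0)
    bits = [0] * stages
    for _ in range(n_pulses):
        i = 0
        while i < stages:
            if bits[i] == 0:
                bits[i] = 1
                break
            bits[i] = 0
            if rng.random() >= p:    # carry lost between stage i and i+1
                break
            i += 1
    return sum(b << i for i, b in enumerate(bits))

def count_stats(n_pulses, trials=2000):
    vals = [fallible_count(n_pulses, rng=random.Random(t)) for t in range(trials)]
    mean = sum(vals) / trials
    sd = (sum((v - mean) ** 2 for v in vals) / trials) ** 0.5
    return mean, sd

m100, sd100 = count_stats(100)
m200, sd200 = count_stats(200)
print(m100, sd100, m200, sd200)   # sd grows with the mean, Weber-style
```

Losses at higher stages cost larger chunks of the count, which is what makes the variability scale with the mean rather than stay constant.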
Teaching Absolute Value Meaningfully
ERIC Educational Resources Information Center
Wade, Angela
2012-01-01
What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…
ERIC Educational Resources Information Center
Sherwood, David E.
2010-01-01
According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback results in an error signal that serves as a basis for a correction. The main question addressed in the current study was how…
Liu, Tze-An; Newbury, Nathan R; Coddington, Ian
2011-09-12
We demonstrate a simplified dual-comb LIDAR setup for precision absolute ranging that can achieve a ranging precision of 2 μm in 140 μs acquisition time. With averaging, the precision drops below 1 μm at 0.8 ms and below 200 nm at 20 ms. The system can measure the distance to multiple targets with negligible dead zones and a ranging ambiguity of 1 meter. The system is much simpler than a previous coherent dual-comb LIDAR because the two combs are replaced by free-running, saturable-absorber-based femtosecond Er fiber lasers, rather than tightly phase-locked combs, with the entire time base provided by a single 10-digit frequency counter. Despite the simpler design, the system provides a factor of three improved performance over the previous coherent dual comb LIDAR system. PMID:21935219
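The reported averaging gains are consistent with white-noise 1/sqrt(T) scaling of ranging precision. A quick consistency check against the quoted figures (an assumed noise model, not the authors' analysis):

```python
import math

def precision_after_averaging(sigma_single, t_single, t_avg):
    """White-noise model: precision improves as the square root of
    the averaging time."""
    return sigma_single * math.sqrt(t_single / t_avg)

# Reported single-shot figure: 2 um in a 140 us acquisition.
sigma0, t0 = 2.0e-6, 140e-6
print(precision_after_averaging(sigma0, t0, 0.8e-3))  # ~0.84 um, below the 1 um reported
print(precision_after_averaging(sigma0, t0, 20e-3))   # ~0.17 um, below the 200 nm reported
```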
Trombley, Adrienne R; Wachter, Leslie; Garrison, Jeffrey; Buckley-Beason, Valerie A; Jahrling, Jordan; Hensley, Lisa E; Schoepp, Randal J; Norwood, David A; Goba, Augustine; Fair, Joseph N; Kulesh, David A
2010-05-01
Viral hemorrhagic fever is caused by a diverse group of single-stranded, negative-sense or positive-sense RNA viruses belonging to the families Filoviridae (Ebola and Marburg), Arenaviridae (Lassa, Junin, Machupo, Sabia, and Guanarito), and Bunyaviridae (hantavirus). Disease characteristics in these families mark each with the potential to be used as a biological threat agent. Because other diseases have similar clinical symptoms, specific laboratory diagnostic tests are necessary to provide the differential diagnosis during outbreaks and for instituting acceptable quarantine procedures. We designed 48 TaqMan-based polymerase chain reaction (PCR) assays for specific and absolute quantitative detection of multiple hemorrhagic fever viruses. Forty-six assays were determined to be virus-specific, and two were designated as pan assays for Marburg virus. The limit of detection for the assays ranged from 10 to 0.001 plaque-forming units (PFU)/PCR. Although these real-time hemorrhagic fever virus assays are qualitative (indicating the presence of the target), they are also quantitative: they measure a single DNA/RNA target sequence in an unknown sample and express the final result as an absolute value (e.g., viral load, PFU, or copies/mL) on the basis of the concentrations of standard samples, and they can be used in viral load, vaccine, and antiviral drug studies.
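Absolute quantification of this kind conventionally runs a standard curve: the cycle threshold (Ct) is linear in the log10 concentration of the standards, and unknowns are read off the inverted fit. A generic sketch with illustrative numbers (assuming roughly 100% PCR efficiency, i.e. a slope near -3.32 per decade; these are not the paper's data):

```python
import math

def fit_standard_curve(concentrations, cts):
    """Least-squares fit of Ct = slope * log10(conc) + intercept."""
    xs = [math.log10(c) for c in concentrations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def quantify(ct, slope, intercept):
    """Invert the standard curve: absolute concentration (e.g. PFU/PCR)."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical dilution series of a titered standard, 10 down to 0.01 PFU/PCR.
standards = [10.0, 1.0, 0.1, 0.01]
cts = [20.00, 23.32, 26.64, 29.96]
slope, intercept = fit_standard_curve(standards, cts)
```

An unknown sample's Ct then maps straight to an absolute value via `quantify`.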
NASA Astrophysics Data System (ADS)
He, Feng; Zhou, ShanShi; Hu, XiaoGong; Zhou, JianHua; Liu, Li; Guo, Rui; Li, XiaoJie; Wu, Shan
2014-07-01
Satellite-station two-way time comparison is a distinctive design of the BeiDou System (BDS) that differs significantly from other satellite navigation systems. As a two-way method, BDS time synchronization is hardly influenced by satellite orbit error, atmospheric delay, tracking-station coordinate error, or measurement-model error. Meanwhile, single-way time comparison can be realized through Multi-satellite Precision Orbit Determination (MPOD) using the pseudo-range and carrier phase of a monitor receiver. Experiments with the 3GEO/2IGSO constellation have shown that the radial orbit error is reflected in the difference between two-way and single-way time comparison, which may offer a substitute for orbit evaluation by SLR. In this article, the relation between orbit error and the difference of two-way and single-way time comparison is illustrated for the whole BDS constellation. Given the all-weather, real-time operation of two-way time comparison, the orbit error can be quantifiably monitored in real time by comparing two-way and single-way time synchronization. In addition, the orbit error can be predicted and corrected over short periods based on its periodic characteristic. Experiments with GEO and IGSO satellites show that the signal-in-space prediction accuracy is markedly improved when the predicted orbit error is sent to users through the navigation message: the UERE, including terminal error, is reduced by 0.1 m to 0.4 m, and the average accuracy improves by more than 27%. Although it remains difficult to improve Precision Orbit Determination (POD) and orbit prediction themselves because of the confined tracking network and the difficulties of dynamic-model optimization, this paper proposes a practical method for improving orbit accuracy based on two-way time comparison, which reflects the orbit error.
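The monitoring idea rests on a simple relation: a radial orbit error dr shifts the single-way clock estimate by dr/c, while the two-way estimate is immune to it, so the difference of the two estimates exposes dr. A toy sketch of that relation (illustrative only, not the BDS processing chain):

```python
C = 299_792_458.0  # speed of light, m/s

def single_way_clock_offset(true_offset_s, radial_orbit_error_m):
    """Single-way comparison: a radial orbit (range) error of dr metres
    biases the estimated clock offset by dr / c."""
    return true_offset_s + radial_orbit_error_m / C

def two_way_clock_offset(true_offset_s):
    """Two-way comparison: the path is traversed in both directions,
    so geometry, and hence orbit error, cancels to first order."""
    return true_offset_s

def implied_radial_error_m(single_way_s, two_way_s):
    """Difference of the two estimates, rescaled to metres of orbit error."""
    return (single_way_s - two_way_s) * C
```

For example, a 0.5 m radial error on top of a 1 microsecond clock offset is recovered from the two estimates to well under a millimetre.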
Correcting incompatible DN values and geometric errors in nighttime lights time series images
Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric L.
2014-09-19
The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activity at large scales. However, incompatible digital number (DN) values and geometric errors severely limit the application of nighttime lights image data to multi-year quantitative research. In this study we extend and improve previous work on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady-increase adjustment, and population-data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly and find that sum light (the summed DN value of the pixels in a nighttime lights image) maintains a clear increasing trend under relatively large GDP growth rates but neither increases nor decreases under relatively small GDP growth rates. Because nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, by analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced marked nighttime lights development in 1992-1997 and 2001-2008, whereas the US suffered widespread nighttime lights decay after 2001.
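A widely used inter-calibration recipe in the DMSP-OLS literature regresses each image's DNs against a reference image with a second-order polynomial; the abstract does not state that this study uses exactly that form, but a dependency-free sketch of the idea is:

```python
def polyfit2(x, y):
    """Fit y ~ c0 + c1*x + c2*x^2 by solving the 3x3 normal equations."""
    s = [sum(xi ** k for xi in x) for k in range(5)]                 # power sums
    t = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = t[:]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * 3
    for i in (2, 1, 0):  # back-substitution
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, 3))) / A[i][i]
    return coef  # c0, c1, c2

def intercalibrate(dn, c0, c1, c2):
    """Map a raw DN onto the reference scale, clipped to the 0-63 OLS range."""
    v = c0 + c1 * dn + c2 * dn * dn
    return max(0.0, min(63.0, v))
```

Fitting is done on pixels from a region assumed stable over time; the resulting coefficients are then applied image-wide.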
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
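One simplified error source such a simulator typically includes is a multiplicative gain error on each analog multiply. A hypothetical miniature of that approach (the paper's actual error models and architectural details are not reproduced here): a banded matrix-vector product in which every product term can be perturbed.

```python
import random

def banded_matvec(diags, offsets, x, gain_error=0.0, rng=None):
    """y = A @ x for a banded matrix A given by its diagonals; each
    multiply optionally carries a random multiplicative gain error,
    a simplified stand-in for analog processor error sources."""
    n = len(x)
    y = [0.0] * n
    for d, off in zip(diags, offsets):
        for i in range(n):
            j = i + off
            if 0 <= j < n:
                g = 1.0 + (rng.uniform(-gain_error, gain_error) if rng else 0.0)
                y[i] += g * d[i] * x[j]
    return y
```

With `gain_error=0` the product is exact; a nonzero setting lets one compare degraded against ideal outputs, which is the essence of evaluating a processor through its error models.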
NASA Astrophysics Data System (ADS)
Mast, Jeffrey; Mlynczak, Martin G.; Hunt, Linda A.; Marshall, B. Thomas; Mertens, Christopher J.; Russell, James M.; Thompson, R. Earl; Gordley, Larry L.
2013-02-01