Science.gov

Sample records for absolute timing error

  1. Relative errors can cue absolute visuomotor mappings.

    PubMed

    van Dam, Loes C J; Ernst, Marc O

    2015-12-01

    When repeatedly switching between two visuomotor mappings, e.g. in a reaching or pointing task, adaptation tends to speed up over time. That is, when the error in the feedback corresponds to a mapping switch, fast adaptation occurs. Yet, what is learned, the relative error or the absolute mappings? When switching between mappings, errors with a size corresponding to the relative difference between the mappings will occur more often than other large errors. Thus, we could learn to correct more for errors with this familiar size (Error Learning). On the other hand, it has been shown that the human visuomotor system can store several absolute visuomotor mappings (Mapping Learning) and can use associated contextual cues to retrieve them. Thus, when contextual information is present, no error feedback is needed to switch between mappings. Using a rapid pointing task, we investigated how these two types of learning may each contribute when repeatedly switching between mappings in the absence of task-irrelevant contextual cues. After training, we examined how participants changed their behaviour when a single error probe indicated either the often-experienced error (Error Learning) or one of the previously experienced absolute mappings (Mapping Learning). Results were consistent with Mapping Learning despite the relative nature of the error information in the feedback. This shows that errors in the feedback can have a double role in visuomotor behaviour: they drive the general adaptation process by making corrections possible on subsequent movements, as well as serve as contextual cues that can signal a learned absolute mapping. PMID:26280315

  2. Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.

    ERIC Educational Resources Information Center

    Guth, David

    1990-01-01

    Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their application…
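
    The three statistics are straightforward to compute. Below is a minimal sketch (function names are mine, not the article's), using the common motor-behavior definitions: constant error as the mean signed error, variable error as the standard deviation of the signed errors, and absolute error as the mean unsigned error:

```python
import statistics

def constant_error(errors):
    # CE: mean signed error -- quantifies directional bias (under/overshoot)
    return statistics.mean(errors)

def variable_error(errors):
    # VE: (population) standard deviation of signed errors -- consistency
    return statistics.pstdev(errors)

def absolute_error(errors):
    # AE: mean unsigned error -- overall accuracy
    return statistics.mean(abs(e) for e in errors)

# Example: signed distances (cm) by which a traveler stopped past a target
errors = [3.0, -1.0, 4.0, 0.0, -2.0]
print(constant_error(errors))   # 0.8
print(absolute_error(errors))   # 2.0
print(variable_error(errors))
```

A traveler could, for instance, show near-zero constant error (no systematic bias) yet large variable and absolute errors (inconsistent stopping), which is why the three statistics quantify fundamentally different characteristics of the same distribution.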

  3. Clock time is absolute and universal

    NASA Astrophysics Data System (ADS)

    Shen, Xinhang

    2015-09-01

A critical error is found in the Special Theory of Relativity (STR): it mixes up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation on physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time, but can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the product of the abstract time and the frequency of oscillation. After a Lorentz Transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, and the resulting product, i.e. the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of the Lorentz Transformation. The Lorentz invariance of the displayed times of clocks can further prove, within the framework of STR, that our earth-based standard physical time is absolute, universal and independent of inertial reference frames, as confirmed both by the physical fact of the universal synchronization of clocks on the GPS satellites and clocks on the earth, and by the theoretical existence of the absolute and universal Galilean time in STR, which proves that time dilation and space contraction are pure illusions of STR. The existence of the absolute and universal time in STR directly denies that the reference-frame-dependent abstract time of STR is the physical time; therefore, STR is wrong and none of its predictions can ever happen in the physical world.

  4. Sub-nanometer periodic nonlinearity error in absolute distance interferometers.

    PubMed

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

Periodic nonlinearity, which can cause errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus the main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°. PMID:26026510

  5. Combined Use of Absolute and Differential Seismic Arrival Time Data to Improve Absolute Event Location

    NASA Astrophysics Data System (ADS)

    Myers, S.; Johannesson, G.

    2012-12-01

Arrival time measurements based on waveform cross correlation are becoming more common as advanced signal processing methods are applied to seismic data archives and real-time data streams. Waveform correlation can precisely measure the time difference between the arrival of two phases, and differential time data can be used to constrain the relative locations of events. Absolute locations are needed for many applications, which generally requires the use of absolute time data. Current methods for measuring absolute time data are approximately two orders of magnitude less precise than differential time measurements. To exploit the strengths of both absolute and differential time data, we extend our multiple-event location method Bayesloc, which previously used absolute time data only, to include the use of differential time measurements that are based on waveform cross correlation. Fundamentally, Bayesloc is a formulation of the joint probability over all parameters comprising the multiple event location system. The Markov-Chain Monte Carlo method is used to sample from the joint probability distribution given arrival data sets. The differential time component of Bayesloc includes scaling a stochastic estimate of differential time measurement precision based on the waveform correlation coefficient for each datum. For a regional-distance synthetic data set with absolute and differential time measurement errors of 0.25 seconds and 0.01 seconds, respectively, epicenter location accuracy is improved from an average of 1.05 km when solely absolute time data are used to 0.28 km when absolute and differential time data are used jointly (a 73% improvement). The improvement in absolute location accuracy is the result of conditionally limiting absolute location probability regions based on the precise relative position with respect to neighboring events.
Bayesloc estimates of data precision are found to be accurate for the synthetic test, with absolute and differential time measurement
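
    The benefit of combining the two data types can be illustrated with a toy example. This is not Bayesloc: it is a one-dimensional, known-velocity sketch with hypothetical numbers chosen to match the quoted measurement errors. Precise differential ties pin the events to one another, so all of the noisy absolute picks jointly constrain a single common origin rather than each event separately:

```python
import random
import statistics

random.seed(1)
v = 6.0                      # assumed wave speed, km/s (hypothetical)
true_x = [random.uniform(50.0, 100.0) for _ in range(20)]  # event distances, km

sigma_abs, sigma_diff = 0.25, 0.01   # s, as in the synthetic test above
t_abs = [x / v + random.gauss(0.0, sigma_abs) for x in true_x]
d_rel = [(x - true_x[0]) / v + random.gauss(0.0, sigma_diff) for x in true_x]

# Locations from absolute picks alone
x_abs = [v * t for t in t_abs]

# Joint locations: treat the precise differential ties as (nearly) exact,
# so each event sits at a fixed offset from the first event and all 20
# absolute picks average down the error of the one remaining free parameter.
origin = statistics.mean(t - d for t, d in zip(t_abs, d_rel))
x_joint = [v * (origin + d) for d in d_rel]

err_abs = statistics.mean(abs(x - xt) for x, xt in zip(x_abs, true_x))
err_joint = statistics.mean(abs(x - xt) for x, xt in zip(x_joint, true_x))
print(err_abs, err_joint)    # joint locations are markedly more accurate
```

This is the same mechanism as in the abstract: relative positions are fixed by the precise data, and the absolute data then constrain the cluster as a whole.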

  6. On the Error Sources in Absolute Individual Antenna Calibrations

    NASA Astrophysics Data System (ADS)

    Aerts, Wim; Baire, Quentin; Bilich, Andria; Bruyninx, Carine; Legrand, Juliette

    2013-04-01

field) multipath errors, both during calibration and later on at the station, absolute sub-millimeter positioning with GPS is not (yet) possible. References [1] G. Wübbena, M. Schmitz, G. Boettcher, C. Schumann, "Absolute GNSS Antenna Calibration with a Robot: Repeatability of Phase Variations, Calibration of GLONASS and Determination of Carrier-to-Noise Pattern", International GNSS Service: Analysis Center workshop, 8-12 May 2006, Darmstadt, Germany. [2] P. Zeimetz, H. Kuhlmann, "On the Accuracy of Absolute GNSS Antenna Calibration and the Conception of a New Anechoic Chamber", FIG Working Week 2008, 14-19 June 2008, Stockholm, Sweden. [3] P. Zeimetz, H. Kuhlmann, L. Wanninger, V. Frevert, S. Schön and K. Strauch, "Ringversuch 2009", 7th GNSS-Antennen-Workshop, 19-20 March 2009, Dresden, Germany.

  7. Absolute Timing Calibration of the USA Experiment Using Pulsar Observations

    NASA Astrophysics Data System (ADS)

    Ray, P. S.; Wood, K. S.; Wolff, M. T.; Lovellette, M. N.; Sheikh, S.; Moon, D.-S.; Eikenberry, S. S.; Roberts, M.; Lyne, A.; Jordon, C.; Bloom, E. D.; Tournear, D.; Saz Parkinson, P.; Reilly, K.

    2003-03-01

    We update the status of the absolute time calibration of the USA Experiment as determined by observations of X-ray emitting rotation-powered pulsars. The brightest such source is the Crab Pulsar and we have obtained observations of the Crab at radio, IR, optical, and X-ray wavelengths. We directly compare arrival time determinations for 2--10 keV X-ray observations made contemporaneously with the PCA on the Rossi X-ray Timing Explorer and the USA Experiment on ARGOS. These two X-ray measurements employ very different means of measuring time and satellite position and thus have different systematic error budgets. The comparison with other wavelengths requires additional steps such as dispersion measure corrections and a precise definition of the ``peak'' of the light curve since the light curve shape varies with observing wavelength. We will describe each of these effects and quantify the magnitude of the systematic error that each may contribute. We will also include time comparison results for other pulsars, such as PSR B1509-58 and PSR B1821-24. Once the absolute time calibrations are well understood, comparing absolute arrival times at multiple energies can provide clues to the magnetospheric structure and emission region geometry. Basic research on X-ray Astronomy at NRL is funded by NRL/ONR.
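
    One of the steps mentioned above, the dispersion measure correction, follows the standard cold-plasma delay law: lower radio frequencies arrive later by an amount proportional to DM/f². A sketch with approximate constants (the DM value is a typical Crab figure, not taken from this paper):

```python
def dispersion_delay(dm, freq_mhz):
    # Cold-plasma dispersion delay in seconds, relative to infinite
    # frequency: delta_t = K * DM / f^2, with K ~ 4.149e3 s MHz^2 cm^3 pc^-1
    K = 4.149e3
    return K * dm / freq_mhz ** 2

dm = 56.8    # pc cm^-3, approximate Crab pulsar value (it drifts slowly)
print(dispersion_delay(dm, 1400.0))   # ~0.12 s at 1.4 GHz
```

X-rays are effectively undispersed, so this delay must be removed from the radio arrival times before an absolute radio/X-ray comparison is meaningful.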

  8. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

A method is proposed to modify astigmatism errors in the absolute shape reconstruction of an optical plane using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by making use of the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel to each other after the translations, a tilt error exists in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to modify the astigmatism errors, a rotation measurement is added. Based on the rotational invariance of the form of the Zernike polynomials in a circular domain, the astigmatism terms are calculated by solving polynomial coefficient equations related to the rotation differential data, and subsequently the erroneous astigmatism terms are corrected. Computer simulation proves the validity of the proposed method.

  9. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that the two metrics not only measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error bias, variance, and noise all increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
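
    The additive decomposition that makes SQ error tractable can be checked numerically. The sketch below (a generic illustration, not the study's imperviousness model) simulates an ensemble of biased, noisy model predictions at a single evaluation point and verifies that MSE splits exactly into bias² + variance, while MAE admits no such additive split:

```python
import random

random.seed(0)

# Ensemble of model predictions at one evaluation point whose true
# value is 10.0; predictions are biased (+0.5) and noisy (sd = 1.0).
y_true = 10.0
preds = [random.gauss(10.5, 1.0) for _ in range(100_000)]

mean_pred = sum(preds) / len(preds)
bias_sq = (mean_pred - y_true) ** 2
variance = sum((p - mean_pred) ** 2 for p in preds) / len(preds)

mse = sum((p - y_true) ** 2 for p in preds) / len(preds)
mae = sum(abs(p - y_true) for p in preds) / len(preds)

# SQ error decomposes additively: MSE = bias^2 + variance (+ noise,
# which is zero here since the true value is fixed)
assert abs(mse - (bias_sq + variance)) < 1e-8
print(mse, mae)   # ~1.25 vs ~0.90: SQ error reads larger than ABS error
```

No analogous exact identity holds for the MAE, which is the analytical point the abstract makes.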

  10. Absolute plate velocities from seismic anisotropy: Importance of correlated errors

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Gordon, Richard G.; Kreemer, Corné

    2014-09-01

The errors in plate motion azimuths inferred from shear wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma⁻¹ (95% confidence limits) right handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ = 19.2°) differs insignificantly from that for continental lithosphere (σ = 21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ = 7.4°) than for continental lithosphere (σ = 14.7°). Two of the slowest-moving plates, Antarctica (v_RMS = 4 mm a⁻¹, σ = 29°) and Eurasia (v_RMS = 3 mm a⁻¹, σ = 33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈ 5 mm a⁻¹ to result in seismic anisotropy useful for estimating plate motion. The tendency of observed azimuths on the Arabia plate to be counterclockwise of plate motion may provide information about the direction and amplitude of superposed asthenospheric flow or about anisotropy in the lithospheric mantle.

  11. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.

  12. Assessing suturing skills in a self-guided learning setting: absolute symmetry error.

    PubMed

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-12-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be assessed by the trainee, is a feasible assessment tool for self-guided learning of suturing skill. Forty-eight undergraduate medical trainees independently practiced suturing and knot tying skills using a benchtop model. Performance on a pretest, posttest, retention test and a transfer test was assessed using (1) the validated final product analysis (FPA), (2) the surgical efficiency score (SES), a combination of the FPA and hand motion analysis and (3) absolute symmetry error, a new measure that assesses the symmetry of the final product. Absolute symmetry error, along with the other objective assessment tools, detected improvements in performance from pretest to posttest (P < 0.05). A battery of correlation analyses indicated that absolute symmetry error correlates moderately with the FPA and SES. The development of valid, reliable and feasible technical skill assessments is needed to ensure all training centers evaluate trainee performance in a standardized fashion. Measures that do not require the use of experts or computers have potential for widespread use. We suggest that absolute symmetry error is a useful approximation of novices' suturing and knot tying performance. Future research should evaluate whether absolute symmetry error can enhance learning when used as a source of feedback during self-guided practice. PMID:19132540

  13. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  14. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  15. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
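
    Although the record is truncated, the idea behind using grids of different resolution can be sketched: the five-point scheme is second-order accurate, so halving the mesh spacing cuts the absolute error by roughly a factor of four, and comparing solutions on two grids therefore estimates (and controls) the error. A minimal illustration under that assumption, not the article's three-grid algorithm, using Gauss-Seidel relaxation and a known exact solution:

```python
import math

def poisson_error(n, sweeps):
    # Five-point solve of -(u_xx + u_yy) = f on the unit square, u = 0 on
    # the boundary, with f = 2*pi^2*sin(pi*x)*sin(pi*y) so that the exact
    # solution is u = sin(pi*x)*sin(pi*y). Returns the max absolute error.
    h = 1.0 / n
    exact = [[math.sin(math.pi * i * h) * math.sin(math.pi * j * h)
              for j in range(n + 1)] for i in range(n + 1)]
    f = [[2.0 * math.pi ** 2 * exact[i][j] for j in range(n + 1)]
         for i in range(n + 1)]
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(sweeps):                     # Gauss-Seidel relaxation
        for i in range(1, n):
            for j in range(1, n):
                u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1]
                                  + u[i][j + 1] + h * h * f[i][j])
    return max(abs(u[i][j] - exact[i][j])
               for i in range(n + 1) for j in range(n + 1))

e_coarse = poisson_error(8, 1000)    # h = 1/8
e_fine = poisson_error(16, 3000)     # h = 1/16
print(e_coarse, e_fine, e_coarse / e_fine)   # ~0.013, ~0.0032, ratio ~4
```

The observed ratio near 4 is what lets a multi-grid comparison bound both absolute and relative error without knowing the exact solution.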

  16. Error budget for a calibration demonstration system for the reflected solar instrument for the climate absolute radiance and refractivity observatory

    NASA Astrophysics Data System (ADS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-09-01

A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that would allow climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  17. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that would allow climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  18. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  19. Preliminary error budget for the reflected solar instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Astrophysics Data System (ADS)

    Thome, K.; Gubbels, T.; Barnes, R.

    2011-10-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI-traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables. The instrument suite includes emitted infrared spectrometers, global navigation receivers for radio occultation, and reflected solar spectrometers. The measurements will be acquired for a period of five years and will enable follow-on missions to extend the climate record over the decades needed to understand climate change. This work describes a preliminary error budget for the RS sensor. The RS sensor will retrieve at-sensor reflectance over the spectral range from 320 to 2300 nm with 500-m GIFOV and a 100-km swath width. The current design is based on an Offner spectrometer with two separate focal planes each with its own entrance aperture and grating covering spectral ranges of 320-640, 600-2300 nm. Reflectance is obtained from the ratio of measurements of radiance while viewing the earth's surface to measurements of irradiance while viewing the sun. The requirement for the RS instrument is that the reflectance must be traceable to SI standards at an absolute uncertainty <0.3%. The calibration approach to achieve the ambitious 0.3% absolute calibration uncertainty is predicated on a reliance on heritage hardware, reduction of sensor complexity, and adherence to detector-based calibration standards. The design above has been used to develop a preliminary error budget that meets the 0.3% absolute requirement. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and
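
    Error budgets of this kind combine independent component uncertainties in quadrature. A sketch with purely illustrative numbers (these are not CLARREO's actual budget entries) showing how sub-components can root-sum-square to below the 0.3% requirement:

```python
import math

# Hypothetical 1-sigma component uncertainties, in percent reflectance
components = {
    "solar/earth view geometry": 0.15,
    "attenuator knowledge": 0.15,
    "detector linearity": 0.10,
    "detector noise": 0.10,
}

# Independent error terms combine as the root sum of squares (RSS)
total = math.sqrt(sum(u ** 2 for u in components.values()))
print(round(total, 3))   # 0.255 -- under the 0.3% absolute requirement
```

The RSS structure is also why the largest individual terms (geometry and attenuator knowledge in this sketch) dominate the budget.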

  20. Preliminary Error Budget for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; Gubbels, Timothy; Barnes, Robert

    2011-01-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to those made while viewing the sun, relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration including a link to absolute solar irradiance measurements. The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those in the IPCC Report. A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI-traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables, including: 1) Surface temperature and atmospheric temperature profile 2) Atmospheric water vapor profile 3) Far infrared water vapor greenhouse 4) Aerosol properties and anthropogenic aerosol direct radiative forcing 5) Total and spectral solar

  1. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
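
    For reference, a sketch of the original factors as commonly stated in the air-quality evaluation literature (my reading of the pre-generalization formulations; they assume positive-valued data, which is exactly the limitation the record addresses):

```python
def nmbf(model, obs):
    # Normalized mean bias factor: symmetric, so an overestimation and an
    # equal-sized underestimation flip only the sign. Assumes positive sums.
    sm, so = sum(model), sum(obs)
    return sm / so - 1.0 if sm >= so else 1.0 - so / sm

def nmaef(model, obs):
    # Normalized mean absolute error factor, with the same branch rule
    # on the normalizing total as nmbf
    sm, so = sum(model), sum(obs)
    sae = sum(abs(m - o) for m, o in zip(model, obs))
    return sae / so if sm >= so else sae / sm

obs = [2.0, 4.0, 6.0]
model = [3.0, 5.0, 7.0]
print(nmbf(model, obs))    # 0.25: model overestimates by a factor of 1.25
print(nmbf(obs, model))    # -0.25: same size, opposite sign (symmetry)
print(nmaef(model, obs))   # 0.25
```

With negative values the sums can cancel or change sign, so both factors can blow up or lose their interpretation, which motivates the generalized formulations the record describes.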

  2. Henry More and the development of absolute time.

    PubMed

    Thomas, Emily

    2015-12-01

    This paper explores the nature, development and influence of the first English account of absolute time, put forward in the mid-seventeenth century by the 'Cambridge Platonist' Henry More. Against claims in the literature that More does not have an account of time, this paper sets out More's evolving account and shows that it reveals the lasting influence of Plotinus. Further, this paper argues that More developed his views on time in response to his adoption of Descartes' vortex cosmology and cosmogony, providing new evidence of More's wider project to absorb Cartesian natural philosophy into his Platonic metaphysics. Finally, this paper argues that More should be added to the list of sources that later English thinkers - including Newton and Samuel Clarke - drew on in constructing their absolute accounts of time. PMID:26568082

  3. Minimum mean absolute error estimation over the class of generalized stack filters

    NASA Astrophysics Data System (ADS)

    Lin, Jean-Hsang; Coyle, Edward J.

    1990-04-01

    A class of sliding window operators called generalized stack filters is developed. This class of filters, which includes all rank order filters, stack filters, and digital morphological filters, is the set of all filters possessing the threshold decomposition architecture and a consistency property called the stacking property. Conditions under which these filters possess the weak superposition property known as threshold decomposition are determined. An algorithm is provided for determining a generalized stack filter which minimizes the mean absolute error (MAE) between the output of the filter and a desired input signal, given noisy observations of that signal. The algorithm is a linear program whose complexity depends on the window width of the filter and the number of threshold levels observed by each of the filters in the superposition architecture. The results show that choosing the generalized stack filter which minimizes the MAE is equivalent to massively parallel threshold-crossing decision making when the decisions are consistent with each other.
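
    The threshold decomposition and stacking architecture described above can be illustrated with the simplest stack filter, a window-3 median. The following is a minimal sketch (function names are ours, not from the paper): decompose a multi-level signal into binary threshold signals, run the same binary filter (a positive Boolean function) at every level, and sum the outputs; for any filter with the stacking property this reproduces the direct multi-level result.

```python
import numpy as np

def threshold_decompose(x, levels):
    # split an integer signal with values in 0..levels-1 into binary
    # signals b_m[i] = 1 if x[i] >= m, for thresholds m = 1..levels-1
    return [(x >= m).astype(int) for m in range(1, levels)]

def binary_median3(b):
    # window-3 binary median: a positive Boolean function (majority of 3);
    # edges are handled by repeating the end samples
    p = np.pad(b, 1, mode='edge')
    return np.array([int(p[i] + p[i + 1] + p[i + 2] >= 2) for i in range(len(b))])

def stack_filter_median3(x, levels):
    # filter every threshold level with the same binary filter, then
    # "stack" (sum) the binary outputs back into a multi-level signal
    return sum(binary_median3(b) for b in threshold_decompose(x, levels))

x = np.array([3, 0, 3, 3, 1, 2, 2])
direct = np.array([int(np.median(np.pad(x, 1, mode='edge')[i:i + 3]))
                   for i in range(len(x))])
# threshold decomposition reproduces the direct multi-level median exactly
assert np.array_equal(stack_filter_median3(x, levels=4), direct)
print(stack_filter_median3(x, levels=4))  # [3 3 3 3 2 2 2]
```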

  4. Effective connectivity associated with auditory error detection in musicians with absolute pitch

    PubMed Central

    Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.

    2014-01-01

    It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left-to-right STG connections is important in the identification of self-voice error and sensorimotor integration in AP musicians. We also identified reduced connectivity of left hemisphere PM-to-STG connections in the AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings also suggest that individuals with AP are more adept at using pitch-related feedback from the right hemisphere. PMID:24634644

  5. Absolute Timing of the Crab Pulsar: X-ray, Radio, and Optical Observations

    NASA Astrophysics Data System (ADS)

    Ray, P. S.; Wood, K. S.; Wolff, M. T.; Lovellette, M. N.; Sheikh, S.; Moon, D.-S.; Eikenberry, S. S.; Roberts, M.; Bloom, E. D.; Tournear, D.; Saz Parkinson, P.; Reilly, K.

    2002-12-01

    We report on multiwavelength observations of the Crab Pulsar and compare the pulse arrival time at radio, IR, optical, and X-ray wavelengths. Comparing absolute arrival times at multiple energies can provide clues to the magnetospheric structure and emission region geometry. Absolute time calibration of each observing system is of paramount importance for these observations and we describe how this is done for each system. We directly compare arrival time determinations for 2-10 keV X-ray observations made contemporaneously with the PCA on the Rossi X-ray Timing Explorer and the USA Experiment on ARGOS. These two X-ray measurements employ very different means of measuring time and satellite position and thus have different systematic error budgets. The comparison with other wavelengths requires additional steps such as dispersion measure corrections and a precise definition of the "peak" of the light curve since the light curve shape varies with observing wavelength. We will describe each of these effects and quantify the magnitude of the systematic error that each may contribute. Basic research on X-ray Astronomy at NRL is funded by NRL/ONR.
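
    The dispersion measure correction mentioned above has a simple closed form: the cold-plasma delay relative to infinite frequency is Δt ≈ 4.149×10³ s × DM / f²(MHz). A hedged sketch (the DM value below is an illustrative figure for the Crab, not taken from this abstract):

```python
def dispersion_delay(dm, freq_mhz):
    # cold-plasma dispersion delay in seconds, relative to infinite
    # frequency, using the standard constant ~4.149e3 s MHz^2 cm^3 / pc
    return 4.149e3 * dm / freq_mhz ** 2

# Crab pulsar with an illustrative DM of ~56.8 pc/cm^3: a 1400 MHz radio
# arrival time needs a ~0.12 s correction before comparison with X-rays
print(round(dispersion_delay(56.8, 1400.0), 3))  # 0.12
```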

  6. Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification

    EPA Science Inventory

    Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...

  7. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    SciTech Connect

    Gustafson, William I.; Yu, Shaocai

    2012-10-23

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations of these metrics are only valid for datasets with positive means. This paper presents a methodology to use and interpret the metrics with datasets that have negative means. The updated formulations give identical results compared to the original formulations for the case of positive means, so researchers are encouraged to use the updated formulations going forward without introducing ambiguity.
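
    As a concrete sketch, the original positive-mean formulations can be written as follows (function names are ours, and the paper's updated negative-mean formulations are not reproduced here):

```python
import numpy as np

def nmbf(model, obs):
    # normalized mean bias factor: symmetric about zero, so a factor-of-2
    # overestimate (+1) mirrors a factor-of-2 underestimate (-1)
    m, o = np.mean(model), np.mean(obs)
    return m / o - 1.0 if m >= o else 1.0 - o / m

def nmaef(model, obs):
    # normalized mean absolute error factor: the absolute error normalized
    # by the observed (or, for underestimation, the modeled) total
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    denom = obs.sum() if model.mean() >= obs.mean() else model.sum()
    return np.abs(model - obs).sum() / denom

print(nmbf([2.0, 4.0], [1.0, 2.0]))   # 1.0: model high by a factor of 2
print(nmbf([1.0, 2.0], [2.0, 4.0]))   # -1.0: the mirrored underestimate
print(nmaef([2.0, 4.0], [1.0, 2.0]))  # 1.0
```

    Both metrics blow up as the relevant mean approaches zero, which is exactly the limitation the abstract's generalized formulation addresses.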

  8. 75 FR 15371 - Time Error Correction Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ... Energy Regulatory Commission 18 CFR Part 40 Time Error Correction Reliability Standard March 18, 2010... section 215 of the Federal Power Act, the Commission proposes to remand the proposed revised Time Error... Commission proposes to remand the Time Error Correction Reliability Standard (BAL-004-1) developed by...

  9. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.

  10. Alterations in Error-Related Brain Activity and Post-Error Behavior over Time

    ERIC Educational Resources Information Center

    Themanson, Jason R.; Rosen, Peter J.; Pontifex, Matthew B.; Hillman, Charles H.; McAuley, Edward

    2012-01-01

    This study examines the relation between the error-related negativity (ERN) and post-error behavior over time in healthy young adults (N = 61). Event-related brain potentials were collected during two sessions of an identical flanker task. Results indicated changes in ERN and post-error accuracy were related across task sessions, with more…

  11. Time dependent corrections to absolute gravity determinations in the establishment of modern gravity control

    NASA Astrophysics Data System (ADS)

    Dykowski, Przemyslaw; Krynski, Jan

    2015-04-01

    The establishment of modern gravity control using exclusively the absolute method of gravity determination has significant advantages (e.g. accuracy, time efficiency) over control established mostly with relative gravity measurements. The newly modernized gravity control in Poland consists of 28 fundamental stations (laboratory) and 168 base stations (PBOG14 - located in the field). Gravity at the fundamental stations was surveyed with the FG5-230 gravimeter of the Warsaw University of Technology, and at the base stations with the A10-020 gravimeter of the Institute of Geodesy and Cartography, Warsaw. This work concerns absolute gravity determinations at the base stations. Although free of common relative measurement errors (e.g. instrumental drift) and effects of network adjustment, absolute gravity determinations for the establishment of gravity control require advanced corrections for time dependent factors, i.e. tidal and ocean loading corrections, atmospheric corrections, and hydrological corrections that were not taken into account when establishing the previous gravity control in Poland. Currently available services and software make it possible to determine high-accuracy, high-temporal-resolution corrections for atmospheric (based on digital weather models, e.g. ECMWF) and hydrological (based on hydrological models, e.g. GLDAS/Noah) gravitational and loading effects. These corrections are mostly used for processing observations from superconducting gravimeters in the Global Geodynamics Project. For the area of Poland, the atmospheric correction based on weather models can differ from the standard atmospheric correction by as much as ±2 µGal. The hydrological model shows an annual variability of ±8 µGal. In addition, the standard tidal correction may differ from the one obtained from the local tidal model (based on tidal observations); at Borowa Gora Observatory this difference reaches the level of ±1.5 µGal. Overall the sum of atmospheric and

  12. A Mechanism for Error Detection in Speeded Response Time Tasks

    ERIC Educational Resources Information Center

    Holroyd, Clay B.; Yeung, Nick; Coles, Michael G. H.; Cohen, Jonathan D.

    2005-01-01

    The concept of error detection plays a central role in theories of executive control. In this article, the authors present a mechanism that can rapidly detect errors in speeded response time tasks. This error monitor assigns values to the output of cognitive processes involved in stimulus categorization and response generation and detects errors…

  13. Multi-channel data acquisition system with absolute time synchronization

    NASA Astrophysics Data System (ADS)

    Włodarczyk, Przemysław; Pustelny, Szymon; Budker, Dmitry; Lipiński, Marcin

    2014-11-01

    We present a low-cost, stand-alone, global-time-synchronized data acquisition system. Our prototype allows recording up to four analog signals with 16-bit resolution in variable ranges and a maximum sampling rate of 1000 S/s. The system simultaneously acquires readouts of external sensors, e.g. a magnetometer or thermometer. A complete data set, including a header containing a timestamp, is stored on a Secure Digital (SD) card or transmitted to a computer over Universal Serial Bus (USB). The estimated time accuracy of the data acquisition is better than ±200 ns. The device is intended for use in a global network of optical magnetometers (the Global Network of Optical Magnetometers for Exotic physics - GNOME), which aims to search for signals heralding physics beyond the Standard Model, such as those generated by the coupling of ordinary spin to exotic particles or by anomalous spin interactions.

  14. Absolute GPS Time Event Generation and Capture for Remote Locations

    NASA Astrophysics Data System (ADS)

    HIRES Collaboration

    The HiRes experiment operates fixed-location and portable lasers at remote desert locations to generate calibration events. One physics goal of HiRes is to search for unusual showers. These may appear similar to the upward- or horizontally-pointing laser tracks used for atmospheric calibration. It is therefore necessary to remove all of these calibration events from the HiRes detector data stream in a physics-blind manner. A robust and convenient "tagging" method is to generate the calibration events at precisely known times. To facilitate this tagging method we have developed the GPSY (Global Positioning System YAG) module. It uses a GPS receiver, an embedded processor, and additional timing logic to generate laser triggers at arbitrary programmed times and frequencies with better than 100 ns accuracy. The GPSY module has two trigger outputs (one-microsecond resolution) to trigger the laser flash-lamp and Q-switch, and one event capture input (25 ns resolution). The GPSY module can be programmed either through a front-panel menu-based interface or by a host computer via an RS232 serial interface. The latter also allows for computer logging of generated and captured event times. Details of the design and the implementation of these devices will be presented.

    1. Motivation: Air showers represent a small fraction, much less than a percent, of the total High Resolution Fly's Eye data sample. The bulk of the sample is calibration data. Most of this calibration data is generated by two types of systems that use lasers. One type sends light directly to the detectors via optical fibers to monitor detector gains (Girard 2001). The other sends a beam of light into the sky, and the scattered light that reaches the detectors is used to monitor atmospheric effects (Wiencke 1998). It is important that these calibration events be cleanly separated from the rest of the sample, both to provide a complete set of monitoring information, and more

  15. Inactivation of Cerebellar Cortical Crus II Disrupts Temporal Processing of Absolute Timing but not Relative Timing in Voluntary Movements

    PubMed Central

    Yamaguchi, Kenji; Sakurai, Yoshio

    2016-01-01

    Several recent studies have demonstrated that the cerebellum plays an important role in temporal processing at the scale of milliseconds. However, it is not clear whether intrinsic cerebellar function involves the temporal processing of discrete or continuous events. Temporal processing of discrete events works by counting absolute time like a stopwatch, while for continuous events it measures events at intervals. During the temporal processing of continuous events, animals might respond to the rhythmic timing of sequential responses rather than to the absolute durations of intervals. Here, we tested the contribution of the cerebellar cortex to temporal processing of absolute and relative timing in voluntary movements. We injected muscimol and baclofen into a part of the cerebellar cortex of rats. We then tested the accuracy of their absolute or relative timing prediction using two timing tasks requiring almost identical reaching movements. Inactivation of the cerebellar cortex disrupted accurate temporal prediction in the absolute timing task. The rats fell into two groups based on how their timing accuracy changed, following one of two distinct patterns: longer or shorter declines in the accuracy of learned intervals. However, inactivation of part of the cerebellar cortex did not affect the rats’ performance of the relative timing task. We conclude that a part of the cerebellar cortex, Crus II, contributes to the accurate temporal prediction of absolute timing and that the entire cerebellar cortex may be unnecessary when accurately knowing the absolute duration of an interval is not required for temporal prediction. PMID:26941621

  16. Time Interval Errors of a Flicker-noise Generator

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1984-01-01

    Time interval error (TIE) is the error of a clock at time t after it is synchronized and syntonized at time zero. Previous simulations of flicker FM noise yielded a mean-square TIE proportional to t^2. It is shown that the order of growth is actually t^2 log t. The earlier t^2 result is explained, and a modified version of the Barnes-Jarvis simulation algorithm is given.

  17. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    PubMed Central

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  18. Error Representation in Time For Compressible Flow Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    Time plays an essential role in most real-world fluid mechanics problems, e.g. turbulence, combustion, acoustic noise, moving geometries, blast waves, etc. Time-dependent calculations now dominate the computational landscape at the various NASA Research Centers, but the accuracy of these computations is often not well understood. In this presentation, we investigate error representation (and error control) for time-periodic problems as a prelude to investigating the feasibility of error control for stationary statistics and space-time averages. These statistics and averages (e.g. time-averaged lift and drag forces) are often the output quantities sought by engineers. For systems such as the Navier-Stokes equations, pointwise error estimates deteriorate rapidly with increasing Reynolds number, while statistics and averages may remain well behaved.

  19. Objective Error Criterion for Evaluation of Mapping Accuracy Based on Sensor Time-of-Flight Measurements

    PubMed Central

    Barshan, Billur

    2008-01-01

    An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used for the assessment of goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
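
    The Hausdorff metric used above for comparison can be computed directly for small point sets. A minimal sketch (the coordinates are made up for illustration):

```python
import numpy as np

def hausdorff(A, B):
    # symmetric Hausdorff distance between two 2-D point sets: the largest
    # distance from any point in one set to its nearest neighbor in the other
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

map_pts = [(0.0, 0.0), (1.0, 0.1), (2.0, -0.1)]  # e.g. an acquired map
ref_pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # e.g. the laser reference
print(hausdorff(map_pts, ref_pts))  # ~0.1: the worst mapped-point offset
```

    Because the Hausdorff distance reports only the single worst offset, a median error criterion (as compared in the abstract) is far less sensitive to isolated outlier points.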

  20. Absolute frequency measurement at the 10^-16 level based on International Atomic Time

    NASA Astrophysics Data System (ADS)

    Hachisu, H.; Fujieda, M.; Kumagai, M.; Ido, T.

    2016-06-01

    Referring to International Atomic Time (TAI), we measured the absolute frequency of the 87Sr lattice clock with an uncertainty of 1.1 × 10^-15. Unless an optical clock is operated continuously over the five days of the TAI grid, the dead time uncertainty must be evaluated in order to use the available five-day average of the local frequency reference. We homogeneously distributed intermittent measurements over the five-day TAI grid, which reduced the dead time uncertainty to the low 10^-16 level. Three campaigns of five- (or four-) day consecutive measurements yielded an absolute frequency of the 87Sr clock transition of 429 228 004 229 872.85 (47) Hz, where the systematic uncertainty of the 87Sr optical frequency standard amounts to 8.6 × 10^-17.

  1. Perturbative approach to continuous-time quantum error correction

    NASA Astrophysics Data System (ADS)

    Ippoliti, Matteo; Mazza, Leonardo; Rizzi, Matteo; Giovannetti, Vittorio

    2015-04-01

    We present a discussion of the continuous-time quantum error correction introduced by J. P. Paz and W. H. Zurek [Proc. R. Soc. A 454, 355 (1998), 10.1098/rspa.1998.0165]. We study the general Lindbladian which describes the effects of both noise and error correction in the weak-noise (or strong-correction) regime through a perturbative expansion. We use this tool to derive quantitative aspects of the continuous-time dynamics both in general and through two illustrative examples: the three-qubit and five-qubit stabilizer codes, which can be independently solved by analytical and numerical methods and then used as benchmarks for the perturbative approach. The perturbatively accessible time frame features a short initial transient in which error correction is ineffective, followed by a slow decay of the information content consistent with the known facts about discrete-time error correction in the limit of fast operations. This behavior is explained in the two case studies through a geometric description of the continuous transformation of the state space induced by the combined action of noise and error correction.

  2. Absolute value optimization to estimate phase properties of stochastic time series

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1977-01-01

    Most existing deconvolution techniques are incapable of determining phase properties of wavelets from time series data; to assure a unique solution, minimum phase is usually assumed. It is demonstrated, for moving average processes of order one, that deconvolution filtering using the absolute value norm provides an estimate of the wavelet shape that has the correct phase character when the random driving process is nonnormal. Numerical tests show that this result probably applies to more general processes.

  3. Heat conduction errors and time lag in cryogenic thermometer installations

    NASA Technical Reports Server (NTRS)

    Warshawsky, I.

    1973-01-01

    Installation practices are recommended that will increase rate of heat exchange between the thermometric sensing element and the cryogenic fluid and that will reduce the rate of undesired heat transfer to higher-temperature objects. Formulas and numerical data are given that help to estimate the magnitude of heat-conduction errors and of time lag in response.

  4. Real-Time Minimization of Tracking Error for Aircraft Systems

    NASA Technical Reports Server (NTRS)

    Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John

    2013-01-01

    This technology presents a novel, stable, discrete-time adaptive law for flight control in a direct adaptive control (DAC) framework. When errors are not present, the original control design is tuned for optimal performance. Adaptive control works toward recovering nominal performance whenever the design has modeling uncertainties or errors, or when the vehicle undergoes a substantial flight-configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion, and the return of this component's value toward its reference value is not proceeding according to the expected controller characteristics, then the neural-network (NN) modeling of aircraft operation may be changed.

  5. Real-Time Estimation Of Aiming Error Of Spinning Antenna

    NASA Technical Reports Server (NTRS)

    Dolinsky, Shlomo

    1992-01-01

    Spinning-spacecraft dynamics and amplitude variations in communications links studied from received-signal fluctuations. Mathematical model and associated analysis procedure provide real-time estimates of aiming error of remote rotating transmitting antenna radiating constant power in narrow, pencil-like beam from spinning platform, and current amplitude of received signal. Estimates useful in analyzing and enhancing calibration of communication system, and in analyzing complicated dynamic effects in spinning platform and antenna-aiming mechanism.

  6. Non-iterative adaptive time stepping with truncation error control for simulating variable-density flow

    NASA Astrophysics Data System (ADS)

    Hirthe, E. M.; Graf, T.

    2012-04-01

    Fluid density variations occur due to changes in the solute concentration, temperature, and pressure of groundwater. Examples are interaction between freshwater and seawater, radioactive waste disposal, groundwater contamination, and geothermal energy production. The physical coupling between flow and transport introduces non-linearity into the governing mathematical equations, such that solving variable-density flow problems typically requires very long computational times. Computational efficiency can be attained through the use of adaptive time-stepping schemes. The aim of this work is therefore to apply a non-iterative adaptive time-stepping scheme based on the local truncation error to variable-density flow problems. The new scheme is implemented in the code of the HydroGeoSphere model (Therrien et al., 2011) and applied to the Elder (1967) and Shikaze et al. (1998) problems of free convection in porous and fractured-porous media, respectively. Numerical simulations demonstrate that non-iterative time stepping based on local truncation error control fully automates the time step size and efficiently limits the temporal discretization error to the user-defined tolerance. Results for the Elder problem show that the new time-stepping scheme is significantly more efficient than uniform time stepping when high accuracy is required. Results for the Shikaze problem reveal that the new scheme is considerably faster than conventional time stepping where time step sizes are either constant or controlled by absolute head/concentration changes. Future research will focus on the application of the new time-stepping scheme to variable-density flow in complex real-world fractured-porous rock.
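
    The core idea, step-size control from a local truncation error estimate without iteration, can be sketched on a scalar ODE. This is a generic step-doubling sketch under our own assumptions, not the HydroGeoSphere implementation:

```python
import numpy as np

def adaptive_euler(f, y0, t0, t1, dt0, tol):
    # non-iterative adaptive stepping: estimate the local truncation error
    # by comparing one full Euler step against two half steps, then grow
    # or shrink dt to keep that estimate near the user-defined tolerance
    t, y, dt = t0, y0, dt0
    history = [(t0, y0)]
    while t < t1:
        dt = min(dt, t1 - t)
        y_full = y + dt * f(t, y)                  # one step of size dt
        y_half = y + 0.5 * dt * f(t, y)            # two steps of size dt/2
        y_half = y_half + 0.5 * dt * f(t + 0.5 * dt, y_half)
        err = abs(y_half - y_full)                 # local truncation error estimate
        if err <= tol or dt < 1e-12:
            t, y = t + dt, y_half                  # accept the more accurate value
            history.append((t, y))
        # rescale dt from the error ratio (Euler: local error ~ dt^2)
        dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-30))))
    return history

# decay problem y' = -y, y(0) = 1, with exact solution exp(-t)
hist = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 2.0, 0.1, 1e-5)
t_end, y_end = hist[-1]
print(abs(y_end - np.exp(-2.0)))  # small global error
```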

  7. Membrane electroporation: The absolute rate equation and nanosecond time scale pore creation

    NASA Astrophysics Data System (ADS)

    Vasilkoski, Zlatko; Esser, Axel T.; Gowrishankar, T. R.; Weaver, James C.

    2006-08-01

    The recent applications of nanosecond, megavolt-per-meter electric field pulses to biological systems show striking cellular and subcellular electric-field-induced effects and revive interest in the biophysical mechanism of electroporation. We first show that absolute rate theory, with experimentally based parameter input, is consistent with membrane pore creation on a nanosecond time scale. Second, we use a Smoluchowski equation-based model to formulate a self-consistent theoretical approach. The analysis is carried out for a planar cell membrane patch exposed to a 10 ns trapezoidal pulse with 1.5 ns rise and fall times. Results demonstrate reversible supraelectroporation behavior in terms of transmembrane voltage, pore density, membrane conductance, fractional aqueous area, pore distribution, and average pore radius. We further motivate and justify the use of Krassowska’s asymptotic electroporation model for analyzing nanosecond pulses, showing that pore creation dominates the electrical response and that pore expansion is a negligible effect on this time scale.

  8. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

    Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
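
    Output-error estimation in its simplest form: simulate the model output, compare it with the measured output, and update the parameters with Gauss-Newton using the output sensitivities. A one-parameter sketch on synthetic data (the model, values, and noise level are ours, not from the report):

```python
import numpy as np

def simulate(a, t, y0=1.0):
    # model output for the assumed dynamics y' = -a*y
    return y0 * np.exp(-a * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
z = simulate(0.7, t) + 0.01 * rng.standard_normal(t.size)  # measured output

a = 0.2                                   # initial parameter guess
for _ in range(10):
    y = simulate(a, t)                    # simulated model output
    residual = z - y                      # the output error
    sens = -t * y                         # output sensitivity dy/da
    a += np.sum(sens * residual) / np.sum(sens * sens)  # Gauss-Newton update

print(a)  # converges close to the true value 0.7
```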

  9. Absolutely Exponential Stability and Temperature Control for Gas Chromatograph System Under Dwell Time Switching Techniques.

    PubMed

    Sun, Xi-Ming; Wang, Xue-Fang; Tan, Ying; Wang, Xiao-Liang; Wang, Wei

    2016-06-01

    This paper provides a design strategy for temperature control of the gas chromatograph. Usually the gas chromatograph is modeled by a simple first-order system with a time delay, and a proportional-integral (PI) controller is widely used to regulate the output of the gas chromatograph to the desired temperature. As the characteristics of the gas chromatograph vary across temperature ranges, a single-model-based PI controller cannot work well when the output temperature varies from one range to another. Moreover, the presence of various disturbances further deteriorates the performance. In order to improve the accuracy of the temperature control, multiple models are used for the different temperature ranges. With a PI controller designed for each model accordingly, a delay-dependent switching control scheme using the dwell time technique is proposed to ensure the absolute exponential stability of the closed loop. Experimental results demonstrate the effectiveness of the proposed switching technique. PMID:26316283
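
    The PI regulation described above can be sketched in a few lines. This is a generic first-order plant with the time delay omitted for brevity, and all gains and plant constants are made-up illustrative values, not from the paper:

```python
def run_pi(setpoint, kp, ki, K=2.0, tau=5.0, dt=0.1, steps=2000):
    # discrete PI loop driving a first-order plant y' = (K*u - y)/tau
    y, integ = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y                 # temperature error
        integ += e * dt                  # integral of the error
        u = kp * e + ki * integ          # PI control law
        y += dt * (K * u - y) / tau      # explicit Euler plant update
    return y

print(run_pi(100.0, kp=1.0, ki=0.2))  # settles at the 100.0 setpoint
```

    The integral term removes the steady-state error a pure proportional controller would leave; the multi-model switching in the paper addresses the fact that one (K, tau) pair is not valid over the whole temperature range.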

  10. Time-series modeling and prediction of global monthly absolute temperature for environmental decision making

    NASA Astrophysics Data System (ADS)

    Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun

    2013-03-01

    A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, using polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, using seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform consistently well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
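The deterministic part of the framework above (a polynomial trend plus Fourier seasonal terms) can be fit by ordinary least squares, with the residual then handed to a SARIMA model. The polynomial order, number of harmonics, and synthetic data below are arbitrary choices for illustration, not the paper's configuration.

```python
import numpy as np

def fit_trend_plus_seasonal(y, period=12, poly_order=2, harmonics=2):
    """Least-squares fit of a polynomial trend plus Fourier seasonal
    cycle to a monthly series y; returns fitted values and residual."""
    t = np.arange(len(y), dtype=float)
    cols = [t**k for k in range(poly_order + 1)]        # trend terms
    for h in range(1, harmonics + 1):                   # seasonal terms
        cols.append(np.sin(2 * np.pi * h * t / period))
        cols.append(np.cos(2 * np.pi * h * t / period))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    return fitted, y - fitted  # the residual would go to a SARIMA model

# Synthetic monthly "temperature": linear trend + annual cycle + noise
rng = np.random.default_rng(1)
t = np.arange(240)
y = 14.0 + 0.002 * t + 3.0 * np.sin(2 * np.pi * t / 12) + 0.1 * rng.normal(size=240)
fitted, resid = fit_trend_plus_seasonal(y)
print(round(float(np.std(resid)), 2))  # residual std near the 0.1 noise level
```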

  11. Relationship between Brazilian airline pilot errors and time of day.

    PubMed

    de Mello, M T; Esteves, A M; Pires, M L N; Santos, D C; Bittencourt, L R A; Silva, R S; Tufik, S

    2008-12-01

    Flight safety is one of the most important and frequently discussed issues in aviation. Recent accident inquiries have raised questions as to how the work of flight crews is organized and the extent to which these conditions may have been contributing factors to accidents. Fatigue is based on physiologic limitations, which are reflected in performance deficits. The purpose of the present study was to provide an analysis of the periods of the day in which pilots working for a commercial airline presented major errors. Errors made by 515 captains and 472 co-pilots were analyzed using data from flight operation quality assurance systems. To analyze the times of day (shifts) during which incidents occurred, we divided the light-dark cycle (24:00) into four periods: morning, afternoon, night, and early morning. The differences in risk during the day were reported as the ratios of morning to afternoon, morning to night, and morning to early morning error rates. For the purposes of this research, level 3 events alone were taken into account, since these were the most serious, in which company operational limits were exceeded or established procedures were not followed. According to airline flight schedules, 35% of flights take place in the morning period, 32% in the afternoon, 26% at night, and 7% in the early morning. Data showed that the risk of errors increased by almost 50% in the early morning relative to the morning period (ratio of 1:1.46). For the afternoon period, the ratio was 1:1.04, and for the night a ratio of 1:1.05 was found. These results showed that the early morning period represented a greater risk of attention problems and fatigue. PMID:19148377
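The risk ratios above are error rates normalized by each period's share of flights. The sketch below reproduces that arithmetic; the event counts are hypothetical, chosen only to show the computation, and are not the study's data.

```python
# Flight share per period (from the abstract) and hypothetical
# level-3 event counts -- NOT the study's actual numbers.
flight_share = {"morning": 0.35, "afternoon": 0.32, "night": 0.26, "early": 0.07}
events = {"morning": 70, "afternoon": 66, "night": 55, "early": 20}

def risk_ratio(period, baseline="morning"):
    """Error rate of `period` relative to the baseline period."""
    rate = {p: events[p] / flight_share[p] for p in events}
    return rate[period] / rate[baseline]

for p in ("afternoon", "night", "early"):
    print(p, round(risk_ratio(p), 2))
```

With these made-up counts the early-morning ratio comes out near 1.4, illustrating how a small share of flights can still carry a disproportionate error rate.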

  12. The Question of Absolute Space and Time Directions in Relation to Molecular Chirality, Parity Violation, and Biomolecular Homochirality

    SciTech Connect

    Quack, Martin

    2001-03-21

    The questions of the absolute directions of space and time, or the “observability” of absolute time direction as well as absolute handedness (left or right), are related to the fundamental symmetries of physics C, P, T as well as their combinations, in particular CPT, and their violations, such as parity violation. At the same time there is a relation to certain still open questions in chemistry concerning the fundamental physical-chemical principles of molecular chirality and in biochemistry concerning the selection of homochirality in evolution. In the lecture we shall introduce the concepts and then report new theoretical results from our work on parity violation in chiral molecules, showing order of magnitude increases with respect to previously accepted values. We discuss as well our current experimental efforts. We shall briefly mention the construction of an absolute molecular clock.

  14. Alignment between seafloor spreading directions and absolute plate motions through time

    NASA Astrophysics Data System (ADS)

    Williams, Simon E.; Flament, Nicolas; Müller, R. Dietmar

    2016-02-01

    The history of seafloor spreading in the ocean basins provides a detailed record of relative motions between Earth's tectonic plates since Pangea breakup. Determining how tectonic plates have moved relative to the Earth's deep interior is more challenging. Recent studies of contemporary plate motions have demonstrated links between relative plate motion and absolute plate motion (APM), and with seismic anisotropy in the upper mantle. Here we explore the link between spreading directions and APM since the Early Cretaceous. We find a significant alignment between APM and spreading directions at mid-ocean ridges; however, the degree of alignment is influenced by geodynamic setting, and is strongest for mid-Atlantic spreading ridges between plates that are not directly influenced by time-varying slab pull. In the Pacific, significant mismatches between spreading and APM direction may relate to a major plate-mantle reorganization. We conclude that spreading fabric can be used to improve models of APM.

  15. A California statewide three-dimensional seismic velocity model from both absolute and differential times

    USGS Publications Warehouse

    Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.

    2010-01-01

    We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP/VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE direction and 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.

  16. Detecting the errors in solar system ephemeris by pulsar timing

    NASA Astrophysics Data System (ADS)

    Li, Liang; Guo, Li; Wang, Guang-Li

    2016-04-01

    Pulsar timing uses planetary ephemerides to convert the measured pulse arrival time at an observatory to the arrival time at the Solar System barycenter (SSB). Since these planetary ephemerides cannot be perfect, we developed a method of detecting the associated errors based on a pulsar timing array. Using observations of 18 millisecond pulsars from the Parkes Pulsar Timing Array, we estimated the error in the Earth-to-SSB vector of JPL DE421, which reflects the offset of the ephemeris origin with respect to the ideal SSB, over different piecewise intervals of the pulsar timing data, and found consistent results. To investigate the stability and reliability of our method, we divided the pulsars into two groups. Both groups yield largely consistent results, and the uncertainty of the Earth-SSB vector is several hundred meters, which is consistent with the accuracy of JPL DE421. As observational accuracy improves, pulsar timing will help to improve the solar system ephemeris in the future.

  17. Absolute calibration method for nanosecond-resolved, time-streaked, fiber optic light collection, spectroscopy systems

    SciTech Connect

    Johnston, Mark D.; Oliver, Bryan V.; Droemer, Darryl W.; Frogget, Brent; Crain, Marlon D.; Maron, Yitzhak

    2012-08-15

    This paper describes a convenient and accurate method to calibrate fast (<1 ns resolution) streaked, fiber optic light collection, spectroscopy systems. Such systems are inherently difficult to calibrate due to the lack of sufficiently intense, calibrated light sources. Such a system is used to collect spectral data on plasmas generated in electron beam diodes fielded on the RITS-6 accelerator (8-12 MV, 140-200 kA) at Sandia National Laboratories. On RITS, plasma light is collected through a small diameter (200 μm) optical fiber and recorded on a fast streak camera at the output of a 1 meter Czerny-Turner monochromator. For this paper, a 300 W xenon short arc lamp (Oriel Model 6258) was used as the calibration source. Since the radiance of the xenon arc varies from cathode to anode, just the area around the tip of the cathode ("hotspot") was imaged onto the fiber, to produce the highest intensity output. To compensate for chromatic aberrations, the signal was optimized at each wavelength measured. Output power was measured using 10 nm bandpass interference filters and a calibrated photodetector. These measurements give power at discrete wavelengths across the spectrum, and when linearly interpolated, provide a calibration curve for the lamp. The shape of the spectrum is determined by the collective response of the optics, monochromator, and streak tube across the spectral region of interest. The ratio of the spectral curve to the measured bandpass filter curve at each wavelength produces a correction factor (Q) curve. This curve is then applied to the experimental data and the resultant spectra are given in absolute intensity units (photons/sec/cm2/steradian/nm). Error analysis shows this method to be accurate to within +/- 20%, which represents a high level of accuracy for this type of measurement.
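A minimal numerical version of the calibration procedure described above: interpolate discrete bandpass-filter power measurements into a lamp calibration curve, ratio it against the system's measured response to the same lamp, and apply the resulting correction factor Q to raw data. All wavelengths and values below are made-up placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibrated lamp power at the bandpass-filter wavelengths
filter_wl = np.array([400.0, 450.0, 500.0, 550.0, 600.0])      # nm
filter_power = np.array([1.0, 1.8, 2.5, 2.9, 3.0])             # arb. units

# Lamp calibration curve by linear interpolation between filters
wl = np.linspace(400.0, 600.0, 201)
lamp_curve = np.interp(wl, filter_wl, filter_power)

# Pretend system response to the lamp: the true spectrum distorted by
# a wavelength-dependent instrument sensitivity
system_response = lamp_curve * (0.3 + 0.002 * (wl - 400.0))

# Correction factor Q: known lamp spectrum over measured response
Q = lamp_curve / system_response

def calibrate(raw_spectrum):
    """Convert raw counts on the wl grid to calibrated intensity."""
    return raw_spectrum * Q

# Applying Q to the system's own lamp measurement recovers the lamp curve
print(round(float(calibrate(system_response)[0]), 2))  # 1.0
```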

  18. Absolute calibration method for nanosecond-resolved, time-streaked, fiber optic light collection, spectroscopy systems

    NASA Astrophysics Data System (ADS)

    Johnston, Mark D.; Oliver, Bryan V.; Droemer, Darryl W.; Frogget, Brent; Crain, Marlon D.; Maron, Yitzhak

    2012-08-01

    This paper describes a convenient and accurate method to calibrate fast (<1 ns resolution) streaked, fiber optic light collection, spectroscopy systems. Such systems are inherently difficult to calibrate due to the lack of sufficiently intense, calibrated light sources. Such a system is used to collect spectral data on plasmas generated in electron beam diodes fielded on the RITS-6 accelerator (8-12 MV, 140-200 kA) at Sandia National Laboratories. On RITS, plasma light is collected through a small diameter (200 μm) optical fiber and recorded on a fast streak camera at the output of a 1 meter Czerny-Turner monochromator. For this paper, a 300 W xenon short arc lamp (Oriel Model 6258) was used as the calibration source. Since the radiance of the xenon arc varies from cathode to anode, just the area around the tip of the cathode ("hotspot") was imaged onto the fiber, to produce the highest intensity output. To compensate for chromatic aberrations, the signal was optimized at each wavelength measured. Output power was measured using 10 nm bandpass interference filters and a calibrated photodetector. These measurements give power at discrete wavelengths across the spectrum, and when linearly interpolated, provide a calibration curve for the lamp. The shape of the spectrum is determined by the collective response of the optics, monochromator, and streak tube across the spectral region of interest. The ratio of the spectral curve to the measured bandpass filter curve at each wavelength produces a correction factor (Q) curve. This curve is then applied to the experimental data and the resultant spectra are given in absolute intensity units (photons/sec/cm2/steradian/nm). Error analysis shows this method to be accurate to within +/- 20%, which represents a high level of accuracy for this type of measurement.

  19. Evaluating multi-exposure speckle imaging estimates of absolute autocorrelation times.

    PubMed

    Kazmi, S M Shams; Wu, Rebecca K; Dunn, Andrew K

    2015-08-01

    Multi-exposure speckle imaging (MESI) is a camera-based flow-imaging technique for quantitative blood-flow monitoring that maps the speckle-contrast dependence on camera exposure duration. The ability of laser speckle contrast imaging to measure the temporal dynamics of backscattered and interfering coherent fields, in terms of the accuracy of autocorrelation measurements, is a major unresolved issue in quantitative speckle flowmetry. MESI fits for a number of parameters, including an estimate of the electric field autocorrelation decay time from the imaged speckles. We compare the MESI-determined correlation times in vitro and in vivo with accepted true values from direct temporal measurements acquired with a photon-counting photomultiplier tube and an autocorrelator board. The correlation times estimated by MESI in vivo remain on average within 14±11% of those obtained from direct temporal autocorrelation measurements, demonstrating that MESI yields highly comparable statistics of the time-varying fields that can be useful for applications seeking not only quantitative blood flow dynamics but also absolute perfusion. PMID:26258378
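The fitting idea can be illustrated with the simplified speckle visibility expression for a single dynamic scattering population, K²(T) = β(e^(−2x) − 1 + 2x)/(2x²) with x = T/τc; the full MESI model used in the paper adds static-scattering and noise terms. The exposure range, β, and "true" correlation time below are synthetic.

```python
import numpy as np

def K2(T, tau_c, beta=1.0):
    """Simplified speckle variance vs. exposure duration T for a single
    dynamic population (no static or noise terms, unlike full MESI)."""
    x = T / tau_c
    return beta * (np.exp(-2 * x) - 1 + 2 * x) / (2 * x**2)

# Simulate multi-exposure contrast for a "true" correlation time
exposures = np.logspace(-5, -1, 15)          # 10 us to 100 ms
true_tau = 1e-3                              # 1 ms
measured = K2(exposures, true_tau)

# Recover tau_c by coarse grid search (a stand-in for a nonlinear fit)
grid = np.logspace(-5, -1, 2000)
errs = [np.sum((K2(exposures, tc) - measured) ** 2) for tc in grid]
best = grid[int(np.argmin(errs))]
print(f"{best:.1e}")                         # recovers ~1e-03 s
```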

  20. An Integrated Model of Choices and Response Times in Absolute Identification

    ERIC Educational Resources Information Center

    Brown, Scott D.; Marley, A. A. J.; Donkin, Christopher; Heathcote, Andrew

    2008-01-01

    Recent theoretical developments in the field of absolute identification have stressed differences between relative and absolute processes, that is, whether stimulus magnitudes are judged relative to a shorter term context provided by recently presented stimuli or a longer term context provided by the entire set of stimuli. The authors developed a…

  1. Voice Onset Time in Consonant Cluster Errors: Can Phonetic Accommodation Differentiate Cognitive from Motor Errors?

    ERIC Educational Resources Information Center

    Pouplier, Marianne; Marin, Stefania; Waltl, Susanne

    2014-01-01

    Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…

  2. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... it, but, in any event, the agency must act promptly in doing so. (b) Participant's discovery of error... of employing agency errors; time limitations. (a) Agency's discovery of error. Upon discovery of an error made within the past six months involving the correct or timely remittance of payments to the...

  3. Time-resolved near-infrared technique for bedside monitoring of absolute cerebral blood flow

    NASA Astrophysics Data System (ADS)

    Diop, Mamadou; Tichauer, Kenneth M.; Elliott, Jonathan T.; Migueis, Mark; Lee, Ting-Yim; St. Lawrence, Keith

    2010-02-01

    A primary focus of neurointensive care is monitoring the injured brain to detect harmful events that can impair cerebral blood flow (CBF). Since current non-invasive bedside methods can only indirectly assess blood flow, the goal of this research was to develop an optical technique for measuring absolute CBF. A time-resolved near-infrared (NIR) apparatus was built and its ability to accurately measure changes in optical properties was demonstrated in tissue-mimicking phantoms. The time-resolved system was combined with a bolus-tracking method for measuring CBF using the dye indocyanine green (ICG) as an intravascular flow tracer. Cerebral blood flow was measured in newborn piglets and for comparison, CBF was concurrently measured using a previously developed continuous-wave NIR method. Measurements were acquired with both techniques under three conditions: normocapnia, hypercapnia and following occlusion of the carotid arteries. Mean CBF values (N = 3) acquired with the TR-NIR system were 31.9 +/- 11.7 ml/100g/min during occlusion, 39.7 +/- 1.6 ml/100g/min at normocapnia, and 58.8 +/- 9.9 ml/100g/min at hypercapnia. Results demonstrate that the developed TR-NIR technique has the sensitivity to measure changes in CBF; however, the CBF measurements were approximately 25% lower than the values obtained with the CW-NIRS technique.

  4. Validation of absolute quantitative real-time PCR for the diagnosis of Streptococcus agalactiae in fish.

    PubMed

    Sebastião, Fernanda de A; Lemos, Eliana G M; Pilarski, Fabiana

    2015-12-01

    Streptococcus agalactiae (GBS) are Gram-positive cocci responsible for substantial losses in tilapia fish farms in Brazil and worldwide. It causes septicemia, meningoencephalitis and mortality of whole shoals that can occur within 72 h. Thus, diagnostic methods are needed that are rapid, specific and sensitive. In this study, a pair of specific primers for GBS was generated based on the cfb gene sequence and initially evaluated by conventional PCR. The protocols for absolute quantitative real-time PCR (qPCR) were then adapted to validate the technique for the identification and quantification of GBS isolated by real-time detection of amplicons using fluorescence measurements. Finally, an infectivity test was conducted in tilapia infected with GBS strains. Total DNA from the host brain was subjected to the same technique, and the strains were re-isolated to validate Koch's postulates. The assay showed 100% specificity for the other bacterial species evaluated and a sensitivity of 367 gene copies per 20 mg of brain tissue within 4 h, making this test a valuable tool for health monitoring programs. PMID:26519771
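Absolute quantification against a plasmid standard curve, as validated above, reduces to mapping a measured quantification cycle (Cq) onto a log-linear fit of the standards. The dilution series and Cq values below are invented to show the arithmetic only, not the study's data.

```python
import numpy as np

# Hypothetical plasmid standard dilution series (copies) and Cq values
copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
cq     = np.array([12.1, 15.4, 18.8, 22.1, 25.5])

# Log-linear standard curve: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), cq, 1)

# Amplification efficiency implied by the slope (ideal slope ~ -3.32,
# i.e. efficiency ~ 1.0, a doubling per cycle)
efficiency = 10 ** (-1 / slope) - 1

def quantify(sample_cq):
    """Absolute copy number for an unknown sample's Cq."""
    return 10 ** ((sample_cq - intercept) / slope)

print(round(efficiency, 2), f"{quantify(20.0):.2e}")
```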

  5. Absolutely referenced distance measurement by combination of time-of-flight and digital holographic methods

    NASA Astrophysics Data System (ADS)

    Fratz, Markus; Weimann, Claudius; Wölfelschneider, Harald; Koos, Christian; Höfler, Heinrich

    2014-03-01

    We present a novel optical system for distance measurement based on the combination of optical time-of-flight metrology and digital holography. In addition, absolute calibration of the measurement results is performed by a sideband modulation technique. For the time-of-flight technique, a diode laser (1470 nm) is modulated sinusoidally (128 MHz). The light reflected and scattered by an object is detected by an avalanche photodiode. The phase difference between the sent and detected modulation is a measure of the distance between the sensor and the object. This allows for distance measurements up to 1.17 m with resolutions of ~2 mm. The interferometric setup uses 4 whispering-gallery-mode lasers to perform multiwavelength holographic distance measurements. The four wavelengths span the range from 1547 nm to 1554 nm. The unambiguous measurement range of the interferometric setup is approx. 7 mm, while resolutions of 0.6 μm are observed. Both setups are integrated into one system and perform measurements synchronously. Exact knowledge of the frequency differences of hundreds of GHz between the four lasers is crucial for the interferometric fine-scale measurement. For this aim, the light of the lasers is phase-modulated with frequencies of 36 GHz and 40 GHz to produce optical sidebands of higher order, thus generating beat signals in the hundreds-of-MHz regime, which can be measured electronically. The setup shows a way to measure distances in the meter range with sub-micron resolution.
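The coarse time-of-flight range quoted above follows directly from the modulation frequency: the phase of a sinusoidal modulation wraps every half modulation wavelength, giving an unambiguous range of c/(2f). The sketch below checks that number and converts a phase reading into a distance.

```python
import math

C = 299_792_458.0          # speed of light, m/s
F_MOD = 128e6              # sinusoidal modulation frequency, Hz

# Unambiguous range of the phase measurement: c / (2 f)
unambiguous = C / (2 * F_MOD)
print(round(unambiguous, 2))                  # 1.17 m, as in the abstract

def phase_to_distance(phase_rad):
    """Distance from the phase shift between sent and received
    modulation (valid within the unambiguous range)."""
    return (phase_rad / (2 * math.pi)) * unambiguous

print(round(phase_to_distance(math.pi), 3))   # half the range: 0.586 m
```

The interferometric channel plays the same game at a much shorter synthetic wavelength, which is why it delivers micron-scale resolution but only a few millimetres of unambiguous range, and why the two channels are combined.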

  6. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  7. Error analysis and modeling for the time grating length measurement system

    NASA Astrophysics Data System (ADS)

    Gao, Zhonghua; Fen, Jiqin; Zheng, Fangyan; Chen, Ziran; Peng, Donglin; Liu, Xiaokang

    2013-10-01

    By analyzing the errors of a length measurement system whose principal measuring component is a linear time grating, we found that studying the error behavior is important for reducing system errors and optimizing the system structure. The main error sources in the length measuring system, including the time grating sensor, the slideway, and the cantilever, were studied, and the total error was obtained. We also established a mathematical model of the errors of the length measurement system and used this model to calibrate the errors present in the system. In addition, we developed a set of experimental devices in which a laser interferometer was used to calibrate the length measurement system errors. After error calibration, the accuracy of the measurement system was improved from the original 36 μm/m to 14 μm/m. The fact that the experimental results are consistent with the simulation results shows that the mathematical error model is suitable for the length measuring system.

  8. Method for quantum-jump continuous-time quantum error correction

    NASA Astrophysics Data System (ADS)

    Hsu, Kung-Chuan; Brun, Todd A.

    2016-02-01

    Continuous-time quantum error correction (CTQEC) is a technique for protecting quantum information against decoherence, where both the decoherence and error correction processes are considered continuous in time. Given any [[n ,k ,d

  9. ABSOLUTE TIMING OF THE CRAB PULSAR WITH THE INTEGRAL/SPI TELESCOPE

    SciTech Connect

    Molkov, S.; Jourdain, E.; Roques, J. P.

    2010-01-01

    We have investigated the pulse shape evolution of the Crab pulsar emission in the hard X-ray domain of the electromagnetic spectrum. In particular, we have studied the alignment of the Crab pulsar phase profiles measured in the hard X-rays and in other wavebands. To obtain the hard X-ray pulse profiles, we have used six years (2003-2009, with a total exposure of about 4 Ms) of publicly available data of the SPI telescope on-board the International Gamma-Ray Astrophysics Laboratory observatory, folded with the pulsar time solution derived from the Jodrell Bank Crab Pulsar Monthly Ephemeris. We found that the main pulse in the hard X-ray 20-100 keV energy band leads the radio one by 8.18 ± 0.46 milliperiods in phase, or 275 ± 15 μs in time. Quoted errors represent only statistical uncertainties. Our systematic error is estimated to be ~40 μs and is mainly caused by the radio measurement uncertainties. In hard X-rays, the average distance between the main pulse and interpulse on the phase plane is 0.3989 ± 0.0009. To compare our findings in hard X-rays with the soft 2-20 keV X-ray band, we have used data of quasi-simultaneous Crab observations with the proportional counter array monitor on-board the Rossi X-Ray Timing Explorer mission. The time lag and the pulse separation values measured in the 3-20 keV band are 0.00933 ± 0.00016 (corresponding to 310 ± 6 μs) and 0.40016 ± 0.00028 parts of the cycle, respectively. While the pulse separation values measured in soft X-rays and hard X-rays agree, the time lags are statistically different. Additional analysis shows that the delay between the radio and X-ray signals varies with energy in the 2-300 keV energy range. We explain such behavior as due to the superposition of two independent components responsible for the Crab pulsed emission in this energy band.
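The conversion between a phase lead in milliperiods and a time lead follows from the pulsar's rotation period. Using an approximate Crab period of ~33.6 ms (an assumption here, not a value stated in the abstract) reproduces the quoted numbers.

```python
CRAB_PERIOD_S = 0.0336        # approximate Crab rotation period (~33.6 ms)

def milliperiods_to_us(mp):
    """Convert a phase offset in milliperiods to microseconds."""
    return mp * 1e-3 * CRAB_PERIOD_S * 1e6

# Hard X-ray lead of 8.18 +/- 0.46 milliperiods from the abstract
print(round(milliperiods_to_us(8.18)))    # ~275 us
print(round(milliperiods_to_us(0.46)))    # ~15 us
```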

  10. Using Graphs for Fast Error Term Approximation of Time-varying Datasets

    SciTech Connect

    Nuber, C; LaMar, E C; Pascucci, V; Hamann, B; Joy, K I

    2003-02-27

    We present a method for the efficient computation and storage of approximations of error tables used for error estimation of a region between different time steps in time-varying datasets. The error between two time steps is defined as the distance between the data of these time steps. Error tables are used to look up the error between different time steps of a time-varying dataset, especially when run-time error computation is expensive. However, even the generation of error tables itself can be expensive. For n time steps, the exact error look-up table (which stores the error values for all pairs of time steps in a matrix) has a memory complexity and pre-processing time complexity of O(n^2), and O(1) for error retrieval. Our approximate error look-up table approach uses trees, where the leaf nodes represent original time steps, and interior nodes contain an average (or best-representative) of the children nodes. The error computed on an edge of a tree describes the distance between the two nodes on that edge. Evaluating the error between two different time steps requires traversing a path between the two leaf nodes and accumulating the errors on the traversed edges. For n time steps, this scheme has a memory complexity and pre-processing time complexity of O(n log n), a significant improvement over the exact scheme; the error retrieval complexity is O(log n). As we do not need to calculate all possible n^2 error terms, our approach is a fast way to generate the approximation.
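A toy version of the tree scheme described above, assuming the "error" between two datasets is their mean absolute difference and the representative of an interior node is the mean of its children (both modeling choices here, not necessarily the paper's). Querying two time steps walks both leaves up to their lowest common ancestor, accumulating edge errors along the way.

```python
import numpy as np

def build_tree(steps):
    """Binary tree over time steps (count must be a power of two):
    each interior node is the mean of its two children; each edge
    stores the child-to-parent distance. O(n log n) storage total."""
    levels, edge = [list(steps)], {}
    while len(levels[-1]) > 1:
        prev, nxt = levels[-1], []
        lvl = len(levels)
        for i in range(0, len(prev), 2):
            node = (prev[i] + prev[i + 1]) / 2
            edge[(lvl, i // 2, 0)] = float(np.abs(prev[i] - node).mean())
            edge[(lvl, i // 2, 1)] = float(np.abs(prev[i + 1] - node).mean())
            nxt.append(node)
        levels.append(nxt)
    return edge

def approx_error(edge, i, j):
    """Sum edge errors on the leaf-to-leaf path through the lowest
    common ancestor of distinct leaves i and j: O(log n) per query."""
    total, lvl = 0.0, 1
    while True:
        total += edge[(lvl, i // 2, i % 2)] + edge[(lvl, j // 2, j % 2)]
        i, j, lvl = i // 2, j // 2, lvl + 1
        if i == j:
            return total

# Eight scalar "time steps" standing in for full datasets
steps = [np.array([float(k)]) for k in range(8)]
edge = build_tree(steps)
print(approx_error(edge, 0, 1))  # siblings: 0.5 + 0.5 = 1.0
```

For these linearly spaced scalars the path sum happens to equal the true distance; in general it is only an approximation whose quality depends on how well each interior node represents its subtree.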

  11. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  12. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...

  13. Absolute plate motion of Africa around Hawaii-Emperor bend time

    NASA Astrophysics Data System (ADS)

    Maher, S. M.; Wessel, P.; Müller, R. D.; Williams, S. E.; Harada, Y.

    2015-06-01

    Numerous regional plate reorganizations and the coeval ages of the Hawaiian Emperor bend (HEB) and Louisville bend of 50-47 Ma have been interpreted as a possible global tectonic plate reorganization at ˜chron 21 (47.9 Ma). Yet for a truly global event we would expect a contemporaneous change in Africa absolute plate motion (APM) reflected by physical evidence distributed on the Africa Plate. This evidence has been postulated to take the form of the Réunion-Mascarene bend which exhibits many HEB-like features, such as a large angular change close to ˜chron 21. However, the Réunion hotspot trail has recently been interpreted as a sequence of continental fragments with incidental hotspot volcanism. Here we show that the alternative Réunion-Mascarene Plateau trail can also satisfy the age progressions and geometry of other hotspot trails on the Africa Plate. The implied motion, suggesting a pivoting of Africa from 67 to 50 Ma, could explain the apparent bifurcation of the Tristan hotspot chain, the age reversals seen along the Walvis Ridge, the sharp curve of the Canary trail, and the diffuse nature of the St. Helena chain. To test this hypothesis further we made a new Africa APM model that extends back to ˜80 Ma using a modified version of the Hybrid Polygonal Finite Rotation Method. This method uses seamount chains and their associated hotspots as geometric constraints for the model, and seamount age dates to determine APM through time. While this model successfully explains many of the volcanic features, it implies an unrealistically fast global lithospheric net rotation, as well as improbable APM trajectories for many other plates, including the Americas, Eurasia and Australia. We contrast this speculative model with a more conventional model in which the Mascarene Plateau is excluded in favour of the Chagos-Laccadive Ridge rotated into the Africa reference frame. This second model implies more realistic net lithospheric rotation and far-field APMs, but

  14. Repeated quantum error correction on a continuously encoded qubit by real-time feedback

    NASA Astrophysics Data System (ADS)

    Cramer, J.; Kalb, N.; Rol, M. A.; Hensen, B.; Blok, M. S.; Markham, M.; Twitchen, D. J.; Hanson, R.; Taminiau, T. H.

    2016-05-01

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.

  15. Error Correction for Foot Clearance in Real-Time Measurement

    NASA Astrophysics Data System (ADS)

    Wahab, Y.; Bakar, N. A.; Mazalan, M.

    2014-04-01

    Mobility performance level, fall-related injuries, undetected disease and aging stage can be detected through examination of the gait pattern. The gait pattern is normally directly related to the condition of the lower limbs, in addition to other significant factors; for that reason, the foot is the most important part of an in-situ gait analysis measurement system and directly affects the measured gait pattern. This paper reviews the development of an ultrasonic system with error correction using an inertial measurement unit for real-life measurement of foot clearance in gait analysis. The paper begins with the related literature, where the necessity of the measurement is introduced. This is followed by the methodology section, covering the problem and its solution. Next, the paper explains the experimental setup for error correction using the proposed instrumentation, followed by results and discussion. Finally, planned future work is outlined.

  16. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    SciTech Connect

    Waugh, C. J.; Rosenberg, M. J.; Zylstra, A. B.; Frenje, J. A.; Seguin, F. H.; Petrasso, R. D.; Glebov, V. Yu.; Sangster, T. C.; Stoeckl, C.

    2015-05-27

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and laser megajoule.

  17. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    DOE PAGESBeta

    Waugh, C. J.; Rosenberg, M. J.; Zylstra, A. B.; Frenje, J. A.; Seguin, F. H.; Petrasso, R. D.; Glebov, V. Yu.; Sangster, T. C.; Stoeckl, C.

    2015-05-27

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and laser megajoule.

  18. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    SciTech Connect

    Waugh, C. J.; Zylstra, A. B.; Frenje, J. A.; Séguin, F. H.; Petrasso, R. D.; Rosenberg, M. J.; Glebov, V. Yu.; Sangster, T. C.; Stoeckl, C.

    2015-05-15

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and laser megajoule.

  19. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors.

    PubMed

    Waugh, C J; Rosenberg, M J; Zylstra, A B; Frenje, J A; Séguin, F H; Petrasso, R D; Glebov, V Yu; Sangster, T C; Stoeckl, C

    2015-05-01

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and laser megajoule. PMID:26026524

  20. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    NASA Astrophysics Data System (ADS)

    Waugh, C. J.; Rosenberg, M. J.; Zylstra, A. B.; Frenje, J. A.; Séguin, F. H.; Petrasso, R. D.; Glebov, V. Yu.; Sangster, T. C.; Stoeckl, C.

    2015-05-01

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and laser megajoule.

  1. Reduction of MRC error review time through the simplified and classified MRC result

    NASA Astrophysics Data System (ADS)

    Lee, Casper W.; Lin, Jason C.; Chen, Frank F.

    2009-04-01

    Because Manufacturing Rule Check (MRC) error counts are very large, reviewing every point has become difficult and some design errors may be overlooked, so it is necessary to reduce the number of errors to review and to improve the checking methods. This paper presents an error classification function and an auto-waive mechanism for removing repeated MRC errors from the MRC report. In the auto-waive mechanism, the report omits an error point if it is identical to one in the previous report, while the defect location output keeps all error points for Do Not Inspect Region (DNIR) reference (DNIR requires the customer's approval). Furthermore, it is possible to develop an auto-waive function that skips errors already confirmed by the customer, provided via a marking information table or a GDS/OASIS database. The paper also presents how these errors can be grouped to reduce checking time.
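
    The auto-waive idea — drop error points that already appeared in the previous report or on a customer-approved list, while keeping every point available for DNIR output — can be sketched as follows. The `(x, y, rule)` tuple representation and the function names are hypothetical stand-ins, not the paper's actual data format:

```python
from collections import defaultdict

def auto_waive(current, previous, approved=()):
    """Split the current MRC error list into points needing review and
    points auto-waived because they repeat the previous report (or sit
    on the customer-approved waiver list). The caller still has the
    full `current` list for DNIR bookkeeping."""
    seen = set(previous) | set(approved)
    review = [e for e in current if e not in seen]
    waived = [e for e in current if e in seen]
    return review, waived

def classify(errors):
    """Group error points by rule name, so repeated violations of one
    rule are reviewed together instead of point by point."""
    groups = defaultdict(list)
    for x, y, rule in errors:
        groups[rule].append((x, y))
    return dict(groups)
```

    Classification shrinks the review effort from "one look per point" to "one look per rule plus spot checks", which is the reduction the paper targets.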

  2. The primary motor cortex is associated with learning the absolute, but not relative, timing dimension of a task: A tDCS study.

    PubMed

    Apolinário-Souza, Tércio; Romano-Silva, Marco Aurélio; de Miranda, Débora Marques; Malloy-Diniz, Leandro Fernandes; Benda, Rodolfo Novellino; Ugrinowitsch, Herbert; Lage, Guilherme Menezes

    2016-06-01

    The functional role of the primary motor cortex (M1) in the production of movement parameters, such as length, direction and force, is well known; however, whether M1 is associated with the parametric adjustments in the absolute timing dimension of the task remains unknown. Previous studies have not applied tasks and analyses that could separate the absolute (variant) and relative (invariant) dimensions. We applied transcranial direct current stimulation (tDCS) to M1 before motor practice to facilitate motor learning. A sequential key-pressing task was practiced with two goals: learning the relative timing dimension and learning the absolute timing dimension. All effects of the stimulation of M1 were observed only in the absolute dimension of the task. Mainly, the stimulation was associated with better performance in the transfer test in the absolute dimension. Taken together, our results indicate that M1 is an important area for learning the absolute timing dimension of a motor sequence. PMID:27018089

  3. Transposition of structures in the Neoproterozoic Kaoko Belt (NW Namibia) and their absolute timing

    NASA Astrophysics Data System (ADS)

    Ulrich, Stanislav; Konopásek, Jiří; Jeřábek, Petr; Tajčmanová, Lucie

    2010-05-01

    The Neoproterozoic Kaoko Belt in Namibia is a classic example of a lower- to middle-crustal transpressional orogen developed between the attenuated margin of the Congo Craton and the Coastal Terrane. The transpression has been described as a two-phase event of early oblique thrusting followed by sinistral wrench shearing on the same foliation planes, rotated into subvertical orientations (e.g. Goscombe et al., 2003). Konopásek et al. (2005) argued that the early fabric is not rotated but intensely folded, and that the wrench stage operated on a newly developed foliation parallel to the axial planes of these folds. Three structural profiles across the Coastal Terrane, the Boundary Igneous Complex and the Orogen Core (derived from the Congo Craton) were studied in order to assess the mechanism of transpression and to evaluate the absolute timing of individual deformation events. The oldest known subhorizontal Si fabric occurs only in the Coastal Terrane and is inherited from a pre-collisional HT-LP event dated at 650-630 Ma. The S1 fabric occurs in all tectonic units; it dips gently to the W-SW and contains a subhorizontal stretching lineation. The temperature and intensity of its development decrease westwards, from a penetrative granulite-facies fabric in the Orogen Core to a lower-amphibolite-facies axial-plane cleavage in the Coastal Terrane. Associated kinematic criteria, such as S-C fabrics in deformed granitoids of the Boundary Igneous Complex, show very oblique, top-to-the-SE thrusting to sinistral shearing. The superimposed subvertical S2 fabric developed in the axial planes of upright isoclinal folds, almost homogeneously reworking the S1 fabric in the Orogen Core, whereas in the Boundary Igneous Complex and the Coastal Terrane the S2 fabric develops with increasing intensity from south to north. Temperature conditions of S2 development decrease westwards. Stretching lineations developed on the S2 planes show the same orientation as those on the S1 planes and kinematic indicators associated with D

  4. Method and apparatus for detecting timing errors in a system oscillator

    DOEpatents

    Gliebe, Ronald J.; Kramer, William R.

    1993-01-01

    A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
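
    The comparison the patent describes is implemented in hardware; a minimal digital sketch of the same idea (sampled signals as lists, a delay of exactly one nominal period, and a boolean in place of the LED — all assumptions for illustration) is:

```python
def detect_timing_error(signal, delayed, tolerance=0.0):
    """Compare live oscillator samples against a copy delayed by one
    nominal period. For a stable periodic oscillator the two sequences
    coincide; any mismatch beyond the tolerance signals a timing error
    (the patent's circuit lights an LED; here we return True)."""
    return any(abs(a - b) > tolerance for a, b in zip(signal, delayed))
```

    With a delay equal to one period, a healthy square wave reproduces itself, so the detector stays quiet until a cycle is stretched, shortened, or glitched.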

  5. 20 CFR 410.671 - Revision for error or other reason; time limitation generally.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Revision for error or other reason; time....671 Revision for error or other reason; time limitation generally. (a) Initial, revised or... for a reason, and within the time period, prescribed in § 410.672. (b) Decision or revised decision...

  6. 20 CFR 410.671 - Revision for error or other reason; time limitation generally.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Revision for error or other reason; time....671 Revision for error or other reason; time limitation generally. (a) Initial, revised or... for a reason, and within the time period, prescribed in § 410.672. (b) Decision or revised decision...

  7. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  8. Distinguishing Error from Chaos in Ecological Time Series

    NASA Astrophysics Data System (ADS)

    Sugihara, George; Grenfell, Bryan; May, Robert M.

    1990-11-01

    Over the years, there has been much discussion about the relative importance of environmental and biological factors in regulating natural populations. Often it is thought that environmental factors are associated with stochastic fluctuations in population density, and biological ones with deterministic regulation. We revisit these ideas in the light of recent work on chaos and nonlinear systems. We show that completely deterministic regulatory factors can lead to apparently random fluctuations in population density, and we then develop a new method (that can be applied to limited data sets) to make practical distinctions between apparently noisy dynamics produced by low-dimensional chaos and population variation that in fact derives from random (high-dimensional)noise, such as environmental stochasticity or sampling error. To show its practical use, the method is first applied to models where the dynamics are known. We then apply the method to several sets of real data, including newly analysed data on the incidence of measles in the United Kingdom. Here the additional problems of secular trends and spatial effects are explored. In particular, we find that on a city-by-city scale measles exhibits low-dimensional chaos (as has previously been found for measles in New York City), whereas on a larger, country-wide scale the dynamics appear as a noisy two-year cycle. In addition to shedding light on the basic dynamics of some nonlinear biological systems, this work dramatizes how the scale on which data is collected and analysed can affect the conclusions drawn.
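
    The core of the method — forecast skill that decays with prediction horizon for low-dimensional chaos but stays flat and low for pure noise — can be illustrated with a bare-bones nearest-neighbour forecaster on the logistic map. This is a one-dimensional stand-in for the paper's simplex-projection procedure, with illustrative series and library lengths:

```python
import random

def logistic_series(n, x0=0.4):
    """Deterministic chaos: x -> 4x(1-x) generates a noise-like series."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def forecast_skill(series, horizon, library=400):
    """Predict each post-library point from its nearest neighbour in the
    library, `horizon` steps ahead; return the prediction/actual
    correlation (the 'skill')."""
    lib = series[:library]
    preds, actual = [], []
    for t in range(library, len(series) - horizon):
        j = min(range(library - horizon), key=lambda i: abs(lib[i] - series[t]))
        preds.append(lib[j + horizon])
        actual.append(series[t + horizon])
    return correlation(preds, actual)
```

    For the chaotic series, skill is near 1 one step ahead and collapses within a few steps as the initial neighbour error doubles each iteration; for independent noise it is near zero at every horizon. That decay-versus-flat signature is the practical distinction the paper develops.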

  9. Ambient Temperature Changes and the Impact to Time Measurement Error

    NASA Astrophysics Data System (ADS)

    Ogrizovic, V.; Gucevic, J.; Delcev, S.

    2012-12-01

    Measurements in geodetic astronomy are mainly performed outdoors at night, when the temperature often decreases very quickly. Time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt-handler routine processes the received UTC impulses in real time and calculates the clock parameters. The characteristics of the computer's quartz clock are influenced by temperature changes in the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a stable temperature environment, at temperatures near 20 °C.
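
    The clock parameters in such a setup are typically an offset and a drift (the drift being what the ambient temperature perturbs in a quartz oscillator), estimated by fitting the computer's clock readings against the received UTC ticks. A sketch under the assumption of a simple linear model, clock = offset + (1 + drift) · UTC:

```python
def fit_clock(utc_ticks, clock_readings):
    """Ordinary least-squares line through (UTC, clock) pairs.
    Returns (offset, drift), where drift is the fractional frequency
    error of the quartz clock relative to UTC."""
    n = len(utc_ticks)
    mx = sum(utc_ticks) / n
    my = sum(clock_readings) / n
    sxx = sum((x - mx) ** 2 for x in utc_ticks)
    sxy = sum((x - mx) * (y - my) for x, y in zip(utc_ticks, clock_readings))
    slope = sxy / sxx
    return my - slope * mx, slope - 1.0
```

    Re-fitting over successive windows exposes how the drift wanders as the ambient temperature changes, which is the effect the paper quantifies.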

  10. Intermediate time error growth and predictability: tropics versus mid-latitudes

    NASA Astrophysics Data System (ADS)

    Straus, David M.; Paolino, Dan

    2009-10-01

    The evolution of identical twin errors from an atmospheric general circulation model is studied in the linear range (small errors) through intermediate times and the approach to saturation. Between forecast day 1 and 7, the normalized error variance in the tropics is similar to that at higher latitudes. After that, tropical errors grow more slowly. The predictability time τ taken for tropical errors to reach half their saturation values is larger than that for mid-latitudes, especially for the planetary waves, thus implying greater potential predictability in the tropics. The discrepancy between mid-latitude and tropical τ is more pronounced at 850 hPa than at 200 hPa, is largest for the planetary waves, and is more pronounced for errors arising from wave phase differences (than from wave amplitude differences). The spectra of the error in 200 hPa zonal wind show that for forecast times up to about 5 d, the tropical error peaks at much shorter scales than the mid-latitude errors, but that subsequently tropical and mid-latitude error spectra look increasingly similar. The difference between upper and lower level tropical τ may be due to the greater influence of mid-latitudes at the upper levels.

  11. Nonlinearity error separation and self-correction methods for time grating displacement sensor

    NASA Astrophysics Data System (ADS)

    Liu, Xiaokang; Peng, Donglin; Wang, Xianquan; Yang, Wei

    2006-11-01

    A novel type of displacement sensor named the time grating, which measures space with time, is introduced. A multi-position probe measuring method is used to separate the nonlinearity error of the time grating, and a Fourier-series harmonic-wave correction method is proposed to correct the error in software. Experimental results from applications confirm the remarkable effectiveness of these methods. A time grating displacement sensor with an accuracy of +/-0.8" was developed. Test results show that high-precision measurement is achieved without high-precision manufacturing. The realization of error self-correction endows the time grating sensor with intelligence.
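
    The harmonic-correction step can be sketched as a discrete Fourier projection: sample the periodic nonlinearity error at uniform phases over one period, fit the first few harmonics, then subtract the fitted model from subsequent readings. The harmonic count and the synthetic error shape below are illustrative, not the paper's calibration data:

```python
import math

def fit_harmonics(phases, errors, n_harmonics):
    """Fourier coefficients (a_k, b_k), k = 1..n_harmonics, of an error
    curve sampled at uniform phases over one full period."""
    n = len(phases)
    coeffs = []
    for k in range(1, n_harmonics + 1):
        a = 2.0 / n * sum(e * math.cos(k * p) for p, e in zip(phases, errors))
        b = 2.0 / n * sum(e * math.sin(k * p) for p, e in zip(phases, errors))
        coeffs.append((a, b))
    return coeffs

def corrected(phase, raw, coeffs):
    """Subtract the fitted harmonic error model from a raw reading."""
    model = sum(a * math.cos(k * phase) + b * math.sin(k * phase)
                for k, (a, b) in enumerate(coeffs, start=1))
    return raw - model
```

    Because the sampling is uniform, the discrete projection recovers each harmonic below the Nyquist limit exactly, so a low-order periodic error is removed to machine precision in this idealized, noise-free setting.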

  12. Spike-timing error backpropagation in theta neuron networks.

    PubMed

    McKennoch, Sam; Voegtlin, Thomas; Bushnell, Linda

    2009-01-01

    The main contribution of this letter is the derivation of a steepest gradient descent learning rule for a multilayer network of theta neurons, a one-dimensional nonlinear neuron model. Central to our model is the assumption that the intrinsic neuron dynamics are sufficient to achieve consistent time coding, with no need to involve the precise shape of postsynaptic currents; this assumption departs from other related models such as SpikeProp and Tempotron learning. Our results clearly show that it is possible to perform complex computations by applying supervised learning techniques to the spike times and time response properties of nonlinear integrate-and-fire neurons. Networks trained with our multilayer training rule are shown to have similar generalization abilities for spike latency pattern classification as Tempotron learning. The rule is also able to train networks to perform complex regression tasks that neither SpikeProp nor Tempotron learning appears to be capable of. PMID:19431278

  13. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  14. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  15. Correlated errors in geodetic time series: Implications for time-dependent deformation

    USGS Publications Warehouse

    Langbein, J.; Johnson, H.

    1997-01-01

    In addition, the seasonal noise can be as large as 3 mm in amplitude but typically is less than 0.5 mm. Because of the presence of random-walk noise in these time series, modeling and interpretation of the geodetic data must account for this source of error. By way of example we show that estimating the time-varying strain tensor (a form of spatial averaging) from geodetic data having both random-walk and white noise error components results in seemingly significant variations in the rate of strain accumulation; spatial averaging does reduce the size of both noise components but not their relative influence on the resulting strain accumulation model. Copyright 1997 by the American Geophysical Union.
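
    The practical consequence — formal rate errors computed under a white-noise-only assumption badly understate the true uncertainty when random-walk noise is present — is easy to reproduce with synthetic data. The noise amplitudes, series length, and unit sampling interval below are illustrative, not the paper's values:

```python
import random

def synthetic_series(n, rw_step, white_sigma, rng):
    """Position series at unit epochs: random-walk + white noise (e.g. mm)."""
    rw, out = 0.0, []
    for _ in range(n):
        rw += rng.gauss(0.0, rw_step)
        out.append(rw + rng.gauss(0.0, white_sigma))
    return out

def ls_rate(y):
    """Ordinary least-squares slope of y against epoch index."""
    n = len(y)
    mx = (n - 1) / 2.0
    my = sum(y) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (yv - my) for x, yv in enumerate(y))
    return sxy / sxx

def rate_scatter(trials=200, n=500, rw_step=0.5, white_sigma=1.0, seed=7):
    """Empirical standard deviation of fitted rates across realizations."""
    rng = random.Random(seed)
    rates = [ls_rate(synthetic_series(n, rw_step, white_sigma, rng))
             for _ in range(trials)]
    m = sum(rates) / trials
    return (sum((r - m) ** 2 for r in rates) / (trials - 1)) ** 0.5
```

    With these toy numbers, the empirical slope scatter is tens of times larger than the white-only formal error white_sigma / sqrt(sxx), which is why rate and strain uncertainties must model the correlated noise component.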

  16. Real-time detection and elimination of nonorthogonality error in interference fringe processing

    SciTech Connect

    Hu Haijiang; Zhang Fengdeng

    2011-05-20

    In interference-fringe measurement systems, nonorthogonality error is a main error source influencing the precision and accuracy of the measurement, and its detection and elimination has been an important goal. A novel method that uses only cross-zero detection and counting is proposed to detect and eliminate the nonorthogonality error in real time. Because it invokes no trigonometric or inverse trigonometric functions, the method can be realized simply in digital logic devices, and it can be widely used in bidirectional subdivision systems for Moiré fringes and other optical instruments.

  17. Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained.

    PubMed

    Yang, Yana; Hua, Changchun; Guan, Xinping

    2016-03-01

    Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of such teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; for example, telesurgery needs high speed and precise control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, high performance measures, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence, i.e., synchronization errors converging to zero as time goes to infinity, can be achieved with error-constrained control; finite-time convergence is clearly more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for a teleoperation system with the position error constrained. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with newly transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method. PMID:25823053

  18. Solid-state track recorder dosimetry device to measure absolute reaction rates and neutron fluence as a function of time

    DOEpatents

    Gold, Raymond; Roberts, James H.

    1989-01-01

    A solid state track recording type dosimeter is disclosed to measure the time dependence of the absolute fission rates of nuclides or neutron fluence over a period of time. In a primary species an inner recording drum is rotatably contained within an exterior housing drum that defines a series of collimating slit apertures overlying windows defined in the stationary drum through which radiation can enter. Film type solid state track recorders are positioned circumferentially about the surface of the internal recording drum to record such radiation or its secondary products during relative rotation of the two elements. In another species both the recording element and the aperture element assume the configuration of adjacent disks. Based on slit size of apertures and relative rotational velocity of the inner drum, radiation parameters within a test area may be measured as a function of time and spectra deduced therefrom.

  19. Absolute perfusion measurements and associated iodinated contrast agent time course in brain metastasis: a study for contrast-enhanced radiotherapy

    PubMed Central

    Obeid, Layal; Deman, Pierre; Tessier, Alexandre; Balosso, Jacques; Estève, François; Adam, Jean- François

    2014-01-01

    Contrast-enhanced radiotherapy is an innovative treatment that combines the selective accumulation of heavy elements in tumors with stereotactic irradiations using medium energy X-rays. The radiation dose enhancement depends on the absolute amount of iodine reached in the tumor and its time course. Quantitative, postinfusion iodine biodistribution and associated brain perfusion parameters were studied in human brain metastasis as key parameters for treatment feasibility and quality. Twelve patients received an intravenous bolus of iodinated contrast agent (CA) (40 mL, 4 mL/s), followed by a steady-state infusion (160 mL, 0.5 mL/s) to ensure stable intratumoral amounts of iodine during the treatment. Absolute iodine concentrations and quantitative perfusion maps were derived from 40 multislice dynamic computed tomography (CT) images of the brain. The postinfusion mean intratumoral iodine concentration (over 30 minutes) reached 1.94±0.12 mg/mL. Reasonable correlations were obtained between these concentrations and the permeability surface area product and the cerebral blood volume. To our knowledge, this is the first quantitative study of CA biodistribution versus time in brain metastasis. The study shows that suitable and stable amounts of iodine can be reached for contrast-enhanced radiotherapy. Moreover, the associated perfusion measurements provide useful information for the patient recruitment and management processes. PMID:24447951

  20. Automatic Time Stepping with Global Error Control for Groundwater Flow Models

    SciTech Connect

    Tang, Guoping

    2008-09-01

    An automatic time stepping with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution and the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and the performance of the automatic time stepping scheme. Implementation of the scheme can lead to improvement in accuracy and efficiency for groundwater flow models.
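
    The paper's controller is built on a discontinuous Galerkin a-posteriori estimate with a dual-problem stability factor; a much simpler stand-in that shows the adaptive control loop itself is step-doubling error control on backward Euler for the scalar decay equation du/dt = -λu. The tolerance, step factors, and initial step below are illustrative choices, not the paper's scheme:

```python
import math

def be_step(u, dt, lam):
    """Backward Euler for du/dt = -lam*u: u_next = u / (1 + lam*dt)."""
    return u / (1.0 + lam * dt)

def adaptive_integrate(u0, lam, t_end, tol=1e-6, dt=0.1):
    """Step-doubling error control: accept a step when one full step and
    two half steps agree within tol; halve dt on rejection, double it
    when the error estimate is comfortably small."""
    t, u = 0.0, u0
    while t_end - t > 1e-12:
        dt = min(dt, t_end - t)
        full = be_step(u, dt, lam)
        half = be_step(be_step(u, 0.5 * dt, lam), 0.5 * dt, lam)
        err = abs(full - half)
        if err <= tol:
            t += dt
            u = half              # keep the more accurate two-half-step value
            if err < 0.25 * tol:
                dt *= 2.0         # grow the step when comfortably accurate
        else:
            dt *= 0.5             # reject and retry with a smaller step
    return u
```

    The step size shrinks automatically where the solution changes fast and grows as it flattens, which is the efficiency/accuracy trade-off the automatic scheme in the paper controls globally rather than per step.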

  1. Repeated quantum error correction on a continuously encoded qubit by real-time feedback.

    PubMed

    Cramer, J; Kalb, N; Rol, M A; Hensen, B; Blok, M S; Markham, M; Twitchen, D J; Hanson, R; Taminiau, T H

    2016-01-01

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing. PMID:27146630

  2. Repeated quantum error correction on a continuously encoded qubit by real-time feedback

    PubMed Central

    Cramer, J.; Kalb, N.; Rol, M. A.; Hensen, B.; Blok, M. S.; Markham, M.; Twitchen, D. J.; Hanson, R.; Taminiau, T. H.

    2016-01-01

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing. PMID:27146630
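    The detect-and-correct cycle for phase errors can be illustrated with a toy classical sketch: for pure phase-flip (Z) errors, the three-qubit code behaves as a repetition code in the |+>/|-> basis, and the two stabilizer parities (X1X2, X2X3) locate a single flipped qubit. This illustrates only the code's logic, not the diamond-processor protocol:

```python
import random

# A Z on qubit i flips that qubit's sign in the |+>/|-> basis; the two
# stabilizer parities uniquely identify which single qubit flipped.
def correct_phase_flip(signs):
    s12 = signs[0] * signs[1]            # X1X2 parity
    s23 = signs[1] * signs[2]            # X2X3 parity
    if s12 == -1 and s23 == +1:
        signs[0] *= -1                   # error on qubit 1
    elif s12 == -1 and s23 == -1:
        signs[1] *= -1                   # error on qubit 2
    elif s12 == +1 and s23 == -1:
        signs[2] *= -1                   # error on qubit 3
    return signs

random.seed(0)
ok = 0
for _ in range(1000):
    signs = [+1, +1, +1]                 # logical state in the sign picture
    signs[random.randrange(3)] *= -1     # one random phase flip per round
    ok += correct_phase_flip(signs) == [+1, +1, +1]
```

    Any single phase flip is corrected every round; two or more flips per cycle would defeat this distance-3 code.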

  3. Repeated quantum error correction by real-time feedback on continuously encoded qubits

    NASA Astrophysics Data System (ADS)

    Cramer, Julia; Kalb, Norbert; Rol, M. Adriaan; Hensen, Bas; Blok, Machiel S.; Markham, Matthew; Twitchen, Daniel J.; Hanson, Ronald; Taminiau, Tim H.

    Because quantum information is extremely fragile, large-scale quantum information processing requires constant error correction. To be compatible with universal fault-tolerant computations, it is essential that quantum states remain encoded at all times and that errors are actively corrected. I will present such active quantum error correction in a hybrid quantum system based on the nitrogen vacancy (NV) center in diamond. We encode a logical qubit in three long-lived nuclear spins, detect errors by multiple non-destructive measurements using the optically active NV electron spin, and correct them by real-time feedback. By combining these new capabilities with recent advances in spin control, multiple cycles of error correction can be performed within the dephasing time. We investigate both coherent and incoherent errors and show that the error-corrected logical qubit can indeed store quantum states longer than the best spin used in the encoding. Furthermore, I will present our latest results on increasing the number of qubits in the encoding, as required for quantum error correction of both phase- and bit-flip errors.

  4. A novel double-focusing time-of-flight mass spectrometer for absolute recoil ion cross sections measurements.

    PubMed

    Sigaud, L; de Jesus, V L B; Ferreira, Natalia; Montenegro, E C

    2016-08-01

    In this work, the inclusion of an Einzel-like lens inside the time-of-flight drift tube of a standard mass spectrometer coupled to a gas cell, used to study ionization of atoms and molecules by electron impact, is described. Both this lens and a conical collimator provide further focusing of the ions and charged molecular fragments inside the spectrometer, allowing much better resolution in the time-of-flight spectra and separation of a single mass-to-charge unit up to 100 a.m.u. The procedure to obtain the overall absolute efficiency of the spectrometer and micro-channel plate detector is also discussed. PMID:27587105

  5. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error, using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction. However, a residual error remains between the real value and the prediction result. We therefore designed a two-states neural network model to compensate for this residual error, applicable to the prevention of sudden death and of metabolic syndrome conditions such as hypertension and obesity. Most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model. In particular, small-sample-size time series were predicted more accurately than with the standard MLP model.

  6. Searching for errors in solar system ephemeris with Parkes Pulsar Timing Array

    NASA Astrophysics Data System (ADS)

    Wang, Jingbo; Li, Liang; Hobbs, George; Coles, William; Guo, Li

    2015-08-01

    Pulsar timing analyses rely on a Solar System ephemeris to convert times of arrival (ToAs) of pulses measured at an observatory to the Solar System barycenter. Errors in the Solar System ephemeris will induce a signal with a spatially dipolar signature in pulsar timing residuals. Pulsar timing array (PTA) observations provide an independent method of searching for errors in the Solar System ephemeris and estimating its accuracy. Here we develop a search algorithm for such errors and apply it to data from the Parkes Pulsar Timing Array and the Jet Propulsion Laboratory DE421 planetary ephemeris. No significant ephemeris error was detected with our data set.
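    The dipolar signature mentioned above follows from projecting the ephemeris position error onto each pulsar's line of sight; the induced residual is the light-travel time of that projection. A minimal sketch with an illustrative (made-up) error vector:

```python
import numpy as np

C = 299792458.0   # speed of light, m/s

# An ephemeris error dx (metres, barycentric frame) shifts each pulsar's
# timing residual by the light-travel time of its projection onto the
# pulsar direction: r = (n . dx) / c  ->  a dipolar pattern on the sky.
dx = np.array([300.0, -150.0, 80.0])     # illustrative error vector

def residual(ra_rad, dec_rad):
    n = np.array([np.cos(dec_rad) * np.cos(ra_rad),
                  np.cos(dec_rad) * np.sin(ra_rad),
                  np.sin(dec_rad)])       # unit vector to the pulsar
    return float(n @ dx / C)              # induced residual, seconds
```

    Pulsars on opposite sides of the sky acquire residuals of opposite sign, which is what a PTA exploits to separate ephemeris error from other common signals.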

  7. Simultaneous absolute timing of the Crab pulsar at radio and optical wavelengths

    NASA Astrophysics Data System (ADS)

    Oosterbroek, T.; Cognard, I.; Golden, A.; Verhoeve, P.; Martin, D. D. E.; Erd, C.; Schulz, R.; Stüwe, J. A.; Stankov, A.; Ho, T.

    2008-09-01

    Context: The Crab pulsar emits across a large part of the electromagnetic spectrum. Determining the time delay between the emission at different wavelengths will allow us to better constrain the site and mechanism of the emission. We simultaneously observed the Crab pulsar in the optical with S-Cam, an instrument based on Superconducting Tunneling Junctions (STJs) with μs time resolution, and at 2 GHz using the Nançay radio telescope with an instrument performing coherent dedispersion and able to record giant-pulse data. Aims: We studied the delay between the radio and optical pulse using simultaneously obtained data, thereby reducing possible uncertainties present in previous observations. Methods: We determined the arrival times of the (mean) optical and radio pulse and compared them using the tempo2 software package. Results: We present the most accurate value for the optical-radio lag, 255 ± 21 μs, and suggest the likelihood of a spectral dependence to the excess optical emission associated with giant radio pulses.

  8. Improving HST Pointing & Absolute Astrometry

    NASA Astrophysics Data System (ADS)

    Lallo, Matthew; Nelan, E.; Kimmer, E.; Cox, C.; Casertano, S.

    2007-05-01

    Accurate absolute astrometry is becoming increasingly important in an era of multi-mission archives and virtual observatories. Hubble Space Telescope's (HST's) Guidestar Catalog II (GSC2) has reduced coordinate error to around 0.25 arcsecond, a factor of 2 or more better than GSC1. With this reduced catalog error, special attention must be given to calibrating and maintaining the alignments of the Fine Guidance Sensors (FGSs) and Science Instruments (SIs) in HST to a level well below this, in order to ensure that the accuracy of science products' astrometry keywords and of target positioning is limited only by the catalog errors. After HST Servicing Mission 4, the improvement in "blind" pointing accuracy from such calibrations will allow more efficient COS acquisitions. Multiple SIs and FGSs each have their own footprints in the spatially shared HST focal plane. This work addresses the small changes over time in, primarily, the whole-body positions and orientations of these instruments and guiders relative to one another. We describe the HST Cycle 15 program CAL/OTA 11021 which, along with future variants of it, determines and maintains the positions and orientations of the SIs and FGSs to better than 50 milliarcseconds and 0.04 to 0.004 degrees of roll, putting the errors associated with the alignment sufficiently below the GSC2 errors. We present recent alignment results and assess their errors, illustrate trends, and describe where and how the observer benefits from these calibrations when using HST.

  9. Finite-time normal mode disturbances and error growth during Southern Hemisphere blocking

    NASA Astrophysics Data System (ADS)

    Wei, Mozheng; Frederiksen, Jorgen S.

    2005-01-01

    The structural organization of initially random errors evolving in a barotropic tangent linear model, with time-dependent basic states taken from analyses, is examined for cases of block development, maturation and decay in the Southern Hemisphere atmosphere during April, November, and December 1989. The statistics of 100 evolved errors are studied for six-day periods and compared with the growth and structures of fast growing normal modes and finite-time normal modes (FTNMs). The amplification factors of most initially random errors are slightly less than those of the fastest growing FTNM for the same time interval. During their evolution, the standard deviations of the error fields become concentrated in the regions of rapid dynamical development, particularly associated with developing and decaying blocks. We have calculated probability distributions and the mean and standard deviations of pattern correlations between each of the 100 evolved error fields and the five fastest growing FTNMs for the same time interval. The mean of the largest pattern correlation, taken over the five fastest growing FTNMs, increases with increasing time interval to a value close to 0.6 or larger after six days. FTNM 1 generally, but not always, gives the largest mean pattern correlation with error fields. Corresponding pattern correlations with the fast growing normal modes of the instantaneous basic state flow are significant but lower than with FTNMs. Mean pattern correlations with fast growing FTNMs increase further when the time interval is increased beyond six days.

  10. Spectral characteristics of time-dependent orbit errors in altimeter height measurements

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1993-01-01

    A mean reference surface and time-dependent orbit errors are estimated simultaneously for each exact-repeat ground track from the first two years of Geosat sea level estimates based on the Goddard Earth model (GEM)-T2 orbits. Motivated by orbit theory and empirical analysis of Geosat data, the time-dependent orbit errors are modeled as 1 cycle per revolution (cpr) sinusoids with slowly varying amplitude and phase. The method recovers the known 'bow tie effect' introduced by the existence of force model errors within the precision orbit determination (POD) procedure used to generate the GEM-T2 orbits. The bow tie pattern of 1-cpr orbit errors is characterized by small amplitudes near the middle and larger amplitudes (up to 160 cm in the 2 yr of data considered here) near the ends of each 5- to 6-day orbit arc over which the POD force model is integrated. A detailed examination of these bow tie patterns reveals the existence of daily modulations of the amplitudes of the 1-cpr sinusoid orbit errors with typical and maximum peak-to-peak ranges of about 14 cm and 30 cm, respectively. The method also identifies a daily variation in the mean orbit error with typical and maximum peak-to-peak ranges of about 6 and 30 cm, respectively, that is unrelated to the predominant 1-cpr orbit error. Application of the simultaneous solution method to the much less accurate Geosat height estimates based on the Naval Astronautics Group orbits concludes that the accuracy of POD is not important for collinear altimetric studies of time-dependent mesoscale variability (wavelengths shorter than 1000 km), as long as the time-dependent orbit errors are dominated by 1-cpr variability and a long-arc (several orbital periods) orbit error estimation scheme such as that presented here is used.
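    The 1-cpr orbit-error model described above is linear in its cosine/sine amplitudes, so a long-arc estimate reduces to least squares. A toy sketch with synthetic residuals; the Geosat period is approximate, and all amplitudes and noise levels are illustrative rather than taken from the paper:

```python
import numpy as np

# Recover a 1-cycle-per-revolution (1-cpr) orbit-error sinusoid
# h(t) = A*cos(w*t) + B*sin(w*t) + C from noisy height residuals.
period = 6037.55                 # Geosat orbital period in seconds (approx.)
w = 2.0 * np.pi / period

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5 * period, 500)            # a five-revolution arc
truth = 0.80 * np.cos(w * t) - 0.45 * np.sin(w * t) + 0.10   # metres
resid = truth + rng.normal(0.0, 0.05, t.size)    # add 5 cm measurement noise

# Design matrix for the amplitude/phase model (held constant over the arc
# here; the paper lets the amplitude and phase vary slowly)
G = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(G, resid, rcond=None)
A, B, C = coef
```

    The slowly varying amplitude and phase of the real scheme would be obtained by repeating such fits over a sliding window or by adding low-order polynomial modulation terms to the design matrix.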

  11. An Error Model for High-Time Resolution Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Sapiano, M.; Adler, R. F.; Huffman, G. J.; Tian, Y.

    2013-12-01

    A new error scheme (PUSH: Precipitation Uncertainties for Satellite Hydrology) is presented to provide global estimates of errors for high-time-resolution, merged precipitation products. Errors are estimated for the widely used Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 product at daily/0.25° resolution, using the high-quality NOAA CPC-UNI gauge analysis as the benchmark. Each of the following four scenarios is explored and explicitly modeled: correct no-precipitation detection (both satellite and gauges detect no precipitation), missed precipitation (the satellite records a zero, but incorrectly), false alarm (the satellite detects precipitation, but the reference is zero), and hit (both satellite and gauges detect precipitation). Results over Oklahoma show that the estimated probability distributions are able to reproduce the probability density functions of the benchmark precipitation, in terms of both expected values and quantiles. PUSH adequately captures missed-precipitation and false-detection uncertainties, reproduces the spatial pattern of the error, and shows good agreement between observed and estimated errors. The resulting error estimates could be attached to the standard products for the scientific community to use. Investigation is underway to: 1) test the approach in different regions of the world; 2) verify the ability of the model to discern the systematic and random components of the error; and 3) evaluate the model performance when higher time-resolution satellite products (i.e., 3-hourly) are employed.
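    The four detection scenarios that PUSH models separately can be written down directly as a contingency classification of satellite/gauge pairs; a minimal sketch with made-up daily totals:

```python
import numpy as np

# Daily precipitation totals (mm) for six illustrative grid-cell days.
sat   = np.array([0.0, 5.2, 0.0, 12.1, 3.3, 0.0])   # satellite estimate
gauge = np.array([0.0, 4.8, 2.0,  0.0, 3.0, 0.0])   # gauge benchmark

hit         = (sat > 0) & (gauge > 0)    # both detect precipitation
miss        = (sat == 0) & (gauge > 0)   # satellite misses real rain
false_alarm = (sat > 0) & (gauge == 0)   # satellite rains, gauge is dry
correct_neg = (sat == 0) & (gauge == 0)  # both correctly report no rain
```

    PUSH then fits a separate error distribution within each class (e.g. for the magnitude error of the hits), rather than one distribution for all cases.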

  12. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. PMID:27416840
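    The SIMEX idea (simulate additional measurement error at increasing levels λ, then extrapolate the estimates back to λ = -1, i.e. no error) can be sketched for a simple attenuated regression slope. This is not the paper's MSM implementation; the quadratic extrapolant, the known error variance, and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, s2_u = 20000, 2.0, 0.5           # true slope, known error variance
x = rng.normal(0.0, 1.0, n)               # true covariate (variance 1)
w = x + rng.normal(0.0, np.sqrt(s2_u), n) # error-prone measurement
y = beta * x + rng.normal(0.0, 0.1, n)

def ols_slope(z, y):
    return float(np.cov(z, y, bias=True)[0, 1] / np.var(z))

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lams:
    # average over pseudo-datasets with error variance inflated by lam*s2_u
    est = [ols_slope(w + rng.normal(0.0, np.sqrt(lam * s2_u), n), y)
           for _ in range(20)]
    slopes.append(np.mean(est))

# quadratic extrapolation of slope(lam) back to lam = -1 (no error)
coef = np.polyfit(lams, slopes, 2)
beta_simex = float(np.polyval(coef, -1.0))
```

    The naive slope at λ = 0 is attenuated toward zero; the extrapolated value moves much of the way back toward the true slope, though the quadratic extrapolant leaves some residual bias.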

  13. Error criteria for cross validation in the context of chaotic time series prediction

    NASA Astrophysics Data System (ADS)

    Lim, Teck Por; Puthusserypady, Sadasivan

    2006-03-01

    The prediction of a chaotic time series over a long horizon is commonly done by iterating one-step-ahead prediction. Prediction can be implemented using machine learning methods, such as radial basis function networks. Typically, cross validation is used to select prediction models based on mean squared error. The bias-variance dilemma dictates that there is an inevitable tradeoff between bias and variance. However, invariants of chaotic systems are unchanged by linear transformations; thus, the bias component may be irrelevant to model selection in the context of chaotic time series prediction. Hence, the use of error variance for model selection, instead of mean squared error, is examined. Clipping is introduced, as a simple way to stabilize iterated predictions. It is shown that using the error variance for model selection, in combination with clipping, may result in better models.
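    The contrast between mean squared error and error variance, and the clipping heuristic, can be sketched on the logistic map. A minimal illustrative example (not the paper's radial-basis-function setup): a one-step model with a constant bias has inflated MSE but zero error variance, and a pure offset is a linear transformation that leaves the invariants of the dynamics unchanged.

```python
import numpy as np

def clip_iterate(step, x0, n, lo, hi):
    """Iterate a one-step-ahead predictor, clipping each prediction to
    the observed data range [lo, hi] to stabilize the iterated orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(float(np.clip(step(xs[-1]), lo, hi)))
    return np.array(xs)

f = lambda x: 3.9 * x * (1.0 - x)               # true chaotic dynamics
model = lambda x: 3.9 * x * (1.0 - x) + 0.02    # biased but otherwise exact

xs = clip_iterate(f, 0.3, 200, 0.0, 1.0)        # observed trajectory
errs = model(xs[:-1]) - xs[1:]                  # one-step prediction errors
mse, err_var = float(np.mean(errs ** 2)), float(np.var(errs))
preds = clip_iterate(model, 0.3, 200, 0.0, 1.0) # clipped iterated forecast
```

    Selection by MSE would penalize this model for its harmless offset; selection by error variance would not.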

  14. Real-time compensation for tool form errors in turning using computer vision

    SciTech Connect

    Nobel, G.; Donmez, M.A.; Burton, R.

    1990-01-01

    Deviations from the circular shape of the cutting edge of a single-point turning tool cause form errors in the workpiece during contour cutting. One can compensate for these tool-form errors by determining the size of the effective deviation at a particular instant during cutting, and then adjusting the position of the cutting tool accordingly. An algorithm for the compensation of tool-nose-radius errors in real time has been developed and implemented on a CNC turning center. A previously developed computer-vision-based tool-inspection system is used to determine the size of the deviations. Information from this system is fed to the error compensation computer which modifies the tool path in real time. Workpieces were cut utilizing the compensation system and were inspected on a coordinate measuring machine. Significant improvements in workpiece form were obtained.

  15. Real-time compensation for tool form errors in turning using computer vision

    SciTech Connect

    Nobel, G.; Donmez, M.A.; Burton, R.

    1990-12-31

    Deviations from the circular shape of the cutting edge of a single-point turning tool cause form errors in the workpiece during contour cutting. One can compensate for these tool-form errors by determining the size of the effective deviation at a particular instant during cutting, and then adjusting the position of the cutting tool accordingly. An algorithm for the compensation of tool-nose-radius errors in real time has been developed and implemented on a CNC turning center. A previously developed computer-vision-based tool-inspection system is used to determine the size of the deviations. Information from this system is fed to the error compensation computer which modifies the tool path in real time. Workpieces were cut utilizing the compensation system and were inspected on a coordinate measuring machine. Significant improvements in workpiece form were obtained.

  16. Real-time compensation for tool form errors in turning using computer vision

    NASA Astrophysics Data System (ADS)

    Nobel, Gary; Donmez, M. Alkan; Burton, Richard

    1990-11-01

    Deviations from the circular shape of the cutting edge of a single-point turning tool cause form errors in the workpiece during contour cutting. One can compensate for these tool-form errors by determining the size of the effective deviation at a particular instant during cutting and then adjusting the position of the cutting tool accordingly. An algorithm for the compensation of tool-nose-radius errors in real time has been developed and implemented on a CNC turning center. A previously developed computer-vision-based tool-inspection system is used to determine the size of the deviations. Information from this system is fed to the error compensation computer which modifies the tool path in real time. Workpieces were cut utilizing the compensation system and were inspected on a coordinate measuring machine. Significant improvements in workpiece form were obtained.
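    The compensation step described in these records can be sketched geometrically: the measured radial deviation of the cutting edge at the instantaneous contact angle is removed by pushing the tool back along the local surface normal, decomposed into the two slide axes. A hypothetical sketch (the function name, sign convention, and deviation profile are assumptions, not the paper's algorithm):

```python
import math

def compensation_offsets(deviation_profile, contact_angle_deg):
    """Return illustrative (x, z) slide corrections for the current
    contact angle, given the measured radial deviation (mm) of the
    cutting edge from a perfect circle as a function of edge angle.
    The deviation at the contact point is cancelled along the local
    surface normal."""
    theta = math.radians(contact_angle_deg)
    d = deviation_profile(contact_angle_deg)   # radial edge deviation, mm
    # push the tool back along the surface normal by the deviation
    return (-d * math.sin(theta), -d * math.cos(theta))
```

    For example, a uniform +5 μm deviation at a 30° contact angle yields corrections of about -2.5 μm in x and -4.3 μm in z; in the real system the contact angle is tracked continuously as the contour is cut.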

  17. Calibration of diffuse correlation spectroscopy with a time-resolved near-infrared technique to yield absolute cerebral blood flow measurements.

    PubMed

    Diop, Mamadou; Verdecchia, Kyle; Lee, Ting-Yim; St Lawrence, Keith

    2011-07-01

    A primary focus of neurointensive care is the prevention of secondary brain injury, mainly caused by ischemia. A noninvasive bedside technique for continuous monitoring of cerebral blood flow (CBF) could improve patient management by detecting ischemia before brain injury occurs. A promising technique for this purpose is diffuse correlation spectroscopy (DCS) since it can continuously monitor relative perfusion changes in deep tissue. In this study, DCS was combined with a time-resolved near-infrared technique (TR-NIR) that can directly measure CBF using indocyanine green as a flow tracer. With this combination, the TR-NIR technique can be used to convert DCS data into absolute CBF measurements. The agreement between the two techniques was assessed by concurrent measurements of CBF changes in piglets. A strong correlation between CBF changes measured by TR-NIR and changes in the scaled diffusion coefficient measured by DCS was observed (R(2) = 0.93) with a slope of 1.05 ± 0.06 and an intercept of 6.4 ± 4.3% (mean ± standard error). PMID:21750781

  18. Calibration of diffuse correlation spectroscopy with a time-resolved near-infrared technique to yield absolute cerebral blood flow measurements

    PubMed Central

    Diop, Mamadou; Verdecchia, Kyle; Lee, Ting-Yim; St Lawrence, Keith

    2011-01-01

    A primary focus of neurointensive care is the prevention of secondary brain injury, mainly caused by ischemia. A noninvasive bedside technique for continuous monitoring of cerebral blood flow (CBF) could improve patient management by detecting ischemia before brain injury occurs. A promising technique for this purpose is diffuse correlation spectroscopy (DCS) since it can continuously monitor relative perfusion changes in deep tissue. In this study, DCS was combined with a time-resolved near-infrared technique (TR-NIR) that can directly measure CBF using indocyanine green as a flow tracer. With this combination, the TR-NIR technique can be used to convert DCS data into absolute CBF measurements. The agreement between the two techniques was assessed by concurrent measurements of CBF changes in piglets. A strong correlation between CBF changes measured by TR-NIR and changes in the scaled diffusion coefficient measured by DCS was observed (R2 = 0.93) with a slope of 1.05 ± 0.06 and an intercept of 6.4 ± 4.3% (mean ± standard error). PMID:21750781

  19. Leptin in whales: validation and measurement of mRNA expression by absolute quantitative real-time PCR.

    PubMed

    Ball, Hope C; Holmes, Robert K; Londraville, Richard L; Thewissen, Johannes G M; Duff, Robert Joel

    2013-01-01

    Leptin is the primary hormone in mammals that regulates adipose stores. Arctic-adapted cetaceans maintain enormous adipose depots, suggesting possible modifications of leptin or receptor function. Determining the expression of these genes is the first step to understanding the extreme physiology of these animals, and their uniqueness presents special challenges in estimating and comparing expression levels of mRNA transcripts. Here, we compare expression of two model genes, leptin and leptin-receptor gene-related product (OB-RGRP), using two quantitative real-time PCR (qPCR) methods: "relative" and "absolute". To assess the expression of leptin and OB-RGRP in cetacean tissues, we first examined how relative expression of those genes might differ when normalized to four common endogenous control genes. We performed relative-expression qPCR assays measuring the amplification of these two model target genes relative to amplification of 18S ribosomal RNA (18S), ubiquitously expressed transcript (Uxt), ribosomal protein 9 (Rs9), and ribosomal protein 15 (Rs15) endogenous controls. Results demonstrated significant differences in the expression of both genes when different control genes were employed, emphasizing a limitation of relative qPCR assays, especially in studies where differences in physiology and/or a lack of knowledge regarding levels and patterns of expression of common control genes may affect data interpretation. To validate the absolute quantitative qPCR methods, we evaluated the effects of plasmid structure, the purity of the plasmid standard preparation, and the type of qPCR "background" material on qPCR amplification efficiencies and copy number determination of both model genes, in multiple tissues from one male bowhead whale. Results indicate that linear plasmid standards are more reliable than circular ones, that copy number estimation did not differ significantly with the background material used, and that the use of

  20. Space Weather Prediction Error Bounding for Real-Time Ionospheric Threat Adaptation of GNSS Augmentation Systems

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yoon, M.; Lee, J.

    2014-12-01

    Current Global Navigation Satellite System (GNSS) augmentation systems attempt to consider all possible ionospheric events when computing corrections for worst-case errors. This conservatism can be mitigated by subdividing anomalous conditions and using different ionospheric threat-model bounds for each class. A new concept of 'real-time ionospheric threat adaptation', which adjusts the threat model in real time instead of always using the same 'worst-case' model, was introduced in my previous research. The concept uses predicted values of space weather indices to determine the corresponding threat model, based on the pre-defined worst-case threat as a function of space weather indices. Because space weather prediction is unreliable, prediction errors must be bounded to the level of integrity required by the system being supported. The previous research bounded prediction error using the disturbance storm time (Dst) index: the distribution of Dst prediction error over 15 years of data was bounded by applying 'inflated-probability-density-function (pdf) Gaussian bounding'. Since the error distribution has thick, non-Gaussian tails, statistical distributions that properly describe heavy tails with less conservatism must be investigated for system performance. This paper suggests two potential approaches for improving space weather prediction error bounding. First, we suggest fitting the error distribution with different statistical models, such as the Laplacian distribution, which has fat tails, and the folded Gaussian cumulative distribution function (cdf). The second approach is to bound the error distribution after segregating the data by overall level of solar activity. Bounding errors using only solar-minimum-period data involves less uncertainty, and it may allow the use of the 'solar cycle prediction' provided by NASA when implementing real-time threat adaptation.
Lastly

  1. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series

    PubMed Central

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time operation, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of the periodic oscillation errors. The method gains multiple sets of navigation solutions with different phase delays by virtue of the forecasted time series acquired from the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on the least-squares method, the forecasted time series is obtained while small angular-motion interference is identified and removed during initial alignment. Finally, the periodic oscillation errors are restricted using the principle that a periodic oscillation signal is eliminated by averaging it with a half-wave-delayed copy. Simulation and test results show that the method performs well in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283

  2. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.

    PubMed

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time operation, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of the periodic oscillation errors. The method gains multiple sets of navigation solutions with different phase delays by virtue of the forecasted time series acquired from the measurement data of the inertial measurement unit (IMU). With the help of curve-fitting based on the least-squares method, the forecasted time series is obtained while small angular-motion interference is identified and removed during initial alignment. Finally, the periodic oscillation errors are restricted using the principle that a periodic oscillation signal is eliminated by averaging it with a half-wave-delayed copy. Simulation and test results show that the method performs well in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283
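    The half-wave delay principle in these records is easy to demonstrate: averaging a solution with a copy delayed by half the oscillation period cancels the sinusoid, since sin(wt) + sin(w(t - T/2)) = 0. A toy sketch with a Schuler-period oscillation on a slowly drifting quantity (amplitudes and sampling are illustrative):

```python
import numpy as np

T = 5064.0                      # Schuler period in seconds (~84.4 min)
w = 2.0 * np.pi / T
dt = 10.0                       # sampling interval, seconds

t = np.arange(0.0, 4 * T, dt)
truth = 0.001 * t               # slowly drifting true quantity
osc = 0.5 * np.sin(w * t)       # Schuler-type oscillation error
sol = truth + osc               # contaminated navigation solution

half = int(round((T / 2) / dt))              # half period, in samples
combined = 0.5 * (sol[half:] + sol[:-half])  # mean of phase-delayed solutions
residual = combined - 0.5 * (truth[half:] + truth[:-half])
```

    The oscillation (amplitude 0.5) is suppressed to well below 1% of its size; the small remainder comes from the half period not being an integer number of samples.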

  3. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  4. Unavoidable Errors: A Spatio-Temporal Analysis of Time-Course and Neural Sources of Evoked Potentials Associated with Error Processing in a Speeded Task

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2008-01-01

    The detection of errors is known to be associated with two successive neurophysiological components in EEG, with an early time-course following motor execution: the error-related negativity (ERN/Ne) and late positivity (Pe). The exact cognitive and physiological processes contributing to these two EEG components, as well as their functional…

  5. Benchmarking flood models from space in near real-time: accommodating SRTM height measurement errors with low resolution flood imagery

    NASA Astrophysics Data System (ADS)

    Schumann, G.; di Baldassarre, G.; Alsdorf, D.; Bates, P. D.

    2009-04-01

    In February 2000, the Shuttle Radar Topography Mission (SRTM) measured the elevation of most of the Earth's surface with spatially continuous sampling and an absolute vertical accuracy better than 9 m. The vertical error has been shown to vary with topographic complexity, being smaller over flat terrain. This allows water surface slopes to be measured and associated discharge volumes to be estimated for open channels in large basins, such as the Amazon. Building on these capabilities, this paper demonstrates that near real-time coarse-resolution radar imagery of a recent flood event on a 98 km reach of the River Po (Northern Italy), combined with SRTM terrain height data, leads to a water slope remarkably similar to that derived by combining the radar image with highly accurate airborne laser altimetry. Moreover, it is shown that this space-borne flood wave approximation compares well to a hydraulic model and thus allows the performance of the latter, calibrated on a previous event, to be assessed when applied to an event of different magnitude in near real-time. These results are not only of great importance to real-time flood management and flood forecasting but also support the upcoming Surface Water and Ocean Topography (SWOT) mission, which will routinely provide water levels and slopes with higher precision around the globe.

  6. Empirical versus time stepping with embedded error control for density-driven flow in porous media

    NASA Astrophysics Data System (ADS)

    Younes, Anis; Ackerer, Philippe

    2010-08-01

    Modeling density-driven flow in porous media may require very long computational times due to the nonlinear coupling between the flow and transport equations. Time stepping schemes are often used to adapt the time step size in order to reduce the computational cost of the simulation. In this work, the empirical time stepping scheme, which adapts the time step size according to the performance of the iterative nonlinear solver, is compared to an adaptive time stepping scheme in which the time step length is controlled by the temporal truncation error. Results of simulations of the Elder problem show that (1) the empirical time stepping scheme can lead to inaccurate results even with a small convergence criterion, (2) accurate results are obtained when the time step size selection is based on truncation error control, (3) a non-iterative scheme with proper time step management can be faster and leads to a more accurate solution than the standard iterative procedure with empirical time stepping, and (4) the temporal truncation error can have a significant effect on the results and can be considered one of the reasons for the differences observed in the Elder numerical results.
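
    Truncation-error-based step selection of the kind the abstract favors is typically implemented with a standard step-size controller; a minimal sketch (the safety factor and clamps are conventional defaults, not constants from the paper):

    ```python
    def adapt_time_step(dt, local_error, tol, order=1,
                        safety=0.9, fac_min=0.2, fac_max=2.0):
        """Propose the next step size from a local truncation error estimate.

        Standard controller: grow the step when the error is below tolerance,
        shrink it when above, with a safety factor and growth/shrink clamps.
        """
        local_error = max(local_error, 1e-300)  # guard against a zero estimate
        factor = safety * (tol / local_error) ** (1.0 / (order + 1))
        return dt * min(fac_max, max(fac_min, factor))

    # Error far below tolerance -> step grows (clamped at fac_max = 2x).
    print(adapt_time_step(0.1, local_error=1e-8, tol=1e-4))  # → 0.2
    # Error above tolerance -> step shrinks (clamped at fac_min = 0.2x).
    print(adapt_time_step(0.1, local_error=1e-2, tol=1e-4))
    ```

    The empirical alternative the paper criticizes instead grows or shrinks `dt` based only on how many nonlinear iterations the previous step needed, with no reference to `local_error`.
    
    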

  7. Electrical brain imaging reveals the expression and timing of altered error monitoring functions in major depression.

    PubMed

    Aarts, Kristien; Vanderhasselt, Marie-Anne; Otte, Georges; Baeken, Chris; Pourtois, Gilles

    2013-11-01

    Major depressive disorder (MDD) is characterized by disturbances in affect, motivation, and cognitive control processes, including error detection. However, the expression and timing of the impairments during error monitoring remain unclear in MDD. The behavior and event-related brain responses (ERPs) of 20 patients with MDD were compared with those of 20 healthy controls (HCs), while they performed a Go/noGo task. Errors during this task were associated with 2 ERP components, the error-related negativity (ERN/Ne) and the error positivity (Pe). Results show that the ERN/Ne-correct-related negativity (CRN) amplitude difference was significantly larger in MDD patients (after controlling for speed), compared with HCs, although MDD patients exhibited overactive medial frontal cortex (MFC) activation. By comparison, the subsequent Pe component was smaller in MDD patients compared with HCs and this effect was accompanied by a reduced activation of ventral anterior cingulate cortex (ACC) regions. These results suggest that MDD has multiple cascade effects on early error monitoring brain mechanisms. PMID:24364597

  8. Structure and dating errors in the geologic time scale and periodicity in mass extinctions

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1989-01-01

    Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.

  9. Detection and absolute quantitation of Tomato torrado virus (ToTV) by real time RT-PCR.

    PubMed

    Herrera-Vásquez, José Angel; Rubio, Luis; Alfaro-Fernández, Ana; Debreczeni, Diana Elvira; Font-San-Ambrosio, Isabel; Falk, Bryce W; Ferriol, Inmaculada

    2015-09-01

    Tomato torrado virus (ToTV) causes serious damage to the tomato industry and significant economic losses. A quantitative real-time reverse-transcription polymerase chain reaction (RT-qPCR) method using primers and a specific TaqMan(®) MGB probe for ToTV was developed for sensitive detection and quantitation of different ToTV isolates. A standard curve using RNA transcripts enabled absolute quantitation, with a dynamic range from 10(4) to 10(10) ToTV RNA copies/ng of total RNA. The specificity of the RT-qPCR was tested with twenty-three ToTV isolates from tomato (Solanum lycopersicum L.), and black nightshade (Solanum nigrum L.) collected in Spain, Australia, Hungary and France, which covered the genetic variation range of this virus. This new RT-qPCR assay enables a reproducible, sensitive and specific detection and quantitation of ToTV, which can be a valuable tool in disease management programs and epidemiological studies. PMID:25956672
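
    Absolute quantitation against a transcript standard curve reduces to inverting a linear fit of Cq versus log10(copy number); a minimal sketch (the slope and intercept below are hypothetical, not the calibration reported for this assay):

    ```python
    def copies_from_cq(cq, slope=-3.32, intercept=38.0):
        """Estimate template copy number from a qPCR quantification cycle (Cq)
        via a standard curve Cq = slope * log10(copies) + intercept.
        A slope near -3.32 corresponds to ~100% amplification efficiency."""
        return 10 ** ((cq - intercept) / slope)

    # On this hypothetical curve, Cq = 24.72 maps to ~1e4 copies,
    # the low end of the dynamic range quoted in the abstract.
    print(f"{copies_from_cq(24.72):.3g}")  # → 1e+04
    ```
    
    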

  10. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  11. On the effect of timing errors in run length codes. [redundancy removal algorithms for digital channels

    NASA Technical Reports Server (NTRS)

    Wilkins, L. C.; Wintz, P. A.

    1975-01-01

    Many redundancy removal algorithms employ some sort of run length code. Blocks of timing words are coded with synchronization words inserted between blocks. The probability of incorrectly reconstructing a sample because of a channel error in the timing data is a monotonically nondecreasing function of time since the last synchronization word. In this paper we compute the 'probability that the accumulated magnitude of timing errors equal zero' as a function of time since the last synchronization word for a zero-order predictor (ZOP). The result is valid for any data source that can be modeled by a first-order Markov chain and any digital channel that can be modeled by a channel transition matrix. An example is presented.
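
    The quantity studied in the paper can be illustrated with a toy model in which each timing word contributes an independent error increment; a sketch (the increment distribution is invented for illustration, and the paper's actual computation uses a first-order Markov source and a channel transition matrix rather than independent increments):

    ```python
    def prob_zero_accumulated_error(increment_probs, n_words):
        """P(accumulated timing error == 0) after n timing words since the
        last synchronization word, for i.i.d. per-word error increments.

        increment_probs: dict mapping timing-error increment -> probability.
        """
        dist = {0: 1.0}  # the accumulated error starts at zero after a sync word
        for _ in range(n_words):
            new = {}
            for total, p in dist.items():
                for inc, q in increment_probs.items():
                    new[total + inc] = new.get(total + inc, 0.0) + p * q
            dist = new
        return dist.get(0, 0.0)

    # A mostly error-free channel with rare +/-1 timing slips: the probability
    # of zero accumulated error decays with time since the last sync word.
    channel = {-1: 0.01, 0: 0.98, 1: 0.01}
    for n in (1, 10, 50):
        print(n, round(prob_zero_accumulated_error(channel, n), 4))
    ```

    This exhibits the monotone behavior the abstract describes: reconstruction reliability degrades the further a sample lies from the last synchronization word.
    
    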

  12. Absolute Zero

    NASA Astrophysics Data System (ADS)

    Donnelly, Russell J.; Sheibley, D.; Belloni, M.; Stamper-Kurn, D.; Vinen, W. F.

    2006-12-01

    Absolute Zero is a two-hour PBS special attempting to bring to the general public some of the advances made in 400 years of thermodynamics. It is based on the book “Absolute Zero and the Conquest of Cold” by Tom Shachtman. Absolute Zero will call long-overdue attention to the remarkable strides that have been made in low-temperature physics, a field that has produced 27 Nobel Prizes. It will explore the ongoing interplay between science and technology through historical examples including refrigerators, ice machines, frozen foods, liquid oxygen and nitrogen as well as much colder fluids such as liquid hydrogen and liquid helium. A website has been established to promote the series: www.absolutezerocampaign.org. It contains information on the series, aimed primarily at students at the middle school level. There is a wealth of material here and we hope interested teachers will draw their students’ attention to this website and its substantial contents, which have been carefully vetted for accuracy.

  13. Optimizing Tile Concentrations to Minimize Errors and Time for DNA Tile Self-assembly Systems

    NASA Astrophysics Data System (ADS)

    Chen, Ho-Lin; Kao, Ming-Yang

    DNA tile self-assembly has emerged as a rich and promising primitive for nanotechnology. This paper studies the problems of minimizing assembly time and error rate by changing the tile concentrations, because changing tile concentrations is easy to implement in actual lab experiments. We prove that setting the concentration of tile Ti proportional to the square root of Ni, where Ni is the number of times Ti appears outside the seed structure in the final assembled shape, minimizes the rate of growth errors for rectilinear tile systems. We also show that the same concentrations minimize the expected assembly time for a feasible class of tile systems. Moreover, for general tile systems, given the tile concentrations, we can approximate the expected assembly time with high accuracy and probability by running only a polynomial number of simulations in the size of the target shape.
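
    The square-root rule is easy to state in code; a minimal sketch (the function name and the normalization to a probability-like distribution are our own, not from the paper):

    ```python
    import math

    def optimal_concentrations(tile_counts):
        """Concentrations proportional to sqrt(Ni), normalized to sum to 1,
        where tile_counts maps tile Ti -> Ni, the number of times Ti appears
        outside the seed structure in the final assembled shape."""
        weights = {t: math.sqrt(n) for t, n in tile_counts.items()}
        total = sum(weights.values())
        return {t: w / total for t, w in weights.items()}

    # A tile used 9x gets 3x the concentration of a tile used once,
    # not 9x as a naive proportional rule would give.
    print(optimal_concentrations({"A": 9, "B": 1}))  # → {'A': 0.75, 'B': 0.25}
    ```
    
    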

  14. Cortical delta activity reflects reward prediction error and related behavioral adjustments, but at different times.

    PubMed

    Cavanagh, James F

    2015-04-15

    Recent work has suggested that reward prediction errors elicit a positive voltage deflection in the scalp-recorded electroencephalogram (EEG), an event sometimes termed a reward positivity. However, a strong test of this proposed relationship remains to be defined. Other important questions remain unaddressed, such as the role of the reward positivity in predicting future behavioral adjustments that maximize reward. To answer these questions, a three-armed bandit task was used to investigate the role of positive prediction errors during trial-by-trial exploration and task-set based exploitation. The feedback-locked reward positivity was characterized by delta band activities, and these related EEG features scaled with the degree of a computationally derived positive prediction error. However, these phenomena were also dissociated: the computational model predicted exploitative action selection and related response time speeding, whereas the feedback-locked EEG features did not. Compellingly, delta band dynamics time-locked to the subsequent bandit (the P3) successfully predicted these behaviors. These bandit-locked findings included an enhanced parietal-to-motor-cortex delta phase lag that correlated with the degree of response time speeding, suggesting a mechanistic role for delta band activities in motivating action selection. This dissociation between feedback-locked and bandit-locked EEG signals is interpreted as a differentiation between hierarchically distinct types of prediction error, yielding novel predictions about these dissociable delta band phenomena during reinforcement learning and decision making. PMID:25676913

  15. The impact of navigation satellite ephemeris error on common-view time transfer.

    PubMed

    Sun, Hongwei; Yuan, Haibo; Zhang, Hong

    2010-01-01

    The impact of navigation satellite ephemeris error on satellite common-view time transfer was analyzed. The impact varies depending on the elevation angle of the satellite relative to a user and on the baseline distance between the 2 users. The extent of the impact was quantified for several elevations and different baselines. As an example, results from several common-view time transfer links in China via Compass satellites are given. PMID:20040439

  16. Mitigation of Second-Order Ionospheric Error for Real-Time PPP Users in Europe

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed

    2016-07-01

    Currently, the international global navigation satellite system (GNSS) real-time service (IGS-RTS) products are used extensively for real-time precise point positioning and ionosphere modeling applications. The major challenge of dual-frequency real-time precise point positioning (RT-PPP) is that the solution requires a relatively long time to converge to centimeter-level accuracy. This long convergence time results essentially from un-modeled high-order ionospheric errors. To overcome this challenge, a method for mitigating the second-order ionospheric delay, which represents the bulk of the high-order ionospheric errors, is proposed for RT-PPP users in Europe. A real-time regional ionospheric model (RT-RIM) over Europe is developed using the IGS-RTS precise satellite orbit and clock products. GPS observations from a regional network consisting of 60 IGS and EUREF reference stations are processed using the Bernese 5.2 software package in order to extract the real-time vertical total electron content (RT-VTEC). The proposed RT-RIM has a spatial resolution of 1°×1° and a temporal resolution of 15 minutes. In order to investigate the effect of the second-order ionospheric delay on the RT-PPP solution, new GPS data sets from other reference stations are used. The examined stations are selected to represent different latitudes. The GPS observations are corrected for the second-order ionospheric errors using the extracted RT-VTEC values. In addition, the IGS-RTS precise orbit and clock products are used to account for the satellite orbit and clock errors, respectively. It is shown that the RT-PPP convergence time and positioning accuracy are improved when the second-order ionospheric delay is accounted for.

  17. Contributing to a precise and accurate chronostratigraphic time scale for climatic records: Absolute dating and paleomagnetism in lavas

    NASA Astrophysics Data System (ADS)

    Sasco, Romain; Guillou, Herve; Kissel, Catherine; Wandres, Camille; Carracedo, Juan-Carlos; Perez Torrado, Francisco Jose

    2014-05-01

    Understanding climatic mechanisms requires a robust and precise timescale allowing long-distance and multi-archives correlations. A unique tool to construct such time scales is provided by the Earth magnetic field (EMF), which is independent from climatic variations and the past evolution of which is recorded in most of the geological/climatic archives. Sedimentary sequences provide continuous records of relative intensities of the EMF on stratigraphic time scales, usually based on orbital tuning. They are transferred onto absolute intensity scale and chronological time scale using robust tie points available for the past ~40 ka. However, for older periods this calibration remains poorly constrained. Our study reports on new tie points over the last 200 ka by combining paleomagnetic and geochronological (K/Ar and 40Ar-39Ar dating) studies on lavas. Based on the K-Ar LSCE age database, a set of 18 lava flows corresponding to potential geomagnetic excursions and/or highs and lows in the paleomagnetic intensity as observed from sediments and occurring in the studied time-window were selected in the Canary Islands (Tenerife, La Palma and Gran Canaria). A total of 205 oriented cores were taken from these 18 lava flows. Rock magnetic experiments include thermomagnetic analyses on each core, hysteresis loop and First Order Reversal Curves. Stepwise thermal demagnetizations in zero-field provided reliable mean-site paleomagnetic direction of the EMF for 15 of the flows. Paleointensity values were determined using the original Thellier and Thellier method. Based on previous experiments, 170 samples were analyzed, among which 51% provided reliable paleointensity values (determined using PICRIT-03 criteria). The geochronological study focused on 40Ar-39Ar dating. Based on preliminary paleomagnetic results, 13 flows were analyzed and 11 provided ages consistent at the 2 sigma level with the already available K-Ar ages. 
    These coupled K/Ar and 40Ar-39Ar results strongly constrain

  18. Interference peak detection based on FPGA for real-time absolute distance ranging with dual-comb lasers

    NASA Astrophysics Data System (ADS)

    Ni, Kai; Dong, Hao; Zhou, Qian; Xu, Mingfei; Li, Xinghui; Wu, Guanhao

    2015-08-01

    Absolute distance measurement using dual femtosecond comb lasers can achieve higher accuracy and faster measurement speed, which makes it more and more attractive. The data processing flow consists of four steps: interference peak detection, fast Fourier transform (FFT), phase fitting, and compensation for the index of refraction. A real-time data processing system based on a field-programmable gate array (FPGA) for dual-comb ranging has been newly developed. The design and implementation of the interference peak detection algorithm on the FPGA in the Verilog language is introduced in this paper; it is viewed as the most complicated part and an important guarantee of system precision and reliability. An adaptive sliding window is used to scan for peaks. In the process of detection, the algorithm stores 16 samples as a detection unit and calculates the average of each unit. The average result is used to determine the vertical center height of the sliding window. The algorithm estimates the noise intensity of each detection unit, and then calculates the average noise strength over 128 successive units. This noise average is used to calculate the signal-to-noise ratio of the current working environment, which in turn is used to adjust the height of the sliding window. This adaptive sliding window helps to eliminate fake peaks caused by noise. The whole design is pipelined, which improves the real-time throughput of the overall peak detection module. It runs at up to 140 MHz in the FPGA, and a peak can be detected within 16 clock cycles of its appearance.
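
    A software analogue of the described scheme might look as follows (the 16-sample unit size follows the abstract, but the window factor, noise proxy, and smoothing constant are assumptions; the actual implementation is fixed-point Verilog on an FPGA, not Python):

    ```python
    def detect_peaks(samples, unit=16, window_factor=4.0):
        """Toy sliding-window peak detector: the mean of each 16-sample unit
        sets the window's vertical center, a running noise estimate sets its
        height, and samples rising above the window are flagged as peaks."""
        peaks = []
        noise = None
        for start in range(0, len(samples) - unit + 1, unit):
            block = samples[start:start + unit]
            center = sum(block) / unit
            spread = sum(abs(x - center) for x in block) / unit  # crude noise proxy
            noise = spread if noise is None else 0.9 * noise + 0.1 * spread
            height = window_factor * max(noise, 1e-12)  # SNR-adaptive window height
            for i, x in enumerate(block):
                if x - center > height:
                    peaks.append(start + i)
        return peaks

    sig = [0.0] * 64
    sig[40] = 5.0  # one strong interference peak on a flat background
    print(detect_peaks(sig))  # → [40]
    ```

    Because the window height tracks the noise level, isolated noise spikes that would cross a fixed threshold are less likely to be reported as fake peaks.
    
    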

  19. Leptin in Whales: Validation and Measurement of mRNA Expression by Absolute Quantitative Real-Time PCR

    PubMed Central

    Ball, Hope C.; Holmes, Robert K.; Londraville, Richard L.; Thewissen, Johannes G. M.; Duff, Robert Joel

    2013-01-01

    Leptin is the primary hormone in mammals that regulates adipose stores. Arctic adapted cetaceans maintain enormous adipose depots, suggesting possible modifications of leptin or receptor function. Determining expression of these genes is the first step to understanding the extreme physiology of these animals, and the uniqueness of these animals presents special challenges in estimating and comparing expression levels of mRNA transcripts. Here, we compare expression of two model genes, leptin and leptin-receptor gene-related product (OB-RGRP), using two quantitative real-time PCR (qPCR) methods: “relative” and “absolute”. To assess the expression of leptin and OB-RGRP in cetacean tissues, we first examined how relative expression of those genes might differ when normalized to four common endogenous control genes. We performed relative expression qPCR assays measuring the amplification of these two model target genes relative to amplification of 18S ribosomal RNA (18S), ubiquitously expressed transcript (Uxt), ribosomal protein 9 (Rs9) and ribosomal protein 15 (Rs15) endogenous controls. Results demonstrated significant differences in the expression of both genes when different control genes were employed; emphasizing a limitation of relative qPCR assays, especially in studies where differences in physiology and/or a lack of knowledge regarding levels and patterns of expression of common control genes may possibly affect data interpretation. To validate the absolute quantitative qPCR methods, we evaluated the effects of plasmid structure, the purity of the plasmid standard preparation and the influence of type of qPCR “background” material on qPCR amplification efficiencies and copy number determination of both model genes, in multiple tissues from one male bowhead whale. Results indicate that linear plasmids are more reliable than circular plasmid standards, no significant differences in copy number estimation based upon background material used, and

  20. Height Estimation and Error Assessment of Inland Water Level Time Series calculated by a Kalman Filter Approach using Multi-Mission Satellite Altimetry

    NASA Astrophysics Data System (ADS)

    Schwatke, Christian; Dettmering, Denise; Boergens, Eva

    2015-04-01

    compare our results with gauges and external inland altimeter databases (e.g., Hydroweb). We obtain very high correlations between absolute water level height time series from altimetry and gauges. Moreover, the comparisons of water level heights are also used to validate the error assessment. More than 200 water level time series have already been computed and made publicly available via the "Database for Hydrological Time Series of Inland Waters" (DAHITI) at http://dahiti.dgfi.tum.de .

  1. Error correction for free-space optical interconnects: space-time resource optimization.

    PubMed

    Neifeld, M A; Kostuk, R K

    1998-01-10

    We study the joint optimization of time and space resources within free-space optical interconnect (FSOI) systems. Both analytical and simulation results are presented to support this optimization study for two different models of FSOI cross-talk noise: diffraction from a rectangular aperture and Gaussian propagation. Under realistic power and signal-to-noise ratio constraints, optimum designs based on the Gaussian propagation model achieve a capacity of 2.91 x 10(15) bits s(-1) m(-2), while the rectangular model offers a smaller capacity of 1.91 x 10(13) bits s(-1) m(-2). We also study the use of error-correction codes (ECC) within FSOI systems. We present optimal Reed-Solomon codes of various lengths, and their use is shown to facilitate an increase in both spatial density and data rate, resulting in FSOI capacity gains in excess of 8.2 for the rectangular model and 3.7 for the Gaussian case. A tolerancing study of FSOI systems shows that ECC can provide tolerance to implementational error sources. We find that optimally coded FSOI systems can fail when system errors become large, and we present a compromise solution that results in a balanced design in time, space, and error-correction resources. PMID:18268585

  2. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Claims for correction of Board or TSP record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD CORRECTION OF ADMINISTRATIVE ERRORS Board or TSP Record Keeper Errors § 1605.22 Claims for correction of Board or...

  3. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  4. A diagnostic study of time variations of regionally averaged background error covariances

    NASA Astrophysics Data System (ADS)

    Monteiro, Maria; Berre, Loïk

    2010-12-01

    In variational data assimilation systems, background error covariances are often estimated from a temporal and spatial average. For a limited area model such as the Aire Limitée Adaptation Dynamique Développement International (ALADIN)/France, the spatial average is calculated over the regional computation domain, which covers western Europe. The purpose of this study is to revise the temporal stationarity assumption by diagnosing time variations of such regionally averaged covariances. This is done through examination of covariance changes as a function of season (winter versus summer), day (in connection with the synoptic situation), and hour (related to the diurnal cycle), with the ALADIN/France regional ensemble Three-Dimensional Variational analysis (3D-Var) system. In summer, compared to winter, average error variances are larger, and spatial correlation functions are sharper horizontally but broader vertically. Daily changes in covariances are particularly strong during the winter period, with larger variances and smaller-scale error structures when an unstable low-pressure system is present in the regional domain. Diurnal variations are also significant in the boundary layer in particular, and, as expected, they tend to be more pronounced in summer. Moreover, the comparison between estimates provided by two independent ensembles indicates that these covariance time variations are estimated in a robust way from a six-member ensemble. All these results support the idea of representing these time variations by using a real-time ensemble assimilation system.

  5. Error correction in short time steps during the application of quantum gates

    NASA Astrophysics Data System (ADS)

    de Castro, L. A.; Napolitano, R. d. J.

    2016-04-01

    We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to the interaction with a noisy environment during quantum gates, without modifying the codification used for memory qubits. Using a perturbation treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps intercalated with correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even for the cases when the division of the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.

  6. Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the number of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.

  7. A time dependent approach for removing the cell boundary error in elliptic homogenization problems

    NASA Astrophysics Data System (ADS)

    Arjmand, Doghonay; Runborg, Olof

    2016-06-01

    This paper concerns the cell-boundary error present in multiscale algorithms for elliptic homogenization problems. Typical multiscale methods have two essential components: a macro and a micro model. The micro model is used to upscale parameter values which are missing in the macro model. To solve the micro model, boundary conditions are required on the boundary of the microscopic domain. Imposing a naive boundary condition leads to O(ε/η) error in the computation, where ε is the size of the microscopic variations in the media and η is the size of the micro-domain. The removal of this error in modern multiscale algorithms still remains an important open problem. In this paper, we present a time-dependent approach which is general in terms of dimension. We provide a theorem which shows that we have arbitrarily high order convergence rates in terms of ε/η in the periodic setting. Additionally, we present numerical evidence showing that the method improves the O(ε/η) error to O(ε) in general non-periodic media.

  8. Separable responses to error, ambiguity, and reaction time in cingulo-opercular task control regions.

    PubMed

    Neta, Maital; Schlaggar, Bradley L; Petersen, Steven E

    2014-10-01

    The dorsal anterior cingulate (dACC), along with the closely affiliated anterior insula/frontal operculum, have been demonstrated to show three types of task control signals across a wide variety of tasks. One of these signals, a transient signal that is thought to represent performance feedback, shows greater activity to error than correct trials. Other work has found similar effects for uncertainty/ambiguity or conflict, though some argue that dACC activity is, instead, modulated primarily by other processes more reflected in reaction time. Here, we demonstrate that, rather than a single explanation, multiple information processing operations are crucial to characterizing the function of these brain regions, by comparing operations within a single paradigm. Participants performed two tasks in an fMRI experimental session: (1) deciding whether or not visually presented word pairs rhyme, and (2) rating auditorily presented single words as abstract or concrete. A pilot was used to identify ambiguous stimuli for both tasks (e.g., word pair: BASS/GRACE; single word: CHANGE). We found greater cingulo-opercular activity for errors and ambiguous trials than clear/correct trials, with a robust effect of reaction time. The effects of error and ambiguity remained when reaction time was regressed out, although the differences decreased. Further stepwise regression of response consensus (agreement across participants for each stimulus; a proxy for ambiguity) decreased differences between ambiguous and clear trials, but left error-related differences almost completely intact. These observations suggest that trial-wise responses in cingulo-opercular regions monitor multiple performance indices, including accuracy, ambiguity, and reaction time. PMID:24887509

  9. Measurement of the Absolute Magnitude and Time Courses of Mitochondrial Membrane Potential in Primary and Clonal Pancreatic Beta-Cells

    PubMed Central

    Gerencser, Akos A.; Mookerjee, Shona A.; Jastroch, Martin; Brand, Martin D.

    2016-01-01

    The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions. PMID:27404273

  10. Time lapse imaging of water content with geoelectrical methods: on the interest of working with absolute water content data

    NASA Astrophysics Data System (ADS)

    Dumont, Gaël; Pilawski, Tamara; Robert, Tanguy; Hermans, Thomas; Garré, Sarah; Nguyen, Frederic

    2016-04-01

    Electrical resistivity tomography (ERT) is a suitable method to estimate the water content of waste material and to detect changes in water content. Various ERT profiles, both static and time-lapse, were acquired on a landfill during the Minerve project. In the literature, the relative change of resistivity (Δρ/ρ) is generally computed. For saline or heat tracer tests in the saturated zone, Δρ/ρ can easily be translated into changes of pore water conductivity or underground temperature (provided that the initial salinity or temperature condition is homogeneous over the extent of the ERT panel). For water content changes in the vadose zone resulting from an infiltration event or injection experiment, many authors also work with Δρ/ρ or the relative change of water content Δθ/θ (linked to the change of resistivity through a single parameter: the Archie's law exponent "m"). This quantity is not influenced by the underground temperature and pore fluid conductivity (ρw) conditions, but it is influenced by the initial water content distribution. Therefore, one never knows whether a loss of Δθ/θ signal marks the limit of the infiltration front or merely a wetter initial condition. Another approach to understanding the infiltration process is to assess the absolute change of water content (Δθ). This requires the direct computation of the water content of the waste from the resistivity data. For that purpose, we used petrophysical laws calibrated with laboratory experiments and our knowledge of the in situ temperature and pore fluid conductivity. We then investigated water content changes in the waste material after a rainfall event (Δθ = Δθ/θ · θ). This observation is truly representative of the quantity of water infiltrated into the waste material. However, uncertainty in the pore fluid conductivity value may influence the computed water content changes (Δθ = k·ρw^(1/m), where "m" is the Archie's law exponent
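    The direct computation of water content from resistivity described above can be sketched with a simplified Archie relation θ = (ρw/ρ)^(1/m), taking the formation coefficient as 1. The exponent m, pore-fluid resistivity and the example cell resistivities below are hypothetical placeholders, not the calibrated values from the Minerve project:

    ```python
    import numpy as np

    def water_content(rho, rho_w, m=1.5):
        """Volumetric water content from bulk resistivity via a simplified
        Archie's law, rho = rho_w * theta**(-m) (formation coefficient a = 1).
        rho   : bulk resistivity of the inverted ERT model cells (ohm.m)
        rho_w : pore-fluid resistivity (ohm.m)
        m     : Archie's law exponent (site-calibrated in the real study)
        """
        return (rho_w / np.asarray(rho, dtype=float)) ** (1.0 / m)

    # Absolute water-content change between two time-lapse surveys:
    rho_before = np.array([40.0, 55.0, 70.0])   # hypothetical cell resistivities
    rho_after = np.array([32.0, 50.0, 69.0])    # after a rainfall event
    d_theta = water_content(rho_after, rho_w=8.0) - water_content(rho_before, rho_w=8.0)
    ```

    A decrease in bulk resistivity between surveys then maps directly to an absolute gain in water content, rather than a relative one tied to the initial distribution.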

  12. A Power Grid Optimization Algorithm by Observing Timing Error Risk by IR Drop

    NASA Astrophysics Data System (ADS)

    Kawakami, Yoshiyuki; Terao, Makoto; Fukui, Masahiro; Tsukiyama, Shuji

    With the advent of the deep submicron age, circuit performance is strongly impacted by process variations, and the sensitivity of circuit delay to the power-supply voltage grows as CMOS feature sizes shrink. Power grid optimization that considers the timing error risk caused by variations and IR drop is therefore very important for stable, high-speed operation of a system-on-chip. Many power grid optimization algorithms have been proposed, and most of them use IR drop in their objective functions. However, IR drop is an indirect metric, and we suspect it is too vague a metric for the real goal of LSI design. In this paper, we first propose an approach that uses the “timing error risk caused by IR drop” as a direct objective function. Second, a critical path map is introduced to express the critical paths distributed across the entire chip. The timing error risk is decreased by using the critical path map and the new objective function. Experimental results show the effectiveness of the approach.

  13. Real-Time Baseline Error Estimation and Correction for GNSS/Strong Motion Seismometer Integration

    NASA Astrophysics Data System (ADS)

    Li, C. Y. N.; Groves, P. D.; Ziebart, M. K.

    2014-12-01

    Accurate and rapid estimation of permanent surface displacement is required immediately after a slip event for earthquake monitoring or tsunami early warning. It is difficult to achieve the necessary accuracy and precision at high- and low-frequencies using GNSS or seismometry alone. GNSS and seismic sensors can be integrated to overcome the limitations of each. Kalman filter algorithms with displacement and velocity states have been developed to combine GNSS and accelerometer observations to obtain the optimal displacement solutions. However, the sawtooth-like phenomena caused by the bias or tilting of the sensor decrease the accuracy of the displacement estimates. A three-dimensional Kalman filter algorithm with an additional baseline error state has been developed. An experiment with both a GNSS receiver and a strong motion seismometer mounted on a movable platform and subjected to known displacements was carried out. The results clearly show that the additional baseline error state enables the Kalman filter to estimate the instrument's sensor bias and tilt effects and correct the state estimates in real time. Furthermore, the proposed Kalman filter algorithm has been validated with data sets from the 2010 Mw 7.2 El Mayor-Cucapah Earthquake. The results indicate that the additional baseline error state can not only eliminate the linear and quadratic drifts but also reduce the sawtooth-like effects from the displacement solutions. The conventional zero-mean baseline-corrected results cannot show the permanent displacements after an earthquake; the two-state Kalman filter can only provide stable and optimal solutions if the strong motion seismometer had not been moved or tilted by the earthquake. Yet the proposed Kalman filter can achieve the precise and accurate displacements by estimating and correcting for the baseline error at each epoch. The integration filters out noise-like distortions and thus improves the real-time detection and measurement capability
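    The displacement/velocity/baseline-error state vector described above can be illustrated with a minimal one-dimensional sketch (the paper's filter is three-dimensional and runs on real GNSS and strong-motion data; the process and measurement noise settings below are hypothetical):

    ```python
    import numpy as np

    def kf_gnss_accel(dt, accel, gnss, q_b=1e-4, r_gnss=1e-4):
        """One-dimensional sketch of the filter: states are [displacement,
        velocity, accelerometer baseline error]. The (biased) accelerometer
        drives the prediction; GNSS displacement corrects it each epoch.
        Noise settings q_b and r_gnss are hypothetical."""
        F = np.array([[1.0, dt, -0.5 * dt ** 2],
                      [0.0, 1.0, -dt],
                      [0.0, 0.0, 1.0]])
        B = np.array([0.5 * dt ** 2, dt, 0.0])
        H = np.array([[1.0, 0.0, 0.0]])
        Q = np.diag([1e-6, 1e-4, q_b])
        R = np.array([[r_gnss]])
        x, P = np.zeros(3), np.eye(3)
        states = []
        for a, z in zip(accel, gnss):
            x = F @ x + B * a                    # predict with measured acceleration
            P = F @ P @ F.T + Q
            y = z - H @ x                        # GNSS innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ y).ravel()
            P = (np.eye(3) - K @ H) @ P
            states.append(x.copy())
        return np.array(states)
    ```

    With a constant accelerometer bias and a stationary platform, the third state absorbs the bias, which is what suppresses the sawtooth-like drift a two-state filter would show.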

  14. Errors in visuo-haptic and haptic-haptic location matching are stable over long periods of time.

    PubMed

    Kuling, Irene A; Brenner, Eli; Smeets, Jeroen B J

    2016-05-01

    People make systematic errors when they move their unseen dominant hand to a visual target (visuo-haptic matching) or to their other unseen hand (haptic-haptic matching). Why they make such errors is still unknown. A key question in determining the reason is to what extent individual participants' errors are stable over time. To examine this, we developed a method to quantify the consistency. With this method, we studied the stability of systematic matching errors across time intervals of at least a month. Within this time period, individual subjects' matches were as consistent as one could expect on the basis of the variability in the individual participants' performance within each session. Thus individual participants make quite different systematic errors, but in similar circumstances they make the same errors across long periods of time. PMID:27043253

  15. Mixed control for perception and action: timing and error correction in rhythmic ball-bouncing.

    PubMed

    Siegler, I A; Bazile, C; Warren, W H

    2013-05-01

    The task of bouncing a ball on a racket was adopted as a model system for investigating the behavioral dynamics of rhythmic movement, specifically how perceptual information modulates the dynamics of action. Two experiments, with sixteen participants each, were carried out to definitively answer the following questions: How are passive stability and active stabilization combined to produce stable behavior? What informational quantities are used to actively regulate the two main components of the action-the timing of racket oscillation and the correction of errors in bounce height? We used a virtual ball-bouncing setup to simultaneously perturb gravity (g) and ball launch velocity (v b) at impact. In Experiment 1, we tested the control of racket timing by varying the ball's upward half-period t up while holding its peak height h p constant. Conversely, in Experiment 2, we tested error correction by varying h p while holding t up constant. Participants adopted a mixed control mode in which information in the ball's trajectory is used to actively stabilize behavior on a cycle-by-cycle basis, in order to keep the system within or near the passively stable region. The results reveal how these adjustments are visually controlled: the period of racket oscillation is modulated by the half-period of the ball's upward flight, and the change in racket velocity from the previous impact (via a change in racket amplitude) is governed by the error to the target. PMID:23515627

  16. Finite-approximation-error-based discrete-time iterative adaptive dynamic programming.

    PubMed

    Wei, Qinglai; Wang, Fei-Yue; Liu, Derong; Yang, Xiong

    2014-12-01

    In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The generalized value iteration algorithm permits an arbitrary positive semi-definite function to initialize it, which overcomes the disadvantage of traditional value iteration algorithms. When the iterative control law and iterative performance index function in each iteration cannot accurately be obtained, for the first time a new "design method of the convergence criteria" for the finite-approximation-error-based generalized value iteration algorithm is established. A suitable approximation error can be designed adaptively to make the iterative performance index function converge to a finite neighborhood of the optimal performance index function. Neural networks are used to implement the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the developed method. PMID:25265640
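    A toy finite-state sketch of the key property mentioned above: generalized value iteration may be initialized with an arbitrary nonnegative ("positive semi-definite") function and still converge to the same fixed point. A discounted toy MDP stands in for the paper's undiscounted nonlinear setting, and all transitions and costs below are made up:

    ```python
    import numpy as np

    def value_iteration(P, c, gamma, V0, iters=500):
        """Generalized value iteration on a toy finite MDP:
        V <- min_u [ c(x, u) + gamma * V(f(x, u)) ].
        P[u] gives each state's successor under action u (deterministic);
        c[u] is the stage cost. Any nonnegative V0 may start the iteration."""
        V = np.asarray(V0, dtype=float).copy()
        for _ in range(iters):
            V = np.min([c[u] + gamma * V[P[u]] for u in range(len(P))], axis=0)
        return V

    P = np.array([[1, 2, 0], [2, 0, 1]])              # successor state per action
    c = np.array([[1.0, 2.0, 0.5], [0.2, 1.5, 3.0]])  # stage cost per action/state
    V_from_zero = value_iteration(P, c, 0.9, np.zeros(3))
    V_from_psd = value_iteration(P, c, 0.9, np.array([5.0, 1.0, 7.0]))
    ```

    Both initializations contract to the same value function, which is the convergence behavior the paper's approximation-error analysis then bounds.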

  17. Effects of dating errors on nonparametric trend analyses of speleothem time series

    NASA Astrophysics Data System (ADS)

    Mudelsee, M.; Fohlmeister, J.; Scholz, D.

    2012-10-01

    A fundamental problem in paleoclimatology is to take fully into account the various error sources when examining proxy records with quantitative methods of statistical time series analysis. Records from dated climate archives such as speleothems add extra uncertainty from the age determination to the other sources that consist in measurement and proxy errors. This paper examines three stalagmite time series of oxygen isotopic composition (δ18O) from two caves in western Germany, the series AH-1 from the Atta Cave and the series Bu1 and Bu4 from the Bunker Cave. These records carry regional information about past changes in winter precipitation and temperature. U/Th and radiocarbon dating reveals that they cover the later part of the Holocene, the past 8.6 thousand years (ka). We analyse centennial- to millennial-scale climate trends by means of nonparametric Gasser-Müller kernel regression. Error bands around fitted trend curves are determined by combining (1) block bootstrap resampling to preserve noise properties (shape, autocorrelation) of the δ18O residuals and (2) timescale simulations (models StalAge and iscam). The timescale error influences on centennial- to millennial-scale trend estimation are not excessively large. We find a "mid-Holocene climate double-swing", from warm to cold to warm winter conditions (6.5 ka to 6.0 ka to 5.1 ka), with warm-cold amplitudes of around 0.5‰ δ18O; this finding is documented by all three records with high confidence. We also quantify the Medieval Warm Period (MWP), the Little Ice Age (LIA) and the current warmth. Our analyses cannot unequivocally support the conclusion that current regional winter climate is warmer than that during the MWP.
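    The trend-plus-error-band construction can be sketched as follows, with a plain Gaussian kernel standing in for the Gasser-Müller estimator and a moving-block bootstrap of the residuals preserving their autocorrelation (bandwidth, block length and the synthetic series are illustrative only; the paper additionally resamples the timescale itself):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def kernel_trend(t, y, t_out, h):
        """Gaussian-kernel trend estimate (a simple stand-in for the
        Gasser-Mueller estimator used in the paper); h is the bandwidth."""
        w = np.exp(-0.5 * ((t_out[:, None] - t[None, :]) / h) ** 2)
        return (w @ y) / w.sum(axis=1)

    def block_bootstrap_band(t, y, h, block=20, nboot=200):
        """Pointwise bootstrap standard error of the trend, resampling the
        residuals in contiguous blocks to preserve their autocorrelation."""
        trend = kernel_trend(t, y, t, h)
        resid = y - trend
        n = len(y)
        boots = np.empty((nboot, n))
        for k in range(nboot):
            idx = []
            while len(idx) < n:
                start = rng.integers(0, n - block)
                idx.extend(range(start, start + block))
            boots[k] = kernel_trend(t, trend + resid[np.array(idx[:n])], t, h)
        return trend, boots.std(axis=0)
    ```

    Resampling whole blocks rather than single residuals keeps the noise shape and autocorrelation of the δ18O residuals in each bootstrap replicate, as item (1) of the abstract requires.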

  19. Accounting for baseline differences and measurement error in the analysis of change over time.

    PubMed

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and also, time-dependent covariates can be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question if a joint infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy. PMID:23900718
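    The baseline measurement-error problem the paper addresses can be illustrated with a small simulation: regressing observed change on an error-prone observed baseline produces a spurious negative slope (regression to the mean) even when every subject's true change is identical. All numbers below are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    n = 20000
    true_base = rng.normal(100.0, 15.0, n)                   # underlying baseline
    obs_base = true_base + rng.normal(0.0, 10.0, n)          # observed, with error
    follow_up = true_base + 5.0 + rng.normal(0.0, 10.0, n)   # everyone changes by +5

    # Naive change computed from the observed baseline appears to depend on it:
    change = follow_up - obs_base
    slope = np.polyfit(obs_base, change, 1)[0]   # spuriously negative slope
    ```

    The expected slope here is -Var(error)/Var(observed baseline), about -0.31, even though the true change is +5 for everyone. Conditioning on the underlying baseline value, as the mixed-model approach does, removes this artifact.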

  20. Adaptive correction method for an OCXO and investigation of analytical cumulative time error upper bound.

    PubMed

    Zhou, Hui; Kunz, Thomas; Schwartz, Howard

    2011-01-01

    Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit more inaccurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically and comparison results between the analytical and simulated upper bound are provided. The results show that the analytical upper bound can serve as a practical guide for system designers. PMID:21244973
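    The cumulative time error of a free-running oscillator in holdover, the quantity bounded analytically in the paper, is the running integral of its fractional frequency error. A minimal simulation sketch with hypothetical drift and random-walk noise values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def cumulative_time_error(dt, n, drift=1e-10, rw_sigma=1e-11):
        """Cumulative time error of a free-running oscillator: the clock error
        is the running integral of the fractional frequency error, modeled here
        as a linear drift plus random-walk frequency noise (values hypothetical).
        """
        freq_err = drift * dt * np.arange(n) + np.cumsum(rng.normal(0.0, rw_sigma, n))
        return np.cumsum(freq_err) * dt

    err = cumulative_time_error(dt=1.0, n=3600)   # one hour of holdover, seconds
    ```

    With pure linear drift the time error grows quadratically, which is why even a small uncorrected frequency drift dominates the holdover budget of a base-station timing module.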

  1. The relative and absolute timing accuracy of the EPIC-pn camera on XMM-Newton, from X-ray pulsations of the Crab and other pulsars

    NASA Astrophysics Data System (ADS)

    Martin-Carrillo, A.; Kirsch, M. G. F.; Caballero, I.; Freyberg, M. J.; Ibarra, A.; Kendziorra, E.; Lammers, U.; Mukerjee, K.; Schönherr, G.; Stuhlinger, M.; Saxton, R. D.; Staubert, R.; Suchy, S.; Wellbrock, A.; Webb, N.; Guainazzi, M.

    2012-09-01

    Aims: Reliable timing calibration is essential for the accurate comparison of XMM-Newton light curves with those from other observatories, to ultimately use them to derive precise physical quantities. The XMM-Newton timing calibration is based on pulsar analysis. However, because pulsars show both timing noise and glitches, it is essential to monitor these calibration sources regularly. To this end, the XMM-Newton observatory performs observations twice a year of the Crab pulsar to monitor the absolute timing accuracy of the EPIC-pn camera in the fast timing and burst modes. We present the results of this monitoring campaign, comparing XMM-Newton data from the Crab pulsar (PSR B0531+21) with radio measurements. In addition, we use five pulsars (PSR J0537-69, PSR B0540-69, PSR B0833-45, PSR B1509-58, and PSR B1055-52) with periods ranging from 16 ms to 197 ms to verify the relative timing accuracy. Methods: We analysed 38 XMM-Newton observations (0.2-12.0 keV) of the Crab taken over the first ten years of the mission and 13 observations from the five complementary pulsars. All data were processed with SAS, the XMM-Newton Scientific Analysis Software, version 9.0. Epoch-folding techniques coupled with χ² tests were used to derive relative timing accuracies. The absolute timing accuracy was determined using the Crab data and comparing the time shift between the main X-ray and radio peaks in the phase-folded light curves. Results: The relative timing accuracy of XMM-Newton is found to be better than 10⁻⁸. The strongest X-ray pulse peak precedes the corresponding radio peak by 306 ± 9 μs, which agrees with other high-energy observatories such as Chandra, INTEGRAL and RXTE. The derived absolute timing accuracy from our analysis is ± 48 μs.
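    The epoch-folding-plus-χ² procedure used for the relative timing verification can be sketched as follows. The event list is synthetic, and the period and pulse shape are illustrative stand-ins, not Crab data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def epoch_fold_chi2(times, period, nbins=20):
        """Fold event arrival times on a trial period and test the binned phase
        profile against flatness with Pearson's chi-squared statistic; a real
        pulsation drives chi2 far above the ~(nbins - 1) expected for noise."""
        phase = (np.asarray(times) / period) % 1.0
        counts, _ = np.histogram(phase, bins=nbins, range=(0.0, 1.0))
        expected = len(times) / nbins
        return float(np.sum((counts - expected) ** 2 / expected))

    # Synthetic event list: a 33.5 ms pulsation on top of a flat background
    background = rng.uniform(0.0, 10.0, 20000)
    pulsed = (rng.integers(0, 298, 5000) + 0.5 + rng.normal(0.0, 0.03, 5000)) * 0.0335
    events = np.concatenate([background, pulsed])

    chi2_true = epoch_fold_chi2(events, 0.0335)   # trial period = true period
    chi2_off = epoch_fold_chi2(events, 0.0310)    # wrong trial period
    ```

    Scanning the trial period and locating the χ² maximum recovers the pulse period; comparing it against the radio ephemeris is what yields the relative timing accuracy.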

  2. Dynamic time warping in phoneme modeling for fast pronunciation error detection.

    PubMed

    Miodonska, Zuzanna; Bugdol, Marcin D; Krecichwost, Michal

    2016-02-01

    The presented paper describes a novel approach to the detection of pronunciation errors. It makes use of the modeling of well-pronounced and mispronounced phonemes by means of the Dynamic Time Warping (DTW) algorithm. Four approaches that make use of the DTW phoneme modeling were developed to detect pronunciation errors: Variations of the Word Structure (VoWS), Normalized Phoneme Distances Thresholding (NPDT), Furthest Segment Search (FSS) and Normalized Furthest Segment Search (NFSS). The performance evaluation of each module was carried out using a speech database of correctly and incorrectly pronounced words in the Polish language, with up to 10 patterns of every trained word from a set of 12 words having different phonetic structures. The performance of DTW modeling was compared to Hidden Markov Models (HMM) that were used for the same four approaches (VoWS, NPDT, FSS, NFSS). The average error rate (AER) was the lowest for DTW with NPDT (AER=0.287) and scored better than HMM with FSS (AER=0.473), which was the best result for HMM. The DTW modeling was faster than HMM for all four approaches. This technique can be used for computer-assisted pronunciation training systems that can work with a relatively small training speech corpus (less than 20 patterns per word) to support speech therapy at home. PMID:26739104
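    The core DTW computation underlying the four detection approaches can be sketched as a classic dynamic-programming alignment between two feature sequences. This uses a plain Euclidean frame cost and no path constraints, which a production system might add:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic Time Warping distance between two feature sequences
        (frames x coefficients; 1-D inputs are treated as one coefficient).
        Classic O(len(a) * len(b)) dynamic program with Euclidean frame cost."""
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        if a.ndim == 1:
            a = a[:, None]
        if b.ndim == 1:
            b = b[:, None]
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]
    ```

    Comparing an utterance against stored well-pronounced and mispronounced templates by this distance, and normalizing or thresholding it per phoneme segment, is the building block the VoWS, NPDT, FSS and NFSS variants share.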

  3. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
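    The two-measurement subtraction at the heart of the method can be sketched numerically: the first phase map contains the combined errors of the flat under test and the auxiliary optic, the second contains the auxiliary optic alone, and the difference isolates the flat. Scale factors between surface height and measured phase are ignored in this illustration, and the maps are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Measurement 1: flat in the beam -> errors of flat + auxiliary optic.
    # Measurement 2: flat removed, fiber moved -> auxiliary-optic errors only.
    flat_error = rng.normal(0.0, 5e-9, (64, 64))   # hypothetical surface map (m)
    aux_error = rng.normal(0.0, 2e-8, (64, 64))    # hypothetical auxiliary-optic map

    phase_combined = flat_error + aux_error        # extracted from interferogram 1
    phase_aux_only = aux_error                     # extracted from interferogram 2
    flat_recovered = phase_combined - phase_aux_only   # absolute error of the flat
    ```

    Because both measurements share the same auxiliary optic, its error cancels point by point, which is what makes the result an absolute rather than a relative calibration.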

  4. Refining measurements of lateral channel movement from image time series by quantifying spatial variations in registration error

    NASA Astrophysics Data System (ADS)

    Lea, Devin M.; Legleiter, Carl J.

    2016-04-01

    Remotely sensed data provides information on river morphology useful for examining channel change at yearly-to-decadal time scales. Although previous studies have emphasized the need to distinguish true geomorphic change from errors associated with image registration, standard metrics for assessing and summarizing these errors, such as the root-mean-square error (RMSE) and 90th percentile of the distribution of ground control point (GCP) error, fail to incorporate the spatial structure of this uncertainty. In this study, we introduce a framework for evaluating whether observations of lateral channel migration along a meandering channel are statistically significant, given the spatial distribution of registration error. An iterative leave-one-out cross-validation approach was used to produce local error metrics for an image time series from Savery Creek, Wyoming, USA, and to evaluate various transformation equations, interpolation methods, and GCP placement strategies. Interpolated error surfaces then were used to create error ellipses representing spatially variable buffers of detectable change. Our results show that, for all five sequential image pairs we examined, spatially distributed estimates of registration error enabled detection of a greater number of statistically significant lateral migration vectors than the spatially uniform RMSE or 90th percentile of GCP error. Conversely, spatially distributed error metrics prevented changes from being mistaken as real in areas of greater registration error. Our results also support the findings of previous studies: second-order polynomial functions on average yield the lowest RMSE, and errors are reduced by placing GCPs on the floodplain rather than on hillslopes. This study highlights the importance of characterizing the spatial distribution of image registration errors in the analysis of channel change.
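    The iterative leave-one-out cross-validation of GCP error can be sketched as follows, using a least-squares affine transform as the registration model (the paper finds second-order polynomials best on average; the points below are synthetic):

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares 2-D affine transform mapping src points onto dst."""
        A = np.hstack([src, np.ones((len(src), 1))])
        coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return coef                                   # 3 x 2 coefficient matrix

    def loo_gcp_errors(src, dst):
        """Leave-one-out registration error at each ground control point:
        refit the transform without point i, then measure its misfit at i."""
        errs = np.empty(len(src))
        for i in range(len(src)):
            keep = np.arange(len(src)) != i
            coef = fit_affine(src[keep], dst[keep])
            pred = np.append(src[i], 1.0) @ coef
            errs[i] = np.linalg.norm(pred - dst[i])
        return errs
    ```

    Interpolating these per-point errors across the image, instead of collapsing them to a single RMSE, yields the spatially variable change-detection buffers the study advocates.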

  5. Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors

    NASA Astrophysics Data System (ADS)

    Yan, Feifei; Chang, Wenge; Li, Xiangyang

    2015-12-01

    Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is the key technique of bistatic SAR (BiSAR) system, and raw data simulation is an effective tool for verifying the time and frequency synchronization techniques. According to the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in 2-D frequency domain and phase compensation in range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results of point targets and extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.

  6. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    SciTech Connect

    Kertzscher, Gustavo Andersen, Claus E.; Tanderup, Kari

    2014-05-15

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data-driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time-efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data-driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied to two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was

  7. Representation of layer-counted proxy records as probability densities on error-free time axes

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2016-04-01

    Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. 
On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the
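    The accumulation of counting errors described in this record can be illustrated with a small Monte Carlo sketch. All probabilities and layer counts below are invented for illustration; the actual method builds Bayesian probability densities rather than this toy simulation:

```python
import random

def age_error_sd(n_layers, p_miss=0.01, p_double=0.01, n_sim=2000, seed=0):
    """Standard deviation of the accumulated layer-counting age error:
    each annual layer may be missed (counted age too young) or counted
    twice (too old) with small probabilities, so the dating uncertainty
    grows with the number of layers counted back in time."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_sim):
        e = 0
        for _ in range(n_layers):
            u = rng.random()
            if u < p_miss:
                e -= 1          # missed layer
            elif u < p_miss + p_double:
                e += 1          # double-counted layer
        errors.append(e)
    mean = sum(errors) / n_sim
    return (sum((e - mean) ** 2 for e in errors) / (n_sim - 1)) ** 0.5

# Uncertainty after 100 vs. 1000 counted layers
sd_young, sd_old = age_error_sd(100), age_error_sd(1000)
```

    With symmetric miss/double-count probabilities p, the error is a sum of n independent ±1 perturbations each occurring with probability 2p, so its standard deviation grows roughly as sqrt(2*p*n): uncertainty accumulates with depth, as in the record above.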

  8. [Design and implementation of real-time processing platform for movement error correction of hyperspectral imaging].

    PubMed

    Yu, Tao; Hu, Bing-liang; Gao, Xiao-hui; Wei, Ru-yi; Jing, Juan-juan

    2012-08-01

    The conventional approach, in which compressed and packed image data are transmitted from the satellite to the ground for processing, is too slow for real-time applications: the images are large, the processing involves many steps, and the recovery arithmetic is complex. An accurate and fast data-processing platform is therefore urgently needed for real-time work. At present, few platforms exist for data recovery and error correction, and a platform's processing speed, precision, flexibility, configurability, and upgradability directly affect subsequent target detection and identification. Taking a spatially modulated spectrometer as the research target, we designed and implemented a hardware platform based on a Xilinx Virtex-5 FPGA, combined with configurable, high-precision, and flexible ISE IP soft-core resources, focusing on the key aspects of the hardware design. Relevant test data were obtained, and an effective approach to spectrum recovery and movement error correction was explored. PMID:23156797

  9. Post-event human decision errors: operator action tree/time reliability correlation

    SciTech Connect

    Hall, R E; Fragola, J; Wreathall, J

    1982-11-01

    This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations.
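    Time-reliability correlations of the kind this report describes are often implemented as a log-normal non-response curve in human reliability analysis. The sketch below is a generic illustration; the median time and log-scale spread are invented parameters, not values from the report:

```python
import math

def nonresponse_probability(t_available, t_median, sigma_ln):
    """Probability that the operators have NOT yet made the correct
    decision within t_available, modeled as the survival function of
    a log-normal response-time distribution."""
    if t_available <= 0:
        return 1.0
    z = (math.log(t_available) - math.log(t_median)) / sigma_ln
    # Survival function of the standard normal, via erfc
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# More time available for the decision -> lower error probability
p_5min = nonresponse_probability(5.0, t_median=20.0, sigma_ln=0.8)
p_60min = nonresponse_probability(60.0, t_median=20.0, sigma_ln=0.8)
```

    By construction the curve passes through 0.5 at the median response time, and the available time dominates the estimate, mirroring the report's assumption that time available is the dominating factor.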

  10. Bound on quantum computation time: Quantum error correction in a critical environment

    SciTech Connect

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-08-15

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.

  11. Mean square displacements with error estimates from non-equidistant time-step kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-06-01

    We present a method to calculate mean square displacements (MSD) with error estimates from kinetic Monte Carlo (KMC) simulations of diffusion processes with non-equidistant time-steps. An analytical solution for estimating the errors is presented for the special case of one moving particle at fixed rate constant. The method is generalized to an efficient computational algorithm that can handle any number of moving particles or different rates in the simulated system. We show with examples that the proposed method gives the correct statistical error when the MSD curve describes pure Brownian motion and can otherwise be used as an upper bound for the true error.
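    A minimal version of this idea, for the special case the authors solve analytically (one moving particle at a fixed rate), can be sketched as follows. The rate, time grid, and walker count are arbitrary choices, and the error here is estimated across independent walkers rather than by the paper's algorithm:

```python
import random

def kmc_walk(rate, t_max, seed):
    """1-D lattice KMC trajectory: exponentially distributed waiting
    times (hence non-equidistant time steps) and +/-1 hops."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    times, pos = [0.0], [0]
    while t < t_max:
        t += rng.expovariate(rate)   # stochastic, uneven time step
        x += rng.choice((-1, 1))
        times.append(t)
        pos.append(x)
    return times, pos

def position_at(times, pos, t):
    """Sample the piecewise-constant trajectory at an arbitrary time t."""
    i = 0
    while i + 1 < len(times) and times[i + 1] <= t:
        i += 1
    return pos[i]

def msd_with_error(walks, grid):
    """Mean square displacement and its standard error over independent
    walkers, evaluated on a uniform time grid."""
    result = []
    for t in grid:
        d2 = [position_at(ts, xs, t) ** 2 for ts, xs in walks]
        mean = sum(d2) / len(d2)
        var = sum((v - mean) ** 2 for v in d2) / (len(d2) - 1)
        result.append((mean, (var / len(d2)) ** 0.5))
    return result

walks = [kmc_walk(rate=1.0, t_max=100.0, seed=s) for s in range(200)]
msd = msd_with_error(walks, grid=[10.0, 50.0, 100.0])
```

    For a rate-1 ±1 hopper the walk is pure Brownian motion with MSD(t) ≈ t, which the sketch reproduces within its reported error bars.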

  12. Distributions in the Error Space: Goal-Directed Movements Described in Time and State-Space Representations

    PubMed Central

    Fisher, Moria E.; Huang, Felix C.; Wright, Zachary A.; Patton, James L.

    2016-01-01

    Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaptation and possibly offer clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state-space, used to quantify reaching errors. We hypothesized that the domain with the lowest amount of variability would lead to a predictive model of reaching error with the highest accuracy. Here we showed that errors represented in a time domain demonstrate the least variance and allow for the most accurate predictive model of reaching errors. These predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation. PMID:25571595

  13. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... it, but, in any event, the agency must act promptly in doing so. (b) Participant's discovery of error. If an agency fails to discover an error of which a participant has knowledge involving the correct...

  14. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... it, but, in any event, the agency must act promptly in doing so. (b) Participant's discovery of error. If an agency fails to discover an error of which a participant has knowledge involving the correct...

  15. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    NASA Astrophysics Data System (ADS)

    Li, Xingxing

    2014-05-01

    displacements is accompanied by a drift due to potentially uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real time. The TPP approach can overcome the convergence problem of precise point positioning (PPP), and also avoids the integration and de-trending process of the variometric approach. The performance of TPP is demonstrated to be at the few-centimeter level of displacement accuracy even over a twenty-minute interval with real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is also proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed and investigated. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that an accuracy of a few centimeters in the coseismic displacements is achievable. Keywords: high-rate GPS; real-time GPS seismology; single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion.

  16. Real-Time Determination of Absolute Frequency in Continuous-Wave Terahertz Radiation with a Photocarrier Terahertz Frequency Comb Induced by an Unstabilized Femtosecond Laser

    NASA Astrophysics Data System (ADS)

    Minamikawa, Takeo; Hayashi, Kenta; Mizuguchi, Tatsuya; Hsieh, Yi-Da; Abdelsalam, Dahi Ghareab; Mizutani, Yasuhiro; Yamamoto, Hirotsugu; Iwata, Tetsuo; Yasui, Takeshi

    2016-05-01

    A practical method for the absolute frequency measurement of continuous-wave terahertz (CW-THz) radiation uses a photocarrier terahertz frequency comb (PC-THz comb) because of its ability to realize real-time, precise measurement without the need for cryogenic cooling. However, the requirement for precise stabilization of the repetition frequency (f_rep) and/or use of dual femtosecond lasers hinders its practical use. In this article, based on the fact that an equal interval between PC-THz comb modes is always maintained regardless of the fluctuation in f_rep, the PC-THz comb induced by an unstabilized laser was used to determine the absolute frequency f_THz of CW-THz radiation. Using an f_rep-free-running PC-THz comb, the f_THz of the frequency-fixed or frequency-fluctuated active frequency multiplier chain CW-THz source was determined at a measurement rate of 10 Hz with a relative accuracy of 8.2 × 10^-13 and a relative precision of 8.8 × 10^-12 with respect to a rubidium frequency standard. Furthermore, f_THz was correctly determined even when fluctuating over a range of 20 GHz. The proposed method enables the use of any commercial femtosecond laser for the absolute frequency measurement of CW-THz radiation.
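    The frequency arithmetic behind such comb-based measurements can be sketched as below. This is a deliberate simplification: the mode-number step assumes a coarse prior estimate, and the real sign/mode disambiguation in such experiments relies on how the beat moves when f_rep changes.

```python
def mode_number(f_coarse, f_rep):
    """Nearest comb-mode index for a coarse prior estimate of the
    CW-THz frequency (e.g. from the source specification)."""
    return round(f_coarse / f_rep)

def absolute_frequency(f_rep, f_beat, m, sign=1):
    """f_THz = m * f_rep + sign * f_beat: the CW-THz line beats
    against the nearest mode of the PC-THz comb."""
    return m * f_rep + sign * f_beat

f_rep = 100e6                # 100 MHz repetition rate (illustrative)
f_true = 0.3e12 + 37e6       # a CW-THz line near 0.3 THz
m = mode_number(0.3e12, f_rep)
f_measured = absolute_frequency(f_rep, f_beat=37e6, m=m)
```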

  17. Cerebellar Symptoms Are Associated With Omission Errors and Variability of Response Time in Children With ADHD.

    PubMed

    Goetz, Michal; Schwabova, Jaroslava; Hlavka, Zdenek; Ptacek, Radek; Zumrova, Alena; Hort, Vladimír; Doyle, Robert

    2014-01-10

    Objective: We examined the presence of cerebellar symptoms in ADHD and their association with behavioral markers of this disorder. Method: Sixty-two children with ADHD and 62 typically developing (TD) children were examined for cerebellar symptoms using the ataxia rating scale and tested using Conners' Continuous Performance Test. Results: Children with ADHD had significantly more cerebellar symptoms compared with the TD children. Cerebellar symptom scores decreased with age in the ADHD group; in the TD group they remained stable. In both groups, cerebellar symptoms were associated with parent-rated hyperactive/impulsive symptoms, variability of response time standard error (RT-SE), and the increase of RT-SE as the test progresses. More variables were associated with cerebellar symptoms in the ADHD group, including omission errors, overall RT-SE, and its increase for prolonged interstimulus intervals. Conclusion: Our results highlight the importance of research into motor functions in children with ADHD and indicate a role for cerebellar impairment in this disorder. (J. of Att. Dis. XXXX; XX(X) 1-XX). PMID:24412970

  18. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    PubMed

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier. PMID:26736619

  19. Discrete time interval measurement system: fundamentals, resolution and errors in the measurement of angular vibrations

    NASA Astrophysics Data System (ADS)

    Gómez de León, F. C.; Meroño Pérez, P. A.

    2010-07-01

    The traditional method for measuring the velocity and the angular vibration in the shaft of rotating machines using incremental encoders is based on counting the pulses at given time intervals. This method is generically called the time interval measurement system (TIMS). A variant of this method that we have developed in this work consists of measuring the corresponding time of each pulse from the encoder and sampling the signal by means of an A/D converter as if it were an analog signal, that is to say, in discrete time. For this reason, we have denominated this method as the discrete time interval measurement system (DTIMS). This measurement system provides a substantial improvement in the precision and frequency resolution compared with the traditional method of counting pulses. In addition, this method permits modification of the width of some pulses in order to obtain a mark-phase on every lap. This paper explains the theoretical fundamentals of the DTIMS and its application for measuring the angular vibrations of rotating machines. It also displays the required relationship between the sampling rate of the signal, the number of pulses of the encoder and the rotating velocity in order to obtain the required resolution and to delimit the methodological errors in the measurement.
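    The core computation of such a time-interval approach, recovering angular velocity from per-pulse timestamps rather than from pulse counts per fixed window, can be sketched as follows (the encoder resolution and timestamps are invented):

```python
import math

def angular_velocity(pulse_times, pulses_per_rev):
    """Instantaneous angular velocity (rad/s) between successive
    encoder pulses: each pulse advances the shaft by
    2*pi/pulses_per_rev, and the measured quantity is the elapsed
    time between pulses."""
    dtheta = 2.0 * math.pi / pulses_per_rev
    return [dtheta / (t1 - t0)
            for t0, t1 in zip(pulse_times, pulse_times[1:])]

# A shaft at exactly 10 rev/s with a 360-pulse encoder: one pulse
# every 1/3600 s, so the recovered velocity is 20*pi rad/s throughout.
times = [i / 3600.0 for i in range(20)]
omega = angular_velocity(times, pulses_per_rev=360)
```

    Because the resolution comes from timing each interval rather than counting pulses per window, shaft-speed fluctuations between individual pulses become visible, which is the advantage the record describes.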

  20. Absolute, time-resolved emission of non-LTE L-shell spectra from Ti-doped aerogels

    NASA Astrophysics Data System (ADS)

    Back, C. A.; Feldman, U.; Weaver, J. L.; Seely, J. F.; Constantin, C.; Holland, G.; Lee, R. W.; Chung, H.-K.; Scott, H. A.

    2006-05-01

    Outstanding discrepancies between data and calculations of laser-produced plasmas in recombination have been observed since the 1980s. Although improvements in hydrodynamic modeling may reduce the discrepancies, there are indications that non-LTE atomic kinetics may be the dominant cause. Experiments to investigate non-LTE effects were recently performed at the NIKE KrF laser on low-density Ti-doped aerogels. The laser irradiated a 2 mm diameter, cylindrical sample of various lengths with a 4-ns square pulse to create a volumetrically heated plasma. Ti L-shell spectra spanning a range of 0.47–3 keV were obtained with a transmission grating coupled to Si photodiodes. The diagnostic can be configured to provide 1-dimensional spatial resolution at a single photon energy, or 18 discrete energies with a resolving power, λ/δλ, of 3–20. The data are examined and compared to calculations to develop absolute emission measurements that can provide new tests of the non-LTE physics.

  1. Time-dependent Neural Processing of Auditory Feedback during Voice Pitch Error Detection

    PubMed Central

    Behroozmand, Roozbeh; Liu, Hanjun; Larson, Charles R.

    2012-01-01

    The neural responses to sensory consequences of a self-produced motor act are suppressed compared with those in response to a similar but externally generated stimulus. Previous studies in the somatosensory and auditory systems have shown that the motor-induced suppression of the sensory mechanisms is sensitive to delays between the motor act and the onset of the stimulus. The present study investigated time-dependent neural processing of auditory feedback in response to self-produced vocalizations. ERPs were recorded in response to normal and pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the same vocalizations. The pitch-shifted stimulus was delivered to the subjects’ auditory feedback after a randomly chosen time delay between the vocal onset and the stimulus presentation. Results showed that the neural responses to delayed feedback perturbations were significantly larger than those in response to the pitch-shifted stimulus occurring at vocal onset. Active vocalization was shown to enhance neural responsiveness to feedback alterations only for nonzero delays compared with passive listening to the playback. These findings indicated that the neural mechanisms of auditory feedback processing are sensitive to timing between the vocal motor commands and the incoming auditory feedback. Time-dependent neural processing of auditory feedback may be an important feature of the audio-vocal integration system that helps to improve the feedback-based monitoring and control of voice structure through vocal error detection and correction. PMID:20146608

  2. Statistical modelling of forecast errors for multiple lead-times and a system of reservoirs

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjorn; Steinsland, Ingelin; Kolberg, Sjur

    2010-05-01

    Water resources management, e.g. the operation of reservoirs, is based, among other things, on inflow forecasts provided by a precipitation-runoff model. The forecasted inflow is normally given as a single value, even though it is uncertain. There is a growing interest in accounting for uncertain information in decision support systems, e.g. how to operate a hydropower reservoir to maximize the gain. One challenge is to develop decision support systems that can use uncertain information. The contribution from the hydrological modeler is to derive a forecast distribution (from which uncertainty intervals can be computed) for the inflow predictions. In this study we constructed a statistical model for the forecast errors of daily inflow into a system of four hydropower reservoirs in Ulla-Førre in Western Norway. A distributed hydrological model was applied to generate the inflow forecasts using weather forecasts provided by ECM for lead-times up to 10 days. The precipitation forecasts were corrected for systematic bias. A statistical model based on auto-regressive innovations for Box-Cox-transformed observations and forecasts was constructed for the forecast errors. The parameters of the statistical model were conditioned on climate and the internal snow state in the hydrological model. The model was evaluated according to the reliability of the forecast distribution, the width of the forecast distribution, and the efficiency of the median forecast for the 10 lead-times and the four catchments. The interpretation of the results had to be done carefully since the inflow data themselves carry large uncertainty.
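    The structure of such a model, an autoregressive model for errors of Box-Cox-transformed inflows, from which interval forecasts can be drawn, can be sketched as follows. The transform parameter, AR coefficient, innovation scale, and inflow value are all invented here, and the real model conditions its parameters on climate and snow state:

```python
import math
import random

def box_cox(x, lam=0.3):
    """Box-Cox transform applied to observed and forecasted inflow."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def inv_box_cox(y, lam=0.3):
    """Inverse Box-Cox transform, back to inflow units."""
    return math.exp(y) if lam == 0 else (lam * y + 1.0) ** (1.0 / lam)

def forecast_interval(forecast, prev_error, phi=0.6, sigma=0.2,
                      lam=0.3, n=4000, seed=0):
    """Monte Carlo 90% forecast interval for the next day: the
    transformed error follows an AR(1) process,
    e_t = phi * e_{t-1} + N(0, sigma)."""
    rng = random.Random(seed)
    y = box_cox(forecast, lam)
    draws = sorted(
        inv_box_cox(y + phi * prev_error + rng.gauss(0.0, sigma), lam)
        for _ in range(n))
    return draws[int(0.05 * n)], draws[int(0.95 * n)]

lo, hi = forecast_interval(forecast=50.0, prev_error=0.1)
```

    The AR(1) term carries yesterday's error into today's distribution, narrowing or shifting the interval, which is what makes the forecast distribution sharper at short lead-times.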

  3. Audibility of dispersion error in room acoustic finite-difference time-domain simulation as a function of simulation distance.

    PubMed

    Saarelma, Jukka; Botts, Jonathan; Hamilton, Brian; Savioja, Lauri

    2016-04-01

    Finite-difference time-domain (FDTD) simulation has been a popular area of research in room acoustics due to its capability to simulate wave phenomena in a wide bandwidth directly in the time domain. A downside of the method is that it introduces a direction- and frequency-dependent error into the simulated sound field due to the non-linear dispersion relation of the discrete system. In this study, the perceptual threshold of the dispersion error is measured in three-dimensional FDTD schemes as a function of simulation distance. Dispersion error is evaluated for three different explicit, non-staggered FDTD schemes using the numerical wavenumber in the direction of the worst-case error of each scheme. It is found that the thresholds for the different schemes do not vary significantly when the phase velocity error level is fixed. The thresholds are found to vary significantly between the different sound samples. The measured threshold for the audibility of dispersion error at the probability level of 82% correct discrimination for three-alternative forced choice is found to be 9.1 m of propagation in a free field, which leads to a maximum group delay error of 1.8 ms at 20 kHz with the chosen phase velocity error level of 2%. PMID:27106330
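    The frequency dependence of the dispersion error can be illustrated from the discrete dispersion relation of the simplest (standard rectilinear leapfrog) scheme along a grid axis, its worst-case direction. The grid spacing and Courant number below are illustrative choices, not the paper's configuration:

```python
import math

def axial_phase_velocity_error(freq, c=343.0, dx=0.005,
                               courant=1.0 / math.sqrt(3.0)):
    """Relative phase-velocity error of the standard 3-D leapfrog
    FDTD scheme along a grid axis, from its discrete dispersion
    relation sin(w*dt/2) = courant * sin(k*dx/2)."""
    dt = courant * dx / c
    w = 2.0 * math.pi * freq
    # Solve the dispersion relation for the numerical wavenumber
    k_numerical = (2.0 / dx) * math.asin(math.sin(w * dt / 2.0) / courant)
    return (w / k_numerical) / c - 1.0

# Numerical waves lag behind c, and the lag grows with frequency
err_1k = axial_phase_velocity_error(1000.0)
err_15k = axial_phase_velocity_error(15000.0)
```

    A frequency-dependent lag of this kind is exactly what accumulates with propagation distance into an audible group-delay error.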

  4. Characterization of ambient air pollution measurement error in a time-series health study using a geostatistical simulation approach

    NASA Astrophysics Data System (ADS)

    Goldman, Gretchen T.; Mulholland, James A.; Russell, Armistead G.; Gass, Katherine; Strickland, Matthew J.; Tolbert, Paige E.

    2012-09-01

    In recent years, geostatistical modeling has been used to inform air pollution health studies. In this study, distributions of daily ambient concentrations were modeled over space and time for 12 air pollutants. Simulated pollutant fields were produced for a 6-year time period over the 20-county metropolitan Atlanta area using the Stanford Geostatistical Modeling Software (SGeMS). These simulations incorporate the temporal and spatial autocorrelation structure of ambient pollutants, as well as season and day-of-week temporal and spatial trends; these fields were considered to be the true ambient pollutant fields for the purposes of the simulations that followed. Simulated monitor data at the locations of actual monitors were then generated that contain error representative of instrument imprecision. From the simulated monitor data, four exposure metrics were calculated: central monitor and unweighted, population-weighted, and area-weighted averages. For each metric, the amount and type of error relative to the simulated pollutant fields are characterized and the impact of error on an epidemiologic time-series analysis is predicted. The amount of error, as indicated by a lack of spatial autocorrelation, is greater for primary pollutants than for secondary pollutants and is only moderately reduced by averaging across monitors; more error will result in less statistical power in the epidemiologic analysis. The type of error, as indicated by the correlations of error with the monitor data and with the true ambient concentration, varies with exposure metric, with error in the central monitor metric more of the classical type (i.e., independent of the true ambient concentration) and error in the spatial average metrics more of the Berkson type (i.e., independent of the monitor data). Error type will affect the bias in the health risk estimate, with bias toward the null and away from the null predicted depending on the exposure metric; population-weighting yielded the
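    The classical/Berkson distinction used in this kind of study can be made concrete with a small simulation: in the standard definitions, classical error is independent of the true concentration, while Berkson error is independent of the assigned (measured) value. Everything below is an invented illustration, not the study's SGeMS fields:

```python
import random

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def error_type(true_vals, measured):
    """Correlations that diagnose the error type: classical error is
    independent of the true value, Berkson error of the measured one."""
    err = [m - t for t, m in zip(true_vals, measured)]
    return {"r_err_true": pearson(err, true_vals),
            "r_err_measured": pearson(err, measured)}

rng = random.Random(1)
base = [rng.gauss(10.0, 2.0) for _ in range(5000)]

# Classical: measured = true + noise (noise independent of the truth)
classical_true = base
classical_meas = [t + rng.gauss(0.0, 1.0) for t in base]

# Berkson: true = measured + noise (noise independent of the measured
# value, as when a smooth spatial average is assigned to everyone)
berkson_meas = base
berkson_true = [m + rng.gauss(0.0, 1.0) for m in base]
```

    Classical error attenuates health-risk estimates toward the null, whereas Berkson error mainly inflates their variance, which is why the metric-dependent error type matters for the epidemiologic analysis.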

  5. Real-time lossy compression of hyperspectral images using iterative error analysis on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sánchez, Sergio; Plaza, Antonio

    2012-06-01

    Hyperspectral image compression is an important task in remotely sensed Earth Observation, as the dimensionality of this kind of image data is ever increasing. This requires on-board compression in order to optimize the downlink connection when sending the data to Earth. A successful algorithm for lossy compression of remotely sensed hyperspectral data is the iterative error analysis (IEA) algorithm, an iterative process that allows the amount of information loss and the compression ratio to be controlled via the number of iterations. This algorithm, which is based on spectral unmixing concepts, can be computationally expensive for hyperspectral images with high dimensionality. In this paper, we develop a new parallel implementation of the IEA algorithm for hyperspectral image compression on graphics processing units (GPUs). The proposed implementation is tested on several different GPUs from NVidia, and is shown to exhibit real-time performance in the analysis of Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) data sets collected over different locations. The proposed algorithm and its parallel GPU implementation represent a significant advance towards real-time onboard (lossy) compression of hyperspectral data, where the quality of the compression can also be adjusted in real time.

  6. Evaluating load model errors by comparison to a global GPS time series solution (Invited)

    NASA Astrophysics Data System (ADS)

    van Dam, T. M.; Collilieux, X.; Rebischung, P.; Ray, J.; Altamimi, Z.

    2013-12-01

    Various space geodetic studies over the past two decades have shown that temporal variations in the distribution of non-tidal oceanic, atmospheric, and continental water masses cause small, but detectable vertical displacements of the Earth's surface. Unlike most past research that focused only on the vertical load component, we have included the horizontal, as well as vertical, components and considered non-tidal atmosphere, ocean, and surface water load models. Our geodetic solution is the most current reprocessed station time series from the International GNSS Service (IGS) for a global set of 706 stations, each having more than 100 weekly observations. The long-term stacking of the weekly frame solutions has taken utmost care to minimize aliasing of local load signals into the frame parameters to ensure reliable time series of individual station motions. Our reference load model consists of components from NCEP atmosphere (corrected for high resolution topographic variations), ECCO non-tidal ocean, and GLDAS surface water (cubic detrended over 1998 to 2011 to remove inter-annual artifacts), then combined, linearly detrended, and averaged to the middle of each GPS week as a posteriori corrections. This reference model reduces the WRMS scatters of about 72, 63, and 87% of GPS station dN, dE, and dU components, respectively. Alternative load models, for individual components or the total, can be tested against the same set of GPS time series to determine their relative accuracy. For example, not removing a cubic trend from the GLDAS surface water loads causes a global average quadratic increase in WRMS scatters of about 0.1, 0.1, and 0.5 mm in dN, dE, and dU. The method is sensitive to load model error differences at the level of about 0.1 mm in the horizontal components and about 0.2 to 0.3 mm in the vertical due to residual load aliasing and other sources of systematic error in the GPS time series. 
We will report relative accuracy differences for a range of load

  7. Error propagation in relative real-time reverse transcription polymerase chain reaction quantification models: the balance between accuracy and precision.

    PubMed

    Nordgård, Oddmund; Kvaløy, Jan Terje; Farmen, Ragne Kristin; Heikkilä, Reino

    2006-09-15

    Real-time reverse transcription polymerase chain reaction (RT-PCR) has gained wide popularity as a sensitive and reliable technique for mRNA quantification. The development of new mathematical models for such quantifications has generally paid little attention to the aspect of error propagation. In this study we evaluate, both theoretically and experimentally, several recent models for relative real-time RT-PCR quantification of mRNA with respect to random error accumulation. We present error propagation expressions for the most common quantification models and discuss the influence of the various components on the total random error. Normalization against a calibrator sample to improve comparability between different runs is shown to increase the overall random error in our system. On the other hand, normalization against multiple reference genes, introduced to improve accuracy, does not increase error propagation compared to normalization against a single reference gene. Finally, we present evidence that sample-specific amplification efficiencies determined from individual amplification curves primarily increase the random error of real-time RT-PCR quantifications and should be avoided. Our data emphasize that the gain of accuracy associated with new quantification models should be validated against the corresponding loss of precision. PMID:16899212
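    The delta-method error propagation this study discusses can be sketched for the common calibrator-normalized eff^(-ΔΔCt) quantification model. The function name and the first-order propagation formula below are a generic illustration under the assumption of independent Ct replicates, not the authors' exact expressions:

```python
import math

def ratio_and_cv(ct_target, ct_ref, ct_target_cal, ct_ref_cal,
                 sd_target, sd_ref, sd_target_cal, sd_ref_cal, eff=2.0):
    """Calibrator-normalized relative quantity R = eff**(-ddCt) and
    its approximate coefficient of variation by first-order
    (delta-method) propagation of the Ct standard deviations."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    ratio = eff ** (-ddct)
    # Var(ddCt) is the sum of the four independent Ct variances
    sd_ddct = math.sqrt(sd_target ** 2 + sd_ref ** 2 +
                        sd_target_cal ** 2 + sd_ref_cal ** 2)
    cv = math.log(eff) * sd_ddct   # relative SD of R, to first order
    return ratio, cv

r, cv = ratio_and_cv(25.0, 20.0, 24.0, 20.0, 0.1, 0.1, 0.1, 0.1)
```

    Note that normalizing against a calibrator adds two further variance terms to Var(ΔΔCt), the same effect the authors observed as an increase in overall random error.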

  8. A wearable device for real-time motion error detection and vibrotactile instructional cuing.

    PubMed

    Lee, Beom-Chan; Chen, Shu; Sienko, Kathleen H

    2011-08-01

    We have developed a mobile instrument for motion instruction and correction (MIMIC) that enables an expert (i.e., physical therapist) to map his/her movements to a trainee (i.e., patient) in a hands-free fashion. MIMIC comprises an expert module (EM) and a trainee module (TM). Both the EM and TM are composed of six-degree-of-freedom inertial measurement units, microcontrollers, and batteries. The TM also has an array of actuators that provide the user with vibrotactile instructional cues. The expert wears the EM, and his/her relevant body position is computed by an algorithm based on an extended Kalman filter that provides asymptotic state estimation. The captured expert body motion information is transmitted wirelessly to the trainee, and based on the computed difference between the expert and trainee motion, directional instructions are displayed via vibrotactile stimulation to the skin. The trainee is instructed to move in the direction of the vibration sensation until the vibration is eliminated. Two proof-of-concept studies involving young, healthy subjects were conducted using a simplified version of the MIMIC system (pre-specified target trajectories representing ideal expert movements and only two actuators) during anterior-posterior trunk movements. The first study was designed to investigate the effects of changing the expert-trainee error thresholds (0.5°, 1.0°, and 1.5°) and varying the nature of the control signal (proportional, proportional plus derivative). Expert-subject cross-correlation values were maximized (0.99) and average position errors (0.33°) and time delays (0.2 s) were minimized when the controller used a 0.5° error threshold and proportional plus derivative feedback control signal. The second study used the best performing activation threshold and control signal determined from the first study to investigate subject performance when the motion task complexity and speed were varied. Subject performance decreased as motion
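    The best-performing configuration reported above (0.5° dead zone, proportional-plus-derivative control) can be sketched as a cue generator. The gains and the sign convention are invented for illustration, not taken from the MIMIC controller:

```python
def vibrotactile_cue(expert_angle, trainee_angle,
                     expert_rate, trainee_rate,
                     threshold=0.5, kp=1.0, kd=0.3):
    """Proportional-plus-derivative vibrotactile cue with a dead-zone
    threshold (degrees). Returns (direction, magnitude); direction is
    the side the trainee should move toward, 0 meaning no vibration."""
    error = trainee_angle - expert_angle
    if abs(error) < threshold:
        return (0, 0.0)                  # inside dead zone: no cue
    signal = kp * error + kd * (trainee_rate - expert_rate)
    direction = -1 if signal > 0 else 1  # cue opposite the error
    return (direction, abs(signal))
```

    The derivative term anticipates the error's trend, which is consistent with the study's finding that PD feedback reduced the trainee's time delay relative to purely proportional cues.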

  9. Detecting and Correcting Errors in Rapid Aiming Movements: Effects of Movement Time, Distance, and Velocity

    ERIC Educational Resources Information Center

    Sherwood, David E.

    2010-01-01

    According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback results in an error signal that serves as a basis for a correction. The main question addressed in the current study was how…

  10. Satellite-station time synchronization information based real-time orbit error monitoring and correction of navigation satellite in Beidou System

    NASA Astrophysics Data System (ADS)

    He, Feng; Zhou, ShanShi; Hu, XiaoGong; Zhou, JianHua; Liu, Li; Guo, Rui; Li, XiaoJie; Wu, Shan

    2014-07-01

    Satellite-station two-way time comparison is a characteristic design feature of the Beidou System (BDS) that distinguishes it from other satellite navigation systems. As a two-way method, BDS time synchronization is hardly influenced by satellite orbit error, atmosphere delay, tracking station coordinate error, or measurement model error. Meanwhile, one-way time comparison can be realized through Multi-satellite Precision Orbit Determination (MPOD) using pseudo-range and carrier phase from monitor receivers. It has been shown with the 3 GEO/2 IGSO constellation that the radial orbit error is reflected in the difference between the two-way and one-way time comparisons, which may provide a substitute for orbit evaluation by SLR. In this article, the relation between orbit error and the difference of two-way and one-way time comparison is illustrated for the whole BDS constellation. Given the all-weather, real-time operation mode of two-way time comparison, the orbit error can be quantifiably monitored in real time by comparing two-way and one-way time synchronization. In addition, the orbit error can be predicted and corrected over a short time span based on its periodic characteristic. Experiments on GEO and IGSO satellites show that the prediction accuracy of the space signal can be clearly improved when the predicted orbit error is sent to users through the navigation message: the UERE, including terminal error, can be reduced by 0.1 m to 0.4 m, while the average accuracy improves by more than 27%. Although it remains hard to improve the accuracy of Precision Orbit Determination (POD) and orbit prediction because of the limited tracking network and the difficulties of dynamic model optimization, this paper proposes a practical method for orbit accuracy improvement based on two-way time comparison, which makes the orbit error observable.
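    The key observable, the difference between the two-way and one-way satellite clock estimates, maps to a radial orbit error in a simple way. This sketch only converts the clock-offset difference to metres; the sign convention is an assumption, not taken from the paper:

```python
def radial_orbit_error(clock_two_way, clock_one_way, c=299792458.0):
    """Radial orbit error (m) inferred from the difference between
    the one-way (MPOD-based) and two-way satellite clock offsets in
    seconds: the one-way estimate absorbs the radial orbit error,
    while the two-way comparison is insensitive to it."""
    return (clock_one_way - clock_two_way) * c

# A 10 ns discrepancy between the two clock solutions corresponds
# to about 3 m of radial orbit error.
err_m = radial_orbit_error(100e-9, 110e-9)
```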

  11. Correcting incompatible DN values and geometric errors in nighttime lights time series images

    SciTech Connect

    Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric L.

    2014-09-19

    The Defense Meteorological Satellite Program’s Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activities at large scales. However, the existence of incompatible digital number (DN) values and geometric errors severely limit application of nighttime light image data on multi-year quantitative research. In this study we extend and improve previous studies on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly and find that sum light (summed DN value of pixels in a nighttime light image) maintains apparent increase trends with relatively large GDP growth rates but does not increase or decrease with relatively small GDP growth rates. As nighttime light is a sensitive indicator for economic activity, the temporally consistent trends between sum light and GDP growth rate imply that brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, through analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced apparent nighttime lights development in 1992-1997 and 2001-2008 respectively and the US suffered from nighttime lights decay in large areas after 2001.
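The inter-calibration step is typically a low-order regression mapping one sensor-year's DN values onto a reference sensor-year; a minimal sketch with hypothetical coefficients (not the published values), including the "sum light" statistic used in the evaluation:

```python
# Sketch of DMSP-OLS inter-calibration: a second-order regression maps DN
# values from one sensor/year onto a reference sensor/year.
# Coefficients below are hypothetical placeholders, not published values.
def intercalibrate_dn(dn, c0=0.11, c1=1.05, c2=-0.0008):
    """Adjust a raw DN (0-63) onto the reference sensor's scale."""
    adjusted = c0 + c1 * dn + c2 * dn ** 2
    return min(max(adjusted, 0.0), 63.0)  # keep within the valid DN range

def sum_light(image_dns):
    """'Sum light': summed calibrated DN over all pixels of an image."""
    return sum(intercalibrate_dn(dn) for dn in image_dns)

print(sum_light([0, 10, 30, 63]))
```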

  12. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  13. Real-time quality control of pipes using neural network prediction error signals for defect detection in time area

    NASA Astrophysics Data System (ADS)

    Akhmetshin, Alexander M.; Gvozdak, Andrey P.

    1999-08-01

The magnetic-induction method for real-time quality control of seamless pipes is characterized by a high level of structural noise whose probability distribution is composite and varies in form from batch to batch. The traditional approach to detecting pipe defects depends on the use of etalon (reference) defects. However, the shape of real defects is random, which rules out optimum-filtering methods for their detection. Adaptive variants of the Kalman filter also fail to solve the detection problem because of slow adaptation and a small ratio of signal to correlated noise. To solve the problem, an Adaptive Neuro-Fuzzy Inference System (ANFIS) was trained on a wide variety of defect-free pipe-section signals recorded by the transducer system, and the ANFIS prediction-error signal was taken as the quantity to analyze. Experiments showed that the method can extract the signal of random extended defects even when the signal-to-noise ratio is below unity and traditional amplitude-based selection of defect signals fails.
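The prediction-error idea can be sketched generically: a predictor trained on defect-free signal yields small residuals on normal structure and large residuals at defects. Here a trivial moving-average predictor stands in for the trained ANFIS, and all signal values are hypothetical:

```python
# Defect detection via prediction-error signals: flag samples where the
# predictor's residual exceeds a threshold.
def moving_average_predict(signal, window=3):
    # Stand-in for the trained ANFIS predictor: predict each sample from
    # the mean of the preceding `window` samples.
    preds = []
    for i in range(len(signal)):
        lo = max(0, i - window)
        history = signal[lo:i] or [signal[0]]
        preds.append(sum(history) / len(history))
    return preds

def detect_defects(signal, threshold):
    preds = moving_average_predict(signal)
    residuals = [abs(s - p) for s, p in zip(signal, preds)]
    return [i for i, r in enumerate(residuals) if r > threshold]

# Smooth baseline with an injected defect at index 6.
sig = [1.0, 1.0, 1.1, 0.9, 1.0, 1.0, 5.0, 1.0, 1.1, 1.0]
print(detect_defects(sig, threshold=1.5))  # → [6]
```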

  14. Coupling Modified Constitutive Relation Error, Model Reduction and Kalman Filtering Algorithms for Real-Time Parameters Identification

    NASA Astrophysics Data System (ADS)

    Marchand, Basile; Chamoin, Ludovic; Rey, Christian

    2015-11-01

In this work we propose a new identification strategy based on the coupling between a probabilistic data assimilation method and a deterministic inverse-problem approach using the modified Constitutive Relation Error energy functional. The idea is to offer efficient identification despite highly corrupted data for time-dependent systems. In order to perform real-time identification, the modified Constitutive Relation Error is combined with a model reduction method based on Proper Generalized Decomposition. The proposed strategy is applied to two thermal problems involving identification of time-dependent boundary conditions or material parameters.

  15. AMF phasing—A precise control of the ignition timing of AKM for reduction of the injection error

    NASA Astrophysics Data System (ADS)

    Homma, Masanori; Utajima, Masayoshi; Okamoto, Toshio; Hiraishi, Kenji; Takezawa, Susumu

This paper deals with a new concept, named "AMF Phasing", which is intended to minimize the effect of the injection error that results from apogee motor firing (AMF) of a spinning spacecraft. The characteristic of the velocity-increment error is derived analytically from the disturbed spinning motion during AMF. In order to precisely estimate the amount of fuel required for post-AMF orbital correction maneuvers, a probability model is proposed that estimates the total injection-error probability by combining the dominant error factors, i.e. the pre-AMF attitude-determination error and the velocity-increment error during AMF. It is shown that a substantial saving in the fuel normally consumed for post-AMF corrections can be expected when the resultant velocity-increment error contribution, which would otherwise be randomly directed in inertial space, is controlled so that it appears in the direction of the local right ascension by igniting the AKM at the proper instant (AMF Phasing). The procedure for AMF Phasing, using a Sun pulse as the reference signal for the ignition timing, is described. It was applied to GMS-2, Japan's second Geostationary Meteorological Satellite (HIMAWARI-II). The HIMAWARI-II post-AMF orbit determination shows that AMF Phasing worked successfully, and it is concluded that a substantial fuel saving was achieved.

  16. Sub-micron absolute distance measurements in sub-millisecond times with dual free-running femtosecond Er fiber-lasers.

    PubMed

    Liu, Tze-An; Newbury, Nathan R; Coddington, Ian

    2011-09-12

    We demonstrate a simplified dual-comb LIDAR setup for precision absolute ranging that can achieve a ranging precision of 2 μm in 140 μs acquisition time. With averaging, the precision drops below 1 μm at 0.8 ms and below 200 nm at 20 ms. The system can measure the distance to multiple targets with negligible dead zones and a ranging ambiguity of 1 meter. The system is much simpler than a previous coherent dual-comb LIDAR because the two combs are replaced by free-running, saturable-absorber-based femtosecond Er fiber lasers, rather than tightly phase-locked combs, with the entire time base provided by a single 10-digit frequency counter. Despite the simpler design, the system provides a factor of three improved performance over the previous coherent dual comb LIDAR system. PMID:21935219
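As a plausibility check (not taken from the paper), the quoted precisions are consistent with simple white-noise averaging, where precision improves as the square root of the number of independent 140 μs acquisitions:

```python
# Averaging-down check: single-shot precision of 2 um in a 140 us
# acquisition, assumed to average as 1/sqrt(N) (white-noise assumption).
import math

def averaged_precision_um(single_shot_um, single_shot_s, total_s):
    n = total_s / single_shot_s          # number of averaged acquisitions
    return single_shot_um / math.sqrt(n)

p_0p8ms = averaged_precision_um(2.0, 140e-6, 0.8e-3)
p_20ms = averaged_precision_um(2.0, 140e-6, 20e-3)
print(round(p_0p8ms, 2), round(p_20ms, 3))  # ~0.84 um and ~0.167 um
```

Both values sit just under the paper's quoted bounds (<1 μm at 0.8 ms, <200 nm at 20 ms), consistent with near-ideal averaging.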

  17. Analysis of transmission error effects on the transfer of real-time simulation data

    NASA Technical Reports Server (NTRS)

    Credeur, L.

    1977-01-01

An analysis was made to determine the effect of transmission errors on the quality of data transferred from the Terminal Area Air Traffic Model to a remote site. Data formatting schemes feasible within the operational constraints of the data link were proposed, and their susceptibility to both random bit errors and noise bursts was investigated. It was shown that satisfactory reliability is achieved by a scheme that formats the simulation output into three data blocks, carries the priority data in triply redundant form in the first block, and gives that first block retransmission priority when it is received in error.
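The triple-redundancy idea for the priority data can be sketched as a bitwise 2-of-3 majority vote, which outvotes any single corrupted copy of a word:

```python
# Triply redundant transmission: each priority word is sent three times
# and recovered by bitwise majority vote.
def majority_vote(a, b, c):
    """Bitwise 2-of-3 majority over three integer copies of one word."""
    return (a & b) | (a & c) | (b & c)

word = 0b10110010
corrupted = word ^ 0b00010000  # one copy hit by a single bit error
print(majority_vote(word, corrupted, word) == word)  # True
```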

  18. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  19. Parallel determination of absolute distances to multiple targets by time-of-flight measurement using femtosecond light pulses.

    PubMed

    Han, Seongheum; Kim, Young-Jin; Kim, Seung-Woo

    2015-10-01

    Distances to multiple targets are measured simultaneously using a single femtosecond pulse laser split through a diffractive optical element. Pulse arrival from each target is detected by means of balanced cross-correlation of second harmonics generated using a PPKTP crystal. Time-of-flight of each returning pulse is counted by dual-comb interferometry with 0.01 ps timing resolution at a 2 kHz update rate. This multi-target ranging capability is demonstrated by performing multi-degree of freedom (m-DOF) sensing of a rigid-body motion simulating a satellite operating in orbit. This method is applicable to diverse terrestrial and space applications requiring concurrent multiple distance measurements with high precision. PMID:26480101
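The underlying conversion is the familiar time-of-flight relation d = c·τ/2, which also shows what the quoted 0.01 ps timing resolution corresponds to in distance:

```python
# Time-of-flight ranging: a round-trip delay tau maps to distance
# d = c * tau / 2, so a 0.01 ps timing resolution corresponds to a
# distance resolution of about 1.5 um.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    return C * round_trip_s / 2.0

resolution_m = tof_distance_m(0.01e-12)
print(resolution_m * 1e6)  # ~1.5 micrometres
```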

  20. An evaluation and regional error modeling methodology for near-real-time satellite rainfall data over Australia

    NASA Astrophysics Data System (ADS)

    Pipunic, Robert C.; Ryu, Dongryeol; Costelloe, Justin F.; Su, Chun-Hsu

    2015-10-01

    In providing uniform spatial coverage, satellite-based rainfall estimates can potentially benefit hydrological modeling, particularly for flood prediction. Maximizing the value of information from such data requires knowledge of its error. The most recent Tropical Rainfall Measuring Mission (TRMM) 3B42RT (TRMM-RT) satellite product version 7 (v7) was used for examining evaluation procedures against in situ gauge data across mainland Australia at a daily time step, over a 9 year period. This provides insights into estimating uncertainty and informing quantitative error model development, with methodologies relevant to the recently operational Global Precipitation Measurement mission that builds upon the TRMM legacy. Important error characteristics highlighted for daily aggregated TRMM-RT v7 include increasing (negative) bias and error variance with increasing daily gauge totals and more reliability at detecting larger gauge totals with a probability of detection of <0.5 for rainfall < ~3 mm/d. Additionally, pixel location within clusters of spatially contiguous TRMM-RT v7 rainfall pixels (representing individual rain cloud masses) has predictive ability for false alarms. Differences between TRMM-RT v7 and gauge data have increasing (positive) bias and error variance with increasing TRMM-RT estimates. Difference errors binned within 10 mm/d increments of TRMM-RT v7 estimates highlighted negatively skewed error distributions for all bins, suitably approximated by the generalized extreme value distribution. An error model based on this distribution enables bias correction and definition of quantitative uncertainty bounds, which are expected to be valuable for hydrological modeling and/or merging with other rainfall products. These error characteristics are also an important benchmark for assessing if/how future satellite rainfall products have improved.
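The binning-and-correction step can be sketched as follows. Note the paper fits a generalized extreme value distribution per 10 mm/d bin; this simplified stand-in removes an empirical per-bin mean bias instead, and all data values are hypothetical:

```python
# Per-bin error model sketch: differences (satellite - gauge) are binned
# in 10 mm/d increments of the satellite estimate, and each bin's mean
# difference is removed as a bias correction.
def bin_index(sat_mm_per_day, width=10.0):
    return int(sat_mm_per_day // width)

def fit_bin_bias(sat_values, gauge_values):
    sums, counts = {}, {}
    for s, g in zip(sat_values, gauge_values):
        b = bin_index(s)
        sums[b] = sums.get(b, 0.0) + (s - g)
        counts[b] = counts.get(b, 0) + 1
    return {b: sums[b] / counts[b] for b in sums}

def bias_correct(sat_value, bias_by_bin):
    return sat_value - bias_by_bin.get(bin_index(sat_value), 0.0)

sat = [5.0, 8.0, 15.0, 18.0]     # satellite estimates, mm/d
gauge = [4.0, 6.0, 12.0, 13.0]   # co-located gauge totals, mm/d
biases = fit_bin_bias(sat, gauge)   # {0: 1.5, 1: 4.0}
print(bias_correct(7.0, biases))    # 7.0 - 1.5 = 5.5
```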

  1. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times. PMID:26209956
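Under the stated assumptions (monoexponential one-compartment kinetics, dosing interval shorter than the terminal half-life), the effect of a misreported dose time can be sketched as below; the dose, volume and elimination rate are hypothetical, chosen so that the 12 h interval is shorter than the ~13.9 h half-life:

```python
# One-compartment bolus kinetics with superposition over doses, comparing
# predicted concentration under true vs misreported dosing times.
import math

def concentration(t_h, dose_times_h, dose=100.0, v=50.0, ke=0.05):
    """C(t) = sum over past doses of (dose/V) * exp(-ke * (t - t_dose))."""
    return sum(dose / v * math.exp(-ke * (t_h - td))
               for td in dose_times_h if td <= t_h)

true_times = [0.0, 12.0, 24.0]   # interval (12 h) < t1/2 (~13.9 h)
reported = [0.0, 10.0, 24.0]     # second dose time misreported by 2 h
c_true = concentration(30.0, true_times)
c_reported = concentration(30.0, reported)
print(round(c_true, 3), round(c_reported, 3))  # 2.741 2.664
```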

  2. Impact of Habitat-Specific GPS Positional Error on Detection of Movement Scales by First-Passage Time Analysis

    PubMed Central

    Williams, David M.; Dechen Quinn, Amy; Porter, William F.

    2012-01-01

    Advances in animal tracking technologies have reduced but not eliminated positional error. While aware of such inherent error, scientists often proceed with analyses that assume exact locations. The results of such analyses then represent one realization in a distribution of possible outcomes. Evaluating results within the context of that distribution can strengthen or weaken our confidence in conclusions drawn from the analysis in question. We evaluated the habitat-specific positional error of stationary GPS collars placed under a range of vegetation conditions that produced a gradient of canopy cover. We explored how variation of positional error in different vegetation cover types affects a researcher's ability to discern scales of movement in analyses of first-passage time for white-tailed deer (Odocoileus virginianus). We placed 11 GPS collars in 4 different vegetative canopy cover types classified as the proportion of cover above the collar (0–25%, 26–50%, 51–75%, and 76–100%). We simulated the effect of positional error on individual movement paths using cover-specific error distributions at each location. The different cover classes did not introduce any directional bias in positional observations (1 m≤mean≤6.51 m, 0.24≤p≤0.47), but the standard deviation of positional error of fixes increased significantly with increasing canopy cover class for the 0–25%, 26–50%, 51–75% classes (SD = 2.18 m, 3.07 m, and 4.61 m, respectively) and then leveled off in the 76–100% cover class (SD = 4.43 m). We then added cover-specific positional errors to individual deer movement paths and conducted first-passage time analyses on the noisy and original paths. First-passage time analyses were robust to habitat-specific error in a forest-agriculture landscape. For deer in a fragmented forest-agriculture environment, and species that move across similar geographic extents, we suggest that first-passage time analysis is robust with regard to
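The error-propagation step can be sketched as follows, using the cover-class standard deviations reported in the abstract (the path coordinates are hypothetical):

```python
# Add cover-class-specific Gaussian positional noise to each GPS fix of a
# movement path before re-running the path analysis. SDs (in metres) are
# the values reported for the four canopy-cover classes.
import random

COVER_SD_M = {"0-25%": 2.18, "26-50%": 3.07, "51-75%": 4.61, "76-100%": 4.43}

def perturb_path(path_xy, cover_classes, rng):
    noisy = []
    for (x, y), cover in zip(path_xy, cover_classes):
        sd = COVER_SD_M[cover]
        noisy.append((x + rng.gauss(0, sd), y + rng.gauss(0, sd)))
    return noisy

rng = random.Random(42)  # seeded for reproducibility
path = [(0.0, 0.0), (100.0, 50.0), (200.0, 80.0)]
covers = ["0-25%", "51-75%", "76-100%"]
print(perturb_path(path, covers, rng))
```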

  3. Global distributions, time series and error characterization of atmospheric ammonia (NH3) from IASI satellite observations

    NASA Astrophysics Data System (ADS)

    Van Damme, M.; Clarisse, L.; Heald, C. L.; Hurtmans, D.; Ngadi, Y.; Clerbaux, C.; Dolman, A. J.; Erisman, J. W.; Coheur, P. F.

    2014-03-01

Ammonia (NH3) emissions in the atmosphere have increased substantially over the past decades, largely because of intensive livestock production and use of fertilizers. As a short-lived species, NH3 is highly variable in the atmosphere and its concentration is generally small, except near local sources. While ground-based measurements are possible, they are challenging and sparse. Advanced infrared sounders in orbit have recently demonstrated their capability to measure NH3, offering a new tool to refine global and regional budgets. In this paper we describe an improved retrieval scheme of NH3 total columns from the measurements of the Infrared Atmospheric Sounding Interferometer (IASI). It exploits the hyperspectral character of this instrument by using an extended spectral range (800-1200 cm-1) where NH3 is optically active. This scheme consists of the calculation of a dimensionless spectral index from the IASI level1C radiances, which is subsequently converted to a total NH3 column using look-up tables built from forward radiative transfer model simulations. We show how to retrieve the NH3 total columns from IASI quasi-globally and twice daily above both land and sea without large computational resources and with an improved detection limit. The retrieval also includes error characterization of the retrieved columns. Five years of IASI measurements (1 November 2007 to 31 October 2012) have been processed to acquire the first global and multiple-year data set of NH3 total columns, which are evaluated and compared to similar products from other retrieval methods. Spatial distributions from the five years data set are provided and analyzed at global and regional scales. In particular, we show the ability of this method to identify smaller emission sources than those previously reported, as well as transport patterns over the ocean. The five-year time series is further examined in terms of seasonality and interannual variability (in particular as a function of fire

  4. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    SciTech Connect

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.

    2015-04-01

Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, with the dorsal-projecting SMN axons primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal-projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early-born primary motoneurons (PMNs), we performed dual-labeling studies in which both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the level of nicotine and the developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.

  5. Analysis of potential errors in real-time streamflow data and methods of data verification by digital computer

    USGS Publications Warehouse

    Lystrom, David J.

    1972-01-01

    Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
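A minimal sketch of such a verification routine, assuming a simulated discharge series is available as the comparison baseline (tolerance and data values hypothetical):

```python
# Basic verification: flag real-time discharge values that deviate from a
# simulated estimate by more than a relative tolerance, and substitute the
# simulated value for flagged or missing observations.
def verify_discharge(observed, simulated, tol=0.25):
    """Return (flags, cleaned): flagged values are replaced by simulation."""
    flags, cleaned = [], []
    for o, s in zip(observed, simulated):
        bad = o is None or abs(o - s) > tol * s
        flags.append(bad)
        cleaned.append(s if bad else o)
    return flags, cleaned

obs = [102.0, 98.0, None, 180.0]   # one missing value, one suspect spike
sim = [100.0, 100.0, 101.0, 103.0]
flags, cleaned = verify_discharge(obs, sim)
print(flags, cleaned)  # [False, False, True, True] [102.0, 98.0, 101.0, 103.0]
```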

  6. Continuous Gravity Monitoring in South America with Superconducting and Absolute Gravimeters: More than 12 years time series at station TIGO/Concepcion (Chile)

    NASA Astrophysics Data System (ADS)

    Wziontek, Hartmut; Falk, Reinhard; Hase, Hayo; Armin, Böer; Andreas, Güntner; Rongjiang, Wang

    2016-04-01

As part of the Transportable Integrated Geodetic Observatory (TIGO) of BKG, the superconducting gravimeter SG 038 was set up in December 2002 at the Concepción, Chile station to record temporal gravity variations with the highest precision. Since May 2006 the time series was supported by weekly observations with the absolute gravimeter FG5-227, confirming the large seasonal variations of up to 30 μGal and establishing a gravity reference station in South America. With the move of the whole observatory to its new location near La Plata, Argentina, the series was terminated. Results of more than 12 years of nearly continuous gravity monitoring are presented. Seasonal variations are interpreted with respect to global and local water-storage changes, and the impact of the Mw 8.8 Maule earthquake of February 2010 is discussed.

  7. Using a Novel Absolute Ontogenetic Age Determination Technique to Calculate the Timing of Tooth Eruption in the Saber-Toothed Cat, Smilodon fatalis

    PubMed Central

    Wysocki, M. Aleksander; Feranec, Robert S.; Tseng, Zhijie Jack; Bjornsson, Christopher S.

    2015-01-01

    Despite the superb fossil record of the saber-toothed cat, Smilodon fatalis, ontogenetic age determination for this and other ancient species remains a challenge. The present study utilizes a new technique, a combination of data from stable oxygen isotope analyses and micro-computed tomography, to establish the eruption rate for the permanent upper canines in Smilodon fatalis. The results imply an eruption rate of 6.0 millimeters per month, which is similar to a previously published average enamel growth rate of the S. fatalis upper canines (5.8 millimeters per month). Utilizing the upper canine growth rate, the upper canine eruption rate, and a previously published tooth replacement sequence, this study calculates absolute ontogenetic age ranges of tooth development and eruption in S. fatalis. The timing of tooth eruption is compared between S. fatalis and several extant conical-toothed felids, such as the African lion (Panthera leo). Results suggest that the permanent dentition of S. fatalis, except for the upper canines, was fully erupted by 14 to 22 months, and that the upper canines finished erupting at about 34 to 41 months. Based on these developmental age calculations, S. fatalis individuals less than 4 to 7 months of age were not typically preserved at Rancho La Brea. On the whole, S. fatalis appears to have had delayed dental development compared to dental development in similar-sized extant felids. This technique for absolute ontogenetic age determination can be replicated in other ancient species, including non-saber-toothed taxa, as long as the timing of growth initiation and growth rate can be determined for a specific feature, such as a tooth, and that growth period overlaps with the development of the other features under investigation. PMID:26132165
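The age arithmetic is straightforward; a sketch using the 6.0 mm/month eruption rate from the abstract (the start age and erupted crown length below are illustrative placeholders, not the paper's measurements):

```python
# Ontogenetic age from eruption rate: time to erupt a given length at
# 6.0 mm/month, added to the age at which eruption began.
def eruption_duration_months(erupted_length_mm, rate_mm_per_month=6.0):
    return erupted_length_mm / rate_mm_per_month

def age_at_full_eruption(start_age_months, erupted_length_mm):
    return start_age_months + eruption_duration_months(erupted_length_mm)

# e.g. a hypothetical 90 mm of eruption beginning at 19 months of age:
print(age_at_full_eruption(19, 90))  # 19 + 15 = 34 months
```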

  8. SkyProbe: Real-Time Precision Monitoring in the Optical of the Absolute Atmospheric Absorption on the Telescope Science and Calibration Fields

    NASA Astrophysics Data System (ADS)

    Cuillandre, J.-C.; Magnier, E.; Sabin, D.; Mahoney, B.

    2016-05-01

Mauna Kea is known for its pristine seeing conditions, but sky transparency can be an issue for science operations: at least 25% of the observable (i.e. open-dome) nights are not photometric, mostly due to high-altitude cirrus. Since 2001, the original single-channel SkyProbe mounted in parallel on the Canada-France-Hawaii Telescope (CFHT) has gathered one V-band exposure every minute during each observing night, using a small CCD camera offering a very wide field of view (35 sq. deg.) that encompasses the region pointed at by the telescope for science operations, with exposures long enough (40 seconds) to capture at least 100 stars of the Hipparcos Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). The true atmospheric absorption is measured to within 2%, a key advantage over all-sky direct thermal-infrared imaging detection of clouds. The absolute measurement of the true atmospheric absorption by clouds and particulates affecting the data being gathered by the telescope's main science instrument has proven crucial for decision making in CFHT queued service observing (QSO), which today represents all of the telescope time. Science exposures taken in non-photometric conditions are automatically registered for re-observation at a later date, at 1/10th of the original exposure time in photometric conditions, to ensure a proper final absolute photometric calibration. Photometric standards are observed only when conditions are reported as perfectly stable by SkyProbe. The more recent dual-color system (simultaneous B and V bands) will offer a better characterization of the sky properties above Mauna Kea and should enable better detection of the thinnest cirrus (absorption down to 0.01 mag, or 1%).
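One plausible sketch of the absorption measurement, assumed rather than taken from the published pipeline: compare instrumental magnitudes of catalog stars in a frame against their catalog magnitudes, and difference the median offset against a clear-sky zero-point (all magnitudes hypothetical):

```python
# Atmospheric absorption from field photometry: the median offset between
# instrumental and catalog magnitudes, minus a clear-sky zero-point,
# gives the absorption in magnitudes for that frame.
import statistics

def absorption_mag(instr_mags, catalog_mags, clear_sky_zeropoint):
    offsets = [i - c for i, c in zip(instr_mags, catalog_mags)]
    # Median is robust to a few stars with bad measurements.
    return statistics.median(offsets) - clear_sky_zeropoint

instr = [12.35, 11.87, 13.02, 12.61]     # instrumental V magnitudes
catalog = [10.00, 9.50, 10.65, 10.25]    # Tycho catalog magnitudes
print(absorption_mag(instr, catalog, clear_sky_zeropoint=2.20))  # ~0.165 mag
```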

  9. Sustained Attention is Associated with Error Processing Impairment: Evidence from Mental Fatigue Study in Four-Choice Reaction Time Task

    PubMed Central

    Xiao, Yi; Ma, Feng; Lv, Yixuan; Cai, Gui; Teng, Peng; Xu, FengGang; Chen, Shanguang

    2015-01-01

    Attention is important in error processing. Few studies have examined the link between sustained attention and error processing. In this study, we examined how error-related negativity (ERN) of a four-choice reaction time task was reduced in the mental fatigue condition and investigated the role of sustained attention in error processing. Forty-one recruited participants were divided into two groups. In the fatigue experiment group, 20 subjects performed a fatigue experiment and an additional continuous psychomotor vigilance test (PVT) for 1 h. In the normal experiment group, 21 subjects only performed the normal experimental procedures without the PVT test. Fatigue and sustained attention states were assessed with a questionnaire. Event-related potential results showed that ERN (p < 0.005) and peak (p < 0.05) mean amplitudes decreased in the fatigue experiment. ERN amplitudes were significantly associated with the attention and fatigue states in electrodes Fz, FC1, Cz, and FC2. These findings indicated that sustained attention was related to error processing and that decreased attention is likely the cause of error processing impairment. PMID:25756780

  10. Positional error and time-activity patterns in near-highway proximity studies: an exposure misclassification analysis

    PubMed Central

    2013-01-01

    Background The growing interest in research on the health effects of near-highway air pollutants requires an assessment of potential sources of error in exposure assignment techniques that rely on residential proximity to roadways. Methods We compared the amount of positional error in the geocoding process for three different data sources (parcels, TIGER and StreetMap USA) to a “gold standard” residential geocoding process that used ortho-photos, large multi-building parcel layouts or large multi-unit building floor plans. The potential effect of positional error for each geocoding method was assessed as part of a proximity to highway epidemiological study in the Boston area, using all participants with complete address information (N = 703). Hourly time-activity data for the most recent workday/weekday and non-workday/weekend were collected to examine time spent in five different micro-environments (inside of home, outside of home, school/work, travel on highway, and other). Analysis included examination of whether time-activity patterns were differentially distributed either by proximity to highway or across demographic groups. Results Median positional error was significantly higher in street network geocoding (StreetMap USA = 23 m; TIGER = 22 m) than parcel geocoding (8 m). When restricted to multi-building parcels and large multi-unit building parcels, all three geocoding methods had substantial positional error (parcels = 24 m; StreetMap USA = 28 m; TIGER = 37 m). Street network geocoding also differentially introduced greater amounts of positional error in the proximity to highway study in the 0–50 m proximity category. Time spent inside home on workdays/weekdays differed significantly by demographic variables (age, employment status, educational attainment, income and race). Time-activity patterns were also significantly different when stratified by proximity to highway, with those participants residing in the 0–50 m

  11. Space-time structure and dynamics of the forecast error in a coastal circulation model of the Gulf of Lions

    NASA Astrophysics Data System (ADS)

    Auclair, Francis; Marsaleix, Patrick; De Mey, Pierre

    2003-02-01

    The probability density function (pdf) of forecast errors due to several possible error sources is investigated in a coastal ocean model driven by the atmosphere and a larger-scale ocean solution using an Ensemble (Monte Carlo) technique. An original method to generate dynamically adjusted perturbation of the slope current is proposed. The model is a high-resolution 3D primitive equation model resolving topographic interactions, river runoff and wind forcing. The Monte Carlo approach deals with model and observation errors in a natural way. It is particularly well-adapted to coastal non-linear studies. Indeed higher-order moments are implicitly retained in the covariance equation. Statistical assumptions are made on the uncertainties related to the various forcings (wind stress, open boundary conditions, etc.), to the initial state and to other model parameters, and randomly perturbed forecasts are carried out in accordance with the a priori error pdf. The evolution of these errors is then traced in space and time and the a posteriori error pdf can be explored. Third- and fourth-order moments of the pdf are computed to evaluate the normal or Gaussian behaviour of the distribution. The calculation of Central Empirical Orthogonal Functions (Ceofs) of the forecast Ensemble covariances eventually leads to a physical description of the model forecast error subspace in model state space. The time evolution of the projection of the Reference forecast onto the first Ceofs clearly shows the existence of specific model regimes associated to particular forcing conditions. The Ceofs basis is also an interesting candidate to define the Reduced Control Subspace for assimilation and in particular to explore transitions in model state space. We applied the above methodology to study the penetration of the Liguro-Provençal Catalan Current over the shelf of the Gulf of Lions in north-western Mediterranean together with the discharge of the Rhône river. This region is indeed well

  12. Lunch-time food choices in preschoolers: Relationships between absolute and relative intakes of different food categories, and appetitive characteristics and weight.

    PubMed

    Carnell, S; Pryor, K; Mais, L A; Warkentin, S; Benson, L; Cheng, R

    2016-08-01

    Children's appetitive characteristics measured by parent-report questionnaires are reliably associated with body weight, as well as behavioral tests of appetite, but relatively little is known about relationships with food choice. As part of a larger preloading study, we served 4- to 5-year-olds from primary school classes five school lunches at which they were presented with the same standardized multi-item meal. Parents completed Child Eating Behavior Questionnaire (CEBQ) sub-scales assessing satiety responsiveness (CEBQ-SR), food responsiveness (CEBQ-FR) and enjoyment of food (CEBQ-EF), and children were weighed and measured. Despite differing preload conditions, children showed remarkable consistency of intake patterns across all five meals, with day-to-day intra-class correlations in absolute and percentage intake of each food category ranging from 0.78 to 0.91. Higher CEBQ-SR was associated with lower mean intake of all food categories across all five meals, with the weakest association apparent for snack foods. Higher CEBQ-FR was associated with higher intake of white bread and fruits and vegetables, and higher CEBQ-EF was associated with greater intake of all categories, with the strongest association apparent for white bread. Analyses of intake of each food group as a percentage of total intake, treated here as an index of the child's choice to consume relatively more or relatively less of each different food category when composing their total lunch-time meal, further suggested that children who were higher in CEBQ-SR ate relatively more snack foods and relatively less fruits and vegetables, while children with higher CEBQ-EF ate relatively less snack foods and relatively more white bread. Higher absolute intakes of white bread and snack foods were associated with higher BMI z score. CEBQ sub-scale associations with food intake variables were largely unchanged by controlling for daily metabolic needs. However, descriptive comparisons of lunch intakes with

  13. Errors in determination of soil water content using time-domain reflectometry caused by soil compaction around wave guides

    SciTech Connect

    Ghezzehei, T.A.

    2008-05-29

    Application of time domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may have an impact on measurement errors, to our knowledge, there has not been any quantification of this effect. In this paper, we introduce a method that estimates this error by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density to evolution of soil water retention characteristics. Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the time of installation. The relative error in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement errors of using a standard three-prong TDR waveguide could be up to 10%. We also show that the error scales linearly with the ratio of rod radius to the interradius spacing.
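The error estimate in this abstract can be illustrated with a toy calculation. The Topp et al. (1980) polynomial below is the widely used "universal calibration" such studies refer to; the linear scaling of relative error with the rod-radius-to-spacing ratio follows the abstract, but the proportionality constant, rod geometry, and function names are our own illustrative assumptions, not the paper's model:

```python
# Illustrative sketch (not the paper's compaction model): convert TDR-measured
# permittivity to water content with the Topp et al. (1980) calibration, then
# scale an assumed maximum compaction error linearly with the ratio of rod
# radius to inter-rod spacing, as the abstract reports.

def topp_water_content(k):
    """Topp et al. (1980) calibration: apparent permittivity -> vol. water content."""
    return -5.3e-2 + 2.92e-2 * k - 5.5e-4 * k**2 + 4.3e-6 * k**3

def compaction_error(rod_radius, rod_spacing, max_rel_error=0.10):
    """Assumed linear scaling of relative error with the radius/spacing ratio."""
    return max_rel_error * (rod_radius / rod_spacing)

theta = topp_water_content(20.0)        # moist soil, apparent permittivity ~20
err = compaction_error(0.0024, 0.022)   # assumed 2.4 mm rods, 22 mm spacing
print(round(theta, 3), round(err, 4))
```

The geometry numbers are placeholders; the point is only that, under the reported linear scaling, thinner rods spaced further apart reduce the compaction-induced error.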

  14. Errors in determination of soil water content using time domain reflectometry caused by soil compaction around waveguides

    NASA Astrophysics Data System (ADS)

    Ghezzehei, Teamrat A.

    2008-08-01

    Application of time domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may have an impact on measurement errors, to our knowledge, there has not been any quantification of this effect. In this paper, we introduce a method that estimates this error by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density to evolution of soil water retention characteristics. Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the time of installation. The relative error in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement errors of using a standard three-prong TDR waveguide could be up to 10%. We also show that the error scales linearly with the ratio of rod radius to the interradius spacing.

  15. Errors in determination of soil water content using time-domain reflectometry caused by soil compaction around wave guides

    NASA Astrophysics Data System (ADS)

    Ghezzehei, T. A.

    2007-12-01

    Application of time-domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to peculiar composition of soil constituents, such as high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR wave guides may have an impact on measurement errors, to our knowledge, there has not been any quantification of this effect. In this presentation, we introduce a combined mechanical-hydrological method that estimates the measurement error. Our analysis indicates that soil compaction pattern depends on the mechanical properties of the soil at the time of installation. The relative error in water content measurement depends on the compaction pattern as well as the water content and water retention characteristics of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature show that the measurement errors of using a standard three-prong TDR wave guide could be up to 10 percent. We also show that the error scales linearly with the ratio of rod radius to the inter-radius spacing.

  16. Non-contrast 3D time-of-flight magnetic resonance angiography for visualization of intracranial aneurysms in patients with absolute contraindications to CT or MRI contrast

    PubMed Central

    Yanamadala, Vijay; Sheth, Sameer A.; Walcott, Brian P.; Buchbinder, Bradley R.; Buckley, Deidre; Ogilvy, Christopher S.

    2013-01-01

    The preoperative evaluation in patients with intracranial aneurysms typically includes a contrast-enhanced vascular study, such as computed tomography angiography (CTA), magnetic resonance angiography (MRA), or digital subtraction angiography. However, there are numerous absolute and relative contraindications to the administration of imaging contrast agents, including pregnancy, severe contrast allergy, and renal insufficiency. Evaluation of patients with contrast contraindications thus presents a unique challenge. We identified three patients with absolute contrast contraindications who presented with intracranial aneurysms. One patient was pregnant, while the other two had previous severe anaphylactic reactions to iodinated contrast. Because of these contraindications to intravenous contrast, we performed non-contrast time-of-flight MRA with 3D reconstruction (TOF MRA with 3DR) with maximum intensity projections and volume renderings as part of the preoperative evaluation prior to successful open surgical clipping of the aneurysms. In the case of one paraclinoid aneurysm, a high-resolution non-contrast CT scan was also performed to assess the relationship of the aneurysm to the anterior clinoid process. TOF MRA with 3DR successfully identified the intracranial aneurysms and adequately depicted the surrounding microanatomy. Intraoperative findings were as predicted by the preoperative imaging studies. The aneurysms were successfully clip-obliterated, and the patients had uneventful post-operative courses. These cases demonstrate that non-contrast imaging is a viable modality to assess intracranial aneurysms as part of the surgical planning process in patients with contrast contraindications. TOF MRA with 3DR, in conjunction with high-resolution non-contrast CT when indicated, provides adequate visualization of the microanatomy of the aneurysm and surrounding structures. PMID:23685107

  17. Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series

    NASA Astrophysics Data System (ADS)

    Sugihara, George; May, Robert M.

    1990-04-01

    An approach is presented for making short-term predictions about the trajectories of chaotic dynamical systems. The method is applied to data on measles, chickenpox, and marine phytoplankton populations, to show how apparent noise associated with deterministic chaos can be distinguished from sampling error and other sources of externally induced environmental noise.
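The approach can be sketched with a minimal nearest-neighbour forecaster in a delay-embedded state space, in the spirit of Sugihara and May's simplex projection: skill that decays with forecast horizon signals deterministic chaos rather than measurement noise. The embedding parameters, neighbour weighting, function names, and the logistic-map test series are illustrative choices, not the paper's data or exact algorithm:

```python
# Minimal sketch of nearest-neighbour forecasting in a delay embedding.
# Predict each test point from its k closest library points, then measure
# how the correlation between predictions and observations falls off as the
# forecast horizon grows (the signature of chaos).

import numpy as np

def delay_embed(x, dim, tau=1):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def nn_forecast(train, test, dim=3, horizon=1, k=4):
    """Prediction skill (correlation) `horizon` steps ahead from k neighbours."""
    lib = delay_embed(train, dim)
    targets = train[(dim - 1) + horizon :]
    lib = lib[: len(targets)]                      # keep points with a known future
    emb = delay_embed(test, dim)[: len(test) - (dim - 1) - horizon]
    preds = []
    for v in emb:
        d = np.linalg.norm(lib - v, axis=1)
        idx = np.argsort(d)[:k]
        w = np.exp(-d[idx] / (d[idx][0] + 1e-12))  # distance weighting
        preds.append(np.sum(w * targets[idx]) / np.sum(w))
    actual = test[(dim - 1) + horizon : (dim - 1) + horizon + len(preds)]
    return np.corrcoef(preds, actual)[0, 1]

# Chaotic logistic map as a stand-in series: skill should fall with horizon.
x = np.empty(600); x[0] = 0.4
for i in range(599):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])
train, test = x[:400], x[400:]
print([round(float(nn_forecast(train, test, horizon=h)), 2) for h in (1, 2, 5)])
```

For pure measurement noise superimposed on a regular signal, the same skill curve stays roughly flat with horizon, which is the diagnostic contrast the paper exploits.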

  18. Continued Driving and Time to Transition to Nondriver Status through Error-Specific Driving Restrictions

    ERIC Educational Resources Information Center

    Freund, Barbara; Petrakos, Davithoula

    2008-01-01

    We developed driving restrictions that are linked to specific driving errors, allowing cognitively impaired individuals to continue to independently meet mobility needs while minimizing risk to themselves and others. The purpose of this project was to evaluate the efficacy and duration expectancy of these restrictions in promoting safe continued…

  19. Eosinophil count - absolute

    MedlinePlus

    Eosinophils; Absolute eosinophil count ... the white blood cell count to give the absolute eosinophil count. ... than 500 cells per microliter (cells/mcL). Normal value ranges may vary slightly among different laboratories. Talk ...

  20. Motoneuron axon pathfinding errors in zebrafish: differential effects related to concentration and timing of nicotine exposure.

    PubMed

    Menelaou, Evdokia; Paul, Latoya T; Perera, Surangi N; Svoboda, Kurt R

    2015-04-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15-30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, but the dorsal projecting SMN axons were primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early born primary motoneuron (PMN), we performed dual labeling studies, where both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the levels of nicotine and the developmental exposure window. PMID:25668718

  1. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris.

    PubMed

    Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan

    2015-01-01

    Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the

  2. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris

    PubMed Central

    Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan

    2015-01-01

    Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the

  3. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the watertable to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
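As a hypothetical illustration of how an analytical TTD model predicts tracer concentration trends, the sketch below convolves a tracer input history with an exponential (well-mixed) travel time distribution. The exponential model, the linear NO3 input history, and the mean age are our own stand-ins for the paper's four calibrated models and Central Valley data:

```python
# Hedged sketch of lumped-parameter TTD modeling: the concentration at a well
# is the input history weighted by the travel time distribution,
#   C_out(t) = sum over tau of C_in(t - tau) * p(tau) * dt.

import numpy as np

def exponential_ttd(tau, mean_age):
    """Exponential (well-mixed aquifer) travel time distribution."""
    return np.exp(-tau / mean_age) / mean_age

def well_concentration(input_history, mean_age, dt=1.0):
    n = len(input_history)
    tau = np.arange(n) * dt
    p = exponential_ttd(tau, mean_age) * dt        # discretized TTD weights
    return np.convolve(input_history, p)[:n]       # C_out(t) = (C_in * p)(t)

# Illustrative: NO3 input rising linearly over 50 years; older water (larger
# mean age) damps and delays the trend seen at the well.
years = np.arange(50)
c_in = 0.2 * years                                 # mg/L, assumed input history
c_out = well_concentration(c_in, mean_age=15.0)
print(round(float(c_out[-1]), 2), "vs input", round(float(c_in[-1]), 2))
```

The paper's point is that when the true TTD is complex (multi-peaked), a single smooth analytical form like this can still fit the tracer data yet mispredict the concentration trend, which is why model and tracer selection matter.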

  4. Impacts of real-time satellite clock errors on GPS precise point positioning-based troposphere zenith delay estimation

    NASA Astrophysics Data System (ADS)

    Shi, Junbo; Xu, Chaoqian; Li, Yihe; Gao, Yang

    2015-08-01

    Global Positioning System (GPS) has become a cost-effective tool to determine troposphere zenith total delay (ZTD) with accuracy comparable to other atmospheric sensors such as the radiosonde, the water vapor radiometer, the radio occultation and so on. However, the high accuracy of GPS troposphere ZTD estimates relies on the precise satellite orbit and clock products available with various latencies. Although the International GNSS Service (IGS) can provide predicted orbit and clock products for real-time applications, the predicted clock accuracy of 3 ns cannot always guarantee the high accuracy of troposphere ZTD estimates. Such limitations could be overcome by the use of the newly launched IGS real-time service which provides 5 cm orbit and 0.2-1.0 ns (an equivalent range error of 6-30 cm) clock products in real time. Considering the relatively larger magnitude of the clock error than that of the orbit error, this paper investigates the effect of real-time satellite clock errors on the GPS precise point positioning (PPP)-based troposphere ZTD estimation. Meanwhile, how the real-time satellite clock errors impact the GPS PPP-based troposphere ZTD estimation has also been studied to obtain the most precise ZTD solutions. First, two types of real-time satellite clock products are assessed with respect to the IGS final clock product in terms of accuracy and precision. Second, the real-time GPS PPP-based troposphere ZTD estimation is conducted using data from 34 selected IGS stations over three independent weeks in April, July and October, 2013. Numerical results demonstrate that the precision, rather than the accuracy, of the real-time satellite clock products impacts the real-time PPP-based ZTD solutions more significantly. In other words, the real-time satellite clock product with better precision leads to more precise real-time PPP-based troposphere ZTD solutions. Therefore, it is suggested that users should select and apply real-time satellite products with

  5. Color-Coded Prefilled Medication Syringes Decrease Time to Delivery and Dosing Error in Simulated Emergency Department Pediatric Resuscitations

    PubMed Central

    Moreira, Maria E.; Hernandez, Caleb; Stevens, Allen D.; Jones, Seth; Sande, Margaret; Blumen, Jason R.; Hopkins, Emily; Bakes, Katherine; Haukoos, Jason S.

    2016-01-01

    Study objective The Institute of Medicine has called on the US health care system to identify and reduce medical errors. Unfortunately, medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients when dosing requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national health care priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared with conventional medication administration, in simulated pediatric emergency department (ED) resuscitation scenarios. Methods We performed a prospective, block-randomized, crossover study in which 10 emergency physician and nurse teams managed 2 simulated pediatric arrest scenarios in situ, using either prefilled, color-coded syringes (intervention) or conventional drug administration methods (control). The ED resuscitation room and the intravenous medication port were video recorded during the simulations. Data were extracted from video review by blinded, independent reviewers. Results Median time to delivery of all doses for the conventional and color-coded delivery groups was 47 seconds (95% confidence interval [CI] 40 to 53 seconds) and 19 seconds (95% CI 18 to 20 seconds), respectively (difference=27 seconds; 95% CI 21 to 33 seconds). With the conventional method, 118 doses were administered, with 20 critical dosing errors (17%); with the color-coded method, 123 doses were administered, with 0 critical dosing errors (difference=17%; 95% CI 4% to 30%). Conclusion A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by emergency physician and nurse teams during simulated pediatric ED resuscitations. PMID:25701295

  6. Non-iterative adaptive time-stepping scheme with temporal truncation error control for simulating variable-density flow

    NASA Astrophysics Data System (ADS)

    Hirthe, Eugenia M.; Graf, Thomas

    2012-12-01

    The automatic non-iterative second-order time-stepping scheme based on the temporal truncation error proposed by Kavetski et al. [Kavetski D, Binning P, Sloan SW. Non-iterative time-stepping schemes with adaptive truncation error control for the solution of Richards equation. Water Resour Res 2002;38(10):1211, http://dx.doi.org/10.1029/2001WR000720.] is implemented into the code of the HydroGeoSphere model. This time-stepping scheme is applied for the first time to the low-Rayleigh-number thermal Elder problem of free convection in porous media [van Reeuwijk M, Mathias SA, Simmons CT, Ward JD. Insights from a pseudospectral approach to the Elder problem. Water Resour Res 2009;45:W04416, http://dx.doi.org/10.1029/2008WR007421.], and to the solutal [Shikaze SG, Sudicky EA, Schwartz FW. Density-dependent solute transport in discretely-fractured geological media: is prediction possible? J Contam Hydrol 1998;34:273-91] problem of free convection in fractured-porous media. Numerical simulations demonstrate that the proposed scheme efficiently limits the temporal truncation error to a user-defined tolerance by controlling the time-step size. The non-iterative second-order time-stepping scheme can be applied to (i) thermal and solutal variable-density flow problems, (ii) linear and non-linear density functions, and (iii) problems including porous and fractured-porous media.
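The control idea can be sketched generically: form first- and second-order estimates of the same step, use their difference as the local truncation error, and rescale the step size from the error-to-tolerance ratio. This is a simplified stand-in in the spirit of Kavetski et al. (2002), not HydroGeoSphere code; the test ODE, tolerance, and safety factor are illustrative:

```python
# Sketch of adaptive time stepping with temporal truncation error control:
# forward Euler (1st order) vs Heun (2nd order) estimates of each step, with
# the step size scaled so the estimated error tracks a user tolerance.

import math

def adaptive_march(f, y0, t_end, dt0=1e-3, tol=1e-5, safety=0.9):
    t, y, dt = 0.0, y0, dt0
    steps = 0
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        y_euler = y + dt * k1                     # first-order estimate
        k2 = f(t + dt, y_euler)
        y_heun = y + 0.5 * dt * (k1 + k2)         # second-order estimate
        err = abs(y_heun - y_euler)               # local truncation error proxy
        if err <= tol or dt <= 1e-12:
            t, y = t + dt, y_heun                 # accept the step
            steps += 1
        # error-based step control: grow or shrink dt toward the tolerance
        dt *= safety * math.sqrt(tol / max(err, 1e-30))
    return y, steps

# Example: dy/dt = -2y, y(0) = 1, so y(1) = exp(-2) analytically.
y, n = adaptive_march(lambda t, y: -2.0 * y, 1.0, 1.0)
print(round(y, 4), "in", n, "steps")
```

The square-root exponent reflects the second-order error estimate (err scales with dt squared); the scheme takes small steps only where the solution demands them, which is the efficiency argument made in the abstract.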

  7. Directional errors of movements and their correction in a discrete tracking task. [pilot reaction time and sensorimotor performance

    NASA Technical Reports Server (NTRS)

    Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.

    1978-01-01

    Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.

  8. Sampling errors for satellite-derived tropical rainfall: Monte Carlo study using a space-time stochastic model

    SciTech Connect

    Bell, T.L.; Abdullah, A.; Martin, R.L.; North, G.R.

    1990-02-28

    Estimates of monthly average rainfall based on satellite observations from a low Earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The authors estimate the size of this error for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). They first examine in detail the statistical description of rainfall on scales from 1 to 10³ km, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10% of the mean for rainfall averaged over a 500 × 500 km² area.
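The Monte Carlo idea can be mimicked with a toy experiment: generate synthetic hourly rain, sample it only at satellite revisit times, and compare the sampled and true monthly means over many simulated months. The AR(1) wet/dry rain generator and all its parameters below are our own stand-ins for the paper's GATE-tuned space-time stochastic model:

```python
# Toy Monte Carlo estimate of the sampling error from intermittent satellite
# overpasses: how far does a monthly mean built from snapshots every
# `revisit_hours` stray from the true monthly mean?

import numpy as np

rng = np.random.default_rng(1)

def month_of_rain(n_hours=720, wet_frac=0.1, phi=0.8, sigma=0.3):
    """Toy hourly rain series: lognormal AR(1) intensity gated wet/dry."""
    z = np.empty(n_hours)
    z[0] = 0.0
    eps = rng.normal(size=n_hours)
    for i in range(1, n_hours):
        z[i] = phi * z[i - 1] + sigma * eps[i]
    wet = rng.random(n_hours) < wet_frac
    return np.where(wet, np.exp(z), 0.0)           # mm/h, zero when dry

def sampling_error(revisit_hours=12, n_months=500):
    rel_errors = []
    for _ in range(n_months):
        rain = month_of_rain()
        true_mean = rain.mean()
        sampled_mean = rain[::revisit_hours].mean()  # overpass snapshots only
        rel_errors.append((sampled_mean - true_mean) / true_mean)
    return float(np.std(rel_errors))

print(round(sampling_error(), 3))    # std of relative monthly sampling error
```

With a realistically tuned rain model the same machinery yields the paper's question directly: the spread of these relative errors for TRMM-like revisit intervals and averaging areas.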

  9. Absolute geostrophic currents in global tropical oceans

    NASA Astrophysics Data System (ADS)

    Yang, Lina; Yuan, Dongliang

    2016-03-01

    A set of absolute geostrophic current (AGC) data for the period January 2004 to December 2012 are calculated using the P-vector method based on monthly gridded Argo profiles in the world tropical oceans. The AGCs agree well with altimeter geostrophic currents, Ocean Surface Current Analysis-Real time currents, and moored current-meter measurements at 10-m depth, based on which the classical Sverdrup circulation theory is evaluated. Calculations have shown that errors of wind stress calculation, AGC transport, and depth ranges of vertical integration cannot explain non-Sverdrup transport, which is mainly in the subtropical western ocean basins and equatorial currents near the Equator in each ocean basin (except the North Indian Ocean, where the circulation is dominated by monsoons). The identified non-Sverdrup transport is thereby robust and attributed to the joint effect of baroclinicity and relief of the bottom (JEBAR) and mesoscale eddy nonlinearity.

  10. Implants as absolute anchorage.

    PubMed

    Rungcharassaeng, Kitichai; Kan, Joseph Y K; Caruso, Joseph M

    2005-11-01

    Anchorage control is essential for successful orthodontic treatment. Each tooth has its own anchorage potential as well as propensity to move when force is applied. When teeth are used as anchorage, the untoward movements of the anchoring units may result in prolonged treatment time and an unpredictable or less-than-ideal outcome. To maximize tooth-related anchorage, techniques such as differential torque, placing roots into the cortex of the bone, and the use of various intraoral devices and/or extraoral appliances have been implemented. Implants, as they are in direct contact with bone, do not possess a periodontal ligament. As a result, they do not move when orthodontic/orthopedic force is applied, and therefore can be used as "absolute anchorage." This article describes different types of implants that have been used as orthodontic anchorage. Their clinical applications and limitations are also discussed. PMID:16463910

  11. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.

  12. Real-time prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and error analysis

    NASA Astrophysics Data System (ADS)

    BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.

    2013-09-01

    The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
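    The FTLE field at the core of this record can be computed for any velocity field by advecting a grid of tracers and differentiating the resulting flow map; the sketch below uses the standard double-gyre test flow rather than atmospheric forecast data, and the grid sizes and integrator are illustrative choices.

    ```python
    import numpy as np

    def double_gyre_velocity(x, y, t, A=0.1, eps=0.25, om=2*np.pi/10):
        """Velocity of the standard time-dependent double-gyre test flow."""
        a = eps * np.sin(om * t)
        b = 1 - 2 * eps * np.sin(om * t)
        f = a * x**2 + b * x
        dfdx = 2 * a * x + b
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return u, v

    def ftle_field(nx=60, ny=30, t0=0.0, T=10.0, nsteps=100):
        """Advect a grid of tracers, then compute the finite-time Lyapunov
        exponent from the largest eigenvalue of the Cauchy-Green tensor."""
        x = np.linspace(0.0, 2.0, nx)
        y = np.linspace(0.0, 1.0, ny)
        X, Y = np.meshgrid(x, y)
        Xp, Yp = X.copy(), Y.copy()
        dt, t = T / nsteps, t0
        for _ in range(nsteps):  # midpoint (RK2) time stepping
            u1, v1 = double_gyre_velocity(Xp, Yp, t)
            um, vm = double_gyre_velocity(Xp + 0.5*dt*u1, Yp + 0.5*dt*v1,
                                          t + 0.5*dt)
            Xp, Yp = Xp + dt*um, Yp + dt*vm
            t += dt
        # flow-map gradient by centred differences on the initial grid
        dxdx = np.gradient(Xp, x, axis=1); dxdy = np.gradient(Xp, y, axis=0)
        dydx = np.gradient(Yp, x, axis=1); dydy = np.gradient(Yp, y, axis=0)
        # largest eigenvalue of C = F^T F for the 2x2 case, in closed form
        c11 = dxdx**2 + dydx**2
        c12 = dxdx*dxdy + dydx*dydy
        c22 = dxdy**2 + dydy**2
        lmax = 0.5*(c11 + c22) + np.sqrt(0.25*(c11 - c22)**2 + c12**2)
        return np.log(np.sqrt(lmax)) / abs(T)
    ```

    Forward integration (T > 0) highlights repelling LCS ridges; a negative T gives the attracting features the paper finds to be more robust under forecast error.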

  13. Placing Absolute Timing on Basin Incision Adjacent to the Colorado Front Range: Results from Meteoric and in Situ 10BE Dating

    NASA Astrophysics Data System (ADS)

    Duehnforth, M.; Anderson, R. S.; Ward, D.

    2010-12-01

    A sequence of six levels of gravel-capped surfaces, mapped as Pliocene to Holocene in age, is cut into Cretaceous shale in the northwestern part of the Denver Basin immediately adjacent to the Colorado Front Range (CFR). The existing relative age constraints and terrace correlations suggest that the incision of the Denver Basin occurred at a steady and uniform rate of 0.1 mm yr-1 since the Pliocene. As absolute ages in this landscape are rare, they have the potential to test the reliability of the existing chronology, and to illuminate the detailed history of incision. We explore the timing of basin incision and the variability of geomorphic process rates through time by dating the three highest surfaces at the northwestern edge of the Denver Basin using both in situ and meteoric 10Be concentrations. As the tectonic conditions have not changed since the Pliocene, much of the variability of generation and abandonment of alluvial surfaces likely reflects the influence of glacial-interglacial climate variations. We selected Gunbarrel Hill (mapped as pre-Rocky Flats (Pliocene)), Table Mountain (mapped as Rocky Flats (early Pleistocene)), and the Pioneer surface (mapped as Verdos (Pleistocene, ~640 ka)) as sample locations. We took two amalgamated clast samples on the Gunbarrel Hill surface, and dated depth profiles using meteoric and in situ 10Be on the Table Mountain and Pioneer surfaces. In addition, we measured the in situ 10Be concentrations of 6 boulder samples from the Table Mountain surface. We find that all three surfaces are significantly younger than expected and that in situ and meteoric age measurements largely agree with each other. The samples from the pre-Rocky Flats site (Gunbarrel Hill) show ages of 250 and 310 ka, ignoring post-depositional surface erosion. The ages of the Table Mountain and Pioneer sites fall within the 120 to 150 ka window. These absolute ages overlap with the timing of the penultimate glaciation during marine isotope stage (MIS) 6.
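    The zero-erosion exposure age implied by an in situ 10Be concentration follows from inverting the standard build-up equation N = (P/lam)(1 - exp(-lam*t)); a minimal sketch, with the production rate treated as a user-supplied assumption rather than a value from this study:

    ```python
    import math

    BE10_HALFLIFE_YR = 1.387e6                 # 10Be half-life in years
    LAM = math.log(2.0) / BE10_HALFLIFE_YR     # decay constant (1/yr)

    def exposure_age(concentration, production_rate, lam=LAM):
        """Zero-erosion, zero-inheritance exposure age from an in situ 10Be
        concentration (atoms/g) and a site production rate (atoms/g/yr),
        inverting N = (P/lam) * (1 - exp(-lam*t))."""
        return -math.log(1.0 - concentration * lam / production_rate) / lam
    ```

    The "ignoring post-depositional surface erosion" caveat in the abstract corresponds to this zero-erosion assumption; allowing erosion raises the inferred age.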

  14. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
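    The whole-image polynomial correction idea, fitting a smooth surface to the phase of static tissue and subtracting it everywhere, can be sketched as follows; the function name and the 2D single-slice simplification are assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def polynomial_phase_correction(phase, static_mask, order=2):
        """Fit a 2D polynomial to the phase of static tissue and subtract
        it everywhere. Eddy-current phase offsets vary smoothly in space,
        so a low-order surface fitted where velocity should be zero
        estimates the bias at every voxel."""
        ny, nx = phase.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        # monomial basis x^i * y^j with total degree <= order
        terms = [xx**i * yy**j
                 for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.stack([t[static_mask] for t in terms], axis=1).astype(float)
        coef, *_ = np.linalg.lstsq(A, phase[static_mask].astype(float),
                                   rcond=None)
        bias = sum(c * t for c, t in zip(coef, terms))
        return phase - bias
    ```

    Restricting the fit to a local neighbourhood of each vessel instead of the whole image gives the local-polynomial flavour of correction.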

  15. Simplified formula for mean cycle-slip time of phase-locked loops with steady-state phase error.

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1972-01-01

    Previous work shows that the mean time from lock to a slipped cycle of a phase-locked loop is given by a certain double integral. Accurate numerical evaluation of this formula for the second-order loop is extremely vexing because the difference between exponentially large quantities is involved. The presented article demonstrates a method in which a much-reduced precision program can be used to obtain the mean first-cycle slip time for a loop of arbitrary degree tracking at a specified SNR and steady-state phase error. It also presents a simple approximate formula that is asymptotically tight at higher loop SNR.

  16. Characterization of data error in time-domain induced polarization tomography based on the analysis of decay curves

    NASA Astrophysics Data System (ADS)

    Gallistl, Jakob; Flores-Orozco, Adrián; Bücker, Matthias; Williams, Kenneth H.

    2015-04-01

    Time-domain induced polarization (TDIP) measurements are based on the recording of remnant voltages after current switch-off and thus typically suffer from low signal-to-noise ratios. The analysis of the discrepancy between normal and reciprocal measurements has been demonstrated to be a suitable method to quantify the data error in TDIP data sets, permitting the computation of images with enhanced resolution. However, due to time constraints, it is not always possible to collect reciprocal measurements. Hence, we propose an alternative methodology to quantify data error in TDIP, based on fitting model curves to the measured IP decay. Based on the goodness of fit, we can identify outliers and derive error parameters for the inversion of the tomographic TDIP data. To assess the practicability of our approach, we present a comparison of imaging results obtained from the fitting of decay curves with those obtained from the analysis of repeated measurements and normal-reciprocal measurements. Inversion results presented here were computed for extensive field data sets collected at the Rifle (CO) and Shiprock (NM) test sites. These data sets include TDIP data collected with different devices and using different IP windows.

  17. Enhanced multi-hop operation using hybrid optoelectronic router with time-to-live-based selective forward error correction.

    PubMed

    Nakahara, Tatsushi; Suzaki, Yasumasa; Urata, Ryohei; Segawa, Toru; Ishikawa, Hiroshi; Takahashi, Ryo

    2011-12-12

    Multi-hop operation is demonstrated with a prototype hybrid optoelectronic router for optical packet switched networks. The router is realized by combining key optical/optoelectronic device/sub-system technologies and complementary metal-oxide-semiconductor electronics. Using the hop count monitored via the time-to-live field in the packet label, the router's optoelectronic buffer applies forward error correction every N hops, selectively to packets degraded by multiple hopping. Experimental results for 10-Gb/s optical packets confirm that the scheme can expand the number of hops while keeping the bit error rate low, without the need for optical 3R regenerators at each node. PMID:22274034

  18. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of the Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the method using the data covariance matrix, when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters entering the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods perform similarly.
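    The covariance-based MUSIC baseline discussed in this record can be sketched for a uniform linear array; the STFD variant substitutes a spatial time-frequency distribution matrix for the sample covariance but keeps the same subspace step. Half-wavelength element spacing is assumed.

    ```python
    import numpy as np

    def music_spectrum(X, n_sources, scan_deg):
        """Covariance-based MUSIC pseudospectrum for a uniform linear
        array with half-wavelength spacing. X: snapshots, shape
        (n_antennas, n_snapshots)."""
        m = X.shape[0]
        R = X @ X.conj().T / X.shape[1]     # sample covariance matrix
        w, v = np.linalg.eigh(R)            # eigenvalues ascending
        En = v[:, :m - n_sources]           # noise-subspace eigenvectors
        p = []
        for th in np.deg2rad(scan_deg):
            a = np.exp(1j * np.pi * np.arange(m) * np.sin(th))  # steering vector
            p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
        return np.array(p)
    ```

    Peaks of the pseudospectrum over the scan grid give the DOA estimates; calibration errors of the kind the article analyses perturb the steering vectors and hence shift or broaden these peaks.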

  19. Comparison of error-amplification and haptic-guidance training techniques for learning of a timing-based motor task by healthy individuals.

    PubMed

    Milot, Marie-Hélène; Marchal-Crespo, Laura; Green, Christopher S; Cramer, Steven C; Reinkensmeyer, David J

    2010-03-01

    Performance errors drive motor learning for many tasks. Some researchers have suggested that reducing performance errors with haptic guidance can benefit learning by demonstrating correct movements, while others have suggested that artificially increasing errors will force faster and more complete learning. This study compared the effect of these two techniques--haptic guidance and error amplification--as healthy subjects learned to play a computerized pinball-like game. The game required learning to press a button using wrist movement at the correct time to make a flipper hit a falling ball to a randomly positioned target. Errors were decreased or increased using a robotic device that retarded or accelerated wrist movement, based on sensed movement initiation timing errors. After training with either error amplification or haptic guidance, subjects significantly reduced their timing errors and generalized learning to untrained targets. However, for a subset of more skilled subjects, training with amplified errors produced significantly greater learning than training with the reduced errors associated with haptic guidance, while for a subset of less skilled subjects, training with haptic guidance seemed to benefit learning more. These results suggest that both techniques help enhance performance of a timing task, but that learning is optimized when subjects are trained with the technique appropriate to their baseline skill level. PMID:19787345

  20. Absolute High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-Time Aerial Video Imagery for Geo-referenced Orthophoto Registration

    NASA Astrophysics Data System (ADS)

    Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter

    This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to pair an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data, combined with live video images from an onboard camera, to register local video images against a priori registered orthophotos. This yields a precise, driftless absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.

  1. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with {sup 192}Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from {+-}5 to {+-}15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  2. The advantage of absolute quantification in comparative hormone research as indicated by a newly established real-time RT-PCR: GH, IGF-I, and IGF-II gene expression in the tilapia, Oreochromis niloticus.

    PubMed

    Eppler, Elisabeth; Caelers, Antje; Berishvili, Giorgi; Reinecke, Manfred

    2005-04-01

    We have developed a real-time RT-PCR that absolutely quantifies the gene expression of hormones using the standard curve method. The method avoids cloning procedures by using primer extension to create templates containing a T7 promoter gene sequence. It is rapid, since neither separate reverse transcriptions nor postamplification steps are necessary, and its low detection level (2 pg/μg total RNA) allows precise absolute quantification. Using the method, we have quantified the gene expression of GH, IGF-I, and IGF-II in the tilapia. PMID:15891047
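    The standard-curve method of absolute quantification reduces to fitting Ct against log10 copy number over a dilution series of the standards and inverting the fit. A minimal sketch; the slope and intercept used in the example are illustrative, not this paper's calibration:

    ```python
    import numpy as np

    def standard_curve(log10_copies, ct_values):
        """Fit the qPCR standard curve Ct = m*log10(N0) + b over the
        dilution series and return a function mapping a measured Ct back
        to an absolute copy number. (The amplification efficiency is
        10**(-1/m) - 1, near 1.0 for an ideal assay.)"""
        m, b = np.polyfit(log10_copies, ct_values, 1)
        return lambda ct: 10.0 ** ((ct - b) / m)
    ```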

  3. Assessing error sources for Landsat time series analysis for tropical test sites in Viet Nam and Ethiopia

    NASA Astrophysics Data System (ADS)

    Schultz, Michael; Verbesselt, Jan; Herold, Martin; Avitabile, Valerio

    2013-10-01

    Researchers who use remotely sensed data can spend half of their total effort on preprocessing prior to analysis. If this preprocessing does not match the application, the time spent on analysis can increase considerably and inaccuracies can result. Despite the existence of a number of methods for pre-processing Landsat time series, each method has shortcomings, particularly for mapping forest changes under varying illumination, data availability and atmospheric conditions. Given the requirements for mapping forest changes defined by the United Nations (UN) Reducing Emissions from Deforestation and Forest Degradation (REDD) program, accurate reporting of the spatio-temporal properties of these changes is necessary. We compared the impact of three fundamentally different radiometric preprocessing techniques, Moderate Resolution Atmospheric TRANsmission (MODTRAN), Second Simulation of a Satellite Signal in the Solar Spectrum (6S) and simple Dark Object Subtraction (DOS), on mapping forest changes using Landsat time series data. A modification of the Breaks For Additive Season and Trend (BFAST) monitor was used to jointly map the spatial and temporal agreement of forest changes at test sites in Ethiopia and Viet Nam. The suitability of the pre-processing methods for the observed forest change drivers was assessed using recently captured ground truth and high-resolution data (1000 points). A method for creating robust generic forest maps used for the sampling design is presented. The assessment of error sources identified haze as a major source of commission error in time series analysis.
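    Of the three preprocessing techniques compared, simple Dark Object Subtraction is compact enough to sketch: the darkest pixels in a band are assumed to carry only additive path radiance, which is then subtracted band-wide. The percentile threshold below is an assumption; implementations differ in how the dark object is chosen.

    ```python
    import numpy as np

    def dark_object_subtraction(band, percentile=0.1):
        """Simplest-case DOS atmospheric correction: treat the signal of
        the darkest pixels as additive path radiance and subtract it from
        the whole band, clipping at zero."""
        dark = np.percentile(band, percentile)
        return np.clip(band - dark, 0.0, None)
    ```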

  4. An integrated error parameter estimation and lag-aware data assimilation scheme for real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Ryu, Dongryeol; Western, Andrew W.; Wang, Q. J.; Robertson, David E.; Crow, Wade T.

    2014-11-01

    For operational flood forecasting, discharge observations may be assimilated into a hydrologic model to improve forecasts. However, the performance of conventional filtering schemes can be degraded by ignoring the time lag between soil moisture and discharge responses. This has led to ongoing development of more appropriate ways to implement sequential data assimilation. In this paper, an ensemble Kalman smoother (EnKS) with fixed time window is implemented for the GR4H hydrologic model (modèle du Génie Rural à 4 paramètres Horaire) to update current and antecedent model states. Model and observation error parameters are estimated through the maximum a posteriori method constrained by prior information drawn from flow gauging data. When evaluated in a hypothetical forecasting mode using observed rainfall, the EnKS is found to be more stable and produce more accurate discharge forecasts than a standard ensemble Kalman filter (EnKF) by reducing the mean of the ensemble root mean squared error (MRMSE) by 13-17%. The latter tends to over-correct current model states and leads to spurious peaks and oscillations in discharge forecasts. When evaluated in a real-time forecasting mode using rainfall forecasts from a numerical weather prediction model, the benefit of the EnKS is reduced as uncertainty in rainfall forecasts becomes dominant, especially at large forecast lead time.
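    The analysis step underlying both the EnKF and the EnKS can be sketched as a stochastic (perturbed-observation) ensemble update; the EnKS of this record applies the same update to an augmented state stacking current and antecedent model states, which is how the soil-moisture/discharge lag is handled. Function names and the toy observation operator below are illustrative assumptions.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
        """Stochastic (perturbed-observation) EnKF analysis step.

        ensemble: state matrix, shape (n_state, n_members). An EnKS
        applies this same update to an augmented state containing current
        and lagged (antecedent) model states.
        """
        n_state, n_mem = ensemble.shape
        Hx = np.atleast_2d(np.array([obs_operator(ensemble[:, i])
                                     for i in range(n_mem)]).T)
        xm = ensemble.mean(axis=1, keepdims=True)
        Hm = Hx.mean(axis=1, keepdims=True)
        Xp, Hp = ensemble - xm, Hx - Hm
        Pxh = Xp @ Hp.T / (n_mem - 1)      # state/obs cross covariance
        Phh = Hp @ Hp.T / (n_mem - 1) + obs_var * np.eye(Hx.shape[0])
        K = Pxh @ np.linalg.inv(Phh)       # Kalman gain
        perturbed = obs + rng.normal(0.0, np.sqrt(obs_var),
                                     size=(Hx.shape[0], n_mem))
        return ensemble + K @ (perturbed - Hx)
    ```

    Because lagged states sit inside the augmented state vector, the cross-covariance term corrects antecedent soil moisture from a current discharge observation, which is what lets the smoother avoid the over-correction the paper attributes to the plain EnKF.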

  5. Secondary task for full flight simulation incorporating tasks that commonly cause pilot error: Time estimation

    NASA Technical Reports Server (NTRS)

    Rosch, E.

    1975-01-01

    The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates is associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.

  6. Error analysis of real time and post-processed orbit determination of GFO using GPS tracking

    NASA Technical Reports Server (NTRS)

    Schreiner, William S.

    1991-01-01

    The goal of the Navy's GEOSAT Follow-On (GFO) mission is to map the topography of the world's oceans in both real time (operational) and post processed modes. Currently, the best candidate for supplying the required orbit accuracy is the Global Positioning System (GPS). The purpose of this fellowship was to determine the expected orbit accuracy for GFO in both the real time and post-processed modes when using GPS tracking. This report presents the work completed through the ending date of the fellowship.

  7. Measurement error analysis of taxi meter

    NASA Astrophysics Data System (ADS)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

    Verification of a taximeter involves two tests: (1) a test of the meter's time error and (2) a test of its distance (usage) error. The paper first describes the working principle of the meter and of the error-verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", it analyses the machine error and test error of the taxi meter, and discusses methods for detecting the time and distance errors. Type A standard uncertainty components are evaluated from repeated measurements under identical conditions, while Type B components are evaluated for differing conditions. Comparison and analysis of the results show that the meter conforms to JJG 517-2009, improving accuracy and efficiency considerably. In practice this not only compensates for limited accuracy but also ensures a fair transaction between drivers and passengers, reinforcing the value of the taxi as a mode of transportation.
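    The Type A (repeated-measurement) standard uncertainty evaluated in such verification work is simply the experimental standard deviation of the mean, per the GUM:

    ```python
    import math
    import statistics

    def type_a_uncertainty(readings):
        """GUM Type A standard uncertainty of the mean: the experimental
        standard deviation of n repeated readings divided by sqrt(n)."""
        return statistics.stdev(readings) / math.sqrt(len(readings))
    ```

    The readings below are illustrative, not measurements from this paper.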

  8. To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard

    1998-01-01

    This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.
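    One crude version of the paper's idea, extracting frequency-domain information from step-response data alone, is to difference the sampled step response into an impulse response and read the peak DFT magnitude as an H-infinity norm estimate. This is a sketch under that assumption, not the authors' procedure:

    ```python
    import numpy as np

    def hinf_norm_from_step(step, dt):
        """Estimate a stable SISO system's H-infinity norm from uniformly
        sampled step-response data: differencing gives the impulse
        response, whose scaled DFT approximates the frequency response."""
        impulse = np.diff(step) / dt
        H = np.fft.rfft(impulse) * dt   # approximate continuous-time FT
        return np.abs(H).max()
    ```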

  9. Different Types of Errors in Saccadic Task Are Sensitive to Either Time of Day or Chronic Sleep Restriction

    PubMed Central

    Wachowicz, Barbara; Beldzik, Ewa; Domagalik, Aleksandra; Fafrowicz, Magdalena; Gawlowska, Magda; Janik, Justyna; Lewandowska, Koryna; Oginska, Halszka; Marek, Tadeusz

    2015-01-01

    Circadian rhythms and restricted sleep length affect cognitive functions and, consequently, the performance of day to day activities. To date, no more than a few studies have explored the consequences of these factors on oculomotor behaviour. We have implemented a spatial cuing paradigm in an eye tracking experiment conducted four times of the day after one week of rested wakefulness and after one week of chronic partial sleep restriction. Our aim was to verify whether these conditions affect the number of a variety of saccadic task errors. Interestingly, we found that failures in response selection, i.e. premature responses and direction errors, were prone to time of day variations, whereas failures in response execution, i.e. omissions and commissions, were considerably affected by sleep deprivation. The former can be linked to the cue facilitation mechanism, while the latter to wake state instability and the diminished ability of top-down inhibition. Together, these results may be interpreted in terms of distinctive sensitivity of orienting and alerting systems to fatigue. Saccadic eye movements proved to be a novel and effective measure with which to study the susceptibility of attentional systems to time factors, thus, this approach is recommended for future research. PMID:26010673

  11. An error-resilient approach for real-time packet communications by HF-channel diversity

    NASA Astrophysics Data System (ADS)

    Navarro, Antonio; Rodrigues, Rui; Angeja, Joao; Tavares, Joao; Carvalho, Luis; Perdigao, Fernando

    2004-08-01

    This paper evaluates the performance of a high frequency (HF) wireless network for transporting packet multimedia services. Besides civil/amateur communications, HF bands are also used for long-distance wireless military communications. Therefore, our work is based on the NATO link- and physical-layer standards STANAG 5066 and STANAG 4539, respectively. In each HF channel, the typical transmission bandwidth is about 3 kHz, with a resulting throughput of up to 12800 bps. This very low bit rate by itself poses serious challenges for reliable, low-delay real-time multimedia communications. Thus, this paper discusses the performance of a real-time communication system designed to allow end-to-end communication through "best effort" networks. With HF channel diversity, the packet loss percentage, averaged over three channel conditions, is decreased by 16% in the channel SNR range from 0 to 45 dB.

  12. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding rate R = 1/n binary convolutional codes which minimizes the probability of error in the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed-decoding-delay, version of the Viterbi algorithm is also developed and used for comparison with the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as the inner coding system for concatenated coding.
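    The Viterbi baseline against which the article compares is easy to sketch for a rate-1/2, constraint-length-3 code (generators 7 and 5 octal, hard decisions). Note the contrast: Viterbi minimises the *sequence* error probability, while the article's algorithm minimises the per-bit error probability under a fixed decoding delay.

    ```python
    def conv_encode(bits, g=(0b111, 0b101)):
        """Rate-1/2 convolutional encoder, constraint length 3 (7,5 octal)."""
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state               # [current bit, 2-bit state]
            out.extend([bin(reg & gi).count('1') & 1 for gi in g])
            state = reg >> 1
        return out

    def viterbi_decode(rx, g=(0b111, 0b101)):
        """Hard-decision Viterbi decoding: maximum-likelihood sequence
        estimation on a binary symmetric channel."""
        INF = float('inf')
        metric = [0, INF, INF, INF]              # start in the all-zero state
        paths = [[], [], [], []]
        for i in range(0, len(rx), 2):
            new_metric = [INF] * 4
            new_paths = [None] * 4
            for s in range(4):
                if metric[s] == INF:
                    continue
                for b in (0, 1):
                    reg = (b << 2) | s
                    expected = [bin(reg & gi).count('1') & 1 for gi in g]
                    d = sum(e != r for e, r in zip(expected, rx[i:i + 2]))
                    ns = reg >> 1
                    if metric[s] + d < new_metric[ns]:
                        new_metric[ns] = metric[s] + d
                        new_paths[ns] = paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[metric.index(min(metric))]  # best surviving path
    ```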

  13. Statistical error analysis in CCD time-resolved photometry with applications to variable stars and quasars

    NASA Technical Reports Server (NTRS)

    Howell, Steve B.; Warnock, Archibald, III; Mitchell, Kenneth J.

    1988-01-01

    Differential photometric time series obtained from CCD frames are tested for intrinsic variability using a newly developed analysis of variance technique. In general, the objects used for differential photometry will not all be of equal magnitude, so the techniques derived here explicitly correct for differences in the measured variances due to photon statistics. Other random-noise terms are also considered. The technique tests for the presence of intrinsic variability without regard to its random or periodic nature. It is then applied to observations of the variable stars ZZ Ceti and US 943 and the active extragalactic objects OQ 530, US 211, US 844, LB 9743, and OJ 287.
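    A simplified version of such a variability test compares the measured variance of the differential light curve with the variance expected from photon statistics alone; the function below is a sketch in that spirit, not the paper's analysis-of-variance formulation, and assumes a single common photon-noise sigma.

    ```python
    import numpy as np

    def excess_variance_ratio(flux, photon_sigma):
        """Ratio of the measured variance of a differential light curve to
        the variance expected from photon statistics alone. Values near 1
        are consistent with pure noise; values well above 1 indicate
        intrinsic variability (random or periodic)."""
        flux = np.asarray(flux, dtype=float)
        resid = flux - flux.mean()
        n = len(flux)
        return np.mean((resid / photon_sigma) ** 2) * n / (n - 1)
    ```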

  14. [The error analysis and experimental verification of laser radar spectrum detection and terahertz time domain spectroscopy].

    PubMed

    Liu, Wen-Tao; Li, Jing-Wen; Sun, Zhi-Hui

    2010-03-01

    Terahertz waves (THz, T-rays) lie between the far-infrared and microwave regions of the electromagnetic spectrum, with frequencies from 0.1 to 10 THz. Many chemical-agent explosives show characteristic spectral features in the terahertz range. Compared with conventional methods of detecting threats such as weapons and chemical agents, THz radiation is low-frequency and non-ionizing, and does not raise safety concerns. The present paper summarizes the latest progress in the application of terahertz time-domain spectroscopy (THz-TDS) to chemical-agent explosives. A device for laser radar detection and real-time spectrum measurement was designed, which measures the laser spectrum on the basis of Fourier optics and optical signal processing. A wedge interferometer was used as the beam splitter to reject background light, detect the laser, and measure the spectrum. The results indicate that a 10 ns laser radar pulse can be detected; the main factors affecting the experiments are also discussed. The combination of laser radar spectrum detection, THz-TDS, and modern pattern recognition and signal processing technology is the developing trend in remote detection of chemical-agent explosives. PMID:20496663

  15. Absolute biological needs.

    PubMed

    McLeod, Stephen

    2014-07-01

    Absolute needs (as against instrumental needs) are independent of the ends, goals and purposes of personal agents. Against the view that the only needs are instrumental needs, David Wiggins and Garrett Thomson have defended absolute needs on the grounds that the verb 'need' has instrumental and absolute senses. While remaining neutral about it, this article does not adopt that approach. Instead, it suggests that there are absolute biological needs. The absolute nature of these needs is defended by appeal to: their objectivity (as against mind-dependence); the universality of the phenomenon of needing across the plant and animal kingdoms; the impossibility that biological needs depend wholly upon the exercise of the abilities characteristic of personal agency; the contention that the possession of biological needs is prior to the possession of the abilities characteristic of personal agency. Finally, three philosophical usages of 'normative' are distinguished. On two of these, to describe a phenomenon or claim as 'normative' is to describe it as value-dependent. A description of a phenomenon or claim as 'normative' in the third sense does not entail such value-dependency, though it leaves open the possibility that value depends upon the phenomenon or upon the truth of the claim. It is argued that while survival needs (or claims about them) may well be normative in this third sense, they are normative in neither of the first two. Thus, the idea of absolute need is not inherently normative in either of the first two senses. PMID:23586876

  16. Analysis of position error by time constant in read-out resistive network for gamma-ray imaging detection system

    NASA Astrophysics Data System (ADS)

    Jeon, Su-Jin; Park, Chang-In; Son, Byung-Hee; Jung, Mi; Jang, Teak-Jin; Lee, Chun-Sik; Choi, Young-Wan

    2016-03-01

    Arrays of position-sensitive photomultiplier tubes (PSPMTs) are used as gamma-ray position detectors. Each PMT converts light over a wide spectral range (100 nm to 2500 nm) into an amplified electrical signal. Because the size of the detection system is determined by the number of output channels of the PSPMTs, a resistive network has been used to reduce the channel count. The photo-generated current is divided into four output current pulses in ratios set by the resistance values of the network, and the detected positions are estimated from the peak values of the distributed current pulses. However, because the parasitic capacitance of the PSPMTs appears in parallel with the resistors of the network, the resulting time constants must be considered. When the duration of the current pulse is not sufficiently long, the peak values of the distributed pulses are reduced and the position error increases. In this paper, we analyze the detected position error in the resistive network and the variation of the time constant with the input position on the PSPMTs.
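    The charge-division position estimate and the time-constant effect described above can be sketched in a simplified model (Anger-style corner weighting; the resistor ratios, pulse shape, and RC values below are illustrative assumptions, not the paper's parameters):

```python
import math

def anger_position(ia, ib, ic, id_):
    """Estimate the interaction position from four corner pulse peaks
    (standard charge-division / Anger logic)."""
    total = ia + ib + ic + id_
    x = ((ib + id_) - (ia + ic)) / total
    y = ((ia + ib) - (ic + id_)) / total
    return x, y

def peak_after_rc(amplitude, pulse_width, tau):
    """Peak of a rectangular current pulse after an RC low-pass: the peak
    is attenuated when the pulse is not long compared to tau."""
    return amplitude * (1.0 - math.exp(-pulse_width / tau))

# True corner amplitudes for an off-centre event (arbitrary units).
true_peaks = {'a': 1.0, 'b': 3.0, 'c': 0.5, 'd': 1.5}

# Position from ideal peaks vs. peaks attenuated by position-dependent RC.
ideal = anger_position(*true_peaks.values())
taus = {'a': 50e-9, 'b': 120e-9, 'c': 60e-9, 'd': 90e-9}  # per-channel time constants
width = 100e-9  # pulse duration comparable to tau -> unequal attenuation
attenuated = [peak_after_rc(true_peaks[k], width, taus[k]) for k in 'abcd']
measured = anger_position(*attenuated)
print(ideal, measured)  # the difference is the time-constant-induced position error
```

Because the attenuation factor 1 - exp(-T/tau) differs across channels when tau varies with position, the ratio of the peak values, and hence the estimated position, is biased whenever the pulse duration T is not long compared to the time constants.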

  17. Correction of timing errors in photomultiplier tubes used in phase-modulation fluorometry.

    PubMed

    Lakowicz, J R; Cherek, H; Balter, A

    1981-09-01

    The measurement of fluorescence lifetimes is known to be hindered by the wavelength-dependent and photocathode-area-dependent time response of photomultiplier tubes. A simple and direct method is described to minimize these effects in photomultiplier tubes used for phase-modulation fluorometry. Reference fluorophores of known lifetime were used in place of the usual scattering reference. The emission wavelengths of the reference and sample were matched by either filters or a monochromator, and the use of a fluorophore rather than a scatterer decreases the differences in the spatial distribution of light emanating from the reference and sample. Thus photomultiplier tube artifacts are minimized. Five reference fluorophores were selected on the basis of availability, ease of solution preparation, and constancy of lifetime with temperature and emission wavelength: p-terphenyl, PPO, PPD, POPOP and dimethyl POPOP. These compounds are dissolved in ethanol to give standard solutions that can be used over the temperature range from -55 to +55 degrees C; purging with inert gas is not necessary. The measured phase and modulation of the reference solution are used, in conjunction with the known reference lifetime, to calculate the actual phase and modulation of the excitation beam. The use of standard fluorophores does not require separate experiments to quantify photomultiplier effects, and does not increase the time required for the measurement of fluorescence lifetimes. Examples are presented which demonstrate the elimination of artifactual photomultiplier effects in measurements of the lifetimes of NADH (0.4 ns) and of indole solutions quenched by iodide. In addition, the use of these reference solutions increases the accuracy of fluorescence lifetime measurements up to 30 ns. We judge this method to provide more reliable lifetime measurements by the phase and modulation method. The test solutions and procedures we describe may be used by other
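    The correction scheme can be sketched for the single-exponential case, where tan(phi) = omega * tau: measuring the sample's phase against a reference fluorophore of known lifetime cancels any instrumental phase error common to both channels (the modulation frequency, lifetimes, and offset below are illustrative, not the paper's values):

```python
import math

def lifetime_from_reference(phi_sample, phi_ref, tau_ref, freq_hz):
    """Phase lifetime of a sample measured against a reference fluorophore
    of known lifetime (single-exponential model: tan(phi) = omega * tau)."""
    omega = 2 * math.pi * freq_hz
    # The reference's known lifetime gives the true phase of the excitation,
    # cancelling wavelength- and geometry-dependent PMT timing errors.
    phi_true = phi_sample - phi_ref + math.atan(omega * tau_ref)
    return math.tan(phi_true) / omega

# Illustrative numbers: 30 MHz modulation, a reference with tau_ref = 1.4 ns,
# and a common instrumental phase offset that cancels in the subtraction.
freq = 30e6
omega = 2 * math.pi * freq
instrument_offset = 0.05  # rad, identical for sample and reference
phi_ref = math.atan(omega * 1.4e-9) + instrument_offset
phi_sample = math.atan(omega * 5.0e-9) + instrument_offset
print(lifetime_from_reference(phi_sample, phi_ref, 1.4e-9, freq))  # ~5.0e-9 s
```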

  18. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    SciTech Connect

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A

  19. Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series

    PubMed Central

    Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki

    2015-01-01

    Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system. PMID:25779909

  20. Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series

    NASA Astrophysics Data System (ADS)

    Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki

    2015-03-01

    Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system.
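    The key feature above, assigning each data point to multiple states simultaneously, can be illustrated with a rate-distortion-style soft assignment (a toy sketch with illustrative FRET-like values and a squared-error distortion, not the authors' full algorithm):

```python
import math

def soft_assignments(data, centers, weights, beta):
    """Rate-distortion style soft clustering: each data point is assigned to
    every state with probability proportional to w_s * exp(-beta * distortion),
    so a point can belong to multiple states at once (one Blahut-Arimoto-like
    half-step; beta trades compression against distortion)."""
    out = []
    for x in data:
        scores = [w * math.exp(-beta * (x - c) ** 2)
                  for c, w in zip(centers, weights)]
        z = sum(scores)
        out.append([s / z for s in scores])
    return out

# Noisy two-state FRET-like trace (illustrative values): points near 0.3 or
# 0.7 get near-hard assignments, while ambiguous points stay shared.
trace = [0.31, 0.29, 0.52, 0.68, 0.71]
p = soft_assignments(trace, centers=[0.3, 0.7], weights=[0.5, 0.5], beta=50.0)
for x, (p0, p1) in zip(trace, p):
    print(x, round(p0, 2), round(p1, 2))
```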

  1. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    SciTech Connect

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
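    Comparing range profiles across slow time amounts to estimating how the range response shifts from pulse to pulse, which can be sketched with a simple cross-correlation (a toy point-target model, not the patented processing chain):

```python
import numpy as np

def estimate_range_shift(profile_a, profile_b):
    """Estimate the integer-bin shift between two range profiles taken at
    different slow-time positions, via cross-correlation."""
    corr = np.correlate(profile_b, profile_a, mode='full')
    return np.argmax(corr) - (len(profile_a) - 1)

# Synthetic example: a point-target response that drifts by 3 range bins
# between two pulses, mimicking an uncompensated motion error.
n = 256
target = np.exp(-0.5 * ((np.arange(n) - 100) / 2.0) ** 2)
shifted = np.roll(target, 3)
print(estimate_range_shift(target, shifted))  # 3
```

Once the shift (and hence the motion error) is known, the corresponding frequency and phase correction can be applied to the uncompressed data before range and azimuth compression, as the record describes.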

  2. Time-to-contact estimation errors among older drivers with useful field of view impairments.

    PubMed

    Rusch, Michelle L; Schall, Mark C; Lee, John D; Dawson, Jeffrey D; Edwards, Samantha V; Rizzo, Matthew

    2016-10-01

    Previous research indicates that useful field of view (UFOV) decline affects older driver performance. In particular, elderly drivers have difficulty estimating oncoming vehicle time-to-contact (TTC). The objective of this study was to evaluate how UFOV impairments affect TTC estimates in elderly drivers deciding when to make a left turn across oncoming traffic. TTC estimates were obtained from 64 middle-aged (n = 17, age = 46 ± 6 years) and older (n = 37, age = 75 ± 6 years) licensed drivers with a range of UFOV abilities using interactive scenarios in a fixed-base driving simulator. Each driver was situated in an intersection to turn left across oncoming traffic approaching and disappearing at differing distances (1.5, 3, or 5 s) and speeds (45, 55, or 65 mph). Drivers judged when each oncoming vehicle would collide with them if they were to turn left. Findings showed that TTC estimates across all drivers, on average, were most accurate for oncoming vehicles travelling at the highest velocities and least accurate for those travelling at the slowest velocities. Drivers with the worst UFOV scores had the least accurate TTC estimates, especially for slower oncoming vehicles. Results suggest age-related UFOV decline impairs older driver judgment of TTC with oncoming vehicles in safety-critical left-turn situations. Our results are compatible with national statistics on older driver crash proclivity at intersections. PMID:27472816

  3. Analytical Calculation of Errors in Time and Value Perception Due to a Subjective Time Accumulator: A Mechanistic Model and the Generation of Weber's Law.

    PubMed

    Namboodiri, Vijay Mohan K; Mihalas, Stefan; Hussain Shuler, Marshall G

    2016-01-01

    It has been previously shown (Namboodiri, Mihalas, Marton, & Hussain Shuler, 2014) that an evolutionary theory of decision making and time perception is capable of explaining numerous behavioral observations regarding how humans and animals decide between differently delayed rewards of differing magnitudes and how they perceive time. An implementation of this theory using a stochastic drift-diffusion accumulator model (Namboodiri, Mihalas, & Hussain Shuler, 2014a) showed that errors in time perception and decision making approximately obey Weber's law for a range of parameters. However, prior calculations did not have a clear mechanistic underpinning. Further, these calculations were only approximate, with the range of parameters being limited. In this letter, we provide a full analytical treatment of such an accumulator model, along with a mechanistic implementation, to calculate the expression of these errors for the entirety of the parameter space. In our mechanistic model, Weber's law results from synaptic facilitation and depression within the feedback synapses of the accumulator. Our theory also makes the prediction that the steepness of temporal discounting can be affected by requiring the precise timing of temporal intervals. Thus, by presenting exact quantitative calculations, this work provides falsifiable predictions for future experimental testing. PMID:26599714
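    For context, a plain drift-diffusion accumulator with additive noise does not by itself reproduce Weber's law: its first-passage coefficient of variation (CV) falls as the timed interval grows, which is why the letter's mechanism adds synaptic facilitation and depression in the feedback synapses. A minimal simulation showing the falling CV (all parameters illustrative):

```python
import random
import statistics

def first_passage_times(drift, noise_sd, threshold, n_trials=1000, dt=0.002):
    """First-passage times of a drift-diffusion accumulator to a threshold."""
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise_sd * (dt ** 0.5) * random.gauss(0, 1)
            t += dt
        times.append(t)
    return times

# Time a short vs. a long interval (thresholds chosen so the mean crossing
# time equals the interval). Weber's law would require a constant CV; the
# plain accumulator instead gives CV proportional to 1/sqrt(interval).
random.seed(0)
cvs = []
for interval in (0.5, 1.0):
    times = first_passage_times(drift=1.0, noise_sd=0.3, threshold=interval)
    cvs.append(statistics.stdev(times) / statistics.mean(times))
print([round(c, 3) for c in cvs])  # CV shrinks for the longer interval
```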

  4. A real-time error-free color-correction facility for digital consumers

    NASA Astrophysics Data System (ADS)

    Shaw, Rodney

    2008-01-01

    It has been well known since the earliest days of color photography that color balance in general, and facial reproduction (flesh tones) in particular, are of dominant interest to the consumer, and significant research resources have been expended in satisfying this need. The general problem is a difficult one, spanning the factors that govern perception and personal preference, the physics and chemistry of color reproduction, and the wide field of color measurement, specification, and analysis. However, with the advent of digital photography and its widespread acceptance in the consumer market, and with the possibility of a much greater degree of individual control over color reproduction, the field is taking on a new consumer-driven impetus, and the provision of user facilities for preferred color choice now constitutes an intense field of research. In addition, owing to the conveniences of digital technology, the collection of large databases and statistics relating to individual color preferences has now become a relatively straightforward operation. Using a consumer-preference approach of this type, we have developed a user-friendly facility whereby unskilled consumers may manipulate the color of their personal digital images according to their preferred choice. By virtue of its ease of operation and the real-time nature of the color-correction transforms, this facility can readily be inserted anywhere a consumer interacts with a digital image, from camera, printer, or scanner, to web or photo-kiosk. Here the underlying scientific principles are explored in detail and related to the practical color-preference outcomes. Examples are given of the application to the correction of images with unsatisfactory color balance, especially flesh tones and faces, and the nature of the consumer controls and their corresponding image transformations is explored.

  5. Updated Absolute Flux Calibration of the COS FUV Modes

    NASA Astrophysics Data System (ADS)

    Massa, D.; Ely, J.; Osten, R.; Penton, S.; Aloisi, A.; Bostroem, A.; Roman-Duval, J.; Proffitt, C.

    2014-03-01

    We present newly derived point-source absolute flux calibrations for the COS FUV modes at both the original and second lifetime positions. The analysis includes observations through the Primary Science Aperture (PSA) of the standard stars WD0308-565, GD71, WD1057+729 and WD0947+857 obtained as part of two calibration programs. Data were obtained for all of the gratings at all of the original CENWAVE settings at both the original and second lifetime positions, and for the G130M CENWAVE = 1222 at the second lifetime position. Data were also obtained with the FUVB segment for the G130M CENWAVE = 1055 and 1096 settings at the second lifetime position. We also present the derivation of the L-flats used in processing the data and show that the internal consistency of the primary standards is 1%. The accuracy of the absolute flux calibrations over the UV is estimated to be 1-2% for the medium-resolution gratings, and 2-3% over most of the wavelength range of the G140L grating, although the uncertainty can be as large as 5% or more at some G140L wavelengths. We note that these errors are all relative to the optical flux near the V band, and small additional errors may be present due to inaccuracies in the V-band calibration. In addition, these error estimates are for the time at which the flux calibration data were obtained; the accuracy of the flux calibration at other times can be affected by errors in the time-dependent sensitivity (TDS) correction.

  6. Double peak-induced distance error in short-time-Fourier-transform-Brillouin optical time domain reflectometers event detection and the recovery method.

    PubMed

    Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi

    2015-10-01

    The measured distance error caused by double peaks in Brillouin optical time domain reflectometer (BOTDR) systems, a form of Brillouin scattering spectrum (BSS) deformation, is discussed and simulated for the first time in this paper, to the best of the authors' knowledge. The double peak, as a kind of Brillouin spectrum deformation, is important for the enhancement of spatial resolution, measurement accuracy, and crack detection. Owing to the variation of the peak powers of the BSS along the fiber, the measured starting point of a step-shaped frequency transition region is shifted, resulting in distance errors. A zero-padded short-time Fourier transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovery method based on double-peak detection and the corresponding BSS deformation can be applied to calculate the real starting point, which improves the distance accuracy of the STFT-based BOTDR system. PMID:26479653
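    The role of zero padding in the STFT peak search can be sketched as follows: padding a short windowed segment interpolates the spectrum so the peak frequency can be read from a finer grid (the sample rate, beat frequencies, and window length are illustrative, not the paper's system parameters):

```python
import numpy as np

def peak_frequency(segment, fs, pad_factor=8):
    """Dominant frequency of a short time-domain segment via a zero-padded
    FFT (the core of a zero-padded STFT): padding interpolates the spectrum
    so the peak can be located on a finer frequency grid."""
    n = len(segment)
    spec = np.abs(np.fft.rfft(segment * np.hanning(n), n * pad_factor))
    freqs = np.fft.rfftfreq(n * pad_factor, d=1.0 / fs)
    return freqs[np.argmax(spec)]

# Synthetic example: the mixed-down Brillouin shift steps between two fibre
# sections; each section's peak is detected from a short windowed segment.
fs = 2e9                       # sample rate of the beat signal (assumed)
t = np.arange(512) / fs
for f_beat in (100e6, 150e6):  # mixed-down shifts of the two sections
    seg = np.cos(2 * np.pi * f_beat * t)
    print(peak_frequency(seg, fs) / 1e6)  # ~100 then ~150 (MHz)
```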

  7. Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with ε-error bound.

    PubMed

    Wang, Fei-Yue; Jin, Ning; Liu, Derong; Wei, Qinglai

    2011-01-01

    In this paper, we study the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-error bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method. PMID:20876014
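    The epsilon-error stopping idea can be sketched in tabular form: iterate the finite-horizon recursion V_{i+1}(x) = min_u [c(x, u) + V_i(f(x, u))] until the value function changes by less than epsilon, taking the iteration count as the epsilon-optimal number of control steps (a toy discrete system stands in for the paper's neural-network approximators):

```python
import numpy as np

def finite_horizon_adp(states, actions, step_cost, next_state,
                       eps=1e-4, max_iters=200):
    """Iterate V_{i+1}(x) = min_u [c(x,u) + V_i(f(x,u))] until the value
    function changes by less than eps (the epsilon-error stopping rule);
    the iteration count approximates the optimal number of control steps."""
    v = np.zeros(len(states))
    for i in range(1, max_iters + 1):
        v_new = np.array([min(step_cost(x, u) + v[next_state(x, u)]
                              for u in actions) for x in states])
        if np.max(np.abs(v_new - v)) < eps:
            return v_new, i
        v = v_new
    return v, max_iters

# Toy system: states 0..4, action moves left/stay/right, cost = distance
# from the goal state 2 plus a small control cost (all values illustrative).
states = range(5)
actions = (-1, 0, 1)
cost = lambda x, u: abs(x - 2) + 0.1 * abs(u)
f = lambda x, u: min(4, max(0, x + u))
v, horizon = finite_horizon_adp(states, actions, cost, f)
print(v, horizon)
```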

  8. The absolute path command

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  9. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
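    Behaviour in the spirit of ap can be sketched with the Python standard library (an illustration of the described functionality, not the ap source):

```python
import os

def absolute_path(name):
    """Resolve a command or path to its final absolute path, traversing all
    symlinks, similar in spirit to the 'ap' command described above."""
    # Like 'which': search PATH if the name is not an explicit path.
    if os.sep not in name:
        for d in os.environ.get("PATH", "").split(os.pathsep):
            candidate = os.path.join(d, name)
            if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                name = candidate
                break
    # Like 'pwd' for a relative directory, then follow every symlink.
    return os.path.realpath(os.path.abspath(name))

print(absolute_path("."))   # absolute path of the current directory
print(absolute_path("sh"))  # final symlink target of the sh executable
```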

  10. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than for the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
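    The idea of replacing a least-residual objective with a continuous virtual field can be sketched in 2-D: each sensor pair contributes a field that peaks on its arrival-time-difference hyperbola, the fields add up at the common intersection, and a corrupted pick degrades only its own pairs (the field shape, velocity, and geometry below are illustrative, not the paper's formulation):

```python
import itertools
import math

def virtual_field(point, sensors, arrivals, v=5000.0, width=20.0):
    """Sum, over all sensor pairs, of a Gaussian 'virtual field' peaking on
    the hyperbola defined by that pair's arrival-time difference; the source
    sits near the common intersection, where the fields add up."""
    total = 0.0
    for (si, ti), (sj, tj) in itertools.combinations(zip(sensors, arrivals), 2):
        mismatch = (math.dist(point, si) - math.dist(point, sj)) - v * (ti - tj)
        total += math.exp(-(mismatch / width) ** 2)
    return total

# Four sensors, a known source, and arrival times with one large picking
# error (LPE) on sensor 3; grid-search the field maximum.
sensors = [(0, 0), (100, 0), (0, 100), (100, 100)]
source = (30, 60)
v = 5000.0
arrivals = [math.dist(source, s) / v for s in sensors]
arrivals[3] += 0.01  # large picking error, equivalent to 50 m of path
best = max(((x, y) for x in range(0, 101, 2) for y in range(0, 101, 2)),
           key=lambda p: virtual_field(p, sensors, arrivals, v))
print(best)  # close to (30, 60) despite the LPE
```

The pairs not involving the corrupted sensor still peak together at the true source, which is the LPE-tolerance the record describes; a least-squares residual would instead spread the error over every term.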

  11. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM).

    PubMed

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than for the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission. PMID:26754955

  12. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    PubMed Central

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than for the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission. PMID:26754955

  13. Response error correction--a demonstration of improved human-machine performance using real-time EEG monitoring.

    PubMed

    Parra, Lucas C; Spence, Clay D; Gerson, Adam D; Sajda, Paul

    2003-06-01

    We describe a brain-computer interface (BCI) system, which uses a set of adaptive linear preprocessing and classification algorithms for single-trial detection of error related negativity (ERN). We use the detected ERN as an estimate of a subject's perceived error during an alternative forced choice visual discrimination task. The detected ERN is used to correct subject errors. Our initial results show average improvement in subject performance of 21% when errors are automatically corrected via the BCI. We are currently investigating the generalization of the overall approach to other tasks and stimulus paradigms. PMID:12899266

  14. Measurement of absolute concentrations of individual compounds in metabolite mixtures by gradient-selective time-zero 1H-13C HSQC with two concentration references and fast maximum likelihood reconstruction analysis.

    PubMed

    Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L

    2011-12-15

    Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture. PMID:22029275
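    The two-reference conversion from peak volume to absolute concentration is a two-point linear calibration, which can be sketched as follows (the peak volumes and concentrations are illustrative, not values from the paper):

```python
def concentration_from_volume(volume, ref_low, ref_high):
    """Convert an HSQC0 peak volume to an absolute concentration using two
    internal references of known concentration (e.g. DSS at the low end and
    MES at the high end), assuming volume is linear in concentration."""
    (v_lo, c_lo), (v_hi, c_hi) = ref_low, ref_high
    slope = (c_hi - c_lo) / (v_hi - v_lo)
    return c_lo + slope * (volume - v_lo)

# Illustrative numbers: the two reference points define the line, and a
# metabolite's concentration is read off it.
dss = (1.2e4, 0.5)    # (peak volume, mM) at the low-concentration limit
mes = (2.4e5, 10.0)   # (peak volume, mM) at the high-concentration limit
print(concentration_from_volume(6.0e4, dss, mes))  # 2.5 (mM)
```

Anchoring the line at both ends of the concentration range is what makes the fit better defined than a single-reference (through-one-point) calibration.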

  15. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... participant or beneficiary. (b) Board's or TSP record keeper's discovery of error. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... error if it is discovered before 30 days after the issuance of the earlier of the most recent...

  16. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... participant or beneficiary. (b) Board's or TSP record keeper's discovery of error. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... error if it is discovered before 30 days after the issuance of the earlier of the most recent...

  17. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... participant or beneficiary. (b) Board's or TSP record keeper's discovery of error. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... error if it is discovered before 30 days after the issuance of the earlier of the most recent...

  18. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... participant or beneficiary. (b) Board's or TSP record keeper's discovery of error. (1) Upon discovery of an... before its discovery, the Board or the TSP record keeper may exercise sound discretion in deciding... error if it is discovered before 30 days after the issuance of the earlier of the most recent...

  19. The use of ionospheric tomography and elevation masks to reduce the overall error in single-frequency GPS timing applications

    NASA Astrophysics Data System (ADS)

    Rose, Julian A. R.; Tong, Jenna R.; Allain, Damien J.; Mitchell, Cathryn N.

    2011-01-01

    Signals from Global Positioning System (GPS) satellites at the horizon or at low elevations are often excluded from a GPS solution because they experience considerable ionospheric delays and multipath effects. Their exclusion can degrade the overall satellite geometry for the calculations, resulting in greater errors; an effect known as the Dilution of Precision (DOP). In contrast, signals from high elevation satellites experience less ionospheric delays and multipath effects. The aim is to find a balance in the choice of elevation mask, to reduce the propagation delays and multipath whilst maintaining good satellite geometry, and to use tomography to correct for the ionosphere and thus improve single-frequency GPS timing accuracy. GPS data, collected from a global network of dual-frequency GPS receivers, have been used to produce four GPS timing solutions, each with a different ionospheric compensation technique. One solution uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to compensate for the ionospheric delay. Maps of ionospheric electron density are produced and used to correct the single-frequency pseudorange observations. This method is compared to a dual-frequency solution and two other single-frequency solutions: one does not include any ionospheric compensation and the other uses the broadcast Klobuchar model. Data from the solar maximum year 2002 and October 2003 have been investigated to display results when the ionospheric delays are large and variable. The study focuses on Europe and results are produced for the chosen test site, VILL (Villafranca, Spain). The effects of excluding all of the GPS satellites below various elevation masks, ranging from 5° to 40°, on timing solutions for fixed (static) and mobile (moving) situations are presented. The greatest timing accuracies when using the fixed GPS receiver technique are obtained by using a 40° mask, rather than a 5° mask. The mobile GPS timing solutions are most
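    The geometry side of the trade-off, where raising the elevation mask discards low satellites and worsens the dilution of precision, can be sketched by computing GDOP before and after masking (the constellation below is illustrative):

```python
import math
import numpy as np

def gdop(sat_az_el_deg):
    """Geometric dilution of precision from satellite (azimuth, elevation)
    pairs: unit line-of-sight vectors plus the receiver-clock column."""
    rows = []
    for az, el in sat_az_el_deg:
        az, el = math.radians(az), math.radians(el)
        rows.append([math.cos(el) * math.sin(az),
                     math.cos(el) * math.cos(az),
                     math.sin(el), 1.0])
    g = np.array(rows)
    return math.sqrt(np.trace(np.linalg.inv(g.T @ g)))

# Eight satellites spread in azimuth at mixed elevations (illustrative).
sats = list(zip(range(0, 360, 45), (10, 70, 25, 50, 15, 60, 35, 80)))
for mask in (5, 40):
    kept = [s for s in sats if s[1] >= mask]
    print(mask, len(kept), round(gdop(kept), 2))  # higher mask -> larger GDOP
```

The record's point is that the larger ionospheric and multipath errors on the discarded low-elevation signals can outweigh this geometric penalty, so the 40° mask still gave the best fixed-receiver timing accuracy.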

  20. Identifying Autocorrelation Generated by Various Error Processes in Interrupted Time-Series Regression Designs: A Comparison of AR1 and Portmanteau Tests

    ERIC Educational Resources Information Center

    Huitema, Bradley E.; McKean, Joseph W.

    2007-01-01

    Regression models used in the analysis of interrupted time-series designs assume statistically independent errors. Four methods of evaluating this assumption are the Durbin-Watson (D-W), Huitema-McKean (H-M), Box-Pierce (B-P), and Ljung-Box (L-B) tests. These tests were compared with respect to Type I error and power under a wide variety of error…
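    The statistics being compared can be sketched directly: the Durbin-Watson statistic is DW = sum((e_t - e_{t-1})^2) / sum(e_t^2), near 2 under independence, while the Ljung-Box portmanteau statistic is Q = n(n+2) * sum_k r_k^2 / (n-k) over lags 1..h (the AR(1) simulation parameters below are illustrative):

```python
import math
import random

def durbin_watson(residuals):
    """Durbin-Watson statistic: near 2 for independent errors, below 2 for
    positive lag-1 autocorrelation."""
    num = sum((e1 - e0) ** 2 for e0, e1 in zip(residuals, residuals[1:]))
    den = sum(e ** 2 for e in residuals)
    return num / den

def ljung_box(residuals, max_lag):
    """Ljung-Box portmanteau Q statistic over lags 1..max_lag (compare to a
    chi-square distribution with max_lag degrees of freedom)."""
    n = len(residuals)
    mean = sum(residuals) / n
    c0 = sum((e - mean) ** 2 for e in residuals) / n
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((residuals[t] - mean) * (residuals[t - k] - mean)
                 for t in range(k, n)) / n
        q += (ck / c0) ** 2 / (n - k)
    return n * (n + 2) * q

# AR(1) errors with rho = 0.7 (simulated): DW drops well below 2 and the
# portmanteau Q becomes large, as both families of tests should detect.
random.seed(3)
e, errs = 0.0, []
for _ in range(500):
    e = 0.7 * e + random.gauss(0, 1)
    errs.append(e)
print(round(durbin_watson(errs), 2), round(ljung_box(errs, 5), 1))
```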

  1. Visual cortex combines a stimulus and an error-like signal with a proportion that is dependent on time, space, and stimulus contrast

    PubMed Central

    Eriksson, David; Wunderle, Thomas; Schmidt, Kerstin

    2012-01-01

    Even though the visual cortex is one of the most studied brain areas, the neuronal code in this area is still not fully understood. In the literature, two codes are commonly hypothesized, namely stimulus and predictive (error) codes. Here, we examined whether and how these two codes can coexist in a neuron. To this end, we assumed that neurons could predict a constant stimulus across time or space, since this is the most fundamental type of prediction. Prediction was examined in time using electrophysiology and voltage-sensitive dye imaging in the supragranular layers of area 18 of the anesthetized cat, and in space using a computer model. The distinction between stimulus and error codes was made by means of the orientation tuning of the recorded unit. The stimulus was constructed such that a maximum response to the non-preferred orientation indicated an error signal, and a maximum response to the preferred orientation indicated a stimulus signal. We demonstrate that a single neuron combines stimulus and error-like coding. In addition, we observed that the duration of the error coding varies as a function of stimulus contrast. For low contrast the error-like coding was prolonged by around 60–100%. Finally, the combination of stimulus and error leads to a suboptimal free energy in a recent predictive coding model. We therefore suggest a straightforward modification that can be applied to the free energy model and other predictive coding models. Combining stimulus and error might be advantageous because the stimulus code enables direct stimulus recognition that is free of assumptions, whereas the error code enables experience-dependent inference of ambiguous and non-salient stimuli. PMID:22539918

  2. Absolute phase retrieval for defocused fringe projection three-dimensional measurement

    NASA Astrophysics Data System (ADS)

    Zheng, Dongliang; Da, Feipeng

    2014-02-01

    The defocused fringe projection three-dimensional technique based on pulse-width modulation (PWM) can generate high-quality sinusoidal fringe patterns. It uses only slightly defocused binary structured patterns, which eliminates the gamma problem (i.e. the projector's nonlinear response), and the phase error can be significantly reduced. However, when the projector is defocused, it is difficult to retrieve the absolute phase from the wrapped phase. A recently proposed phase coding method is efficient for absolute phase retrieval, but the gamma problem makes it less reliable. In this paper, we use the PWM technique to generate fringe patterns for the phase coding method. The gamma problem of the projector can be eliminated, and the correct absolute phase can be retrieved. The proposed method uses only two grayscale values (0 and 255), which makes it suitable for real-time 3D shape measurement. Both simulation and experiment demonstrate the performance of the proposed method.
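    The idea that defocus turns a binary pattern into a near-sinusoid can be sketched by blurring a square wave and checking its harmonic content; this uses a plain 50% duty-cycle pattern rather than an actual PWM waveform (real PWM shapes the duty cycle to push residual harmonics even higher), and all sizes are illustrative:

    ```python
    import math

    period, width = 36, 720  # pixels per fringe period, pattern width

    # binary pattern: plain square wave (simplest stand-in for a PWM pattern)
    pattern = [255 if (i % period) < period // 2 else 0 for i in range(width)]

    # simulate projector defocus with a normalized Gaussian blur kernel
    sigma = 6.0
    radius = int(3 * sigma)
    kernel = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [v / total for v in kernel]
    blurred = [sum(kernel[j] * pattern[(i + j - radius) % width]
                   for j in range(len(kernel))) for i in range(width)]

    def harmonic_amp(signal, cycles):
        """Amplitude of the DFT component with the given number of cycles."""
        n = len(signal)
        re = sum(s * math.cos(2 * math.pi * cycles * k / n) for k, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * cycles * k / n) for k, s in enumerate(signal))
        return 2 * math.hypot(re, im) / n

    f1 = harmonic_amp(blurred, width // period)        # fundamental survives the blur
    f3 = harmonic_amp(blurred, 3 * (width // period))  # third harmonic is suppressed
    ```

    The Gaussian blur attenuates the third harmonic by orders of magnitude more than the fundamental, which is why the defocused binary pattern behaves like a clean sinusoid for phase retrieval.
    
    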

  3. The correction of vibration in frequency scanning interferometry based absolute distance measurement system for dynamic measurements

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu

    2015-10-01

    Absolute distance measurement systems are of significant interest in the field of metrology; they can improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an absolute distance measurement technique: it has high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration on the frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10^3 times larger than the change in optical path difference. In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency-stabilized laser, which runs parallel to the frequency scanning interferometry. The experiment has verified the effectiveness of this method.

  4. Time-resolved absolute measurements by electro-optic effect of giant electromagnetic pulses due to laser-plasma interaction in nanosecond regime.

    PubMed

    Consoli, F; De Angelis, R; Duvillaret, L; Andreoli, P L; Cipriani, M; Cristofari, G; Di Giorgio, G; Ingenito, F; Verona, C

    2016-01-01

    We describe the first electro-optical absolute measurements of electromagnetic pulses (EMPs) generated by laser-plasma interaction in the nanosecond regime. Laser intensities are inertial-confinement-fusion (ICF) relevant and the wavelength is 1054 nm. These are the first direct EMP amplitude measurements with the detector rather close to, and in direct view of, the plasma. A maximum field of 261 kV/m was measured, two orders of magnitude higher than previous measurements by conductive probes on nanosecond-regime lasers with much higher energy. The analysis of measurements and of particle-in-cell simulations indicates that signals match the emission of charged particles detected in the same experiment, and suggests that anisotropic particle emission from the target, X-ray photoionization, and charge implantation on surfaces directly exposed to plasma could be important EMP contributions. The significant information achieved on EMP features and sources is crucial for future laser-plasma acceleration and inertial-confinement-fusion facilities and for the use of EMPs as effective plasma diagnostics. It also opens the way to remarkable applications of laser-plasma interaction as an intense source of RF microwaves for studies on materials and devices, EMP radiation hardening, and electromagnetic compatibility. The demonstrated effectiveness of electric-field detection by the electro-optic effect in a laser-plasma context offers great potential for the characterization of laser-plasma interaction and of the generated terahertz radiation. PMID:27301704

  5. Time-resolved absolute measurements by electro-optic effect of giant electromagnetic pulses due to laser-plasma interaction in nanosecond regime

    PubMed Central

    Consoli, F.; De Angelis, R.; Duvillaret, L.; Andreoli, P. L.; Cipriani, M.; Cristofari, G.; Di Giorgio, G.; Ingenito, F.; Verona, C.

    2016-01-01

    We describe the first electro-optical absolute measurements of electromagnetic pulses (EMPs) generated by laser-plasma interaction in the nanosecond regime. Laser intensities are inertial-confinement-fusion (ICF) relevant and the wavelength is 1054 nm. These are the first direct EMP amplitude measurements with the detector rather close to, and in direct view of, the plasma. A maximum field of 261 kV/m was measured, two orders of magnitude higher than previous measurements by conductive probes on nanosecond-regime lasers with much higher energy. The analysis of measurements and of particle-in-cell simulations indicates that signals match the emission of charged particles detected in the same experiment, and suggests that anisotropic particle emission from the target, X-ray photoionization, and charge implantation on surfaces directly exposed to plasma could be important EMP contributions. The significant information achieved on EMP features and sources is crucial for future laser-plasma acceleration and inertial-confinement-fusion facilities and for the use of EMPs as effective plasma diagnostics. It also opens the way to remarkable applications of laser-plasma interaction as an intense source of RF microwaves for studies on materials and devices, EMP radiation hardening, and electromagnetic compatibility. The demonstrated effectiveness of electric-field detection by the electro-optic effect in a laser-plasma context offers great potential for the characterization of laser-plasma interaction and of the generated terahertz radiation. PMID:27301704

  6. Time-resolved absolute measurements by electro-optic effect of giant electromagnetic pulses due to laser-plasma interaction in nanosecond regime

    NASA Astrophysics Data System (ADS)

    Consoli, F.; de Angelis, R.; Duvillaret, L.; Andreoli, P. L.; Cipriani, M.; Cristofari, G.; di Giorgio, G.; Ingenito, F.; Verona, C.

    2016-06-01

    We describe the first electro-optical absolute measurements of electromagnetic pulses (EMPs) generated by laser-plasma interaction in the nanosecond regime. Laser intensities are inertial-confinement-fusion (ICF) relevant and the wavelength is 1054 nm. These are the first direct EMP amplitude measurements with the detector rather close to, and in direct view of, the plasma. A maximum field of 261 kV/m was measured, two orders of magnitude higher than previous measurements by conductive probes on nanosecond-regime lasers with much higher energy. The analysis of measurements and of particle-in-cell simulations indicates that signals match the emission of charged particles detected in the same experiment, and suggests that anisotropic particle emission from the target, X-ray photoionization, and charge implantation on surfaces directly exposed to plasma could be important EMP contributions. The significant information achieved on EMP features and sources is crucial for future laser-plasma acceleration and inertial-confinement-fusion facilities and for the use of EMPs as effective plasma diagnostics. It also opens the way to remarkable applications of laser-plasma interaction as an intense source of RF microwaves for studies on materials and devices, EMP radiation hardening, and electromagnetic compatibility. The demonstrated effectiveness of electric-field detection by the electro-optic effect in a laser-plasma context offers great potential for the characterization of laser-plasma interaction and of the generated terahertz radiation.

  7. The generalized STAR(1,1) modeling with time correlated errors to red-chili weekly prices of some traditional markets in Bandung, West Java

    NASA Astrophysics Data System (ADS)

    Nisa Fadlilah F., I.; Mukhaiyar, Utriweni; Fahmi, Fauzia

    2015-12-01

    Observations at a given location may be linearly influenced by previous observations at that location and at neighboring locations, a structure that can be analyzed with the generalized STAR(1,1) (GSTAR(1,1)) model. In this paper, secondary data of weekly red-chili prices at five main traditional markets in Bandung are used as a case study. The purpose of the GSTAR(1,1) model is to forecast the next red-chili prices at those markets. The model is identified by the sample space-time ACF and space-time PACF, and the model parameters are estimated by the least squares method. Theoretically, the assumption of independent errors simplifies the parameter estimation problem. In practice, however, that assumption is hard to satisfy, since the errors may be correlated with one another. In the red-chili price modeling, the process is considered to have time-correlated errors, i.e. a martingale difference process, rather than errors that follow a normal distribution. Here, we perform simulations to investigate the behavior of the error assumptions. Although some results show that the errors do not always follow a martingale difference process, this does not degrade the ability of the GSTAR(1,1) model to forecast the red-chili prices at those five markets.
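    A GSTAR(1,1) fit of the kind described (own lag plus a spatially weighted lag, estimated by least squares) can be sketched on simulated data; the weight matrix, parameter values, and noise level below are illustrative, not taken from the paper:

    ```python
    import random

    random.seed(1)
    N, T = 5, 300
    # row-normalized spatial weights: every other market is an equal-weight neighbor
    W = [[0.0 if i == j else 1.0 / (N - 1) for j in range(N)] for i in range(N)]
    phi10, phi11 = 0.5, 0.3  # "true" parameters used to simulate the data

    z = [[random.gauss(0, 1) for _ in range(N)]]
    for _ in range(T - 1):
        prev = z[-1]
        spatial = [sum(W[i][j] * prev[j] for j in range(N)) for i in range(N)]
        z.append([phi10 * prev[i] + phi11 * spatial[i] + random.gauss(0, 0.5)
                  for i in range(N)])

    # least squares: accumulate and solve the 2x2 normal equations for (phi10, phi11)
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(1, T):
        prev = z[t - 1]
        spatial = [sum(W[i][j] * prev[j] for j in range(N)) for i in range(N)]
        for i in range(N):
            x1, x2, y = prev[i], spatial[i], z[t][i]
            s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
            b1 += x1 * y; b2 += x2 * y
    det = s11 * s22 - s12 * s12
    est10 = (s22 * b1 - s12 * b2) / det  # estimate of the own-lag coefficient
    est11 = (s11 * b2 - s12 * b1) / det  # estimate of the spatial-lag coefficient
    ```

    With phi10 + phi11 < 1 the simulated process is stationary, and pooling all locations and times gives enough observations for the least squares estimates to land close to the simulated values.
    
    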

  8. Adjustment of wind-drift effect for real-time systematic error correction in radar rainfall data

    NASA Astrophysics Data System (ADS)

    Dai, Qiang; Han, Dawei; Zhuo, Lu; Huang, Jing; Islam, Tanvir; Zhang, Shuliang

    An effective bias correction procedure using gauge measurements is a significant step in radar data processing to reduce the systematic error in hydrological applications. In these bias correction methods, the spatial matching of precipitation patterns between radar and gauge networks is an important premise. However, the wind-drift effect on radar measurement induces an inconsistent spatial relationship between radar and gauge measurements, as the raindrops observed by radar do not fall vertically to the ground. Consequently, a rain gauge does not correspond to the radar pixel based on the projected location of the radar beam. In this study, we introduce an adjustment method to incorporate the wind-drift effect into a bias correction scheme. We first simulate the trajectory of raindrops in the air using downscaled three-dimensional wind data from the Weather Research and Forecasting (WRF) model and calculate the final location of raindrops on the ground. The displacement of rainfall is then estimated and a radar-gauge spatial relationship is reconstructed. Based on this, the local real-time biases of the bin-average radar data are estimated for 12 selected events. Then, the reference mean local gauge rainfall, mean local bias, and adjusted radar rainfall calculated with and without consideration of the wind-drift effect are compared for different events and locations. There are considerable differences among the three estimators, indicating that wind drift has a considerable impact on real-time radar bias correction. Based on these facts, we suggest that bias correction schemes based on the spatial correlation between radar and gauge measurements should account for the wind-drift effect, and the proposed adjustment method is a promising solution to achieve this.
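    The core of the adjustment, integrating horizontal wind over the raindrop's fall time, can be sketched with a layered wind profile; the Atlas-type fall-speed fit and the wind values are illustrative stand-ins for the WRF-derived winds used in the study:

    ```python
    import math

    def fall_speed(diameter_mm):
        """Raindrop terminal fall speed in m/s, Atlas-type empirical fit (illustrative)."""
        return 9.65 - 10.3 * math.exp(-0.6 * diameter_mm)

    def drift(beam_height_m, diameter_mm, layer_winds_m_s):
        """Horizontal displacement of a drop falling from the radar beam height
        through equally thick layers, each with its own horizontal wind speed."""
        v = fall_speed(diameter_mm)
        thickness = beam_height_m / len(layer_winds_m_s)
        # time spent in each layer times that layer's wind, summed over layers
        return sum(u * thickness / v for u in layer_winds_m_s)

    winds = [15.0, 12.0, 8.0, 5.0]  # m/s from beam height down to the ground
    small_drop = drift(2000.0, 0.5, winds)  # slow-falling drop drifts farther
    large_drop = drift(2000.0, 3.0, winds)  # fast-falling drop drifts less
    ```

    Small drops spend far longer in the air and can land kilometers away from the radar pixel above them, which is why the gauge-to-pixel matching has to be reconstructed before bias correction.
    
    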

  9. Analysis of positron annihilation lifetime data by numerical Laplace inversion: Corrections for source terms and zero-time shift errors

    NASA Astrophysics Data System (ADS)

    Gregory, Roger B.

    1991-05-01

    We have recently described modifications to the program CONTIN [S.W. Provencher, Comput. Phys. Commun. 27 (1982) 229] for the solution of Fredholm integral equations with convoluted kernels of the type that occur in the analysis of positron annihilation lifetime data [R.B. Gregory and Yongkang Zhu, Nucl. Instr. and Meth. A290 (1990) 172]. In this article, modifications to the program to correct for source terms in the sample and reference decay curves and for shifts in the position of the zero-time channel of the sample and reference data are described. Unwanted source components, expressed as a discrete sum of exponentials, may be removed from both the sample and reference data by modification of the sample data alone, without the need for direct knowledge of the instrument resolution function. Shifts in the position of the zero-time channel of up to half the channel width of the multichannel analyzer can be corrected. Analyses of computer-simulated test data indicate that the quality of the reconstructed annihilation rate probability density functions is improved by employing a reference material with a short lifetime, and indicate that reference materials which generate free positrons by quenching positronium formation (i.e. strong oxidizing agents) have lifetimes that are too long (400-450 ps) to provide reliable estimates of the lifetime parameters for the short-lived components with the methods described here. Well-annealed single crystals of metals with lifetimes less than 200 ps, such as molybdenum (123 ps) and aluminum (166 ps), do not introduce significant errors in estimates of the lifetime parameters and are to be preferred as reference materials. The performance of our modified version of CONTIN is illustrated by application to positron annihilation in polytetrafluoroethylene.

  10. Simulations for Full Unit-memory and Partial Unit-memory Convolutional Codes with Real-time Minimal-byte-error Probability Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Vo, Q. D.

    1984-01-01

    A program which was written to simulate Real-Time Minimal-Byte-Error Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10^-6 for corresponding concatenated systems. A (6,6/30) FUM code / 6-bit Reed-Solomon code combination was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. A simulation program was also written to simulate the symbol-error probability of these codes.
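    A Monte Carlo BER simulation of the same flavor (BPSK over AWGN with coarse receiver quantization) can be sketched as follows; the 3-bit quantizer step and bit counts are illustrative, and no convolutional decoding is modeled:

    ```python
    import math
    import random

    def simulate_ber(ebn0_db, n_bits, seed=42):
        """Monte Carlo BER of BPSK on an AWGN channel with a 3-bit uniform quantizer."""
        random.seed(seed)
        # noise std for unit-energy symbols: sigma^2 = N0/2 = 1 / (2 Eb/N0)
        sigma = math.sqrt(1.0 / (2.0 * 10 ** (ebn0_db / 10)))
        errors = 0
        for _ in range(n_bits):
            bit = random.getrandbits(1)
            rx = (1.0 if bit else -1.0) + random.gauss(0, sigma)
            # clip to the 8 levels of a 3-bit quantizer with step 0.5
            q = max(-4, min(3, int(math.floor(rx / 0.5))))
            errors += ((1 if q >= 0 else 0) != bit)
        return errors / n_bits

    ber_0db = simulate_ber(0.0, 20000)  # around the theoretical Q(sqrt(2)) ~ 0.079
    ber_6db = simulate_ber(6.0, 20000)  # far fewer errors at higher Eb/N0
    ```

    Sweeping `ebn0_db` and reading off where the simulated BER crosses the target is exactly the kind of SNR-requirement search the abstract describes, though a real study would add the code and decoder in the loop.
    
    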

  11. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.

  12. Dose error from deviation of dwell time and source position for high dose-rate 192Ir in remote afterloading system

    PubMed Central

    Okamoto, Hiroyuki; Aikawa, Ako; Wakita, Akihisa; Yoshio, Kotaro; Murakami, Naoya; Nakamura, Satoshi; Hamada, Minoru; Abe, Yoshihisa; Itami, Jun

    2014-01-01

    The influence of deviations in dwell times and source positions for 192Ir HDR-RALS was investigated. The potential dose errors for various kinds of brachytherapy procedures were evaluated. The deviations of dwell time ΔT of a 192Ir HDR source for various dwell times were measured with a well-type ionization chamber. The deviations of source position ΔP were measured with two methods. One is to measure the actual source position using a check ruler device. The other is to analyze peak distances from radiographic film irradiated with a 20 mm gap between the dwell positions. The composite dose errors were calculated using a Gaussian distribution with ΔT and ΔP as 1σ of the measurements. Dose errors depend on the dwell time and the distance from the point of interest to the dwell position. To evaluate the dose error in clinical practice, dwell times and point-of-interest distances were obtained from actual treatment plans involving cylinder, tandem-ovoid, tandem-ovoid with interstitial needles, multiple interstitial needles, and surface-mold applicators. The ΔT and ΔP were 32 ms (maximum over the various dwell times) and 0.12 mm (ruler) or 0.11 mm (radiographic film). The multiple interstitial needles show the highest dose error, 2%, while the others show less than approximately 1%. The potential dose error due to dwell time and source position deviations can depend on the brachytherapy technique. In all cases, the multiple-interstitial-needle technique is the most susceptible. PMID:24566719
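    The composite-error idea, Gaussian perturbations of dwell time and source position propagated through an inverse-square dose model, can be sketched as a Monte Carlo; the dwell times and distances below are illustrative, while the 32 ms and 0.12 mm sigmas follow the measurements quoted above:

    ```python
    import math
    import random

    def dose(dwell_s, dist_mm):
        """Point-source inverse-square dose approximation (arbitrary units)."""
        return dwell_s / dist_mm ** 2

    def dose_error_sigma(dwell_s, dist_mm, sigma_t=0.032, sigma_p=0.12,
                         n=20000, seed=7):
        """1-sigma relative dose error from Gaussian dwell-time (s) and source
        position (mm) deviations."""
        random.seed(seed)
        nominal = dose(dwell_s, dist_mm)
        errs = [(dose(dwell_s + random.gauss(0, sigma_t),
                      dist_mm + random.gauss(0, sigma_p)) - nominal) / nominal
                for _ in range(n)]
        mean = sum(errs) / n
        return math.sqrt(sum((e - mean) ** 2 for e in errs) / n)

    # short dwell times close to the source (as with interstitial needles) are
    # the most susceptible; longer dwells farther away are much less sensitive
    near_short = dose_error_sigma(dwell_s=1.0, dist_mm=5.0)
    far_long = dose_error_sigma(dwell_s=10.0, dist_mm=20.0)
    ```

    To first order the relative error is sqrt((sigma_t/T)^2 + (2*sigma_p/d)^2), so both a short dwell time T and a small distance d inflate it, matching the finding that multiple interstitial needles are the worst case.
    
    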

  13. Electronic Absolute Cartesian Autocollimator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2006-01-01

    An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. The direction of the displacement is perpendicular to the axis about which the
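    The displacement-to-angle relation at the heart of any autocollimator can be sketched numerically; the focal length and image shift below are illustrative (reflection doubles the beam deviation, hence the factor of two):

    ```python
    import math

    def mirror_tilt_arcsec(image_shift_um, focal_length_mm):
        """Angular deviation of the datum mirror from the lateral shift of the
        target image: reflection doubles the beam deviation, so shift = 2*f*theta."""
        theta_rad = (image_shift_um * 1e-6) / (2.0 * focal_length_mm * 1e-3)
        return math.degrees(theta_rad) * 3600.0

    # a 10-micron image shift measured with a 500 mm collimating lens
    tilt = mirror_tilt_arcsec(10.0, 500.0)  # roughly 2 arcseconds
    ```

    The absolute variant described above measures the image position directly on a detector rather than inferring it from analog signal levels, but the small-angle geometry is the same.
    
    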

  14. Absolute magnitudes of trans-neptunian objects

    NASA Astrophysics Data System (ADS)

    Duffard, R.; Alvarez-candal, A.; Pinilla-Alonso, N.; Ortiz, J. L.; Morales, N.; Santos-Sanz, P.; Thirouin, A.

    2015-10-01

    Accurate measurements of diameters of trans-Neptunian objects are extremely complicated to obtain. Radiometric techniques applied to thermal measurements can provide good results, but precise absolute magnitudes are needed to constrain diameters and albedos. Our objective is to measure accurate absolute magnitudes for a sample of trans-Neptunian objects, many of which have been observed, and modelled, by the "TNOs are cool" team, one of the Herschel Space Observatory key projects granted ~400 hours of observing time. We observed 56 objects in the V and R filters, where possible. These data, along with data available in the literature, were used to obtain phase curves and to measure absolute magnitudes by assuming a linear trend of the phase curves and accounting for magnitude variability due to the rotational light-curve. In total we obtained 234 new magnitudes for the 56 objects, 6 of them with no previously reported measurements. Including the data from the literature we report a total of 109 absolute magnitudes.
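    The reduction to absolute magnitude, converting apparent magnitudes to unit heliocentric and geocentric distances and fitting a linear phase curve, can be sketched as follows; the observation triplets are hypothetical:

    ```python
    import math

    # hypothetical observations: (apparent V mag, heliocentric r in AU,
    # geocentric Delta in AU, phase angle alpha in degrees)
    obs = [(23.10, 42.5, 41.6, 0.4),
           (23.18, 42.5, 41.8, 0.9),
           (23.25, 42.5, 42.1, 1.3)]

    # reduce to unit distances, then fit m_red(alpha) = H + beta * alpha
    pts = [(alpha, m - 5.0 * math.log10(r * delta)) for m, r, delta, alpha in obs]
    n = len(pts)
    sx = sum(a for a, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(a * a for a, _ in pts)
    sxy = sum(a * y for a, y in pts)
    beta = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # phase coefficient, mag/deg
    H = (sy - beta * sx) / n  # absolute magnitude: reduced magnitude at zero phase
    ```

    In practice the fit must also absorb rotational light-curve variability, which is why the paper folds the known amplitude into the magnitude uncertainties rather than fitting two points per night at face value.
    
    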

  15. ABSOLUTE POLARIMETRY AT RHIC.

    SciTech Connect

    OKADA; BRAVAR, A.; BUNCE, G.; GILL, R.; HUANG, H.; MAKDISI, Y.; NASS, A.; WOOD, J.; ZELENSKI, Z.; ET AL.

    2007-09-10

    Precise and absolute beam polarization measurements are critical for the RHIC spin physics program. Because all experimental spin-dependent results are normalized by beam polarization, the normalization uncertainty contributes directly to final physics uncertainties. We aimed to perform the beam polarization measurement to an accuracy of ΔP_beam/P_beam < 5%. The absolute polarimeter consists of a Polarized Atomic Hydrogen Gas Jet Target and left-right pairs of silicon strip detectors and was installed in the RHIC ring in 2004. This system features proton-proton elastic scattering in the Coulomb-nuclear interference (CNI) region. Precise measurement of the analyzing power A_N of this process has allowed us to achieve ΔP_beam/P_beam = 4.2% in 2005 for the first long spin-physics run. In this report, we describe the entire setup and performance of the system. The procedure of beam polarization measurement and analysis results from 2004-2005 are described. Physics topics of A_N in the CNI region (four-momentum transfer squared 0.001 < -t < 0.032 (GeV/c)^2) are also discussed. We point out the current issues and the expected optimum accuracy in 2006 and the future.

  16. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  17. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of performance versus error level for cases with multiple seeds illustrates the variation attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
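    The seed-dependence point, that performance at a given error level scatters across random seeds, can be sketched with a toy model; this is not the FELEX physics, just an illustration of the "lottery" over seeds:

    ```python
    import math
    import random

    def relative_gain(error_level, seed, n_periods=100):
        """Toy seed-lottery model: gain falls off with the RMS trajectory wander
        produced by random field kicks (illustrative, not the FELEX physics)."""
        random.seed(seed)
        x = v = wander = 0.0
        for _ in range(n_periods):
            v += random.gauss(0, error_level)  # one random kick per period
            x += v                             # trajectory integrates the kicks
            wander += x * x
        return math.exp(-math.sqrt(wander / n_periods))

    # evaluate several seeds at each error level, as in the multi-seed displays
    gains = {lvl: [relative_gain(lvl, seed) for seed in range(10)]
             for lvl in (0.0, 0.001, 0.002, 0.005)}
    spread = {lvl: max(g) - min(g) for lvl, g in gains.items()}  # seed scatter
    ```

    The mean gain falls with error level while the seed-to-seed spread quantifies how much any single simulation is a draw from a distribution, which is the reason for displaying multiple seeds per error level when setting a tolerance budget.
    
    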

  18. Comparison of haptic guidance and error amplification robotic trainings for the learning of a timing-based motor task by healthy seniors

    PubMed Central

    Bouchard, Amy E.; Corriveau, Hélène; Milot, Marie-Hélène

    2015-01-01

    With age, a decline in the temporal aspect of movement is observed, such as a longer movement execution time and decreased timing accuracy. Robotic training can represent an interesting approach to help improve movement timing among the elderly. Two types of robotic training, haptic guidance (HG; demonstrating the correct movement for better movement planning and improved execution) and error amplification (EA; exaggerating movement errors for faster and more complete learning), have been used successfully in young healthy subjects to boost timing accuracy. For healthy seniors, only HG training has been used so far, with significant and positive timing gains obtained. The goal of this study was to evaluate and compare the impact of HG and EA robotic training on the improvement of seniors' movement timing. Thirty-two healthy seniors (mean age 68 ± 4 years) learned to play a pinball-like game by triggering a one-degree-of-freedom hand robot at the proper time to make a flipper move and direct a falling ball toward a randomly positioned target. During HG and EA robotic training, the subjects' timing errors were decreased and increased, respectively, based on the subjects' timing errors in initiating a movement. Results showed that only HG training benefited learning, but the improvement did not generalize to untrained targets. Also, age had no influence on the efficacy of HG robotic training, meaning that the oldest subjects did not benefit more from HG training than the younger senior subjects. Using HG to teach the correct timing of movement seems to be a good strategy to improve motor learning for the elderly, as for younger people. However, more studies are needed to assess the long-term impact of HG robotic training on improvement in movement timing. PMID:25873868

  19. Absolute phase-assisted three-dimensional data registration for a dual-camera structured light system

    SciTech Connect

    Zhang Song; Yau Shingtung

    2008-06-10

    For a three-dimensional shape measurement system with a single projector and multiple cameras, registering patches from different cameras is crucial. Registration usually involves a complicated and time-consuming procedure. We propose a new method that can robustly match different patches via absolute phase without significantly increasing its cost. For y and z coordinates, the transformations from one camera to the other are approximated as third-order polynomial functions of the absolute phase. The x coordinates involve only translations and scalings. These functions are calibrated and only need to be determined once. Experiments demonstrated that the alignment error is within RMS 0.7 mm.
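    The registration step, calibrating third-order polynomial functions of the absolute phase, can be sketched with a least-squares fit; the calibration pairs below are synthetic, generated from a known cubic so the fit can be checked:

    ```python
    def solve(A, b):
        """Gauss-Jordan solve of A x = b with partial pivoting (A is n x n)."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(n):
                if r != c:
                    f = M[r][c] / M[c][c]
                    M[r] = [v - f * w for v, w in zip(M[r], M[c])]
        return [M[i][n] / M[i][i] for i in range(n)]

    # hypothetical calibration pairs (absolute phase -> y-coordinate), generated
    # from a known cubic so that the recovered coefficients can be verified
    true_coeffs = [1.2, 0.8, -0.05, 0.002]  # y = c0 + c1*p + c2*p^2 + c3*p^3
    phases = [0.5, 2.0, 4.0, 6.0, 8.0, 10.0]
    ys = [sum(c * p ** k for k, c in enumerate(true_coeffs)) for p in phases]

    # third-order least-squares fit via the normal equations (V^T V) c = V^T y
    V = [[p ** k for k in range(4)] for p in phases]
    VtV = [[sum(row[i] * row[j] for row in V) for j in range(4)] for i in range(4)]
    Vty = [sum(row[i] * y for row, y in zip(V, ys)) for i in range(4)]
    coeffs = solve(VtV, Vty)
    ```

    Once calibrated, evaluating the cubic at each pixel's absolute phase maps one camera's patch into the other's frame without any per-measurement search, which is what keeps the registration cheap.
    
    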

  20. Absolute Equilibrium Entropy

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1997-01-01

    The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.

  1. CONTROL OF ANTIGEN MASS TRANSFER VIA CAPTURE SUBSTRATE ROTATION: AN ABSOLUTE METHOD FOR THE DETERMINATION OF VIRAL PATHOGEN CONCENTRATION AND REDUCTION OF HETEROGENEOUS IMMUNOASSAY INCUBATION TIMES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Immunosorbent assays are commonly employed as diagnostic tests in human healthcare and veterinary medicine and are strongly relevant to the methodologies for bioterrorism detection. However, immunoassays often require long incubation times, limiting sample throughput. As an approach to overcome this...

  2. Stimulus probability effects in absolute identification.

    PubMed

    Kent, Christopher; Lamberts, Koen

    2016-05-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response times. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response time data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in absolute identification. Implications for other theories of absolute identification are discussed. (PsycINFO Database Record) PMID:26478959

  3. Using the Guttman Scale to Define and Estimate Measurement Error in Items over Time: The Case of Cognitive Decline and the Meaning of “Points Lost”

    PubMed Central

    Tractenberg, Rochelle E.; Yumoto, Futoshi; Aisen, Paul S.; Kaye, Jeffrey A.; Mislevy, Robert J.

    2012-01-01

    We used a Guttman model to represent responses to test items over time as an approximation of what is often referred to as “points lost” in studies of cognitive decline or interventions. To capture this meaning of “point loss”, over four successive assessments, we assumed that once an item is incorrect, it cannot be correct at a later visit. If the loss of a point represents actual decline, then failure of an item to fit the Guttman model over time can be considered measurement error. This representation and definition of measurement error also permits testing the hypotheses that measurement error is constant for items in a test, and that error is independent of “true score”, which are two key consequences of the definition of “measurement error”, and thereby reliability, under Classical Test Theory. We tested the hypotheses by fitting our model to, and comparing our results from, four consecutive annual evaluations in three groups of elderly persons: a) cognitively normal (NC, N = 149); b) diagnosed with possible or probable AD (N = 78); and c) cognitively normal initially and a later diagnosis of AD (converters, N = 133). Of 16 items that converged, error-free measurement of “cognitive loss” was observed for 10 items in NC, eight in converters, and two in AD. We found that measurement error, as we defined it, was inconsistent over time and across cognitive functioning levels, violating the theory underlying reliability and other psychometric characteristics, and key regression assumptions. PMID:22363411
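The Guttman constraint described above, that once an item is missed it can never be correct at a later visit, can be checked mechanically. A minimal sketch (illustrative only, not the authors' code; the `guttman_errors` helper and its inputs are hypothetical):

```python
# Check the Guttman (monotone-decline) constraint for one item across visits:
# once a response is incorrect (0), it must never return to correct (1).
# Any 0 -> 1 transition is counted as measurement error under this model.

def guttman_errors(responses):
    """responses: sequence of 0/1 item scores over successive visits."""
    errors = 0
    seen_incorrect = False
    for r in responses:
        if seen_incorrect and r == 1:
            errors += 1          # recovery after a miss violates the model
        if r == 0:
            seen_incorrect = True
    return errors

print(guttman_errors([1, 1, 0, 0]))  # 0: fits the Guttman model
print(guttman_errors([1, 0, 1, 0]))  # 1: one 0 -> 1 transition = error
```

Under this definition, the count of violating transitions per item is what the abstract treats as measurement error for that item.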

  4. New definitions of pointing stability - ac and dc effects. [constant and time-dependent pointing error effects on image sensor performance

    NASA Technical Reports Server (NTRS)

    Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.

    1992-01-01

    For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.

  5. The efficacy of safety barriers for children: absolute efficacy, time to cross and action modes in children between 19 and 75 months.

    PubMed

    Cordovil, R; Barreiros, J; Vieira, F; Neto, C

    2009-09-01

    We examined the efficacy of safety barriers by testing their capabilities to prevent or delay crossing. Children between 19 and 75 months tried to climb different barriers selected for their age group, which represented the most common types of panel and horizontal bars barriers available on the market. Success or failure in crossing, time to cross and crossing techniques were analysed. Barrier characteristics' influenced its restraining efficacy. Children's success rate varied between 10% and 95.3%. None of the barriers assured a considerable protective delay. Three major action modes were identified: head over waist (HOW), head and waist (HAW) and head under waist (HUW). Generally, children adopted the safer action mode, HOW, to cross most barriers. Younger children often adopted unstable action mode in barriers with crossable gaps. Although some standards might need to be re-evaluated, there are no childproof barriers. Barriers are time-delaying devices that cannot substitute supervision and education. PMID:19941212

  6. The relationship between Monte Carlo estimators of heterogeneity and error for daily to monthly time steps in a small Minnesota precipitation gauge network

    NASA Astrophysics Data System (ADS)

    Wright, Michael; Ferreira, Celso; Houck, Mark; Giovannettone, Jason

    2015-07-01

    Precipitation quantile estimates are used in engineering, agriculture, and a variety of other disciplines. Index flood regional frequency methods pool normalized gauge data in the case of homogeneity among the constituent gauges of the region. Unitless regional quantile estimates are produced and rescaled at each gauge. Because violation of the homogeneity hypothesis is a major component of quantile estimation error in regional frequency analysis, heterogeneity estimators should be "reasonable proxies" of the error of quantile estimation. In this study, three Monte Carlo heterogeneity statistics tested in Hosking and Wallis (1997) are plotted against Monte Carlo estimates of quantile error for all five-or-more-gauge regionalizations in a 12 gauge network in the Twin Cities region of Minnesota. Upper-tail quantiles with nonexceedance probabilities of 0.75 and above are examined at time steps ranging from daily to monthly. A linear relationship between heterogeneity and error estimates is found and quantified using Pearson's r score. Two of Hosking and Wallis's (1997) heterogeneity measures, incorporating the coefficient of variation in one case and additionally the skewness in the other, are found to be reasonable proxies for quantile error at the L-moment ratio values characterizing these data. This result, in addition to confirming the utility of a commonly used coefficient of variation-based heterogeneity statistic, provides evidence for the utility of a heterogeneity measure that incorporates skewness information.
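The study's core quantitative step, scoring how well a heterogeneity statistic tracks quantile error with Pearson's r, is simple to sketch. The paired values below are hypothetical stand-ins for the Monte Carlo estimates, not the study's data:

```python
import numpy as np

# Hypothetical (heterogeneity statistic, quantile-error) pairs for several
# candidate regionalizations; the real study derives both via Monte Carlo.
H = np.array([0.5, 1.2, 2.1, 2.9, 3.8, 4.6])          # heterogeneity measure
err = np.array([0.04, 0.06, 0.09, 0.11, 0.15, 0.17])  # RMS quantile error

r = np.corrcoef(H, err)[0, 1]   # Pearson's r between the two estimators
print(round(r, 3))              # close to 1 for a near-linear relationship
```

A heterogeneity measure with r near 1 against quantile error is, in the abstract's terms, a "reasonable proxy" for that error.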

  7. Absolute timing of sulfide and gold mineralization: A comparison of Re-Os molybdenite and Ar-Ar mica methods from the Tintina Gold Belt, Alaska

    USGS Publications Warehouse

    Selby, D.; Creaser, R.A.; Hart, C.J.R.; Rombach, C.S.; Thompson, J.F.H.; Smith, M.T.; Bakke, A.A.; Goldfarb, R.J.

    2002-01-01

    New Re-Os molybdenite dates from two lode gold deposits of the Tintina Gold Belt, Alaska, provide direct timing constraints for sulfide and gold mineralization. At Fort Knox, the Re-Os molybdenite date is identical to the U-Pb zircon age for the host intrusion, supporting an intrusive-related origin for the deposit. However, 40Ar/39Ar dates from hydrothermal and igneous mica are considerably younger. At the Pogo deposit, Re-Os molybdenite dates are also much older than 40Ar/39Ar dates from hydrothermal mica, but dissimilar to the age of local granites. These age relationships indicate that the Re-Os molybdenite method records the timing of sulfide and gold mineralization, whereas much younger 40Ar/39Ar dates are affected by post-ore thermal events, slow cooling, and/or systemic analytical effects. The results of this study complement a growing body of evidence to indicate that the Re-Os chronometer in molybdenite can be an accurate and robust tool for establishing timing relations in ore systems.

  8. Spatially resolved absolute spectrophotometry of Saturn - 3390 to 8080 A

    NASA Technical Reports Server (NTRS)

    Bergstralh, J. T.; Diner, D. J.; Baines, K. H.; Neff, J. S.; Allen, M. A.; Orton, G. S.

    1981-01-01

    A series of spatially resolved absolute spectrophotometric measurements of Saturn was conducted for the express purpose of calibrating the data obtained with the Imaging Photopolarimeter (IPP) on Pioneer 11 during its recent encounter with Saturn. All observations reported were made at the Mt. Wilson 1.5-m telescope, using a 1-m Ebert-Fastie scanning spectrometer. Spatial resolution was 1.92 arcsec. Photometric errors are considered, taking into account the fixed error, the variable error, and the composite error. The results are compared with earlier observations, as well as with synthetic spectra derived from preliminary physical models, giving attention to the equatorial region and the South Temperate Zone.

  9. Absolute neutrino mass measurements

    NASA Astrophysics Data System (ADS)

    Wolf, Joachim

    2011-10-01

    The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass of 2.2eV have been set by two experiments in Mainz and Troitsk, using tritium as beta emitter. The next generation tritium β-experiment KATRIN is currently under construction in Karlsruhe/Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2eV. The investigation of a second isotope (187Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.

  10. Absolute neutrino mass measurements

    SciTech Connect

    Wolf, Joachim

    2011-10-06

    The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass of 2.2eV have been set by two experiments in Mainz and Troitsk, using tritium as beta emitter. The next generation tritium β-experiment KATRIN is currently under construction in Karlsruhe/Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2eV. The investigation of a second isotope (187Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.

  11. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    2015-12-01

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
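The patent's idea of "spreading the fission chain distribution in time" can be sketched as a simulation: draw chain sizes from an assumed multiplicity distribution and give each neutron in a chain an exponential die-away delay. Everything below (the multiplicity table, die-away constant, and rate) is an illustrative assumption, not the patented analytic distributions:

```python
import random

random.seed(1)

# Hypothetical sketch: sample a chain size from an assumed multiplicity
# distribution, then spread that chain's neutron counts in time with an
# exponential die-away delay after the chain's start time.
chain_size_pmf = {1: 0.5, 2: 0.3, 3: 0.15, 4: 0.05}  # assumed, illustrative
DIE_AWAY = 50e-6      # 50 microsecond detector die-away time (assumed)
RATE = 1000.0         # fission chain starts per second (assumed)

def simulate_events(duration):
    """Return sorted neutron detection times over `duration` seconds."""
    events, t = [], 0.0
    while True:
        t += random.expovariate(RATE)            # Poisson chain starts
        if t > duration:
            break
        size = random.choices(list(chain_size_pmf),
                              weights=list(chain_size_pmf.values()))[0]
        events.extend(t + random.expovariate(1 / DIE_AWAY)
                      for _ in range(size))
    return sorted(events)

times = simulate_events(0.1)
print(len(times), "events in 0.1 s")
```

The resulting event-time sequence is the kind of time-evolving count stream that the assay model would then be compared against.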

  12. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  13. An absolute radius scale for Saturn's rings

    NASA Technical Reports Server (NTRS)

    Nicholson, Philip D.; Cooke, Maren L.; Pelton, Emily

    1990-01-01

    Radio and stellar occultation observations of Saturn's rings made by the Voyager spacecraft are discussed. The data reveal systematic discrepancies of almost 10 km in some parts of the rings, limiting some of the investigations. A revised solution for Saturn's rotation pole has been proposed which removes the discrepancies between the stellar and radio occultation profiles. Corrections to previously published radii vary from -2 to -10 km for the radio occultation, and +5 to -6 km for the stellar occultation. An examination of spiral density waves in the outer A Ring indicates that the revised absolute radii are in error by no more than 2 km.

  14. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
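The abstract's key quantity is the area under the displacement-versus-distance curve. Extracting that area from sampled data is a one-line trapezoidal integration; the distances and displacements below are hypothetical:

```python
# Trapezoidal integration of sample displacement vs. magnet distance,
# the area the Faraday-method susceptibility depends on (data invented
# for illustration).
def trapezoid_area(x, y):
    return sum((y[i] + y[i + 1]) / 2 * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

distance = [0.0, 1.0, 2.0, 3.0, 4.0]          # magnet-sample distance (cm)
displacement = [0.80, 0.45, 0.20, 0.05, 0.0]  # sample displacement (mm)
print(trapezoid_area(distance, displacement))
```

Because the susceptibility follows from this area alone, no reference sample of known susceptibility is needed, which is what makes the method absolute.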

  15. Color-coded prefilled medication syringes decrease time to delivery and dosing errors in simulated prehospital pediatric resuscitations: A randomized crossover trial

    PubMed Central

    Stevens, Allen D.; Hernandez, Caleb; Jones, Seth; Moreira, Maria E.; Blumen, Jason R.; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S.

    2016-01-01

    Background Medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. Methods We performed a prospective, block-randomized, cross-over study, where 10 full-time paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded-syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Results Median time to delivery of all doses for the intervention and control groups was 34 (95% CI: 28–39) seconds and 42 (95% CI: 36–51) seconds, respectively (difference = 9 [95% CI: 4–14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing errors; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing errors (difference = 39%, 95% CI: 13–61%). Conclusions A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by paramedics during simulated prehospital pediatric resuscitations. PMID:26247145
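The headline comparison above reduces to a difference in proportions; a quick arithmetic check of the reported figures (confidence intervals omitted for brevity):

```python
# Critical dosing errors per doses administered, from the abstract.
control_errors, control_doses = 24, 62   # conventional ampoules
interv_errors, interv_doses = 0, 59      # color-coded prefilled syringes

p_control = control_errors / control_doses
p_interv = interv_errors / interv_doses

print(round(100 * p_control))                 # 39 (% errors, control)
print(round(100 * (p_control - p_interv)))    # 39 (percentage-point difference)
```

This reproduces the 39% error rate and 39-percentage-point difference quoted in the results.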

  16. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required

  17. Real-time RT-PCR for detection, identification and absolute quantification of viral haemorrhagic septicaemia virus using different types of standards.

    PubMed

    Lopez-Vazquez, C; Bandín, I; Dopazo, C P

    2015-05-21

    In the present study, 2 systems of real-time RT-PCR, one based on SYBR Green and the other on TaqMan, were designed to detect strains from any genotype of viral haemorrhagic septicaemia virus (VHSV), with high sensitivity and repeatability/reproducibility. In addition, the method was optimized for quantitative purposes (qRT-PCR), and standard curves with different types of reference templates were constructed and compared. Specificity was tested against 26 isolates from 4 genotypes. The sensitivity of the procedures was first tested against cell culture isolation, obtaining a limit of detection (LD) of 100 TCID50 ml-1 (100-fold below the LD using cell culture), at a threshold cycle value (Ct) of 36. Sensitivity was also evaluated using RNA from crude (LD = 1 fg; 160 genome copies) and purified virus (100 ag; 16 copies), plasmid DNA (2 copies) and RNA transcript (15 copies). No differences between both chemistries were observed in sensitivity and dynamic range. To evaluate repeatability and reproducibility, all experiments were performed in triplicate and on 3 different days, by workers with different levels of experience, obtaining Ct values with coefficients of variation always <5. This fact, together with the high efficiency and R2 values of the standard curves, encouraged us to analyse the reliability of the method for viral quantification. The results not only demonstrated that the procedure can be used for detection, identification and quantification of this virus, but also demonstrated a clear correlation between the regression lines obtained with different standards, which will help scientists to compare sensitivity results between different studies. PMID:25993885
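Absolute quantification from a standard curve, as used above, rests on Ct being linear in the log of the template copy number: Ct = slope·log10(N) + intercept. A minimal sketch with an assumed, hypothetical fit (a slope near -3.32 corresponds to roughly 100% amplification efficiency):

```python
# Standard-curve quantification for qRT-PCR.  The slope and intercept are
# hypothetical values standing in for a fit to serial dilutions of a
# reference template.
slope, intercept = -3.32, 40.0   # assumed standard-curve fit

def copies_from_ct(ct):
    """Invert Ct = slope*log10(N) + intercept to get copy number N."""
    return 10 ** ((ct - intercept) / slope)

# Amplification efficiency implied by the slope: E = 10^(-1/slope) - 1.
efficiency = 10 ** (-1 / slope) - 1

print(round(copies_from_ct(33.36), 1))   # ~100 copies at Ct 33.36
print(round(efficiency, 3))              # ~1.0, i.e. ~100% efficiency
```

Comparing regression lines fitted to different standard types (plasmid DNA, cRNA transcript, purified virus) is what lets the authors relate copy-number estimates across studies.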

  18. Real-time high-resolution PC-based system for measurement of errors on compact disks

    NASA Astrophysics Data System (ADS)

    Tehranchi, Babak; Howe, Dennis G.

    1994-10-01

    Hardware and software utilities are developed to directly monitor the Eight-to-Fourteen Modulation (EFM) demodulated data bytes at the input of a CD player's Cross-Interleaved Reed-Solomon Code (CIRC) block decoder. The hardware is capable of identifying erroneous data with single-byte resolution in the serial data stream read from a Compact Disc by a CDD 461 Philips CD-ROM drive. In addition, the system produces graphical maps that show the physical location of the measured errors on the entire disc, or via a zooming and panning feature, on user selectable local disc regions.

  19. Closed-loop step motor control using absolute encoders

    SciTech Connect

    Hicks, J.S.; Wright, M.C.

    1997-08-01

    A multi-axis, step motor control system was developed to accurately position and control the operation of a triple axis spectrometer at the High Flux Isotope Reactor (HFIR) located at Oak Ridge National Laboratory. Triple axis spectrometers are used in neutron scattering and diffraction experiments and require highly accurate positioning. This motion control system can handle up to 16 axes of motion. Four of these axes are outfitted with 17-bit absolute encoders. These four axes are controlled with a software feedback loop that terminates the move based on real-time position information from the absolute encoders. Because the final position of the actuator is used to stop the motion of the step motors, the moves can be made accurately in spite of the large amount of mechanical backlash from a chain drive between the motors and the spectrometer arms. A modified trapezoidal profile, custom C software, and an industrial PC, were used to achieve a positioning accuracy of 0.00275 degrees of rotation. A form of active position maintenance ensures that the angles are maintained with zero error or drift.
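The software feedback loop described above, which steps the motor until the absolute encoder (not the motor's step count) reports the target angle, is what defeats the chain-drive backlash. A toy sketch under stated assumptions: the axis model, backlash value, and step size are all hypothetical stand-ins for the real hardware interface:

```python
import math

# Sketch of a closed-loop move terminated on absolute-encoder position,
# so backlash between motor and load does not affect final accuracy.
COUNTS_PER_REV = 2 ** 17            # 17-bit absolute encoder
DEG_PER_COUNT = 360.0 / COUNTS_PER_REV
TOLERANCE_DEG = 0.00275             # accuracy quoted in the abstract

class SimulatedAxis:
    """Toy axis with backlash: the encoder reads the load, not the motor."""
    def __init__(self, backlash_deg=0.5):
        self.motor_deg = 0.0
        self.load_deg = 0.0
        self.backlash = backlash_deg

    def step(self, delta_deg):
        self.motor_deg += delta_deg
        # The load only moves once the backlash gap is taken up.
        if abs(self.motor_deg - self.load_deg) > self.backlash:
            self.load_deg = self.motor_deg - math.copysign(self.backlash,
                                                           delta_deg)

    def encoder_deg(self):
        return round(self.load_deg / DEG_PER_COUNT) * DEG_PER_COUNT

def move_to(axis, target_deg, step_deg=0.001):
    """Step toward target until the encoder reading is within tolerance."""
    while abs(axis.encoder_deg() - target_deg) > TOLERANCE_DEG:
        error = target_deg - axis.encoder_deg()
        axis.step(math.copysign(min(step_deg, abs(error)), error))
    return axis.encoder_deg()

axis = SimulatedAxis()
final = move_to(axis, 30.0)
print(abs(final - 30.0) <= TOLERANCE_DEG)   # True despite 0.5 deg backlash
```

Because the loop's exit condition is the encoder reading rather than the commanded step count, the 0.5-degree backlash in the toy model never appears in the final position error.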

  20. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS with f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  1. Four-directional stereo-microscopy for 3D particle tracking with real-time error evaluation.

    PubMed

    Hay, R F; Gibson, G M; Lee, M P; Padgett, M J; Phillips, D B

    2014-07-28

    High-speed video stereo-microscopy relies on illumination from two distinct angles to create two views of a sample from different directions. The 3D trajectory of a microscopic object can then be reconstructed using parallax to combine 2D measurements of its position in each image. In this work, we evaluate the accuracy of 3D particle tracking using this technique, by extending the number of views from two to four directions. This allows us to record two independent sets of measurements of the 3D coordinates of tracked objects, and comparison of these enables measurement and minimisation of the tracking error in all dimensions. We demonstrate the method by tracking the motion of an optically trapped microsphere of 5 μm in diameter, and find an accuracy of 2-5 nm laterally, and 5-10 nm axially, representing a relative error of less than 2.5% of its range of motion in each dimension. PMID:25089484
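The parallax step above, combining 2D positions from views taken at different illumination angles into a 3D coordinate, can be sketched with a simplified geometry. The model below (illumination tilted by ±theta, so a particle at depth z appears laterally shifted by z·tan(theta) in each view) is an illustrative assumption, not the authors' exact optical model:

```python
import math

# Simplified two-view parallax: apparent lateral positions are
#   x_view1 = x + z*tan(theta),  x_view2 = x - z*tan(theta),
# which invert to recover (x, z).
THETA = math.radians(15)   # assumed illumination tilt angle

def reconstruct(x_view1, x_view2):
    """Recover (x, z) from apparent lateral positions in the two views."""
    x = (x_view1 + x_view2) / 2
    z = (x_view1 - x_view2) / (2 * math.tan(THETA))
    return x, z

# Forward-project a known point, then invert it.
x_true, z_true = 3.0, 1.5            # micrometres (illustrative)
v1 = x_true + z_true * math.tan(THETA)
v2 = x_true - z_true * math.tan(THETA)
print(reconstruct(v1, v2))           # recovers approximately (3.0, 1.5)
```

With four views instead of two, the same reconstruction can be done twice independently, and the disagreement between the two (x, z) estimates is a direct, real-time measure of tracking error, which is the paper's contribution.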

  2. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  3. Absolute Identification by Relative Judgment

    ERIC Educational Resources Information Center

    Stewart, Neil; Brown, Gordon D. A.; Chater, Nick

    2005-01-01

    In unidimensional absolute identification tasks, participants identify stimuli that vary along a single dimension. Performance is surprisingly poor compared with discrimination of the same stimuli. Existing models assume that identification is achieved using long-term representations of absolute magnitudes. The authors propose an alternative…

  4. Be Resolute about Absolute Value

    ERIC Educational Resources Information Center

    Kidd, Margaret L.

    2007-01-01

    This article explores how conceptualization of absolute value can start long before it is introduced. The manner in which absolute value is introduced to students in middle school has far-reaching consequences for their future mathematical understanding. It begins to lay the foundation for students' understanding of algebra, which can change…

  5. Establishment of a real-time RT-PCR for the determination of absolute amounts of IGF-I and IGF-II gene expression in liver and extrahepatic sites of the tilapia.

    PubMed

    Caelers, Antje; Berishvili, Giorgi; Meli, Marina L; Eppler, Elisabeth; Reinecke, Manfred

    2004-06-01

    We developed a one-tube two-temperature real-time RT-PCR that allows to absolutely quantify the gene expression of hormones using the standard curve method. As our research focuses on the expression of the insulin-like growth factors (IGFs) in bony fish, we established the technique for IGF-I and IGF-II using the tilapia (Oreochromis niloticus) as model species. As approach, we used primer extension adding a T7 phage polymerase promoter (21 nt) to the 5' end of the antisense primers. This procedure avoids the disadvantages arising from plasmids. Total RNA extracted from liver was subjected to conventional RT-PCR to create templates for in vitro transcription of IGF-I and IGF-II cRNA. Correct template sizes including the T7 promoter were verified (IGF-I: 91 nt; IGF-II: 94 nt). The PCR products were used to create IGF-I and IGF-II cRNAs which were quantified in dot blot by comparison with defined amounts of standardised kanamycin mRNA. Standardised threshold cycle (Ct) values for IGF-I and IGF-II mRNA were achieved by real-time RT-PCR and used to create standard curves. To allow sample normalisation the standard curve was also established for beta-actin as internal calibrator (template: 86 nt), and validation experiments were performed demonstrating similar amplification efficiencies for target and reference genes. Based on the standard curves, the absolute amounts of IGF-I and IGF-II mRNA were determined for liver (IGF-I: 8.90+/-1.90 pg/microg total RNA, IGF-II: 3.59+/-0.98 pg/microg total RNA) and extrahepatic sites, such as heart, kidney, intestine, spleen, gills, gonad, and brain considering the different lengths of cRNAs and mRNAs by correction factors. The reliability of the method was confirmed in additional experiments. The amplification of descending dilutions of cRNA and total liver RNA resulted in parallel slopes of the amplification curves. 
Furthermore, amplification plots of the standard cRNA and the IGF-I and IGF-II mRNAs showed signals starting at the

  6. The Implications for Higher-Accuracy Absolute Measurements for NGS and its GRAV-D Project

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Winester, D.; Roman, D. R.; Eckl, M. C.; Smith, D. A.

    2013-12-01

    Absolute and relative gravity measurements play an important role in the work of NOAA's National Geodetic Survey (NGS). When NGS decided to replace the US national vertical datum, the Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project added a new dimension to the NGS gravity program. Airborne gravity collection would complement existing satellite and surface gravity data to allow the creation of a gravimetric geoid sufficiently accurate to form the basis of the new reference surface. To provide absolute gravity ties for the airborne surveys, initially new FG5 absolute measurements were made at existing absolute stations and relative measurements were used to transfer those measurements to excenters near the absolute mark and to the aircraft sensor height at the parking space. In 2011, NGS obtained a field-capable A10 absolute gravimeter from Micro-g LaCoste which became the basis of the support of the airborne surveys. Now A10 measurements are made at the aircraft location and transferred to sensor height. Absolute and relative gravity play other roles in GRAV-D. Comparison of surface data with new airborne collection will highlight surface surveys with bias or tilt errors and can provide enough information to repair or discard the data. We expect that areas of problem surface data may be re-measured. The GRAV-D project also plans to monitor the geoid in regions of rapid change and update the vertical datum when appropriate. Geoid change can result from glacial isostatic adjustment (GIA), tectonic change, and the massive drawdown of large scale aquifers. The NGS plan for monitoring these changes over time is still in its preliminary stages and is expected to rely primarily on the GRACE and GRACE Follow On satellite data in conjunction with models of GIA and tectonic change. We expect to make absolute measurements in areas of rapid change in order to verify model predictions. With the opportunities presented by rapid, highly accurate

  7. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging few replicates of population size estimates performed poorly at decreasing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in
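The downward bias of the zero-lag correlation under independent sampling error is easy to demonstrate by simulation. A minimal sketch (not the paper's Bayesian state-space model; all numbers are invented): two populations share identical true trajectories, yet independent observation noise attenuates their measured correlation well below 1.

```python
import random

random.seed(0)

def pearson(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Two perfectly synchronous "true" population trajectories...
true_sizes = [random.gauss(100, 15) for _ in range(200)]

# ...each observed with independent sampling error of similar magnitude.
obs1 = [x + random.gauss(0, 15) for x in true_sizes]
obs2 = [x + random.gauss(0, 15) for x in true_sizes]

print(pearson(true_sizes, true_sizes))   # ~1: true synchrony
print(round(pearson(obs1, obs2), 2))     # well below 1: biased downward
```

The state-space approach in the paper separates the observation-noise variance from the process variance, which is what lets it recover the unattenuated synchrony.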

  8. Absolute surface metrology by rotational averaging in oblique incidence interferometry.

    PubMed

    Lin, Weihao; He, Yumei; Song, Li; Luo, Hongxin; Wang, Jie

    2014-06-01

    A modified method for measuring the absolute figure of a large optical flat surface used in synchrotron radiation by a small-aperture interferometer is presented. The method consists of two procedures: the first step is an oblique incidence measurement; the second is multiple rotational measurements. This simple method is described in terms of functions that are symmetric or antisymmetric with respect to reflection at the vertical axis. Absolute deviations of a large flat surface can be obtained when mirror-antisymmetric errors are removed by N-position rotational averaging. Formulas are derived for measuring the absolute surface errors of a rectangular flat, and experiments on high-accuracy rectangular flats are performed to verify the method. Finally, a detailed uncertainty analysis is carried out. PMID:24922410

  9. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  10. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  11. The solar absolute spectral irradiance 1150-3173 A - May 17, 1982

    NASA Technical Reports Server (NTRS)

    Mount, G. H.; Rottman, G. J.

    1983-01-01

    The full-disk solar spectral irradiance in the spectral range 1150-3173 A was obtained from a rocket observation above White Sands Missile Range, NM, on May 17, 1982, halfway in time between solar maximum and solar minimum. Comparison with measurements made during solar maximum in 1980 indicates a large decrease in the absolute solar irradiance at wavelengths below 1900 A to approximately solar minimum values. No change above 1900 A from solar maximum to this flight was observed to within the errors of the measurements. Irradiance values lower than the Broadfoot results are found in the 2100-2500 A spectral range, but excellent agreement with Broadfoot is found between 2500 and 3173 A. The absolute calibration of the instruments for this flight was accomplished at the National Bureau of Standards Synchrotron Radiation Facility, which significantly improves the calibration of solar measurements made in this spectral region.

  12. Using residual stacking to mitigate site-specific errors in order to improve the quality of GNSS-based coordinate time series of CORS

    NASA Astrophysics Data System (ADS)

    Knöpfler, Andreas; Mayer, Michael; Heck, Bernhard

    2014-05-01

    Within the last decades, positioning using GNSS (Global Navigation Satellite Systems; e.g., GPS) has become a standard tool in many (geo-)sciences. The positioning methods Precise Point Positioning and differential point positioning based on carrier phase observations have been developed for a broad variety of applications with different demands, for example on accuracy. For high-precision applications, much effort has been invested in mitigating different error sources: the products for satellite orbits and satellite clocks have been improved; the deviation of satellite and receiver antennas from an ideal antenna is modelled by absolute calibration values; and the modelling of the ionosphere and the troposphere is updated year by year. Therefore, when data of CORS (continuously operating reference sites) equipped with geodetic hardware are processed with a sophisticated strategy, the latest products and models nowadays enable positioning accuracies at the low-mm level. Despite the considerable improvements that have been achieved within GNSS data processing, a generally valid multipath model is still lacking; site-specific multipath therefore still represents a major error source in precise GNSS positioning. Furthermore, the calibration information of receiving GNSS antennas, for instance derived by a robot or chamber calibration, is strictly speaking valid only for the location of the calibration. The calibrated antenna can show a slightly different behaviour at the CORS due to near-field multipath effects. One very promising strategy to mitigate multipath effects as well as imperfectly calibrated receiver antennas is to stack observation residuals of several days; thereby, multipath-loaded observation residuals are analysed, for example with respect to signal direction, to find and reduce systematic constituents. This presentation will give a short overview of existing stacking approaches. In addition, first results of the stacking approach
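    A minimal sketch of the residual-stacking idea (the bin sizes, the sinusoidal "systematic" and all names are illustrative assumptions, not the implementation discussed above): residuals from many days are averaged in azimuth/elevation bins, and the bin means serve as a site-specific correction map.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def stack_residuals(az, el, res, az_step=10.0, el_step=10.0):
        """Average residuals in azimuth/elevation bins; the bin means
        approximate the direction-dependent site systematic (multipath,
        antenna imperfections), which can then be subtracted."""
        az_bins = (az // az_step).astype(int)
        el_bins = (el // el_step).astype(int)
        correction = {}
        for key in set(zip(az_bins, el_bins)):
            mask = (az_bins == key[0]) & (el_bins == key[1])
            correction[key] = res[mask].mean()
        return correction

    # Synthetic residuals: an elevation-dependent systematic plus white noise
    az = rng.uniform(0, 360, 20000)
    el = rng.uniform(5, 90, 20000)
    systematic = 3.0 * np.sin(np.radians(el))  # mm, toy model
    res = systematic + rng.normal(scale=1.0, size=az.size)

    corr_map = stack_residuals(az, el, res)
    corrected = res - np.array([corr_map[(int(a // 10), int(e // 10))]
                                for a, e in zip(az, el)])
    print(res.std(), corrected.std())  # corrected scatter is smaller
    ```

    In practice the stacked map is built from residuals of several independent days so that geometry-repeating effects accumulate while noise averages out.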

  13. A multi-centennial time series of well-constrained ΔR values for the Irish Sea derived using absolutely-dated shell samples from the mollusc Arctica islandica

    NASA Astrophysics Data System (ADS)

    Butler, P. G.; Scourse, J. D.; Richardson, C. A.; Wanamaker, A. D., Jr.

    2009-04-01

    Determinations of the local correction (ΔR) to the globally averaged marine radiocarbon reservoir age are often isolated in space and time, derived from heterogeneous sources and constrained by significant uncertainties. Although time series of ΔR at single sites can be obtained from sediment cores, these are subject to multiple uncertainties related to sedimentation rates, bioturbation and interspecific variations in the source of radiocarbon in the analysed samples. Coral records provide better resolution, but these are available only for tropical locations. It is shown here that it is possible to use the shell of the long-lived bivalve mollusc Arctica islandica as a source of high-resolution time series of absolutely-dated marine radiocarbon determinations for the shelf seas surrounding the North Atlantic Ocean. Annual growth increments in the shell can be crossdated and chronologies can be constructed in precise analogy with the use of tree-rings. Because the calendar dates of the samples are known, ΔR can be determined with high precision and accuracy, and because all the samples are from the same species, the time series of ΔR values possesses a high degree of internal consistency. Presented here is a multi-centennial (AD 1593 - AD 1933) time series of 31 ΔR values for a site in the Irish Sea close to the Isle of Man. The mean value of ΔR (-62 14C yrs) does not change significantly during this period, but increased variability is apparent before AD 1750.

  14. Estimation of the reaction times in tasks of varying difficulty from the phase coherence of the auditory steady-state response using the least absolute shrinkage and selection operator analysis.

    PubMed

    Yokota, Yusuke; Igarashi, Yasuhiko; Okada, Masato; Naruse, Yasushi

    2015-08-01

    Quantitative estimation of the workload in the brain is an important factor in predicting human behavior. The reaction time when performing a difficult task is longer than that when performing an easy task; thus, the reaction time reflects the workload in the brain. In this study, we employed an N-back task in order to regulate the degree of difficulty of the tasks, and then estimated the reaction times from brain activity. The brain activity that we used to estimate the reaction time was the auditory steady-state response (ASSR) evoked by a 40-Hz click sound. Fifteen healthy participants took part in the present study, and magnetoencephalogram (MEG) responses were recorded using a 148-channel magnetometer system. The least absolute shrinkage and selection operator (LASSO), a type of sparse modeling, was employed to estimate the reaction times from the ASSR recorded by MEG. The LASSO showed higher estimation accuracy than the least squares method, indicating that it overcame over-fitting to the training data. Furthermore, the LASSO selected channels not only in the parietal region but also in the frontal and occipital regions. Since the ASSR is evoked by auditory stimuli, it is usually large in the parietal region. The fact that LASSO also selected channels outside the parietal region suggests that workload-related neural activity occurs in many brain regions. In the real world, it is more practical to use a wearable electroencephalography device with a limited number of channels than to use MEG; therefore, determining which brain areas should be measured is essential. The channels selected by the sparse modeling method are informative for determining which brain areas to measure. PMID:26737821
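    The advantage of the LASSO over least squares in a channels-exceed-trials regime, as reported above, can be sketched with a toy simulation (a hand-rolled coordinate-descent LASSO on invented data, not the authors' MEG analysis):

    ```python
    import numpy as np

    def lasso_cd(X, y, alpha, n_iter=200):
        """Minimal coordinate-descent LASSO for standardized features,
        minimizing (1/2)||y - Xw||^2 + alpha * n * ||w||_1."""
        n, p = X.shape
        w = np.zeros(p)
        col_sq = (X ** 2).sum(axis=0)
        for _ in range(n_iter):
            for j in range(p):
                r = y - X @ w + X[:, j] * w[j]       # partial residual
                rho = X[:, j] @ r
                w[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0) / col_sq[j]
        return w

    rng = np.random.default_rng(1)
    n_train, n_test, p = 40, 200, 60   # more "channels" than trials
    X = rng.normal(size=(n_train + n_test, p))
    true_w = np.zeros(p)
    true_w[:3] = [2.0, -1.5, 1.0]      # only a few informative channels
    y = X @ true_w + rng.normal(scale=0.5, size=n_train + n_test)
    Xtr, Xte, ytr, yte = X[:n_train], X[n_train:], y[:n_train], y[n_train:]

    w_ls = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]  # least squares (min-norm)
    w_l1 = lasso_cd(Xtr, ytr, alpha=0.1)

    mse_ls = ((Xte @ w_ls - yte) ** 2).mean()
    mse_l1 = ((Xte @ w_l1 - yte) ** 2).mean()
    print(mse_ls, mse_l1)  # LASSO generalizes better and selects few channels
    ```

    The sparsity of `w_l1` mirrors the channel selection described in the abstract: most coefficients are exactly zero, so the retained channels indicate where to measure.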

  15. Space-time data fusion under error in computer model output: an application to modeling air quality.

    PubMed

    Berrocal, Veronica J; Gelfand, Alan E; Holland, David M

    2012-09-01

    We provide methods that can be used to obtain more accurate environmental exposure assessment. In particular, we propose two modeling approaches to combine monitoring data at point level with numerical model output at grid cell level, yielding improved prediction of ambient exposure at point level. Extending our earlier downscaler model (Berrocal, V. J., Gelfand, A. E., and Holland, D. M. (2010b). A spatio-temporal downscaler for outputs from numerical models. Journal of Agricultural, Biological and Environmental Statistics 15, 176-197), these new models are intended to address two potential concerns with the model output. One recognizes that there may be useful information in the outputs for grid cells that are neighbors of the one in which the location lies. The second acknowledges potential spatial misalignment between a station and its putatively associated grid cell. The first model is a Gaussian Markov random field smoothed downscaler that relates monitoring station data and computer model output via the introduction of a latent Gaussian Markov random field linked to both sources of data. The second model is a smoothed downscaler with spatially varying random weights defined through a latent Gaussian process and an exponential kernel function, that yields, at each site, a new variable on which the monitoring station data is regressed with a spatial linear model. We applied both methods to daily ozone concentration data for the Eastern US during the summer months of June, July and August 2001, obtaining, respectively, a 5% and a 15% predictive gain in overall predictive mean square error over our earlier downscaler model (Berrocal et al., 2010b). Perhaps more importantly, the predictive gain is greater at hold-out sites that are far from monitoring sites. PMID:22211949

  16. Determination of short-term error caused by the reference clock in precision time-interval measurement and generation

    NASA Astrophysics Data System (ADS)

    Kalisz, Jozef

    1988-06-01

    A simple analysis based on the randomized clock cycle T(o) yields a useful formula for its variance in terms of the Allan variance. The short-term uncertainty of the measured or generated time interval t is expressed by the standard deviation in an approximate form as a function of the Allan variance. The estimates obtained are useful for determining the measurement uncertainty of time intervals within the approximate range of 10 ms-100 s.
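    The Allan variance that the analysis relies on is straightforward to compute from fractional-frequency data; the sketch below uses the standard two-sample definition with white frequency noise as an illustration (an assumption for the demo, not the paper's derivation):

    ```python
    import numpy as np

    def allan_variance(y):
        """Two-sample (Allan) variance of fractional-frequency samples y_k
        at the basic averaging time: sigma_y^2 = <(y_{k+1} - y_k)^2> / 2."""
        dy = np.diff(y)
        return 0.5 * np.mean(dy ** 2)

    rng = np.random.default_rng(0)
    y = rng.normal(scale=1e-11, size=100_000)  # white frequency noise
    avar = allan_variance(y)

    # For white frequency noise the Allan variance equals the ordinary
    # variance; a rough order-of-magnitude timing error over an interval t
    # scales like t * sigma_y.
    t = 100.0                                  # seconds
    timing_error = t * np.sqrt(avar)
    print(avar, timing_error)
    ```

    For other noise types (flicker, random walk) the Allan variance and the interval uncertainty depend on the averaging time in characteristic ways, which is why the paper expresses the short-term uncertainty as a function of the Allan variance rather than the classical variance.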

  17. Mechanical temporal fluctuation induced distance and force systematic errors in Casimir force experiments

    NASA Astrophysics Data System (ADS)

    Lamoreaux, Steve; Wong, Douglas

    2015-06-01

    The basic theory of temporal mechanical fluctuation induced systematic errors in Casimir force experiments is developed, and applications of this theory to several experiments are reviewed. This class of systematic error enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness, for which an exact result requires an electromagnetic mode analysis, time-dependent fluctuations can be treated exactly, assuming the fluctuation times are much longer than the zero-point and thermal fluctuation correlation times of the electromagnetic field between the plates. An experimental method for measuring absolute distance with high bandwidth is also described and measurement data are presented.

  18. Mechanical temporal fluctuation induced distance and force systematic errors in Casimir force experiments.

    PubMed

    Lamoreaux, Steve; Wong, Douglas

    2015-06-01

    The basic theory of temporal mechanical fluctuation induced systematic errors in Casimir force experiments is developed, and applications of this theory to several experiments are reviewed. This class of systematic error enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness, for which an exact result requires an electromagnetic mode analysis, time-dependent fluctuations can be treated exactly, assuming the fluctuation times are much longer than the zero-point and thermal fluctuation correlation times of the electromagnetic field between the plates. An experimental method for measuring absolute distance with high bandwidth is also described and measurement data are presented. PMID:25965319

  19. Early-time observations of gamma-ray burst error boxes with the Livermore optical transient imaging system

    SciTech Connect

    Williams, G G

    2000-08-01

    Despite the enormous wealth of gamma-ray burst (GRB) data collected over the past several years, the physical mechanism which causes these extremely powerful phenomena is still unknown. Simultaneous and early-time optical observations of GRBs will likely make a great contribution to our understanding. LOTIS is a robotic wide field-of-view telescope dedicated to the search for prompt and early-time optical afterglows from gamma-ray bursts. LOTIS began routine operations in October 1996 and since that time has responded to over 145 gamma-ray burst triggers. Although LOTIS has not yet detected prompt optical emission from a GRB, its upper limits have provided constraints on the theoretical emission mechanisms. Super-LOTIS, also a robotic wide field-of-view telescope, can detect emission 100 times fainter than LOTIS can. Routine observations from Steward Observatory's Kitt Peak Station will begin in the immediate future. During engineering test runs under bright skies on the grounds of Lawrence Livermore National Laboratory, Super-LOTIS provided its first upper limits on the early-time optical afterglow of GRBs. This dissertation provides a summary of the results from LOTIS and Super-LOTIS through the time of writing. Plans for future studies with both systems are also presented.

  20. Developing control charts to review and monitor medication errors.

    PubMed

    Ciminera, J L; Lease, M P

    1992-03-01

    There is a need to monitor reported medication errors in a hospital setting. Because the number of reported errors varies with external reporting practices, quantifying the data is extremely difficult. Typically, these errors are reviewed using classification systems that often show wide variations in the numbers per class per month. The authors recommend the use of control charts to review historical data and to monitor future data. The procedure they have adopted is a modification of schemes that use absolute (i.e., positive) values of successive differences to estimate the standard deviation when only single incidence values, rather than sample averages, are available at each time point, and when many successive differences may be zero. PMID:10116719
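    The standard individuals/moving-range (X-mR) scheme that the authors modify can be sketched as follows; the monthly error counts here are hypothetical, and the authors' zero-difference adjustment is not reproduced:

    ```python
    import numpy as np

    def mr_control_limits(counts):
        """Individuals (X-mR) chart limits: estimate sigma from the mean
        absolute successive difference (mean moving range) divided by the
        d2 constant for subgroups of size 2 (1.128)."""
        counts = np.asarray(counts, dtype=float)
        mr = np.abs(np.diff(counts))       # moving ranges
        sigma = mr.mean() / 1.128
        center = counts.mean()
        lcl = max(center - 3 * sigma, 0.0)  # counts cannot go below zero
        ucl = center + 3 * sigma
        return center, lcl, ucl

    # Hypothetical monthly error counts for one classification
    monthly = [4, 2, 0, 3, 1, 0, 2, 5, 1, 2, 3, 0]
    center, lcl, ucl = mr_control_limits(monthly)
    print(center, lcl, ucl)
    ```

    A future month falling above the upper control limit would then flag a count unlikely to arise from common-cause variation alone.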

  1. Prospects for the Moon as an SI-Traceable Absolute Spectroradiometric Standard for Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Cramer, C. E.; Stone, T. C.; Lykke, K.; Woodward, J. T.

    2015-12-01

    The Earth's Moon has many physical properties that make it suitable for use as a reference light source for radiometric calibration of remote sensing satellite instruments. Lunar calibration has been successfully applied to many imagers in orbit, including both MODIS instruments and NPP-VIIRS, using the USGS ROLO model to predict the reference exoatmospheric lunar irradiance. Sensor response trending was developed for SeaWiFS with a relative accuracy better than 0.1 % per year using lunar calibration techniques. However, the Moon is rarely used as an absolute reference for on-orbit calibration, primarily due to uncertainties of 5%-10% in the absolute scale of the ROLO model. But this limitation lies only with the models - the Moon itself is radiometrically stable, and development of a high-accuracy absolute lunar reference is inherently feasible. A program has been undertaken by NIST to collect absolute measurements of the lunar spectral irradiance with absolute accuracy <1 % (k=2), traceable to SI radiometric units. Initial Moon observations were acquired from the Whipple Observatory on Mt. Hopkins, Arizona, elevation 2367 meters, with continuous spectral coverage from 380 nm to 1040 nm at ~3 nm resolution. The lunar spectrometer acquired calibration measurements several times each observing night by pointing to a calibrated integrating sphere source. The lunar spectral irradiance at the top of the atmosphere was derived from a time series of ground-based measurements by a Langley analysis that incorporated measured atmospheric conditions and ROLO model predictions for the change in irradiance resulting from the changing Sun-Moon-Observer geometry throughout each night. Two nights were selected for further study. An extensive error analysis, which includes instrument calibration and atmospheric correction terms, shows a combined standard uncertainty under 1 % over most of the spectral range. Comparison of these two nights' spectral irradiance measurements with predictions

  2. Early-time Observations of Gamma-ray Burst Error Boxes with the Livermore Optical Transient Imaging System

    NASA Astrophysics Data System (ADS)

    Williams, George Grant

    2000-08-01

    Approximately three times per day a bright flash of high-energy radiation from the depths of the universe encounters the Earth. These gamma-ray bursts (GRBs) were discovered circa 1970, yet their origin remains a mystery. Traditional astronomical observations of GRBs are hindered by their transient nature: they have durations of only a few seconds and occur at random times from unpredictable directions. In recent years, precise GRB localizations and rapid coordinate dissemination have permitted sensitive follow-up observations. These observations resulted in the identification of long-wavelength counterparts within distant galaxies. Despite the wealth of data now available, the physical mechanism which produces these extremely energetic phenomena is still unknown. In the near future, simultaneous and early-time optical observations of GRBs will aid in constraining the theoretical models. The Livermore Optical Transient Imaging System (LOTIS) is an automated robotic wide field-of-view telescope dedicated to the search for prompt and early-time optical emission from GRBs. Since routine operations began in October 1996, LOTIS has responded to over 145 GRB triggers. LOTIS has not yet detected optical emission from a GRB, but upper limits provided by the telescope constrain the theoretical emission mechanisms. Super-LOTIS, also a robotic wide field-of-view telescope, is 100 times more sensitive than LOTIS. Routine observations from Steward Observatory's Kitt Peak Station will begin in the immediate future. During engineering test runs Super-LOTIS obtained its first upper limit on the early-time optical afterglow of GRBs. An overview of the history and current state of GRB research is presented, and theoretical models are reviewed briefly. The LOTIS and Super-LOTIS hardware and operating procedures are discussed. A summary of the results from both LOTIS and Super-LOTIS and an interpretation of those results are presented. Plans for future studies with both systems are briefly stated.

  3. An integrated error estimation and lag-aware data assimilation scheme for real-time flood forecasting

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The performance of conventional filtering methods can be degraded by ignoring the time lag between soil moisture and discharge response when discharge observations are assimilated into streamflow modelling. This has led to the ongoing development of more optimal ways to implement sequential data ass...

  4. Application of Pb isotopes to the absolute timing of regional exposure events in carbonate rocks: An example from U-rich dolostones from the Wahoo Formation (Pennsylvanian), Prudhoe Bay, Alaska

    SciTech Connect

    Hoff, J.A.; Hanson, G.N.; Jameson, J.

    1995-01-02

    Pb isotope data from U-rich dolostones from the Wahoo Formation (Pennsylvanian) in the subsurface at Prudhoe Bay, Alaska, demonstrate that the U-Th-Pb system can be a powerful geochemical and geochronological tool for understanding carbonate diagenesis. These U-rich dolostones are developed beneath a major Late Permian to Early Triassic truncational unconformity. U enrichment is uniquely associated with the mineral dolomite, but anomalously high concentrations of U are not present within the dolomite crystal lattice, and major mineral or fluid phases can be ruled out as U hosts. SEM analyses indicate that the U anomalies reside in an unknown mineral phase associated with authigenic clays and are commonly concentrated along stylolites. Geologic, petrographic, and geochemical data indicate that the bulk of dolomitization occurred during the Permo-Triassic, following development of a regional unconformity (Jameson 1989a, 1989b, 1990a, 1990b, 1994). In this study, the Pb isotopic composition of these U-rich dolostones is used to establish the absolute timing of U enrichment and its relationship to dolomitization and to the burial history of the Wahoo Formation.

  5. [Geriatrics: an absolute necessity].

    PubMed

    Oostvogel, F J

    1982-02-01

    The medical care for elderly people could be greatly improved. If no specific attention is paid immediately, namely through the various training courses and through further and part-time education, this medical care will remain unsatisfactory. The situation worsens continually due to the growing number of elderly people and, within this group, a much higher proportion of the very aged. Increasing the care in institutions alone is altogether unsatisfactory. The problem should be dealt with structurally, with the emphasis placed upon prevention and early diagnosis. There is an urgent need for an integrated approach that keeps in mind the limitations of the elderly person in physical, psychological and social respects. This demands teamwork in a multidisciplinary system inside as well as outside the institutions, and it demands a thorough knowledge of geriatrics based upon gerontology. Geriatricians are urgently needed in this development, together with nursing home physicians, general practitioners and specialists, so that the necessary care may be established as quickly as possible. PMID:7101393

  6. Absolute Radiometer for Reproducing the Solar Irradiance Unit

    NASA Astrophysics Data System (ADS)

    Sapritskii, V. I.; Pavlovich, M. N.

    1989-01-01

    A high-precision absolute radiometer with a thermally stabilized cavity as the receiving element has been designed for use in solar irradiance measurements. The State Special Standard of the Solar Irradiance Unit has been built on the basis of the developed absolute radiometer. The Standard also includes a sun-tracking system and a system for automatic thermal stabilization and information processing, comprising a built-in microcalculator which computes the irradiance according to the input program. During metrological certification of the Standard, the main error sources were analysed and the non-excluded systematic and random errors of the irradiance-unit realization were determined. The total error of the Standard does not exceed 0.3%. Beginning in 1984, the Standard has taken part in comparisons with the Å 212 pyrheliometer and other Soviet and foreign standards. In 1986 it took part in the international comparison of absolute radiometers and standard pyrheliometers of socialist countries. The results of the comparisons proved the high metrological quality of this Standard based on an absolute radiometer.

  7. Medical error and disclosure.

    PubMed

    White, Andrew A; Gallagher, Thomas H

    2013-01-01

    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370

  8. LEMming: A Linear Error Model to Normalize Parallel Quantitative Real-Time PCR (qPCR) Data as an Alternative to Reference Gene Based Methods

    PubMed Central

    Feuer, Ronny; Vlaic, Sebastian; Arlt, Janine; Sawodny, Oliver; Dahmen, Uta; Zanger, Ulrich M.; Thomas, Maria

    2015-01-01

    Background Gene expression analysis is an essential part of biological and medical investigations. Quantitative real-time PCR (qPCR) is characterized by excellent sensitivity, dynamic range and reproducibility, and is still regarded as the gold standard for quantifying transcript abundance. Parallelization of qPCR, such as by the microfluidic Taqman Fluidigm Biomark platform, enables evaluation of multiple transcripts in samples treated under various conditions. Despite these advanced technologies, correct evaluation of the measurements remains challenging. The most widely used methods for evaluating or calculating gene expression data, geNorm and ΔΔCt respectively, rely on one or several stable reference genes (RGs) for normalization, thus potentially causing biased results. We therefore applied multivariable regression with a tailored error model to overcome the necessity of stable RGs. Results We developed an RG-independent data normalization approach based on a tailored linear error model for parallel qPCR data, called LEMming. It uses the assumption that the mean Ct values within samples of similarly treated groups are equal. The performance of LEMming was evaluated on three data sets with different stability patterns of RGs and compared to the results of geNorm normalization. Data set 1 showed that both methods give similar results if stable RGs are available. Data set 2 included RGs which are stable according to geNorm criteria, but which became differentially expressed in normalized data evaluated by a t-test. geNorm-normalized data showed an effect of a shifted mean per gene per condition, whereas LEMming-normalized data did not. Comparing the decrease in standard deviation from the raw data achieved by geNorm and by LEMming, the latter was superior. In data set 3, stable RGs were available according to the geNorm average expression stability and pairwise variation, but t-tests of the raw data contradicted this. Normalization with RGs resulted in distorted data contradicting
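    The core assumption of equal mean Ct across similarly treated samples can be illustrated with a simplified mean-centering normalization (LEMming itself fits a multivariable regression with a tailored error model; this toy version and all numbers are assumptions for the demo):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_genes, n_samples = 20, 8
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # two treatment groups

    # Simulated Ct values: gene 0 is differentially expressed (lower Ct in
    # the treated group = higher expression); all other genes are stable.
    base = rng.normal(25.0, 2.0, size=(n_genes, 1))
    noise = rng.normal(0.0, 0.1, size=(n_genes, n_samples))
    ct = base + noise
    ct[0] -= 2.0 * group

    # Sample-specific offsets (loading, efficiency) corrupt the raw data.
    sample_offset = rng.normal(0.0, 1.0, size=n_samples)
    ct_obs = ct + sample_offset

    # Normalization without reference genes: subtract each sample's mean Ct
    # across all genes, relying on the equal-group-mean assumption.
    ct_norm = ct_obs - ct_obs.mean(axis=0, keepdims=True)

    diff = ct_norm[0, group == 1].mean() - ct_norm[0, group == 0].mean()
    diff_stable = ct_norm[1, group == 1].mean() - ct_norm[1, group == 0].mean()
    print(diff, diff_stable)  # near -2 for gene 0, near 0 for a stable gene
    ```

    The sample offsets cancel exactly under centering; the small residual bias in `diff` comes from the differentially expressed gene itself contributing to the sample means, which a regression model can account for more carefully.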

  9. Stitching interferometry: recent results and absolute calibration

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2004-02-01

    Stitching Interferometry is a method of analysing large optical components using a standard "small" interferometer. This result is obtained by taking multiple overlapping images of the large component and numerically "stitching" these sub-apertures together. We have already reported on the industrial use of our Stitching Interferometry systems (previous SPIE symposia), but experimental results had been lacking because this technique is still new and users needed to become accustomed to it before producing reliable measurements. We now have more results. We will report user comments and show new, unpublished results. We will discuss sources of error and show how some of these can be reduced to arbitrarily small values; these will be discussed in some detail. We conclude with a few graphical examples of absolute measurements performed by us.

  10. Naive Hypothesis Testing for Case Series Analysis with Time-Varying Exposure Onset Measurement Error: Inference for Infection-Cardiovascular Risk in Patients on Dialysis

    PubMed Central

    Mohammed, Sandra M.; Dalrymple, Lorien S.; Şentürk, Damla

    2014-01-01

    Summary The case series method is useful in studying the relationship between time-varying exposures, such as infections, and acute events observed during the observation periods of individuals. It provides estimates of the relative incidences of events in risk periods (e.g., 30-day period after infections) relative to the baseline periods. When the times of exposure onsets are not known precisely, application of the case series model ignoring exposure onset measurement error leads to biased estimates. Bias-correction is necessary in order to understand the true directions and effect sizes associated with exposure risk periods, although uncorrected estimators have smaller variance. Thus, inference via hypothesis testing based on uncorrected test statistics, if valid, is potentially more powerful. Furthermore, the tests can be implemented in standard software and do not require additional auxiliary data. In this work, we examine the validity and power of naive hypothesis testing, based on applying the case series analysis to the imprecise data without correcting for the error. Based on simulation studies and theoretical calculations, we determine the validity and relative power of common hypothesis tests of interest in case series analysis. In particular, we illustrate that the tests for the global null hypothesis, the overall null hypotheses associated with all risk periods or all age effects are valid. However, tests of individual risk period parameters are not generally valid. Practical guidelines are provided and illustrated with data from patients on dialysis. PMID:23731166

  11. Absolute Humidity and the Seasonality of Influenza (Invited)

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.

    2010-12-01

    Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.
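    The humidity-forcing mechanism can be caricatured with a toy SIRS model in which the basic reproductive number is an assumed decreasing exponential function of specific humidity (a sketch with invented parameters, not the authors' calibrated epidemiological model):

    ```python
    import numpy as np

    def simulate(q_series, R0_max=2.5, R0_min=1.2, a=-180.0, D=5.0, L=1095.0):
        """Daily-step SIRS; S and I are population fractions, D the mean
        infectious period (days), L the mean duration of immunity (days)."""
        S, I = 0.6, 1e-4
        incidence = []
        for q in q_series:
            R0 = R0_min + (R0_max - R0_min) * np.exp(a * q)  # assumed form
            beta = R0 / D
            new_inf = beta * S * I
            S += (1.0 - S - I) / L - new_inf
            I += new_inf - I / D
            incidence.append(new_inf)
        return np.array(incidence)

    days = np.arange(365)
    # Toy specific humidity (kg/kg): ~2 g/kg in midwinter, ~16 g/kg in midsummer
    q = 0.009 - 0.007 * np.cos(2.0 * np.pi * days / 365.0)

    inc = simulate(q)
    winter = inc[(days < 60) | (days > 300)].mean()
    summer = inc[(days > 150) & (days < 240)].mean()
    print(winter > summer)  # low winter humidity drives a winter peak
    ```

    Even with constant population mixing, the humidity-driven modulation of R0 alone is enough to concentrate incidence in winter, which is the qualitative point of the abstract.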

  12. Absolute calibration of TFTR helium proportional counters

    SciTech Connect

Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L.; Barnes, C.W.; Loughlin, M.

    1995-06-01

    The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments.

  13. An absolute measure for a key currency

    NASA Astrophysics Data System (ADS)

    Oya, Shunsuke; Aihara, Kazuyuki; Hirata, Yoshito

It is generally considered that the US dollar and the euro are the key currencies in the world and in Europe, respectively. However, there is no absolute general measure for a key currency. Here, we investigate the 24-hour periodicity of foreign exchange markets using a recurrence plot, and define an absolute measure for a key currency based on the strength of the periodicity. Moreover, we analyze the time evolution of this measure. The results show that the credibility of the US dollar has not decreased significantly since the Lehman shock, when Lehman Brothers went bankrupt and disrupted economic markets, and has even improved relative to that of the euro and that of the Japanese yen.
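The periodicity measure described can be sketched as follows: build a recurrence matrix over the exchange-rate series and score the recurrence rate along the diagonal at the 24-hour lag. The threshold and scoring are a minimal illustration, not the paper's exact definition.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """R[i, j] = 1 where |x_i - x_j| < eps (a simple recurrence plot)."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def periodicity_strength(x, lag, eps):
    """Recurrence rate along the diagonal at the given lag: values near 1
    indicate strong periodicity at that lag."""
    return recurrence_matrix(x, eps).diagonal(lag).mean()
```

For hourly data, `periodicity_strength(rates, 24, eps)` quantifies the 24-hour cycle; comparing it across currencies gives an absolute (cross-comparable) ranking.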

  14. Learning in the temporal bisection task: Relative or absolute?

    PubMed

    de Carvalho, Marilia Pinheiro; Machado, Armando; Tonneau, François

    2016-01-01

    We examined whether temporal learning in a bisection task is absolute or relational. Eight pigeons learned to choose a red key after a t-seconds sample and a green key after a 3t-seconds sample. To determine whether they had learned a relative mapping (short→Red, long→Green) or an absolute mapping (t-seconds→Red, 3t-seconds→Green), the pigeons then learned a series of new discriminations in which either the relative or the absolute mapping was maintained. Results showed that the generalization gradient obtained at the end of a discrimination predicted the pattern of choices made during the first session of a new discrimination. Moreover, most acquisition curves and generalization gradients were consistent with the predictions of the learning-to-time model, a Spencean model that instantiates absolute learning with temporal generalization. In the bisection task, the basis of temporal discrimination seems to be absolute, not relational. PMID:26752233

  15. TE = 32 ms vs TE = 100 ms echo‐time 1H‐magnetic resonance spectroscopy in prostate cancer: Tumor metabolite depiction and absolute concentrations in tumors and adjacent tissues

    PubMed Central

    Basharat, Meer; Morgan, Veronica A.; Parker, Chris; Dearnaley, David; deSouza, Nandita M.

    2015-01-01

Purpose: To compare the depiction of metabolite signals in short and long echo time (TE) prostate cancer spectra at 3T, and to quantify their concentrations in tumors of different stage and grade, and in tissues adjacent to tumor. Materials and Methods: First, single-voxel magnetic resonance (MR) spectra were acquired from voxels consisting entirely of tumor, as defined on T2-weighted and diffusion-weighted (DW)-MRI and from a biopsy-positive octant, at TEs of 32 msec and 100 msec in 26 prostate cancer patients. Then, in a separate cohort of 26 patients, single-voxel TE = 32 msec MR spectroscopy (MRS) was performed over a partial-tumor region and a matching, contralateral normal-appearing region, defined similarly. Metabolite depiction was compared between TEs using Cramér-Rao lower bounds (CRLB), and absolute metabolite concentrations were calculated from TE = 32 msec spectra referenced to unsuppressed water spectra. Results: Citrate and spermine resonances in tumor were better depicted (had significantly lower CRLB) at TE = 32 msec, while the choline resonance was better depicted at TE = 100 msec. Citrate and spermine concentrations were significantly lower in patients of more advanced stage, significantly lower in Gleason grade 3+4 than 3+3 tumors, and significantly lower than expected from the tumor fraction in partial-tumor voxels (by 14 mM and 4 mM, respectively, P < 0.05). Conclusion: Citrate and spermine resonances are better depicted at short TE than long TE in tumors. Reduction in these concentrations is related to increasing tumor stage and grade in vivo, while reductions in the normal-appearing tissues immediately adjacent to tumor likely reflect tumor field effects. J. Magn. Reson. Imaging 2015;42:1086-1093. PMID:26258905

  16. Absolute transition probabilities of phosphorus.

    NASA Technical Reports Server (NTRS)

    Miller, M. H.; Roig, R. A.; Bengtson, R. D.

    1971-01-01

A gas-driven shock tube was used to measure the absolute strengths of 21 P I lines and 126 P II lines (from 3300 to 6900 Å). Accuracy for prominent, isolated neutral and ionic lines is estimated to be 28 to 40% and 18 to 30%, respectively. The data and the corresponding theoretical predictions are examined for conformity with the sum rules.

  17. Investigation of the effects of correlated measurement errors in time series analysis techniques applied to nuclear material accountancy data. [Program COVAR

    SciTech Connect

    Pike, D.H.; Morrison, G.W.; Downing, D.J.

    1982-04-01

It has been shown in previous work that the Kalman Filter and Linear Smoother produce optimal estimates of inventory and loss from a material balance area. However, the standard Kalman Filter/Linear Smoother formulation assumes no correlation between inventory measurement errors and does not allow for serial correlation in these measurement errors. The purpose of this report is to extend the previous results by relaxing these assumptions to allow for correlated measurement errors. The results show how to account for correlated measurement errors in the linear system model of the Kalman Filter/Linear Smoother. An algorithm is also included for calculating the required error covariance matrices.
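A sketch of the kind of covariance construction involved (not the report's COVAR algorithm itself): for measurement errors following a first-order autoregressive (AR(1)) process with serial correlation φ, the required error covariance matrix has entries σ²φ^|i−j|.

```python
import numpy as np

def ar1_covariance(n, sigma2, phi):
    """Covariance matrix of n serially correlated (AR(1)) measurement
    errors with stationary variance sigma2:
    Cov(e_i, e_j) = sigma2 * phi**|i - j|."""
    idx = np.arange(n)
    return sigma2 * phi ** np.abs(idx[:, None] - idx[None, :])
```

Such a matrix replaces the diagonal measurement-noise covariance in the Kalman Filter/Linear Smoother model; setting φ = 0 recovers the uncorrelated case.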

  18. Absolute measurement of the extreme UV solar flux

    NASA Technical Reports Server (NTRS)

    Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.

    1984-01-01

A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575 Å region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 Å), normalized to unit solar distance, was found to be (5.71 ± 0.42) x 10^10 photons per sq cm per sec.
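The quoted net error is a quadrature (root-sum-square) combination of independent error components. A generic sketch (the component values in the example are hypothetical, not the paper's error budget):

```python
import math

def rss(*components):
    """Root-sum-square combination of independent error components,
    e.g. percentage uncertainties assumed uncorrelated."""
    return math.sqrt(sum(c * c for c in components))
```

With one dominant term, the combined value is close to that term alone, which is why the outgassing contribution dominates the ±7.3 percent figure.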

  19. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  20. Mathematical Model for Absolute Magnetic Measuring Systems in Industrial Applications

    NASA Astrophysics Data System (ADS)

    Fügenschuh, Armin; Fügenschuh, Marzena; Ludszuweit, Marina; Mojsic, Aleksandar; Sokół, Joanna

    2015-09-01

Scales for measuring systems are either based on incremental or absolute measuring methods. Incremental scales need to initialize a measurement cycle at a reference point. From there, the position is computed by counting increments of a periodic graduation. Absolute methods do not need reference points, since the position can be read directly from the scale. The positions on the complete scales are encoded using two incremental tracks with different graduation. We present a new method for absolute measuring using only one track for position encoding up to micrometre range. Instead of the common perpendicular magnetic areas, we use a pattern of trapezoidal magnetic areas to store more complex information. For positioning, we use the magnetic field, where every position is characterized by a set of values measured by a Hall sensor array. We implement a method for reconstruction of absolute positions from the set of unique measured values. We compare two patterns with respect to uniqueness, accuracy, stability and robustness of positioning. We discuss how stability and robustness are influenced by different errors during the measurement in real applications and how those errors can be compensated.
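Reconstruction of an absolute position from a set of sensor-array values can be sketched as a nearest-signature lookup against a precomputed calibration table (a simplification of the method described; the data are illustrative):

```python
import numpy as np

def locate(signature, table):
    """Index of the calibrated position whose stored sensor-array
    signature is closest (Euclidean distance) to the measured one."""
    table = np.asarray(table, dtype=float)
    d = np.linalg.norm(table - np.asarray(signature, dtype=float), axis=1)
    return int(np.argmin(d))
```

Uniqueness of the encoding pattern guarantees that the minimum is well separated; robustness then amounts to the measured signature staying closer to its own table entry than to any other under the error sources discussed.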

  1. Absolute optical surface measurement with deflectometry

    NASA Astrophysics Data System (ADS)

    Li, Wansong; Sandner, Marc; Gesierich, Achim; Burke, Jan

    Deflectometry utilises the deformation and displacement of a sample pattern after reflection from a test surface to infer the surface slopes. Differentiation of the measurement data leads to a curvature map, which is very useful for surface quality checks with sensitivity down to the nanometre range. Integration of the data allows reconstruction of the absolute surface shape, but the procedure is very error-prone because systematic errors may add up to large shape deviations. In addition, there are infinitely many combinations for slope and object distance that satisfy a given observation. One solution for this ambiguity is to include information on the object's distance. It must be known very accurately. Two laser pointers can be used for positioning the object, and we also show how a confocal chromatic distance sensor can be used to define a reference point on a smooth surface from which the integration can be started. The used integration algorithm works without symmetry constraints and is therefore suitable for free-form surfaces as well. Unlike null testing, deflectometry also determines radius of curvature (ROC) or focal lengths as a direct result of the 3D surface reconstruction. This is shown by the example of a 200 mm diameter telescope mirror, whose ROC measurements by coordinate measurement machine and deflectometry coincide to within 0.27 mm (or a sag error of 1.3μm). By the example of a diamond-turned off-axis parabolic mirror, we demonstrate that the figure measurement uncertainty comes close to a well-calibrated Fizeau interferometer.
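The integration step, anchored at a known reference point such as the one defined by the distance sensor, can be sketched in one dimension with trapezoidal cumulative integration (a minimal illustration, not the paper's free-form 2D algorithm):

```python
import numpy as np

def integrate_slopes(slopes, dx, h0=0.0):
    """Reconstruct a 1-D height profile from measured slopes by
    trapezoidal cumulative integration, anchored at reference height h0."""
    slopes = np.asarray(slopes, dtype=float)
    steps = (slopes[:-1] + slopes[1:]) / 2 * dx
    return np.concatenate([[h0], h0 + np.cumsum(steps)])
```

Because errors accumulate along the integration path, the accuracy of the starting height h0 and of the slope data directly limits the absolute shape, which is why the distance reference matters so much here.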

  2. Optomechanics for absolute rotation detection

    NASA Astrophysics Data System (ADS)

    Davuluri, Sankar

    2016-07-01

In this article, we present an application of an optomechanical cavity to absolute rotation detection. The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the optomechanical cavity. The change in the cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to the frequency shift in the cavity mode is measured at the interferometer output to estimate the angular velocity of the absolute rotation. We derive an analytic expression for the minimum detectable rotation rate in our scheme for a given optomechanical cavity. The temperature dependence of the rotation detection sensitivity is also studied.

  3. The Absolute Spectrum Polarimeter (ASP)

    NASA Technical Reports Server (NTRS)

    Kogut, A. J.

    2010-01-01

The Absolute Spectrum Polarimeter (ASP) is an Explorer-class mission to map the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10^-3 and Compton distortion y < 10^-6. We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  4. Real-time soil flux measurements and calculations with CRDS + Soil Flux Processor: comparison among flux algorithms and derivation of whole system error

    NASA Astrophysics Data System (ADS)

    Alstad, K. P.; Venterea, R. T.; Tan, S. M.; Saad, N.

    2015-12-01

Understanding chamber-based soil flux model fitting and measurement error is key to scaling soil GHG emissions and resolving the primary uncertainties in climate and management feedbacks at regional scales. One key challenge is the selection of the correct empirical model applied to soil flux rate analysis in chamber-based experiments. Another challenge is the characterization of error in the chamber measurement. Traditionally, most chamber-based N2O and CH4 measurements and model derivations have used discrete sampling for GC analysis, and have been conducted using extended chamber deployment periods (DP), which are expected to result in substantial alteration of the pre-deployment flux. The development of high-precision, high-frequency CRDS analyzers has advanced the science of soil flux analysis by facilitating much shorter DP and, in theory, less chamber-induced suppression of the soil-atmosphere diffusion gradient. As well, a new software tool developed by Picarro (the "Soil Flux Processor" or "SFP") links the power of Cavity Ring-Down Spectroscopy (CRDS) technology with an easy-to-use interface that features flexible sample-ID and run schemes, and provides real-time monitoring of chamber accumulations and environmental conditions. The SFP also includes a sophisticated flux analysis interface which offers user-defined model selection, including three predominant fit algorithms as defaults, and an open-code interface for user-composed algorithms. The SFP is designed to couple with the Picarro G2508 system, an analyzer which simplifies soil flux studies by simultaneously measuring primary GHG species -- N2O, CH4, CO2 and H2O. In this study, Picarro partners with the ARS USDA Soil & Water Management Research Unit (R. Venterea, St. Paul) to examine the degree to which the high-precision, high-frequency Picarro analyzer allows for much shorter DPs in chamber-based flux analysis and, in theory, less chamber-induced suppression of the soil-atmosphere diffusion gradient.
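The simplest of the standard chamber flux algorithms, a linear fit of headspace concentration versus time scaled by chamber geometry, can be sketched as follows (a generic illustration, not Picarro's SFP implementation; units are examples):

```python
import numpy as np

def chamber_flux_linear(t, conc, volume, area):
    """Soil flux from a linear fit of headspace concentration (e.g. ppm)
    vs. time (s), scaled by chamber volume (m^3) over footprint area (m^2).
    Returns flux in ppm * m / s (convert to molar units separately)."""
    t = np.asarray(t, dtype=float)
    conc = np.asarray(conc, dtype=float)
    slope = np.polyfit(t, conc, 1)[0]  # dC/dt at deployment
    return slope * volume / area
```

Nonlinear algorithms (quadratic, diffusion-based) differ in how they extrapolate dC/dt back to the moment of deployment, which is exactly where chamber-induced suppression of the diffusion gradient biases the linear fit low.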

  5. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
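The per-drop building block of this solution is an ordinary least-squares fit of the free-fall trajectory. A minimal single-drop sketch (the paper's full method additionally stacks all drops into one solution and iterates to estimate the vertical gradient and system-response coefficients, which is omitted here):

```python
import numpy as np

def fit_drop(t, x):
    """Least-squares estimates of (x0, v0, g) from one drop's
    position-vs-time samples, using the model x(t) = x0 + v0*t + g*t**2/2."""
    t = np.asarray(t, dtype=float)
    A = np.column_stack([np.ones_like(t), t, t**2 / 2])  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(x, dtype=float), rcond=None)
    return coeffs  # (x0, v0, g)
```

Extending the design matrix with gradient and damped-sinusoid system-response columns makes the problem nonlinear in the parameters, hence the iterative solution and convergence difficulties noted in the abstract.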

  6. Absolute Priority for a Vehicle in VANET

    NASA Astrophysics Data System (ADS)

    Shirani, Rostam; Hendessi, Faramarz; Montazeri, Mohammad Ali; Sheikh Zefreh, Mohammad

In today's world, traffic jams waste hundreds of hours of our lives. This has led many researchers to try to resolve the problem with the idea of the Intelligent Transportation System. For some applications, like a travelling ambulance, it is important to reduce delay even by a second. In this paper, we propose a completely infrastructure-less approach for finding the shortest path and controlling traffic lights to provide absolute priority for an emergency vehicle. We use the idea of vehicular ad-hoc networking to reduce the imposed travelling time. Then, we simulate our proposed protocol and compare it with a centrally controlled traffic light system.

  7. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  8. Analyses of atmospheric extinction data obtained by astronomers. I - A time-trend analysis of data with internal accidental errors obtained at four observatories

    NASA Technical Reports Server (NTRS)

    Taylor, B. J.; Lucke, P. B.; Laulainen, N. S.

    1977-01-01

    Long-term time-trend analysis was performed on astronomical atmospheric extinction data in wideband UBV and various narrow-band systems recorded at Cerro Tololo, Kitt Peak, Lick, and McDonald observatories. All of the data had to be transformed into uniform monochromatic extinction data before trend analysis could be performed. The paper describes the various reduction techniques employed. The time-trend analysis was then carried out by the method of least squares. A special technique, called 'histogram shaping', was employed to adjust for the fact that the errors of the reduced monochromatic extinction data were not essentially Gaussian. On the assumption that there are no compensatory background and local extinction changes, the best values obtained for extinction trends due to background aerosol changes during the years 1960 to 1972 are 0.006 + or - 0.013 (rms) and 0.009 + or - 0.009 (rms) stellar magnitudes per air mass per decade in the blue and yellow wavelength regions, respectively.

  9. Real-Time Correction of Rigid-Body-Motion-Induced Phase Errors for Diffusion-Weighted Steady State Free Precession Imaging

    PubMed Central

    O’Halloran, R; Aksoy, M; Aboussouan, E; Peterson, E; Van, A; Bammer, R

    2014-01-01

    Purpose Diffusion contrast in diffusion-weighted steady state free precession MRI is generated through the constructive addition of signal from many coherence pathways. Motion-induced phase causes destructive interference which results in loss of signal magnitude and diffusion contrast. In this work, a 3D navigator-based real-time correction of the rigid-body-motion-induced phase errors is developed for diffusion-weighted steady state free precession MRI. Methods The efficacy of the real-time prospective correction method in preserving phase coherence of the steady-state is tested in 3D phantom experiments and 3D scans of healthy human subjects. Results In nearly all experiments, the signal magnitude in images obtained with proposed prospective correction was higher than the signal magnitude in images obtained with no correction. In the human subjects the mean magnitude signal in the data was up to 30 percent higher with prospective motion correction than without. Prospective correction never resulted in a decrease in mean signal magnitude in either the data or in the images. Conclusions The proposed prospective motion correction method is shown to preserve the phase coherence of the steady state in diffusion-weighted steady state free precession MRI, thus mitigating signal magnitude losses that would confound the desired diffusion contrast. PMID:24715414

  10. Standardization of the cumulative absolute velocity

    SciTech Connect

O'Hara, T.F.; Jacobson, J.P.

    1991-12-01

EPRI NP-5930, "A Criterion for Determining Exceedance of the Operating Basis Earthquake," was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. In the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) acceleration. Therefore, it is necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value based upon applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method to standardize the CAV calculation is to window its calculation on a second-by-second basis for a given time history: a one-second interval contributes to the CAV only if the absolute acceleration exceeds 0.025g at some time during that interval. The earthquake records used in EPRI NP-5930 have been reanalyzed, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
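The windowed calculation can be sketched as follows (a minimal illustration of the second-by-second screening; sampling and integration details are assumptions):

```python
import numpy as np

def standardized_cav(accel_g, dt, threshold_g=0.025):
    """Standardized CAV in g-sec: the time history is split into
    one-second windows, and a window contributes its integral of |a|
    only if |a| exceeds threshold_g somewhere within that window."""
    accel_g = np.asarray(accel_g, dtype=float)
    per_sec = int(round(1.0 / dt))  # samples per one-second window
    cav = 0.0
    for start in range(0, len(accel_g), per_sec):
        window = np.abs(accel_g[start:start + per_sec])
        if window.max() > threshold_g:
            cav += window.sum() * dt  # rectangle-rule integral of |a|
    return cav
```

The screening step is what makes the result insensitive to record length: long stretches of low, nondamaging acceleration add nothing, so records of different durations can be compared against the same 0.16 g-sec threshold.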

  11. The AFGL absolute gravity program

    NASA Technical Reports Server (NTRS)

    Hammond, J. A.; Iliff, R. L.

    1978-01-01

A brief discussion of the AFGL's (Air Force Geophysics Laboratory) program in absolute gravity is presented. Support of outside work and in-house studies relating to gravity instrumentation are discussed. A description of the current transportable system is included and the latest results are presented. These results show good agreement with measurements at the AFGL site by an Italian system. The accuracy obtained by the transportable apparatus is better than 0.1 micrometer per second squared (10 microgal), and agreement with previous measurements is within the combined uncertainties of the measurements.

  12. Social aspects of clinical errors.

    PubMed

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors. PMID:19201405

  13. Transient absolute robustness in stochastic biochemical networks.

    PubMed

    Enciso, German A

    2016-08-01

    Absolute robustness allows biochemical networks to sustain a consistent steady-state output in the face of protein concentration variability from cell to cell. This property is structural and can be determined from the topology of the network alone regardless of rate parameters. An important question regarding these systems is the effect of discrete biochemical noise in the dynamical behaviour. In this paper, a variable freezing technique is developed to show that under mild hypotheses the corresponding stochastic system has a transiently robust behaviour. Specifically, after finite time the distribution of the output approximates a Poisson distribution, centred around the deterministic mean. The approximation becomes increasingly accurate, and it holds for increasingly long finite times, as the total protein concentrations grow to infinity. In particular, the stochastic system retains a transient, absolutely robust behaviour corresponding to the deterministic case. This result contrasts with the long-term dynamics of the stochastic system, which eventually must undergo an extinction event that eliminates robustness and is completely different from the deterministic dynamics. The transiently robust behaviour may be sufficient to carry out many forms of robust signal transduction and cellular decision-making in cellular organisms. PMID:27581485

  14. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
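The burst and gap statistics mentioned reduce to run-length encoding of a per-byte error flag stream. A minimal sketch (the flag convention, 1 = byte in error, is an assumption):

```python
from itertools import groupby

def burst_gap_stats(flags):
    """Run lengths of error bursts (runs of 1s) and good-data gaps
    (runs of 0s) in a per-byte error flag sequence."""
    bursts, gaps = [], []
    for value, run in groupby(flags):
        (bursts if value else gaps).append(sum(1 for _ in run))
    return bursts, gaps
```

Histograms of these run lengths are exactly the inputs a CIRC-decoder output-error-rate model needs, since interleaving performance depends on burst length rather than raw error rate.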

  15. Absolute Density Calibration Cell for Laser Induced Fluorescence Erosion Rate Measurements

    NASA Technical Reports Server (NTRS)

    Domonkos, Matthew T.; Stevens, Richard E.

    2001-01-01

    Flight qualification of ion thrusters typically requires testing on the order of 10,000 hours. Extensive knowledge of wear mechanisms and rates is necessary to establish design confidence prior to long duration tests. Consequently, real-time erosion rate measurements offer the potential both to reduce development costs and to enhance knowledge of the dependency of component wear on operating conditions. Several previous studies have used laser-induced fluorescence (LIF) to measure real-time, in situ erosion rates of ion thruster accelerator grids. Those studies provided only relative measurements of the erosion rate. In the present investigation, a molybdenum tube was resistively heated such that the evaporation rate yielded densities within the tube on the order of those expected from accelerator grid erosion. This work examines the suitability of the density cell as an absolute calibration source for LIF measurements, and the intrinsic error was evaluated.

  16. Absolute calibration of forces in optical tweezers

    NASA Astrophysics Data System (ADS)

    Dutra, R. S.; Viana, N. B.; Maia Neto, P. A.; Nussenzveig, H. M.

    2014-07-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past 15 years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spot, adapting frequently employed video microscopy techniques. Combined with interface spherical aberration, it reveals a previously unknown window of instability for trapping. Comparison with experimental data leads to an overall agreement within error bars, with no fitting, for a broad range of microsphere radii, from the Rayleigh regime to the ray optics one, for different polarizations and trapping heights, including all commonly employed parameter domains. Besides signaling full first-principles theoretical understanding of optical tweezers operation, the results may lead to improved instrument design and control over experiments, as well as to an extended domain of applicability, allowing reliable force measurements, in principle, from femtonewtons to nanonewtons.

  17. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  18. Cosmology with negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2016-08-01

    Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950s (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] has rekindled interest in negative temperatures and hinted at the possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < -1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.

  19. A Helium-Cooled Absolute Cavity Radiometer For Solar And Laboratory Irradiance Measurement

    NASA Astrophysics Data System (ADS)

    Foukal, P.; Miller, P.

    1983-09-01

    We describe the design and testing of a helium-cooled absolute radiometer (HCAR) developed for highly reproducible measurements of total solar irradiance and ultraviolet flux, and for laboratory standards uses. The receiver of this cryogenic radiometer is a blackened cone of pure copper whose temperature is sensed by a germanium resistance thermometer. During a duty cycle, radiant power input is compared to electrical heating in an accurate resistor wound on the receiver, as in conventional self-calibrating radiometers of the PACRAD and ACR type. But operation at helium temperatures enables us to achieve excellent radiative shielding between the receiver and the radiometer thermal background. This enables us to attain a sensitivity level of 10^-7 watts at 30 seconds integration time, at least 10 times better than achieved by room temperature cavities. The dramatic drop of copper specific heat at helium temperatures reduces the time constant for a given mass of receiver by a factor of 10^3. Together with other cryogenic materials properties such as electrical superconductivity and the high thermal conductivity of copper, this can be used to greatly reduce non-equivalence errors between electrical and radiant heating, that presently limit the absolute accuracy of radiometers to approximately 0.2%. Absolute accuracy of better than 0.01% has been achieved with a similar cryogenic radiometer in laboratory measurements of the Stefan-Boltzmann constant at NPL in the U.K. Electrical and radiometric tests conducted so far on our prototype indicate that comparable accuracy and long-term reproducibility can be achieved in a versatile instrument of manageable size for Shuttle flight and laboratory standards uses. This work is supported at AER under NOAA contract NA8ORAC00204 and NSF grant DMR-8260273.

  20. Global absolute gravity reference system as replacement of IGSN 71

    NASA Astrophysics Data System (ADS)

    Wilmes, Herbert; Wziontek, Hartmut; Falk, Reinhard

    2015-04-01

    The determination of precise gravity field parameters is of great importance in a period in which earth sciences are achieving the necessary accuracy to monitor and document global change processes. This is the reason why experts from geodesy and metrology joined in a successful cooperation to make absolute gravity observations traceable to SI quantities, to improve the metrological kilogram definition and to monitor mass movements and the smallest height changes for geodetic and geophysical applications. The international gravity datum is still defined by the International Gravity Standardization Net adopted in 1971 (IGSN 71). The network is based upon pendulum and spring gravimeter observations taken in the 1950s and 1960s, supported by the early free-fall absolute gravimeters. Its gravity values agreed in every case to better than 0.1 mGal. Today, more than 100 absolute gravimeters are in use worldwide. The series of repeated international comparisons confirms the traceability of absolute gravity measurements to SI quantities and confirms the degree of equivalence of the gravimeters on the order of a few µGal. For applications in geosciences where e.g. gravity changes over time need to be analyzed, the temporal stability of an absolute gravimeter is most important. Therefore, the proposition is made to replace the IGSN 71 by an up-to-date gravity reference system which is based upon repeated absolute gravimeter comparisons and a global network of well controlled gravity reference stations.

  1. Absolute Electron Extraction Efficiency of Liquid Xenon

    NASA Astrophysics Data System (ADS)

    Kamdin, Katayun; Mizrachi, Eli; Morad, James; Sorensen, Peter

    2016-03-01

    Dual phase liquid/gas xenon time projection chambers (TPCs) currently set the world's most sensitive limits on weakly interacting massive particles (WIMPs), a favored dark matter candidate. These detectors rely on extracting electrons from liquid xenon into gaseous xenon, where they produce proportional scintillation. The proportional scintillation from the extracted electrons serves to internally amplify the WIMP signal; even a single extracted electron is detectable. Credible dark matter searches can proceed with electron extraction efficiency (EEE) lower than 100%. However, electrons systematically left at the liquid/gas boundary are a concern. Possible effects include spontaneous single or multi-electron proportional scintillation signals in the gas, or charging of the liquid/gas interface or detector materials. Understanding EEE is consequently a serious concern for this class of rare event search detectors. Previous EEE measurements have mostly been relative, not absolute, assuming efficiency plateaus at 100%. I will present an absolute EEE measurement with a small liquid/gas xenon TPC test bed located at Lawrence Berkeley National Laboratory.

  2. Absolute/convective instability of planar viscoelastic jets

    NASA Astrophysics Data System (ADS)

    Ray, Prasun K.; Zaki, Tamer A.

    2015-01-01

    Spatiotemporal linear stability analysis is used to investigate the onset of local absolute instability in planar viscoelastic jets. The influence of viscoelasticity in dilute polymer solutions is modeled with the FENE-P constitutive equation which requires the specification of a non-dimensional polymer relaxation time (the Weissenberg number, We), the maximum polymer extensibility, L, and the ratio of solvent and solution viscosities, β. A two-parameter family of velocity profiles is used as the base state with the parameter, S, controlling the amount of co- or counter-flow while N-1 sets the thickness of the jet shear layer. We examine how the variation of these fluid and flow parameters affects the minimum value of S at which the flow becomes locally absolutely unstable. Initially setting the Reynolds number to Re = 500, we find that the first varicose jet-column mode dictates the presence of absolute instability, and increasing the Weissenberg number produces important changes in the nature of the instability. The region of absolute instability shifts towards thin shear layers, and the amount of back-flow needed for absolute instability decreases (i.e., the influence of viscoelasticity is destabilizing). Additionally, when We is sufficiently large and N-1 is sufficiently small, single-stream jets become absolutely unstable. Numerical experiments with approximate equations show that both the polymer and solvent contributions to the stress become destabilizing when the scaled shear rate, η = We (dŪ1/dx2)/L (where dŪ1/dx2 is the base-state velocity gradient), is sufficiently large. These qualitative trends are largely unchanged when the Reynolds number is reduced; however, the relative importance of the destabilizing stresses increases tangibly. Consequently, absolute instability is substantially enhanced, and single-stream jets become absolutely unstable over a sizable portion of the parameter space.

  3. The absolute radiometric calibration of the advanced very high resolution radiometer

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Teillet, P. M.; Ding, Y.

    1988-01-01

    The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.

  4. Error analysis in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which often are brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  5. Sounding rocket measurement of the absolute solar EUV flux utilizing a silicon photodiode

    NASA Technical Reports Server (NTRS)

    Ogawa, H. S.; Mcmullin, D.; Judge, D. L.; Canfield, L. R.

    1990-01-01

    A newly developed stable and high quantum efficiency silicon photodiode was used to obtain an accurate measurement of the integrated absolute magnitude of the solar extreme UV photon flux in the spectral region between 50 and 800 A. The adjusted daily 10.7-cm solar radio flux and sunspot number were 168.4 and 121, respectively. The unattenuated absolute value of the solar EUV flux at 1 AU in the specified wavelength region was 6.81 x 10^10 photons/sq cm per s. Based on a nominal probable error of 7 percent for National Institute of Standards and Technology detector efficiency measurements in the 50- to 500-A region (5 percent on longer wavelength measurements between 500 and 1216 A), and based on experimental errors associated with the present rocket instrumentation and analysis, a conservative total error estimate of about 14 percent is assigned to the absolute integral solar flux obtained.

  6. Statistical errors in Monte Carlo estimates of systematic errors

    NASA Astrophysics Data System (ADS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
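The unisim/multisim comparison can be illustrated with a toy numerical experiment. In the sketch below the linear sensitivities, event counts, and number of multisim runs are all hypothetical; it only shows how each scheme recovers the total systematic variance of a deliberately linear model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear sensitivities of the observable to two systematic
# parameters s1, s2, each with unit (1-sigma) prior uncertainty.
a = np.array([0.5, 1.2])

def mc_run(shifts, n_events=100_000):
    """One MC run at the given systematic shifts; statistical noise ~ 1/sqrt(N)."""
    return shifts @ a + rng.normal(0.0, 1.0 / np.sqrt(n_events))

nominal = mc_run(np.zeros(2))

# unisim: one MC run per parameter, shifted by +1 standard deviation
var_unisim = sum((mc_run(np.eye(2)[i]) - nominal) ** 2 for i in range(2))

# multisim: every run varies all parameters, drawn from their priors
draws = rng.normal(size=(500, 2))
var_multisim = np.var([mc_run(s) - nominal for s in draws])

true_var = float(np.sum(a ** 2))  # exact total systematic variance (linear model)
print(var_unisim, var_multisim, true_var)
```

With statistical noise this small, both estimates land near the exact value of 1.69; shrinking n_events inflates the squared unisim shifts first, in line with the note's conclusion that multisim is preferable when the MC statistical error exceeds an individual systematic error.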

  7. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
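The benefit of writing the likelihood in parallax space, so that low-accuracy and negative parallaxes still contribute, can be shown with a minimal simulation. The sample size, magnitude range, parallax error, and grid below are hypothetical, and the sketch fits only a single mean absolute magnitude rather than the paper's full model with dispersion and selection effects:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample: stars sharing one absolute magnitude M_true, observed
# with parallax errors large enough that some measured parallaxes are negative.
n, M_true, sigma = 400, 5.0, 0.006            # parallax error in arcsec
m = rng.uniform(8.0, 11.0, n)                 # apparent magnitudes
p_true = 10 ** ((M_true - m - 5.0) / 5.0)     # from M = m + 5 + 5*log10(p)
p_obs = p_true + rng.normal(0.0, sigma, n)

# Averaging per-star M = m + 5 + 5*log10(p_obs) fails wherever p_obs <= 0.
# A Gaussian likelihood in parallax space uses every star, negatives included:
M_grid = np.linspace(3.0, 7.0, 2001)
p_pred = 10 ** ((M_grid[:, None] - m[None, :] - 5.0) / 5.0)
nll = np.sum((p_obs - p_pred) ** 2, axis=1) / (2.0 * sigma ** 2)
M_fit = M_grid[np.argmin(nll)]
print(M_fit)
```

Because the error model lives in the observed (parallax) domain rather than the magnitude domain, no star has to be censored, which is the essence of avoiding the selection bias the abstract describes.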

  8. Flow rate calibration for absolute cell counting rationale and design.

    PubMed

    Walker, Clare; Barnett, David

    2006-05-01

    There is a need for absolute leukocyte enumeration in the clinical setting, and accurate, reliable (and affordable) technology to determine absolute leukocyte counts has been developed. Such technology includes single platform and dual platform approaches. Derivations of these counts commonly incorporate the addition of a known number of latex microsphere beads to a blood sample, although it has been suggested that the addition of beads to a sample may only be required to act as an internal quality control procedure for assessing the pipetting error. This unit provides the technical details for undertaking flow rate calibration that obviates the need to add reference beads to each sample. It is envisaged that this report will provide the basis for subsequent clinical evaluations of this novel approach. PMID:18770842
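A minimal numeric sketch contrasts the two single-platform derivations of an absolute count; every figure below is hypothetical:

```python
# Hypothetical acquisition figures for illustration only.
cell_events = 25_000        # leukocyte events acquired
bead_events = 5_000         # reference-bead events acquired
beads_per_ul = 1_000        # known bead concentration in the stained sample

# Bead method: absolute count from the cell:bead event ratio
cells_per_ul_beads = cell_events / bead_events * beads_per_ul

# Flow rate calibration: a calibrated volumetric delivery rate replaces the
# beads, which are then needed only as an internal pipetting-quality check
flow_rate_ul_per_min = 25.0     # hypothetical calibrated flow rate
acquisition_min = 1.0
dilution_factor = 5.0           # hypothetical sample dilution
cells_per_ul_flow = (cell_events / (flow_rate_ul_per_min * acquisition_min)
                     * dilution_factor)

print(cells_per_ul_beads, cells_per_ul_flow)
```

Note the asymmetry: in the bead method the dilution cancels because cells and beads are diluted alike, whereas the flow-rate method must track the dilution and acquisition volume explicitly.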

  9. Photometer calibration error using extended standard sources

    NASA Technical Reports Server (NTRS)

    Torr, M. R.; Hays, P. B.; Kennedy, B. C.; Torr, D. G.

    1976-01-01

    As part of a project to compare measurements of the night airglow made by the visible airglow experiment on the Atmospheric Explorer-C satellite, the standard light sources of several airglow observatories were compared with the standard source used in the absolute calibration of the satellite photometer. In the course of the comparison, it has been found that serious calibration errors (up to a factor of two) can arise when a calibration source with a reflecting surface is placed close to an interference filter. For reliable absolute calibration, the source should be located at a distance of at least five filter radii from the interference filter.

  10. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  11. Assessment of absolute added correlative coding in optical intensity modulation and direct detection channels

    NASA Astrophysics Data System (ADS)

    Dong-Nhat, Nguyen; Elsherif, Mohamed A.; Malekmohammadi, Amin

    2016-06-01

    The performance of absolute added correlative coding (AACC) modulation format with direct detection has been numerically and analytically reported, targeting metro data center interconnects. Hereby, the focus lies on the performance of the bit error rate, noise contributions, spectral efficiency, and chromatic dispersion tolerance. The signal space model of AACC, where the average electrical and optical power expressions are derived for the first time, is also delineated. The proposed modulation format was also compared to other well-known signaling, such as on-off-keying (OOK) and four-level pulse-amplitude modulation, at the same bit rate in a directly modulated vertical-cavity surface-emitting laser-based transmission system. The comparison results show a clear advantage of AACC in achieving longer fiber delivery distance due to the higher dispersion tolerance.

  12. Density dependence and climate effects in Rocky Mountain elk: an application of regression with instrumental variables for population time series with sampling error.

    PubMed

    Creel, Scott; Creel, Michael

    2009-11-01

    1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results
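The instrumental-variables idea can be sketched with a simulated Gompertz-type series in which a twice-lagged count instruments for the error-laden density regressor. The coefficients, noise levels, and series length below are hypothetical and this is not the authors' specification, but it reproduces the qualitative point: treating sampling error as process error overstates density dependence, while IV does not.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 20_000   # hypothetical series length (far longer than the elk data)

# Latent log abundance with Gompertz-type density dependence
b_true = -0.3
n = np.zeros(T)
for t in range(T - 1):
    n[t + 1] = n[t] + b_true * n[t] + rng.normal(0.0, 0.2)   # process error

obs = n + rng.normal(0.0, 0.3, T)   # log counts with sampling error
y = obs[2:] - obs[1:-1]             # observed growth rates
x = obs[1:-1]                       # error-laden density regressor
z = obs[:-2]                        # instrument: twice-lagged count

X = np.column_stack([np.ones_like(x), x])
Z = np.column_stack([np.ones_like(z), z])

# OLS shares the sampling error between regressor and response, biasing the
# slope away from zero; the just-identified IV estimator (Z'X)^-1 Z'y does not.
b_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)[1]
print(b_ols, b_iv)
```

The instrument works because the twice-lagged count is correlated with the true density (through the autoregression) but not with the sampling error in the current observation.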

  13. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we obtain that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover, we show that for a majority of functions, access to general nonsignaling resources boosts the success probability twofold relative to classical strategies when the number of outputs is large enough. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  14. Absolute oral bioavailability of ciprofloxacin.

    PubMed

    Drusano, G L; Standiford, H C; Plaisance, K; Forrest, A; Leslie, J; Caldwell, J

    1986-09-01

    We evaluated the absolute bioavailability of ciprofloxacin, a new quinoline carboxylic acid, in 12 healthy male volunteers. Doses of 200 mg were given to each of the volunteers in a randomized, crossover manner 1 week apart orally and as a 10-min intravenous infusion. Half-lives (mean +/- standard deviation) for the intravenous and oral administration arms were 4.2 +/- 0.77 and 4.11 +/- 0.74 h, respectively. The serum clearance rate averaged 28.5 +/- 4.7 liters/h per 1.73 m2 for the intravenous administration arm. The renal clearance rate accounted for approximately 60% of the corresponding serum clearance rate and was 16.9 +/- 3.0 liters/h per 1.73 m2 for the intravenous arm and 17.0 +/- 2.86 liters/h per 1.73 m2 for the oral administration arm. Absorption was rapid, with peak concentrations in serum occurring at 0.71 +/- 0.15 h. Bioavailability, defined as the ratio of the area under the curve from 0 h to infinity for the oral to the intravenous dose, was 69 +/- 7%. We conclude that ciprofloxacin is rapidly absorbed and reliably bioavailable in these healthy volunteers. Further studies with ciprofloxacin should be undertaken in target patient populations under actual clinical circumstances. PMID:3777908
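Absolute bioavailability is the ratio of the oral to the intravenous AUC from 0 h to infinity (the doses here are equal, so no dose normalization is needed). The concentration-time profiles below are hypothetical and only illustrate the calculation, reusing the reported ~4.2 h half-life for the log-linear tail:

```python
import numpy as np

# Hypothetical serum concentration-time profiles (mg/L); not the study's data.
t      = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])    # hours
c_iv   = np.array([0.0, 2.80, 2.40, 2.00, 1.50, 1.05, 0.54, 0.28])
c_oral = np.array([0.0, 1.00, 1.41, 1.58, 1.12, 0.75, 0.37, 0.19])

def auc_0_inf(t, c, half_life=4.2):
    """Trapezoidal AUC to the last sample, plus the log-linear tail C_last/k."""
    k = np.log(2.0) / half_life       # elimination rate constant, 1/h
    trapezoids = np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))
    return trapezoids + c[-1] / k

f = auc_0_inf(t, c_oral) / auc_0_inf(t, c_iv)   # absolute bioavailability
print(round(f, 2))
```

With these invented profiles the ratio happens to come out near the study's reported 69%; real analyses would also dose-normalize and propagate the inter-subject variability.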

  15. Absolute Instability in Coupled-Cavity TWTs

    NASA Astrophysics Data System (ADS)

    Hung, D. M. H.; Rittersdorf, I. M.; Zhang, Peng; Lau, Y. Y.; Simon, D. H.; Gilgenbach, R. M.; Chernin, D.; Antonsen, T. M., Jr.

    2014-10-01

    This paper will present results of our analysis of absolute instability in a coupled-cavity traveling wave tube (TWT). The structure modes at the lower and upper band edges are each approximated by a hyperbola in the (omega, k) plane. When the Briggs-Bers criterion is applied, a threshold current for onset of absolute instability is observed at the upper band edge, but not the lower band edge. The nonexistence of absolute instability at the lower band edge is mathematically similar to the nonexistence of absolute instability that we recently demonstrated for a dielectric TWT. The existence of absolute instability at the upper band edge is mathematically similar to the existence of absolute instability in a gyrotron traveling wave amplifier. These interesting observations will be discussed, and the practical implications will be explored. This work was supported by AFOSR, ONR, and L-3 Communications Electron Devices.

  16. Testing and evaluation of thermal cameras for absolute temperature measurement

    NASA Astrophysics Data System (ADS)

    Chrzanowski, Krzysztof; Fischer, Joachim; Matyszkiel, Robert

    2000-09-01

    The accuracy of temperature measurement is the most important criterion for the evaluation of thermal cameras used in applications requiring absolute temperature measurement. All the main international metrological organizations currently propose a parameter called uncertainty as a measure of measurement accuracy. We propose a set of parameters for the characterization of thermal measurement cameras. It is shown that if these parameters are known, then it is possible to determine the uncertainty of temperature measurement due to only the internal errors of these cameras. Values of this uncertainty can be used as an objective criterion for comparisons of different thermal measurement cameras.

  17. a Portable Apparatus for Absolute Measurements of the Earth's Gravity.

    NASA Astrophysics Data System (ADS)

    Zumberge, Mark Andrew

    We have developed a new, portable apparatus for making absolute measurements of the acceleration due to the earth's gravity. We use the method of interferometrically determining the acceleration of a freely falling corner-cube prism. The falling object is surrounded by a chamber which is driven vertically inside a fixed vacuum chamber. This falling chamber is servoed to track the falling corner-cube to shield it from drag due to background gas. In addition, the drag-free falling chamber removes the need for a magnetic release, shields the falling object from electrostatic forces, and provides a means of both gently arresting the falling object and quickly returning it to its start position, to allow rapid acquisition of data. A synthesized long period isolation device reduces the noise due to seismic oscillations. A new type of Zeeman laser is used as the light source in the interferometer, and is compared with the wavelength of an iodine stabilized laser. The times of occurrence of 45 interference fringes are measured to within 0.2 nsec over a 20 cm drop and are fit to a quadratic by an on-line minicomputer. 150 drops can be made in ten minutes resulting in a value of g having a precision of 3 to 6 parts in 10^9. Systematic errors have been determined to be less than 5 parts in 10^9 through extensive tests. Three months of gravity data have been obtained with a reproducibility ranging from 5 to 10 parts in 10^9. The apparatus has been designed to be easily portable. Field measurements are planned for the immediate future. An accuracy of 6 parts in 10^9 corresponds to a height sensitivity of 2 cm. Vertical motions in the earth's crust and tectonic density changes that may precede earthquakes are to be investigated using this apparatus.
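The reduction from timed interference fringes to g amounts to a quadratic fit of position against time. The sketch below is a hypothetical simulation of that step: the local gravity value, release velocity, and fringe decimation factor are invented, with only the 45 timed fringes, ~20 cm drop, and 0.2 ns timing resolution taken from the description above.

```python
import numpy as np

rng = np.random.default_rng(3)

g_true = 9.8123456            # hypothetical local gravity, m/s^2
wavelength = 632.8e-9         # HeNe laser line, m
fringe = wavelength / 2.0     # one interference fringe per half-wavelength of fall

# Positions of the 45 timed fringes: every 14000th fringe over ~20 cm
# (the decimation factor and release velocity are hypothetical).
z = np.arange(1, 46) * 14_000 * fringe
v0 = 0.05                     # m/s

# Invert z = v0*t + g*t^2/2 for the crossing times, then add 0.2 ns timing noise
t = (-v0 + np.sqrt(v0 ** 2 + 2.0 * g_true * z)) / g_true
t = t + rng.normal(0.0, 0.2e-9, t.size)

# Fit z(t) to a quadratic; gravity is twice the leading coefficient
c2, c1, c0 = np.polyfit(t, z, 2)
g_fit = 2.0 * c2
print(g_fit)
```

Even this naive least-squares fit recovers g to well under a part per million from a single simulated drop, which makes the quoted parts-in-10^9 precision from 150 averaged drops plausible; the real instrument's accuracy is limited by systematics, not the fit.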

  18. Absolute negative mobility of interacting Brownian particles

    NASA Astrophysics Data System (ADS)

    Ou, Ya-li; Hu, Cai-tian; Wu, Jian-chun; Ai, Bao-quan

    2015-12-01

    Transport of interacting Brownian particles in a periodic potential is investigated in the presence of an ac force and a dc force. From Brownian dynamic simulations, we find that both the interaction between particles and the thermal fluctuations play key roles in the absolute negative mobility (the particle noisily moves backwards against a small constant bias). In the absence of the interaction, there is only one region where the absolute negative mobility occurs. In the presence of the interaction, the absolute negative mobility may appear in multiple regions. A weak interaction can be helpful for the absolute negative mobility, while a strong interaction has a destructive impact on it.

  19. Direct comparisons between absolute and relative geomagnetic paleointensities: Absolute calibration of a relative paleointensity stack

    NASA Astrophysics Data System (ADS)

    Mochizuki, N.; Yamamoto, Y.; Hatakeyama, T.; Shibuya, H.

    2013-12-01

    Absolute geomagnetic paleointensities (APIs) have been estimated from igneous rocks, while relative paleomagnetic intensities (RPIs) have been reported from sediment cores. These two datasets have been treated separately, as correlations between APIs and RPIs are difficult on account of age uncertainties. High-resolution RPI stacks have been constructed from globally distributed sediment cores with high sedimentation rates. Previous studies often assumed that the RPI stacks have a linear relationship with geomagnetic axial dipole moments, and calibrated the RPI values to API values. However, the assumption of a linear relationship between APIs and RPIs has not been evaluated. Also, a quantitative calibration method for the RPI is lacking. We present a procedure for directly comparing API and RPI stacks, thus allowing reliable calibrations of RPIs. Direct comparisons between APIs and RPIs were conducted with virtually no associated age errors using both tephrochronologic correlations and RPI minima. Using the stratigraphic positions of tephra layers in oxygen isotope stratigraphic records, we directly compared the RPIs and APIs reported from welded tuffs contemporaneously extruded with the tephra layers. In addition, RPI minima during geomagnetic reversals and excursions were compared with APIs corresponding to the reversals and excursions. The comparison of APIs and RPIs at these exact points allowed a reliable calibration of the RPI values. We applied this direct comparison procedure to the global RPI stack PISO-1500. For six independent calibration points, virtual axial dipole moments (VADMs) from the corresponding APIs and RPIs of the PISO-1500 stack showed a near-linear relationship. On the basis of the linear relationship, RPIs of the stack were successfully calibrated to the VADMs. The direct comparison procedure provides an absolute calibration method that will contribute to the recovery of temporal variations and distributions of geomagnetic axial dipole moments.

  20. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  1. Absolute irradiance of the Moon for on-orbit calibration

    USGS Publications Warehouse

    Stone, T.C.; Kieffer, H.H.

    2002-01-01

    The recognized need for on-orbit calibration of remote sensing imaging instruments drives the ROLO project effort to characterize the Moon for use as an absolute radiance source. For over 5 years the ground-based ROLO telescopes have acquired spatially-resolved lunar images in 23 VNIR (Moon diameter ~500 pixels) and 9 SWIR (~250 pixels) passbands at phase angles within ±90 degrees. A numerical model for lunar irradiance has been developed which fits hundreds of ROLO images in each band, corrected for atmospheric extinction and calibrated to absolute radiance, then integrated to irradiance. The band-coupled extinction algorithm uses absorption spectra of several gases and aerosols derived from MODTRAN to fit time-dependent component abundances to nightly observations of standard stars. The absolute radiance scale is based upon independent telescopic measurements of the star Vega. The fitting process yields uncertainties in lunar relative irradiance over small ranges of phase angle and the full range of lunar libration well under 0.5%. A larger source of uncertainty enters in the absolute solar spectral irradiance, especially in the SWIR, where solar models disagree by up to 6%. Results of ROLO model direct comparisons to spacecraft observations demonstrate the ability of the technique to track sensor responsivity drifts to sub-percent precision. Intercomparisons among instruments provide key insights into both calibration issues and the absolute scale for lunar irradiance.
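The basic extinction correction from standard-star observations can be illustrated with a single-band Langley fit. The real ROLO algorithm is band-coupled and fits per-component gas and aerosol abundances from MODTRAN spectra, so the one-coefficient model and all numbers below are only a hypothetical sketch of the underlying idea:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical standard-star observations in one band: instrumental magnitude
# vs airmass, with a single grey extinction coefficient.
airmass = np.array([1.0, 1.3, 1.7, 2.2, 2.9])
k_true, m0_true = 0.18, 7.43        # mag/airmass, exo-atmospheric magnitude
m_star = m0_true + k_true * airmass + rng.normal(0.0, 0.005, airmass.size)

# Langley fit: regress magnitude on airmass and extrapolate to airmass zero
k_fit, m0_fit = np.polyfit(airmass, m_star, 1)

# Correct a lunar measurement taken at airmass 1.5 to outside the atmosphere
moon_obs = 4.80                     # hypothetical
moon_toa = moon_obs - k_fit * 1.5
print(round(k_fit, 3), round(moon_toa, 3))
```

The nightly star observations pin down the extinction coefficient, which is then applied to the lunar images taken through the same atmosphere; tying m0 to an absolute scale (Vega, in ROLO's case) is what converts the corrected magnitudes to absolute radiance.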

  2. Urey: to measure the absolute age of Mars

    NASA Technical Reports Server (NTRS)

    Randolph, J. E.; Plescia, J.; Bar-Cohen, Y.; Bartlett, P.; Bickler, D.; Carlson, R.; Carr, G.; Fong, M.; Gronroos, H.; Guske, P. J.; Herring, M.; Javadi, H.; Johnson, D. W.; Larson, T.; Malaviarachchi, K.; Sherrit, S.; Stride, S.; Trebi-Ollennu, A.; Warwick, R.

    2003-01-01

    UREY, a proposed NASA Mars Scout mission, will for the first time measure the absolute age of an identified igneous rock formation on Mars. By extension to relatively older and younger rock formations dated by remote sensing, these results will enable a new and better understanding of Martian geologic history.

  3. Absolute configurations of zingiberenols isolated from ginger (Zingiber officinale) rhizomes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The sesquiterpene alcohol zingiberenol, or 1,10-bisaboladien-3-ol, was isolated some time ago from ginger, Zingiber officinale, rhizomes, but its absolute configuration had not been determined. With three chiral centers present in the molecule, zingiberenol can exist in eight stereoisomeric forms. ...

  4. [Error factors in spirometry].

    PubMed

    Quadrelli, S A; Montiel, G C; Roncoroni, A J

    1994-01-01

    Spirometry is the most frequently used method to estimate pulmonary function in the clinical laboratory. It is important to comply with technical requisites so as to approximate the true values sought, as well as to interpret the results adequately. Recommendations are made to: 1--establish quality control; 2--define abnormality; 3--classify the change from normal and its degree; 4--define reversibility. In relation to quality control, several criteria are pointed out, such as end of the test, back-extrapolation and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised. Inspection of graphical records of the test is mandatory. The limitations of the common use of 80% of predicted values to establish abnormality are stressed. The reasons for employing 95% confidence limits are detailed. It is important to select the reference values equation (in view of the differences in predicted values). It is advisable to validate the selection with local population normal values. In relation to the definition of the defect as restrictive or obstructive, the limitations of vital capacity (VC) to establish restriction when obstruction is also present are defined, as are the limitations of maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of forced expiratory volume in 1 sec (VEF1) and the difficulties with other indicators (CVF, FMF 25-75, VEF1/CVF) to estimate reversibility after bronchodilators are evaluated, and the value of the different methods used to define reversibility (% of change from the initial value, absolute change, or % of predicted) is discussed. To be valuable, clinical spirometric studies should be performed with the same technical rigour as other, more complex studies. PMID:7990690
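
The abstract's three competing definitions of bronchodilator reversibility can be compared numerically. The helper below is an illustrative sketch (function and variable names are my own, not from the paper):

```python
def reversibility_measures(fev1_pre, fev1_post, fev1_predicted):
    """Express the post-bronchodilator change in FEV1 (litres) in the
    three ways mentioned in the abstract. Names are illustrative."""
    delta = fev1_post - fev1_pre
    return {
        "percent_of_initial": 100.0 * delta / fev1_pre,
        "absolute_change_l": delta,
        "percent_of_predicted": 100.0 * delta / fev1_predicted,
    }

# Hypothetical patient: FEV1 improves from 2.00 L to 2.30 L; predicted FEV1 is 3.50 L
m = reversibility_measures(2.00, 2.30, 3.50)
print(round(m["percent_of_initial"], 1))    # 15.0 (% of initial value)
print(round(m["absolute_change_l"], 2))     # 0.3  (absolute change, L)
print(round(m["percent_of_predicted"], 1))  # 8.6  (% of predicted)
```

The same 0.30 L improvement looks different under each definition, which is exactly why the abstract stresses that the choice of method matters.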

  5. Inequalities, Absolute Value, and Logical Connectives.

    ERIC Educational Resources Information Center

    Parish, Charles R.

    1992-01-01

    Presents an approach to the concept of absolute value that alleviates students' problems with the traditional definition and the use of logical connectives in solving related problems. Uses a model that maps numbers from a horizontal number line to a vertical ray originating from the origin. Provides examples solving absolute value equations and…

  6. Absolute optical metrology : nanometers to kilometers

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.

    2005-01-01

    We provide an overview of the developments in the field of high-accuracy absolute optical metrology with emphasis on space-based applications. Specific work on the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor is described along with novel applications of the sensor.

  7. Monolithically integrated absolute frequency comb laser system

    DOEpatents

    Wanke, Michael C.

    2016-07-12

    Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.

  8. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
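
As a concrete illustration of the comparison the paper draws (my own sketch, not from the paper), both measures are only a few lines of Python:

```python
def mean_absolute_deviation(xs):
    """Average absolute distance of the values from their mean."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def standard_deviation(xs):
    """Population standard deviation, for comparison."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

data = [2, 4, 4, 4, 5, 5, 7, 9]        # mean = 5
print(mean_absolute_deviation(data))   # 1.5
print(standard_deviation(data))        # 2.0
```

Because deviations are not squared, a single extreme score inflates the mean absolute deviation less than it inflates the standard deviation, which is the tolerance of extreme values the paper refers to.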

  9. Investigating Absolute Value: A Real World Application

    ERIC Educational Resources Information Center

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…
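
A hypothetical instance of the kind of function the article discusses: a sum of absolute values of linear functions is piecewise linear, and with unit slopes its minimum falls at the median of the breakpoints. The numeric representation can be explored on a grid:

```python
# f(x) = |x - 1| + |x - 4| + |x - 8| is piecewise linear; with unit
# slopes its minimum lies at the median breakpoint, x = 4.
def f(x):
    return abs(x - 1) + abs(x - 4) + abs(x - 8)

xs = [i / 10 for i in range(101)]  # grid on [0, 10]
best = min(xs, key=f)
print(best, f(best))  # 4.0 7.0
```

Plotting f over the grid would show the graphical representation: line segments whose slope increases by 2 at each breakpoint.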

  10. Absolute Income, Relative Income, and Happiness

    ERIC Educational Resources Information Center

    Ball, Richard; Chernova, Kateryna

    2008-01-01

    This paper uses data from the World Values Survey to investigate how an individual's self-reported happiness is related to (i) the level of her income in absolute terms, and (ii) the level of her income relative to other people in her country. The main findings are that (i) both absolute and relative income are positively and significantly…

  11. Absolute length measurement using manually decided stereo correspondence for endoscopy

    NASA Astrophysics Data System (ADS)

    Sasaki, M.; Koishi, T.; Nakaguchi, T.; Tsumura, N.; Miyake, Y.

    2009-02-01

    In recent years, various kinds of endoscopes have been developed and are widely used for endoscopic biopsy, endoscopic surgery and endoscopy. The size of an inflammatory part is important for determining the method of medical treatment. However, it is not easy to measure the absolute size of inflammatory parts such as ulcers, cancers and polyps from the endoscopic image, so a way of measuring their size during endoscopy is needed. In this paper, we propose a new method to measure the absolute length of a straight line between two arbitrary points, based on photogrammetry, using an endoscope with a magnetic tracking sensor that gives the camera position and angle. In this method, the stereo-corresponding points between two endoscopic images are determined by the endoscopist without any apparatus for projection or calculation to find the stereo correspondences; the absolute length can then be calculated on the basis of photogrammetry. An evaluation experiment using a checkerboard showed that the measurement errors are less than 2% of the target length when the baseline is sufficiently long.

  12. Absolute instability of the Gaussian wake profile

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Aggarwal, Arun K.

    1987-01-01

    Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.

  13. On the effect of distortion and dispersion in fringe signal of the FG5 absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Křen, Petr; Pálinkáš, Vojtech; Mašika, Pavel

    2016-02-01

    The knowledge of absolute gravity acceleration at the level of 1 × 10⁻⁹ is needed in geosciences (e.g. for monitoring crustal deformations and mass transports) and in metrology for watt balance experiments related to the new SI definition of the unit of kilogram. The gravity reference, which results from the international comparisons held with the participation of numerous absolute gravimeters, is significantly affected by the qualities of the instruments prevailing in the comparisons (i.e. at present, FG5 gravimeters). Therefore, it is necessary to thoroughly investigate all instrumental (particularly systematic) errors. This paper deals with systematic errors of the FG5#215 coming from the distorted fringe signal and from the electronic dispersion at several electronic components including cables. In order to investigate these effects, we developed a new experimental system for acquiring and analysing the data in parallel to the FG5 built-in system. The new system, based on an analogue-to-digital converter with digital waveform processing using an FFT swept band-pass filter, is developed and tested on the FG5#215 gravimeter equipped with a new fast analogue output. The system is characterized by a low timing jitter and digital handling of the distorted swept signal, with determination of zero-crossings for the fundamental frequency sweep and also for its harmonics, and can be used for any gravimeter based on laser interferometry. A comparison of the original FG5 system and the experimental system is provided in terms of g-values, residuals and additional measurements/models. Moreover, an advanced approach to the solution of the free-fall motion is presented, which allows a non-linear gravity change with height to be taken into account.
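
Both the built-in and the experimental system ultimately reduce the fringe signal to zero-crossing times. The sketch below shows generic linear-interpolated zero-crossing detection on a sampled signal (my own minimal illustration, not the FFT swept band-pass pipeline described in the abstract):

```python
import math

def zero_crossings(samples, dt):
    """Return linearly interpolated zero-crossing times of a sampled signal."""
    times = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        if a == 0.0:
            times.append(i * dt)                    # sample sits exactly on zero
        elif a * b < 0:
            times.append((i + a / (a - b)) * dt)    # linear interpolation
    return times

# A 5 Hz test tone sampled at 1 kHz crosses zero every 100 ms
tone = [math.sin(2 * math.pi * 5 * i * 0.001) for i in range(400)]
print([round(t, 3) for t in zero_crossings(tone, 0.001)])  # [0.0, 0.1, 0.2, 0.3]
```

In a free-fall gravimeter the analogous timestamps are fitted against the trajectory model, so sub-sample interpolation accuracy and timing jitter feed directly into the g-value.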

  14. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.

  15. Systematic errors in the simulation of the Asian summer monsoon: the role of rainfall variability on a range of time and space scales

    NASA Astrophysics Data System (ADS)

    Martin, Gill; Levine, Richard; Klingaman, Nicholas; Bush, Stephanie; Turner, Andrew; Woolnough, Steven

    2015-04-01

    Despite considerable efforts worldwide to improve model simulations of the Asian summer monsoon, significant biases still remain in climatological seasonal mean rainfall distribution, timing of the onset, and northward and eastward extent of the monsoon domain (Sperber et al., 2013). Many modelling studies have shown sensitivity to convection and boundary layer parameterization, cloud microphysics and land surface properties, as well as model resolution. Here we examine the problems in representing short-timescale rainfall variability (related to convection parameterization), problems in representing synoptic-scale systems such as monsoon depressions (related to model resolution), and the relationship of each of these with longer-term systematic biases. Analysis of the spatial distribution of rainfall intensity on a range of timescales ranging from ~30 minutes to daily, in the MetUM and in observations (where available), highlights how rainfall biases in the South Asian monsoon region on different timescales in different regions can be achieved in models through a combination of the incorrect frequency and/or intensity of rainfall. Over the Indian land area, the typical dry bias is related to sub-daily rainfall events being too infrequent, despite being too intense when they occur. In contrast, the wet bias regions over the equatorial Indian Ocean are mainly related to too frequent occurrence of lower-than-observed 3-hourly rainfall accumulations which result in too frequent occurrence of higher-than-observed daily rainfall accumulations. This analysis sheds light on the model deficiencies behind the climatological seasonal mean rainfall biases that many models exhibit in this region. Changing physical parameterizations alters this behaviour, with associated adjustments in the climatological rainfall distribution, although the latter is not always improved (Bush et al., 2014). This suggests a more complex interaction between the diabatic heating and the large

  16. Impaired rapid error monitoring but intact error signaling following rostral anterior cingulate cortex lesions in humans

    PubMed Central

    Maier, Martin E.; Di Gregorio, Francesco; Muricchio, Teresa; Di Pellegrino, Giuseppe

    2015-01-01

    Detecting one’s own errors and appropriately correcting behavior are crucial for efficient goal-directed performance. A correlate of rapid evaluation of behavioral outcomes is the error-related negativity (Ne/ERN), which emerges at the time of the erroneous response over frontal brain areas. However, whether the error monitoring system’s ability to distinguish between errors and correct responses at this early time point is a necessary precondition for the subsequent emergence of error awareness remains unclear. The present study investigated this question using error-related brain activity and vocal error signaling responses in seven human patients with lesions in the rostral anterior cingulate cortex (rACC) and adjoining ventromedial prefrontal cortex, while they performed a flanker task. The difference between errors and correct responses was severely attenuated in these patients, indicating impaired rapid error monitoring, but they showed no impairment in error signaling. However, impaired rapid error monitoring coincided with a failure to increase response accuracy on trials following errors. These results demonstrate that the error monitoring system’s ability to distinguish between errors and correct responses at the time of the response is crucial for adaptive post-error adjustments, but not a necessary precondition for error awareness. PMID:26136674

  17. Measurement of absolute optical thickness of mask glass by wavelength-tuning Fourier analysis.

    PubMed

    Kim, Yangjin; Hibino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru

    2015-07-01

    Optical thickness is a fundamental characteristic of an optical component. A measurement method combining discrete Fourier-transform (DFT) analysis and a phase-shifting technique gives an appropriate value for the absolute optical thickness of a transparent plate. However, there is a systematic error caused by the nonlinearity of the phase-shifting technique. In this research the absolute optical-thickness distribution of mask blank glass was measured using DFT and wavelength-tuning Fizeau interferometry without using sensitive phase-shifting techniques. The error occurring during the DFT analysis was compensated for by using the unwrapping correlation. The experimental results indicated that the absolute optical thickness of mask glass was measured with an accuracy of 5 nm. PMID:26125394

  18. Timing of satellite observations for telescope with TV CCD camera

    NASA Astrophysics Data System (ADS)

    Dragomiretskoy, V. V.; Koshkin, N. I.; Korobeinikova, E. A.; Melikyants, C. M.; Ryabov, A. V.; Strahova, S. L.; Terpan, S. S.; Shakun, L. S.

    2013-12-01

    The time reference system used for linking satellite position and brightness measurements to the universal time scale UTC, as employed at the Odessa astronomical observatory, is described. It provides a stable error not exceeding 0.1 ms in absolute value. The achieved timing accuracy allows us to study very short-term variations of satellite brightness and the actual unevenness of its orbital motion.

  19. Absolute flatness testing of skip-flat interferometry by matrix analysis in polar coordinates.

    PubMed

    Han, Zhi-Gang; Yin, Lu; Chen, Lei; Zhu, Ri-Hong

    2016-03-20

    A new method utilizing matrix analysis in polar coordinates has been presented for absolute testing of skip-flat interferometry. The retrieval of the absolute profile mainly includes three steps: (1) transform the wavefront maps of the two cavity measurements into data in polar coordinates; (2) retrieve the profile of the reflective flat in polar coordinates by matrix analysis; and (3) transform the profile of the reflective flat back into data in Cartesian coordinates and retrieve the profile of the sample. Simulation of synthetic surface data has been provided, showing the capability of the approach to achieve an accuracy of the order of 0.01 nm RMS. The absolute profile can be retrieved by a set of closed mathematical formulas without polynomial fitting of wavefront maps or the iterative evaluation of an error function, making the new method more efficient for absolute testing. PMID:27140578

  20. Absolute optical instruments without spherical symmetry

    NASA Astrophysics Data System (ADS)

    Tyc, Tomáš; Dao, H. L.; Danner, Aaron J.

    2015-11-01

    Until now, the known set of absolute optical instruments has been limited to those containing high levels of symmetry. Here, we demonstrate a method of mathematically constructing refractive index profiles that result in asymmetric absolute optical instruments. The method is based on the analogy between geometrical optics and classical mechanics and employs Lagrangians that separate in Cartesian coordinates. In addition, our method can be used to construct the index profiles of most previously known absolute optical instruments, as well as infinitely many different ones.

  1. TYPE Ia SUPERNOVA DISTANCE MODULUS BIAS AND DISPERSION FROM K-CORRECTION ERRORS: A DIRECT MEASUREMENT USING LIGHT CURVE FITS TO OBSERVED SPECTRAL TIME SERIES

    SciTech Connect

    Saunders, C.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Kim, A. G.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Chotard, N.; Copin, Y.; Gangler, E.; and others

    2015-02-10

    We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
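
The synthetic-photometry step described above amounts to redshifting each spectrum and integrating it through a filter transmission curve. A schematic version is sketched below (trapezoidal integration, arbitrary normalization, cosmological flux-density factors omitted; all names are illustrative, not the Nearby Supernova Factory pipeline's API):

```python
def synthetic_flux(wave_rest, flux_rest, filter_curve, z):
    """Redshift a rest-frame spectrum by z and integrate it through a
    filter transmission curve (trapezoidal rule over the observed grid)."""
    wave_obs = [w * (1.0 + z) for w in wave_rest]
    y = [f * filter_curve(w) for w, f in zip(wave_obs, flux_rest)]
    return sum(0.5 * (y[i] + y[i + 1]) * (wave_obs[i + 1] - wave_obs[i])
               for i in range(len(wave_obs) - 1))

def box(w):
    """Idealized top-hat filter: unit transmission from 5000 to 6000 Angstroms."""
    return 1.0 if 5000.0 <= w <= 6000.0 else 0.0

waves = [4000.0 + 10.0 * i for i in range(401)]  # 4000-8000 A, rest frame
flat = [1.0] * len(waves)                        # featureless test spectrum
print(synthetic_flux(waves, flat, box, z=0.0))   # 1010.0
```

A K-correction error is what remains when the template spectrum fed through this kind of integral differs from the supernova's true spectral energy distribution.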

  2. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

    The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation for the upper bound of the absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas it increased with an increase in environmental integrated radiant flux density. Change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals: 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  3. On-orbit absolute radiance standard for the next generation of IR remote sensing instruments

    NASA Astrophysics Data System (ADS)

    Best, Fred A.; Adler, Douglas P.; Pettersen, Claire; Revercomb, Henry E.; Gero, P. Jonathan; Taylor, Joseph K.; Knuteson, Robert O.; Perepezko, John H.

    2012-11-01

    The next generation of infrared remote sensing satellite instrumentation, including climate benchmark missions, will require better absolute measurement accuracy than now available, and will most certainly rely on the emerging capability to fly SI traceable standards that provide irrefutable absolute measurement accuracy. As an example, instrumentation designed to measure spectrally resolved infrared radiances with an absolute brightness temperature error of better than 0.1 K will require high-emissivity (>0.999) calibration blackbodies with emissivity uncertainty of better than 0.06%, and absolute temperature uncertainties of better than 0.045 K (k=3). Key elements of an On-Orbit Absolute Radiance Standard (OARS) meeting these stringent requirements have been demonstrated in the laboratory at the University of Wisconsin (UW) and refined under the NASA Instrument Incubator Program (IIP). This work recently culminated with an integrated subsystem that was used in the laboratory to demonstrate end-to-end radiometric accuracy verification for the UW Absolute Radiance Interferometer. Along with an overview of the design, we present details of a key underlying technology of the OARS that provides on-orbit absolute temperature calibration using the transient melt signatures of small quantities (<1g) of reference materials (gallium, water, and mercury) imbedded in the blackbody cavity. In addition we present performance data from the laboratory testing of the OARS.
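
To see why the emissivity requirement is so tight, one can propagate a 0.06% radiance deficit through the Planck function. The back-of-envelope check below is my own sketch (a 300 K scene viewed at 10 µm, not the OARS error budget); it yields roughly 0.04 K of brightness-temperature error, comparable to the stated temperature-uncertainty allowance:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, T):
    """Spectral radiance B(lam, T) in W / (m^2 sr m)."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def brightness_temp(lam, L):
    """Invert the Planck function at wavelength lam."""
    return (H * C / (lam * KB)) / math.log1p(2.0 * H * C**2 / (lam**5 * L))

lam, T, d_eps = 10e-6, 300.0, 0.0006   # 10 um, 300 K, 0.06% emissivity error
L = planck(lam, T)
dT = T - brightness_temp(lam, (1.0 - d_eps) * L)
print(round(dT, 3))  # ~0.037 K for these assumed conditions
```

Using `expm1`/`log1p` keeps the inversion numerically stable at small exponents; the result scales with wavelength and scene temperature, so the 0.06% figure is a band-dependent budget.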

  4. A New Gimmick for Assigning Absolute Configuration.

    ERIC Educational Resources Information Center

    Ayorinde, F. O.

    1983-01-01

    A five-step procedure is provided to help students in making the assignment absolute configuration less bothersome. Examples for both single (2-butanol) and multi-chiral carbon (3-chloro-2-butanol) molecules are included. (JN)

  5. Absolute decay width measurements in 16O

    NASA Astrophysics Data System (ADS)

    Wheldon, C.; Ashwood, N. I.; Barr, M.; Curtis, N.; Freer, M.; Kokalova, Tz; Malcolm, J. D.; Spencer, S. J.; Ziman, V. A.; Faestermann, Th; Krücken, R.; Wirth, H.-F.; Hertenberger, R.; Lutter, R.; Bergmaier, A.

    2012-09-01

    The reaction 12C(6Li, d)16O* at a 6Li bombarding energy of 42 MeV has been used to populate excited states in 16O. The deuteron ejectiles were measured using the high-resolution Munich Q3D spectrograph. A large-acceptance silicon-strip detector array was used to register the recoil and break-up products. This complete kinematic set-up has enabled absolute α-decay widths to be measured with high resolution in the 13.9 to 15.9 MeV excitation energy regime in 16O; many for the first time. This energy region spans the 14.4 MeV four-α breakup threshold. Monte-Carlo simulations of the detector geometry and break-up processes yield detection efficiencies for the two dominant decay modes of 40% and 37% for the α+12C(g.s.) and α+12C(2+1) break-up channels respectively.

  6. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  7. Population-based absolute risk estimation with survey data.

    PubMed

    Kovalchik, Stephanie A; Pfeiffer, Ruth M

    2014-04-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
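
The competing-risks quantity being estimated has a simple closed form in the constant-hazard special case, which makes a useful sanity check. This is a textbook illustration, not the survey-weighted estimator of the paper:

```python
import math

def absolute_risk(h_event, h_compete, t):
    """P(event of interest occurs by time t) under constant cause-specific
    hazards: the integral of h_event * S(u) du with overall survival
    S(u) = exp(-(h_event + h_compete) * u)."""
    h_total = h_event + h_compete
    return (h_event / h_total) * (1.0 - math.exp(-h_total * t))

# hypothetical hazards: CVD death 0.01/yr, competing causes 0.03/yr
risk_10yr = absolute_risk(0.01, 0.03, 10.0)
print(round(risk_10yr, 4))  # 0.0824
```

Ignoring the competing hazard would give 1 - exp(-0.1) ≈ 0.095 instead, which is why absolute risk, not the naive cause-specific probability, is the right target for population-level prediction.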

  8. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  9. Landsat-7 ETM+ radiometric stability and absolute calibration

    USGS Publications Warehouse

    Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, P.

    2002-01-01

    Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% (95% C.I.) change per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. Also, ETM+ stability is indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/(m² sr µm) soon after launch. This offset was corrected within the U. S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibrating to other land remote sensing satellite systems.

  10. Correction due to the finite speed of light in absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Nagornyi, V. D.; Zanimonskiy, Y. M.; Zanimonskiy, Y. Y.

    2011-06-01

    Equations (45) and (47) in our paper [1] in this issue have an incorrect sign and should read $\tilde T_i = T_i + \frac{b \mp S_i}{c}$ and $\tilde T_i = T_i \mp \frac{S_i}{c}$. The error traces back to our formula (3), inherited from the paper [2]. According to the technical documentation [3, 4], the formula (3) is implemented by several commercially available instruments. An incorrect sign would cause a bias of about 20 µGal not known for these instruments, which probably indicates that the documentation incorrectly reflects the implemented measurement equation. Our attention to the error was drawn by the paper [5], also in this issue, where the sign is mentioned correctly. References: [1] Nagornyi V D, Zanimonskiy Y M and Zanimonskiy Y Y 2011 Correction due to the finite speed of light in absolute gravimeters Metrologia 48 101-13; [2] Niebauer T M, Sasagawa G S, Faller J E, Hilt R and Klopping F 1995 A new generation of absolute gravimeters Metrologia 32 159-80; [3] Micro-g LaCoste, Inc. 2006 FG5 Absolute Gravimeter Users Manual; [4] Micro-g LaCoste, Inc. 2007 g7 Users Manual; [5] Niebauer T M, Billson R, Ellis B, Mason B, van Westrum D and Klopping F 2011 Simultaneous gravity and gradient measurements from a recoil-compensated absolute gravimeter Metrologia 48 154-63
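
Numerically the correction is tiny but not negligible at the µGal level. A sketch of the corrected timestamp from the erratum's formula follows (the sign choice depends on the interferometer geometry, and the variable names are mine):

```python
C = 299_792_458.0  # speed of light, m/s

def corrected_time(T_i, S_i, sign=-1):
    """Finite-speed-of-light timestamp correction T~_i = T_i -+ S_i / c.

    sign is -1 or +1 according to the beam direction relative to the
    falling object (an assumption; see the erratum's -+ convention)."""
    return T_i + sign * S_i / C

# a 0.2 m optical path difference shifts the timestamp by ~0.67 ns
shift = abs(corrected_time(0.1, 0.2) - 0.1)
print(shift)  # ~6.7e-10 s
```

A sub-nanosecond timestamp bias accumulated consistently over a drop is what produces the ~20 µGal level effect the erratum discusses.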

  11. Absolute Position of Targets Measured Through a Chamber Window Using Lidar Metrology Systems

    NASA Technical Reports Server (NTRS)

    Kubalak, David; Hadjimichael, Theodore; Ohl, Raymond; Slotwinski, Anthony; Telfer, Randal; Hayden, Joseph

    2012-01-01

    Lidar is a useful tool for taking metrology measurements without the need for physical contact with the parts under test. Lidar instruments are aimed at a target using azimuth and elevation stages, then focus a beam of coherent, frequency modulated laser energy onto the target, such as the surface of a mechanical structure. Energy from the reflected beam is mixed with an optical reference signal that travels in a fiber path internal to the instrument, and the range to the target is calculated based on the difference in the frequency of the returned and reference signals. In cases when the parts are in extreme environments, additional steps need to be taken to separate the operator and lidar from that environment. A model has been developed that accurately reduces the lidar data to an absolute position and accounts for the three media in the testbed (air, fused silica, and vacuum), but the approach can be adapted for any environment or material. The accuracy of laser metrology measurements depends upon knowing the parameters of the media through which the measurement beam travels. Under normal conditions, this means knowledge of the temperature, pressure, and humidity of the air in the measurement volume. In the past, chamber windows have been used to separate the measuring device from the extreme environment within the chamber and still permit optical measurement, but, so far, only relative changes have been diagnosed. The ability to make accurate measurements through a window presents a challenge as there are a number of factors to consider. In the case of the lidar, the window will increase the time-of-flight of the laser beam causing a ranging error, and refract the direction of the beam causing angular positioning errors. In addition, differences in pressure, temperature, and humidity on each side of the window will cause slight atmospheric index changes and induce deformation and a refractive index gradient within the window. Also, since the window is a

  12. [Paradigm errors in the old biomedical science].

    PubMed

    Skurvydas, Albertas

    2008-01-01

    The aim of this article was to review the basic drawbacks of deterministic and reductionistic thinking in biomedical science and to suggest ways of dealing with them. The present paradigm of research in biomedical science has not yet rid itself of the errors of the old science, i.e. the errors of absolute determinism and reductionism. These errors restrict the view and thinking of scholars engaged in the study of complex and dynamic phenomena and mechanisms. Recently, discussions aimed at spreading the new science paradigm, that of complex dynamic systems and chaos theory, have been in progress all over the world. It remains for the near future to show which of the two, the old or the new science, will be the winner. Our main conclusion is that deterministic and reductionistic thinking, applied in an improper way, can cause substantial damage rather than provide benefits for biomedical science. PMID:18541951

  13. Absolute determination of local tropospheric OH concentrations

    NASA Technical Reports Server (NTRS)

    Armerding, Wolfgang; Comes, Franz-Josef

    1994-01-01

    Long-path absorption (LPA) according to the Lambert-Beer law is a method to determine absolute concentrations of trace gases such as tropospheric OH. We have developed an LPA instrument based on rapid tuning of the light source, a frequency-doubled dye laser. The laser is tuned across two or three OH absorption features around 308 nm with a scanning speed of 0.07 cm^-1/microsecond and a repetition rate of 1.3 kHz. This high scanning speed greatly reduces the fluctuation of the light intensity caused by the atmosphere. To obtain the required high sensitivity, the laser output power is additionally stabilized by an electro-optical modulator. The present sensitivity is of the order of a few times 10^5 OH per cm^3 for an acquisition time of a minute and an absorption path length of only 1200 meters, so that a folding of the optical path in a multireflection cell was possible, leading to a lateral dimension of the cell of a few meters. This allows local measurements to be made. Tropospheric measurements were carried out in 1991, resulting in the determination of OH diurnal variation on specific days in late summer. Comparisons with model calculations have been made. Interferences are mainly due to SO2 absorption. The problem of OH self-generation in the multireflection cell is of minor extent, as could be shown by using different experimental methods. The minimum-maximum signal-to-noise ratio is about 8 x 10^-4 for a single scan. Due to the small size of the absorption cell, the realization of an open-air laboratory is possible, in which the chemistry can be changed under controlled conditions by use of an additional UV light source or additional fluxes of trace gases, allowing kinetic studies of tropospheric photochemistry to be made in open air.
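
    The concentration retrieval described above follows directly from the Lambert-Beer law. The sketch below (with an OH absorption cross section of illustrative order of magnitude, not a reference value) shows why a sensitivity of a few times 10^5 OH/cm^3 over a 1200 m path demands such tight control of intensity fluctuations.

```python
import math

# Lambert-Beer law: I = I0 * exp(-sigma * N * L), so the absolute number
# density is N = ln(I0 / I) / (sigma * L). The cross section used below is
# an illustrative order of magnitude for OH near 308 nm, not a reference value.

def number_density(I0, I, sigma_cm2, path_cm):
    """Absolute number density (cm^-3) from transmitted intensity."""
    return math.log(I0 / I) / (sigma_cm2 * path_cm)

# A 1200 m (1.2e5 cm) path and a ~1e-16 cm^2 cross section: even ~1e6 OH/cm^3
# gives a fractional absorption of only ~1e-5, far below typical atmospheric
# intensity fluctuations unless the source is scanned fast and stabilized.
n_oh = number_density(1.0, 1.0 - 1.2e-5, 1e-16, 1.2e5)
```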

  14. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  15. Jasminum flexile flower absolute from India--a detailed comparison with three other jasmine absolutes.

    PubMed

    Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef

    2009-09-01

    Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed. PMID:19831037

  16. Absolute Gravity Datum in the Age of Cold Atom Gravimeters

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Eckl, M. C.

    2014-12-01

    The international gravity datum is defined today by the International Gravity Standardization Net of 1971 (IGSN-71). The data supporting this network were measured in the 1950s and 60s using pendulum and spring-based gravimeter ties (plus some new ballistic absolute meters) to replace the prior protocol of referencing all gravity values to the earlier Potsdam value. Since this time, gravimeter technology has advanced significantly with the development and refinement of the FG-5 (the current standard of the industry) and again with the soon-to-be-available cold atom interferometric absolute gravimeters. This latest development is anticipated to provide improvement in the range of two orders of magnitude as compared to the measurement accuracy of the technology utilized to develop IGSN-71. In this presentation, we will explore how the IGSN-71 might best be "modernized" given today's requirements and available instruments and resources. The National Geodetic Survey (NGS), along with other relevant US Government agencies, is concerned about establishing gravity control to establish and maintain high-order geodetic networks as part of the nation's essential infrastructure. The need to modernize the nation's geodetic infrastructure was highlighted in "Precise Geodetic Infrastructure: National Requirements for a Shared Resource," National Academy of Sciences, 2010. The NGS mission, as dictated by Congress, is to establish and maintain the National Spatial Reference System, which includes gravity measurements. Absolute gravimeters measure the total gravity field directly and do not involve ties to other measurements. Periodic "intercomparisons" of multiple absolute gravimeters at reference gravity sites are used to constrain the behavior of the instruments to ensure that each would yield reasonably similar measurements of the same location (i.e. yield a sufficiently consistent datum when measured in disparate locales). 
New atomic interferometric gravimeters promise a significant

  17. Four Years of Absolute Gravity in the Taiwan Orogen (AGTO)

    NASA Astrophysics Data System (ADS)

    Mouyen, Maxime; Masson, Frédéric; Hwang, Cheinway; Cheng, Ching-Chung; Le Moigne, Nicolas; Lee, Chiung-Wu; Kao, Ricky; Hsieh, Nicky

    2010-05-01

    AGTO is a scientific project between Taiwanese and French institutes whose aim is to improve tectonic knowledge of Taiwan, primarily using absolute gravity measurements and permanent GPS stations. Both tools are indeed useful to study the vertical movements and mass transfers involved in mountain building, a major process in Taiwan, located at the convergent margin between the Philippine Sea plate and the Eurasian plate. This convergence results in two subductions north and south of Taiwan (the Ryukyu and Manila trenches, respectively), while the center is experiencing collision. These processes make Taiwan very active tectonically, as illustrated by numerous large earthquakes and the rapid uplift of the Central Range. The steep slopes of Taiwan's mountains and the heavy rains brought by typhoons together lead to high landslide and mudflow risks. Practically, absolute gravity measurements have been repeated yearly since 2006 along a transect across south Taiwan, from Penghu to Lutao islands, using FG5 absolute gravimeters. This transect contains ten sites for absolute measurements and was densified in 2008 by incorporating 45 sites for relative gravity measurements with CG5 gravimeters. The last relative and absolute measurements were performed in November 2009. Most of the absolute sites have been measured with a good accuracy, about 1 or 2 μGal. Only the site located at Tainan University has a higher standard deviation, due to city noise. We note that absolute gravity changes seem to follow a trend at every site. However, a straightforward tectonic interpretation of these trends is not feasible, as many non-tectonic effects are expected to change g with time, such as groundwater or erosion. Estimating and removing these effects leads to a tectonic gravity signal, which theoretically has two origins: deep mass transfers around the site and vertical movements of the station. The latter can be well constrained by permanent GPS stations located close to the measurement pillar. 
Deep mass

  18. Medical Error and Moral Luck.

    PubMed

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613

  19. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality control agents.

  20. Universal Cosmic Absolute and Modern Science

    NASA Astrophysics Data System (ADS)

    Kostro, Ludwik

    The official sciences, especially all natural sciences, respect in their research the principle of methodic naturalism, i.e. they consider all phenomena as entirely natural and therefore never adduce supernatural entities and forces in their scientific explanations. The purpose of this paper is to show that Modern Science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e. the entire Nature as a Whole, that justifies the scientific methodic naturalism. Since this Natural All-in Being is one and only, It should be considered as the scientifically justified Natural Absolute of Science and should be called, in my opinion, the Universal Cosmic Absolute of Modern Science. It will also be shown that the Universal Cosmic Absolute is ontologically enormously stratified and is, in its ultimate, i.e. most fundamental, stratum trans-reistic and trans-personal. This means that in its basic stratum It is neither a Thing nor a Person, although It contains in Itself all things and persons, as well as all other sentient and conscious individuals. At the turn of the 20th century, science began to look for a theory of everything, a final theory, a master theory. In my opinion, the natural Universal Cosmic Absolute will constitute in such a theory the radical, all-penetrating Ultimate Basic Reality and will substitute, step by step, the traditional supernatural personal Absolute.

  1. Partially supervised P300 speller adaptation for eventual stimulus timing optimization: target confidence is superior to error-related potential score as an uncertain label

    NASA Astrophysics Data System (ADS)

    Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom

    2016-04-01

    Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA

  2. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  3. Refractive errors in children.

    PubMed

    Tongue, A C

    1987-12-01

    Optical correction of refractive errors in infants and young children is indicated when the refractive errors are sufficiently large to cause unilateral or bilateral amblyopia, if they are impairing the child's ability to function normally, or if the child has accommodative strabismus. Screening for refractive errors is important and should be performed as part of the annual physical examination in all verbal children. Screening for significant refractive errors in preverbal children is more difficult; however, the red reflex test of Bruckner is useful for the detection of anisometropic refractive errors. The photorefraction test, which is an adaptation of Bruckner's red reflex test, may prove to be a useful screening device for detecting bilateral as well as unilateral refractive errors. Objective testing as well as subjective testing enables ophthalmologists to prescribe proper optical correction for refractive errors for infants and children of any age. PMID:3317238

  4. Error-prone signalling.

    PubMed

    Johnstone, R A; Grafen, A

    1992-06-22

    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  5. Morphology and Absolute Magnitudes of the SDSS DR7 QSOs

    NASA Astrophysics Data System (ADS)

    Coelho, B.; Andrei, A. H.; Antón, S.

    2014-10-01

    The ESA mission Gaia will furnish a complete census of the Milky Way, delivering astrometric, dynamic, and astrophysical information for 1 billion stars. Operating in all-sky repeated survey mode, Gaia will also provide measurements of extra-galactic objects. Among the latter there will be at least 500,000 QSOs that will be used to build the reference frame upon which the several independent observations will be combined and interpreted. Not all the QSOs are equally suited to fulfill this role of fundamental, fiducial grid points. Brightness, morphology, and variability define the astrometric error budget for each object. We made use of 3 morphological parameters based on the PSF sharpness, circularity, and gaussianity, which enable us to distinguish the "real point-like" QSOs. These parameters are being explored on the spectroscopically certified QSOs of the SDSS DR7, to compare the performance against other morphology classification schemes, as well as to derive properties of the host galaxy. We present a new method, based on the Gaia quasar database, to derive absolute magnitudes in the SDSS filters domain. The method can be extrapolated over the entire optical window, including the Gaia filters. We discuss colors derived from SDSS apparent magnitudes and colors based on absolute magnitudes that we obtained taking into account corrections for dust extinction, either intergalactic or from the QSO host, and for the Lyman α forest. In the future we want to further discuss properties of the host galaxies, comparing, e.g., the obtained morphological classification with the color, the apparent and absolute magnitudes, and the redshift distributions.

  6. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of water quality parameters have been estimated. It is observed that the predictive model is useful at 95 % confidence limits and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT); leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
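
    The error statistics listed above reduce to a few lines given observed and predicted series. A minimal sketch (the values below are illustrative, not the Yamuna River data):

```python
import numpy as np

# Standard fit metrics for a time-series prediction model: root mean square
# error, mean/maximum absolute error, and mean/maximum absolute percentage
# error, computed from observed and predicted series.

def fit_metrics(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    pct = 100.0 * np.abs(err) / np.abs(obs)   # assumes obs values are nonzero
    return {
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "mae": float(np.mean(np.abs(err))),
        "max_ae": float(np.max(np.abs(err))),
        "mape": float(np.mean(pct)),
        "max_ape": float(np.max(pct)),
    }

# Illustrative monthly pH values (observed vs ARIMA-predicted):
m = fit_metrics([7.2, 7.4, 7.1, 7.6], [7.3, 7.3, 7.2, 7.5])
```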

  7. Absolute blood velocity measured with a modified fundus camera

    NASA Astrophysics Data System (ADS)

    Duncan, Donald D.; Lemaillet, Paul; Ibrahim, Mohamed; Nguyen, Quan Dong; Hiller, Matthias; Ramella-Roman, Jessica

    2010-09-01

    We present a new method for the quantitative estimation of blood flow velocity, based on the use of the Radon transform. The specific application is the measurement of blood flow velocity in the retina. Our modified fundus camera uses illumination from a green LED and captures imagery with a high-speed CCD camera. The basic theory is presented, and typical results are shown for an in vitro flow model using blood in a capillary tube. Representative results are then shown for fundus imagery. This approach provides absolute velocity and flow direction along the vessel centerline or any lateral displacement therefrom. We also provide an error analysis allowing estimation of confidence intervals for the estimated velocity.
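
    A simplified stand-in for the Radon-transform idea behind this kind of velocimetry: in a space-time (position vs. time) image, a cell moving at constant speed traces a streak whose angle encodes the velocity, and projections taken at the streak angle have maximal variance. The synthetic image, angle scan, and pixel/frame scales below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

# Scan projection angles of a space-time image; the angle at which the
# projection variance peaks corresponds to the streak orientation, whose
# tangent (times pixel size over frame interval) gives the speed.

def streak_angle_deg(img, angles=np.arange(-80.0, 81.0, 1.0)):
    best, best_var = 0.0, -1.0
    for a in angles:
        rot = ndimage.rotate(img, a, reshape=False, order=1)
        var = np.var(rot.sum(axis=0))   # variance of the column projection
        if var > best_var:
            best, best_var = a, var
    return best

# Synthetic x-t image: streaks of slope 1 pixel per frame (45 degrees).
img = np.zeros((64, 64))
for d in range(-60, 61, 8):
    idx = np.arange(64)
    j = idx + d
    ok = (j >= 0) & (j < 64)
    img[idx[ok], j[ok]] = 1.0

angle = streak_angle_deg(img)
# speed = tan(radians(angle)) * (pixel_size / frame_interval), given camera scales
```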

  8. Measured and modelled absolute gravity changes in Greenland

    NASA Astrophysics Data System (ADS)

    Nielsen, J. Emil; Forsberg, Rene; Strykowski, Gabriel

    2014-01-01

    In glaciated areas, the Earth is responding to the ongoing changes of the ice sheets, a response known as glacial isostatic adjustment (GIA). GIA can be investigated through observations of gravity change. For the ongoing assessment of the ice sheets mass balance, where satellite data are used, the study of GIA is important since it acts as an error source. GIA consists of three signals as seen by a gravimeter on the surface of the Earth. These signals are investigated in this study. The ICE-5G ice history and recently developed ice models of present day changes are used to model the gravity change in Greenland. The result is compared with the initial measurements of absolute gravity (AG) change at selected Greenland Network (GNET) sites.

  9. Full field imaging based instantaneous hyperspectral absolute refractive index measurement

    SciTech Connect

    Baba, Justin S; Boudreaux, Philip R

    2012-01-01

    Multispectral refractometers typically measure refractive index (RI) at discrete monochromatic wavelengths via a serial process. We report on the demonstration of a white light full field imaging based refractometer capable of instantaneous multispectral measurement of absolute RI of clear liquid/gel samples across the entire visible light spectrum. The broad optical bandwidth refractometer is capable of hyperspectral measurement of RI in the range 1.30-1.70 between 400 nm and 700 nm with a maximum error of 0.0036 units (0.24% of actual) at 414 nm for a sample with RI = 1.50. We present system design and calibration method details as well as results from a system validation sample.

  10. [Errors Analysis and Correction in Atmospheric Methane Retrieval Based on Greenhouse Gases Observing Satellite Data].

    PubMed

    Bu, Ting-ting; Wang, Xian-hua; Ye, Han-han; Jiang, Xin-hua

    2016-01-01

    High-precision retrieval of atmospheric CH4 is influenced by a variety of factors. The uncertainties of ground properties and atmospheric conditions are important factors, such as surface reflectance, temperature profile, humidity profile, and pressure profile. Surface reflectance is affected by many factors, so it is difficult to obtain a precise value, and its uncertainty will introduce a large error into the retrieval result. The uncertainties of the temperature, humidity, and pressure profiles are also important sources of retrieval error, and they cause an unavoidable systematic error that is hard to eliminate using the CH4 band alone. In this paper, a ratio spectrometry method and a CO2 band correction method are proposed to reduce the error caused by these factors. The ratio spectrometry method decreases the effect of surface reflectance in CH4 retrieval by converting absolute radiance spectra into ratio spectra. The CO2 band correction method converts column amounts of CH4 into a column-averaged mixing ratio by using the CO2 1.61 μm band, correcting the systematic error caused by the temperature, humidity, and pressure profiles. The combination of these two correction methods decreases the effects of surface reflectance and of the temperature, humidity, and pressure profiles at the same time, reducing the retrieval error. GOSAT data were used to retrieve atmospheric CH4 to test and validate the two correction methods. The results showed that the CH4 column-averaged mixing ratio retrieved after correction was close to the GOSAT Level 2 product, with a retrieval precision of up to -0.24%. The studies suggest that the error of CH4 retrieval caused by the uncertainties of ground properties and atmospheric conditions can be significantly reduced and the retrieval precision highly improved by using the ratio spectrometry method and the CO2 band correction method. PMID:27228765
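
    A generic sketch of why a CO2-band correction of this kind works: systematic errors from the temperature, humidity, and pressure profiles, and much of the surface-reflectance dependence, are common multiplicative factors in the retrieved CH4 and CO2 columns, so they largely cancel in their ratio; multiplying by a modelled CO2 mixing ratio converts the ratio back to a CH4 column-averaged mixing ratio. The column amounts and model value below are illustrative, not GOSAT numbers.

```python
# Proxy-style correction: a common multiplicative error in both retrieved
# columns (e.g. from surface reflectance) cancels in the ratio.

def proxy_xch4(n_ch4, n_co2, xco2_model_ppm):
    """n_ch4, n_co2: retrieved column amounts (molecules/cm^2) sharing a
    common multiplicative error; xco2_model_ppm: modelled column-averaged
    CO2 mixing ratio. Returns the CH4 column-averaged mixing ratio in ppm."""
    return (n_ch4 / n_co2) * xco2_model_ppm

# A 5% common error scales both columns and cancels in the ratio:
xch4_true = proxy_xch4(3.8e19, 8.0e21, 400.0)
xch4_biased = proxy_xch4(3.8e19 * 1.05, 8.0e21 * 1.05, 400.0)
```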

  11. Precision evaluation of calibration factor of a superconducting gravimeter using an absolute gravimeter

    NASA Astrophysics Data System (ADS)

    Feng, Jin-yang; Wu, Shu-qing; Li, Chun-jian; Su, Duo-wu; Xu, Jin-yi; Yu, Mei

    2016-01-01

    The precision of the calibration factor of a superconducting gravimeter (SG) determined using an absolute gravimeter (AG) is analyzed based on linear least-squares fitting and error propagation theory, and the factors affecting the accuracy are discussed. Accuracy can be improved by choosing an observation period in which the solid tide changes significantly or by increasing the calibration time. A simulation is carried out based on synthetic gravity tides calculated with T-soft at the observed site from Aug. 14th to Sept. 2nd, 2014. The result indicates that the highest precision using half a day's observation data is below 0.28% and that the precision increases exponentially with the increase of peak-to-peak gravity change. A comparison of results obtained from the same observation time indicates that using properly selected observation data is more beneficial for improving precision. Finally, the calibration experiment of the SG iGrav-012 is introduced and the calibration factor is determined for the first time using the AG FG5X-249. With 2.5 days' data properly selected from a solid tide period with large tidal amplitude, the determined calibration factor of iGrav-012 is (-92.54423 ± 0.13616) μGal/V (1 μGal = 10^-8 m/s²), with a relative accuracy of about 0.15%.
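
    The least-squares calibration described above amounts to regressing co-located AG gravity values against the SG output voltage: the slope is the calibration factor and its standard error follows from the fit covariance. The synthetic tide, true factor, and noise level below are illustrative, not the iGrav-012/FG5X-249 data.

```python
import numpy as np

# Regress AG gravity (uGal) on SG voltage (V); the slope is the calibration
# factor. Large tidal amplitude spreads the voltage values, which is why a
# period of significant tidal change tightens the slope estimate.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.5, 600)                    # days
tide = 80.0 * np.sin(2 * np.pi * t / 0.5175)      # synthetic semidiurnal tide, uGal
factor_true = -92.5                               # uGal/V (illustrative)
volts = tide / factor_true                        # SG output
ag = tide + rng.normal(0.0, 1.0, t.size)          # AG observations with 1 uGal noise

# Fit ag = factor * volts + offset by linear least squares.
A = np.vstack([volts, np.ones_like(volts)]).T
coef, res, *_ = np.linalg.lstsq(A, ag, rcond=None)
factor_est = coef[0]

dof = t.size - 2
sigma2 = float(res[0]) / dof                      # residual variance
cov = sigma2 * np.linalg.inv(A.T @ A)
factor_err = float(np.sqrt(cov[0, 0]))            # standard error of the slope
```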

  12. Absolute isotopic abundances of Ti in meteorites

    NASA Astrophysics Data System (ADS)

    Niederer, F. R.; Papanastassiou, D. A.; Wasserburg, G. J.

    1985-03-01

    The absolute isotope abundance of Ti has been determined in Ca-Al-rich inclusions from the Allende and Leoville meteorites and in samples of whole meteorites. The absolute Ti isotope abundances differ by a significant mass dependent isotope fractionation transformation from the previously reported abundances, which were normalized for fractionation using 46Ti/48Ti. Therefore, the absolute compositions define distinct nucleosynthetic components from those previously identified or reflect the existence of significant mass dependent isotope fractionation in nature. The authors provide a general formalism for determining the possible isotope compositions of the exotic Ti from the measured composition, for different values of isotope fractionation in nature and for different mixing ratios of the exotic and normal components.

  13. Molecular iodine absolute frequencies. Final report

    SciTech Connect

    Sansonetti, C.J.

    1990-06-25

    Fifty specified lines of ¹²⁷I₂ were studied by Doppler-free frequency modulation spectroscopy. For each line the classification of the molecular transition was determined, hyperfine components were identified, and one well-resolved component was selected for precise determination of its absolute frequency. In 3 cases, a nearby alternate line was selected for measurement because no well-resolved component was found for the specified line. Absolute frequency determinations were made with an estimated uncertainty of 1.1 MHz by locking a dye laser to the selected hyperfine component and measuring its wave number with a high-precision Fabry-Perot wavemeter. For each line results of the absolute measurement, the line classification, and a Doppler-free spectrum are given.

  14. Absolute calibration of in vivo measurement systems

    SciTech Connect

    Kruchten, D.A.; Hickman, D.P.

    1991-02-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs.

  15. Stitching interferometry and absolute surface shape metrology: similarities

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-12-01

    Stitching interferometry is a method of analysing large optical components using a standard small interferometer. This result is obtained by taking multiple overlapping images of the large component, and numerically stitching these sub-apertures together by computing a correcting Tip- Tilt-Piston correction for each sub-aperture. All real-life measurement techniques require a calibration phase. By definition, a perfect surface does not exist. Methods abound for the accurate measurement of diameters (viz., the Three Flat Test). However, we need total surface knowledge of the reference surface, because the stitched overlap areas will suffer from the slightest deformation. One must not be induced into thinking that Stitching is the cause of this error: it simply highlights the lack of absolute knowledge of the reference surface, or the lack of adequate thermal control, issues which are often sidetracked... The goal of this paper is to highlight the above-mentioned calibration problems in interferometry in general, and in stitching interferometry in particular, and show how stitching hardware and software can be conveniently used to provide the required absolute surface shape metrology. Some measurement figures will illustrate this article.

  16. Orion Absolute Navigation System Progress and Challenge

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; D'Souza, Christopher

    2012-01-01

    The absolute navigation design of NASA's Orion vehicle is described. It has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a timely and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future exploration missions the use of star-tracker and optical navigation sources needs to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed, as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using a multi-rate architecture. The details of the rate groups and the data flow between the elements are discussed and evaluated.
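
    The UDU formulation mentioned in the abstract factors the covariance as P = U·D·Uᵀ with U unit upper triangular and D diagonal, which is numerically far better behaved than propagating P directly. The sketch below shows the factorization and a rank-one update P + c·a·aᵀ done naively by refactorization; the Agee-Turner algorithm referenced above produces the updated factors directly in O(n²) without ever forming P.

```python
import numpy as np

# UDU' covariance factorization: P = U @ diag(D) @ U.T,
# U unit upper triangular, D diagonal (standard Bierman-style form).
def udu_factor(P):
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    P = P.copy()
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        U[:j, j] = P[:j, j] / D[j]
        P[:j, :j] -= D[j] * np.outer(U[:j, j], U[:j, j])
    return U, D

P = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
U, D = udu_factor(P)
print(np.allclose(U @ np.diag(D) @ U.T, P))  # True

# Rank-one update P + c*a*a.T, here by naive refactorization;
# Agee-Turner updates U and D in place without reconstructing P.
a, c = np.array([1.0, -1.0, 2.0]), 0.5
U2, D2 = udu_factor(P + c * np.outer(a, a))
print(np.allclose(U2 @ np.diag(D2) @ U2.T, P + c * np.outer(a, a)))  # True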

  17. Evaluation of the Absolute Regional Temperature Potential

    NASA Technical Reports Server (NTRS)

    Shindell, D. T.

    2012-01-01

    The Absolute Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90-28degS, 28degS-28degN, 28-60degN and 60-90degN) as a function of emissions based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within +/- 20% of the actual responses, though there are some exceptions for 90-28degS and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the +/- 20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39-45% and 9-39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.
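
    The ARTP construction above reduces to a small matrix-vector product: the temperature response in each latitude band is a weighted sum of the forcings in all bands. The sketch below illustrates that structure; the coefficient and forcing values are hypothetical placeholders, not the actual values from the paper.

```python
import numpy as np

# Illustrative ARTP-style calculation. All numeric values below are
# HYPOTHETICAL placeholders, not the paper's coefficients.
bands = ["90-28S", "28S-28N", "28-60N", "60-90N"]

# response_coeff[i][j]: K of response in band i per W/m^2 of forcing in band j
response_coeff = np.array([
    [0.6, 0.3, 0.1, 0.0],
    [0.2, 0.6, 0.2, 0.1],
    [0.1, 0.4, 0.7, 0.3],
    [0.1, 0.4, 0.6, 0.9],
])

forcing = np.array([0.0, -0.3, -0.5, -0.4])  # hypothetical aerosol forcing, W/m^2

response = response_coeff @ forcing          # per-band temperature response, K
for band, dT in zip(bands, response):
    print(f"{band}: {dT:+.3f} K")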

  18. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  19. Novel isotopic N, N-dimethyl leucine (iDiLeu) reagents enable absolute quantification of peptides and proteins using a standard curve approach

    PubMed Central

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2014-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive due to the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using Mass Differential Tags for Relative and Absolute Quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine reactive group, triazine ester, are cost effective due to their synthetic simplicity, and have increased throughput compared to previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error) while the second enables standard curve creation and analyte quantification in one run (<8% error). PMID:25377360
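
    The four-point standard-curve approach described above reduces to a linear fit of labeled-standard peak area against known amount, from which the sample amount is back-calculated. A minimal sketch (all numbers hypothetical, for illustration only):

```python
import numpy as np

# Four iDiLeu channels carry known standard amounts; the sample amount
# is read off a linear fit of peak area vs. amount. Numbers are
# hypothetical, for illustration only.
std_amount = np.array([10.0, 50.0, 100.0, 200.0])   # fmol spiked standards
std_area = np.array([1.1e5, 5.3e5, 1.04e6, 2.1e6])  # measured peak areas

slope, intercept = np.polyfit(std_amount, std_area, 1)
sample_area = 7.9e5                                  # sample channel peak area
sample_amount = (sample_area - intercept) / slope
print(f"estimated analyte: {sample_amount:.1f} fmol")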

  20. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    NASA Astrophysics Data System (ADS)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine reactive group, triazine ester, are cost effective because of their synthetic simplicity, and have increased throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).

  1. Absolute Cerebral Blood Flow Infarction Threshold for 3-Hour Ischemia Time Determined with CT Perfusion and 18F-FFMZ-PET Imaging in a Porcine Model of Cerebral Ischemia

    PubMed Central

    Cockburn, Neil; Kovacs, Michael

    2016-01-01

    CT Perfusion (CTP) derived cerebral blood flow (CBF) thresholds have been proposed as the optimal parameter for distinguishing the infarct core prior to reperfusion. Previous threshold-derivation studies have been limited by uncertainties introduced by infarct expansion between the acute phase of stroke and follow-up imaging, or DWI lesion reversibility. In this study a model is proposed for determining infarction CBF thresholds at 3hr ischemia time by comparing contemporaneously acquired CTP derived CBF maps to 18F-FFMZ-PET imaging, with the objective of deriving a CBF threshold for infarction after 3 hours of ischemia. Endothelin-1 (ET-1) was injected into the brain of Duroc-Cross pigs (n = 11) through a burr hole in the skull. CTP images were acquired 10 and 30 minutes post ET-1 injection and then every 30 minutes for 150 minutes. 370 MBq of 18F-FFMZ was injected ~120 minutes post ET-1 injection and PET images were acquired for 25 minutes starting ~155–180 minutes post ET-1 injection. CBF maps from each CTP acquisition were co-registered and converted into a median CBF map. The median CBF map was co-registered to blood volume maps for vessel exclusion, an average CT image for grey/white matter segmentation, and 18F-FFMZ-PET images for infarct delineation. Logistic regression and ROC analysis were performed on infarcted and non-infarcted pixel CBF values for each animal that developed infarct. Six of the eleven animals developed infarction. The mean CBF value corresponding to the optimal operating point of the ROC curves for the 6 animals was 12.6 ± 2.8 mL·min-1·100g-1 for infarction after 3 hours of ischemia. The porcine ET-1 model of cerebral ischemia is easier to implement than other large animal models of stroke, and performs similarly as long as CBF is monitored using CTP to prevent reperfusion. PMID:27347877
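
    The ROC "optimal operating point" used above is commonly taken as the threshold maximizing Youden's J = sensitivity + specificity - 1. The sketch below applies that criterion to synthetic pixel CBF values (the data and distributions are invented for illustration; this is not the study's pipeline, which also involved logistic regression).

```python
import numpy as np

# Synthetic pixel CBF values, for illustration only.
rng = np.random.default_rng(1)
infarct_cbf = rng.normal(10.0, 3.0, 500)   # mL/min/100g, infarcted pixels
healthy_cbf = rng.normal(40.0, 10.0, 500)  # non-infarcted pixels

def best_threshold(pos, neg):
    """Threshold maximizing Youden's J; pixels below it are called infarct."""
    best_j, best_t = -1.0, None
    for thr in np.linspace(0, 60, 601):
        sens = np.mean(pos < thr)    # infarct pixels correctly below threshold
        spec = np.mean(neg >= thr)   # healthy pixels correctly above it
        youden = sens + spec - 1.0
        if youden > best_j:
            best_j, best_t = youden, thr
    return best_t, best_j

t, j = best_threshold(infarct_cbf, healthy_cbf)
print(f"optimal CBF threshold ~= {t:.1f} mL/min/100g (J = {j:.2f})")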

  2. Precise Measurement of the Absolute Fluorescence Yield

    NASA Astrophysics Data System (ADS)

    Ave, M.; Bohacova, M.; Daumiller, K.; Di Carlo, P.; di Giulio, C.; San Luis, P. Facal; Gonzales, D.; Hojvat, C.; Hörandel, J. R.; Hrabovsky, M.; Iarlori, M.; Keilhauer, B.; Klages, H.; Kleifges, M.; Kuehn, F.; Monasor, M.; Nozka, L.; Palatka, M.; Petrera, S.; Privitera, P.; Ridky, J.; Rizi, V.; D'Orfeuil, B. Rouille; Salamida, F.; Schovanek, P.; Smida, R.; Spinka, H.; Ulrich, A.; Verzi, V.; Williams, C.

    2011-09-01

    We present preliminary results of the absolute yield of fluorescence emission in atmospheric gases. Measurements were performed at the Fermilab Test Beam Facility with a variety of beam particles and gases. Absolute calibration of the fluorescence yield to 5% level was achieved by comparison with two known light sources--the Cherenkov light emitted by the beam particles, and a calibrated nitrogen laser. The uncertainty of the energy scale of current Ultra-High Energy Cosmic Rays experiments will be significantly improved by the AIRFLY measurement.

  3. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    PubMed

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed. PMID:26022836

  4. Sensitivity of disease management decision aids to temperature input errors associated with out-of-canopy and reduced time-resolution measurements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plant disease management decision aids typically require inputs of weather elements such as air temperature. Whereas many disease models are created based on weather elements at the crop canopy, and with relatively fine time resolution, the decision aids commonly are implemented with hourly weather...

  5. The theoretical accuracy of Runge-Kutta time discretizations for the initial boundary value problem: A careful study of the boundary error

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun

    1993-01-01

    The conventional method of imposing time dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle, (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.

  6. Bio-Inspired Stretchable Absolute Pressure Sensor Network

    PubMed Central

    Guo, Yue; Li, Yu-Hung; Guo, Zhiqiang; Kim, Kyunglok; Chang, Fu-Kuo; Wang, Shan X.

    2016-01-01

    A bio-inspired absolute pressure sensor network has been developed. Absolute pressure sensors, distributed on multiple silicon islands, are connected as a network by stretchable polyimide wires. This sensor network, made on a 4'' wafer, has 77 nodes and can be mounted on various curved surfaces to cover an area up to 0.64 m × 0.64 m, which is 100 times larger than its original size. Due to Micro Electro-Mechanical system (MEMS) surface micromachining technology, ultrathin sensing nodes can be realized with thicknesses of less than 100 µm. Additionally, good linearity and high sensitivity (~14 mV/V/bar) have been achieved. Since the MEMS sensor process has also been well integrated with a flexible polymer substrate process, the entire sensor network can be fabricated in a time-efficient and cost-effective manner. Moreover, an accurate pressure contour can be obtained from the sensor network. Therefore, this absolute pressure sensor network holds significant promise for smart vehicle applications, especially for unmanned aerial vehicles. PMID:26729134

  7. Bio-Inspired Stretchable Absolute Pressure Sensor Network.

    PubMed

    Guo, Yue; Li, Yu-Hung; Guo, Zhiqiang; Kim, Kyunglok; Chang, Fu-Kuo; Wang, Shan X

    2016-01-01

    A bio-inspired absolute pressure sensor network has been developed. Absolute pressure sensors, distributed on multiple silicon islands, are connected as a network by stretchable polyimide wires. This sensor network, made on a 4'' wafer, has 77 nodes and can be mounted on various curved surfaces to cover an area up to 0.64 m × 0.64 m, which is 100 times larger than its original size. Due to Micro Electro-Mechanical system (MEMS) surface micromachining technology, ultrathin sensing nodes can be realized with thicknesses of less than 100 µm. Additionally, good linearity and high sensitivity (~14 mV/V/bar) have been achieved. Since the MEMS sensor process has also been well integrated with a flexible polymer substrate process, the entire sensor network can be fabricated in a time-efficient and cost-effective manner. Moreover, an accurate pressure contour can be obtained from the sensor network. Therefore, this absolute pressure sensor network holds significant promise for smart vehicle applications, especially for unmanned aerial vehicles. PMID:26729134

  8. Absolute charge calibration of scintillating screens for relativistic electron detection

    SciTech Connect

    Buck, A.; Popp, A.; Schmid, K.; Karsch, S.; Krausz, F.; Zeil, K.; Jochmann, A.; Kraft, S. D.; Sauerbrey, R.; Cowan, T.; Schramm, U.; Hidding, B.; Kudyakov, T.; Sears, C. M. S.; Veisz, L.; Pawelke, J.

    2010-03-15

    We report on new charge calibrations and linearity tests with high-dynamic range for eight different scintillating screens typically used for the detection of relativistic electrons from laser-plasma based acceleration schemes. The absolute charge calibration was done with picosecond electron bunches at the ELBE linear accelerator in Dresden. The lower detection limit in our setup for the most sensitive scintillating screen (KODAK Biomax MS) was 10 fC/mm². The screens showed a linear photon-to-charge dependency over several orders of magnitude. An onset of saturation effects starting around 10-100 pC/mm² was found for some of the screens. Additionally, a constant light source was employed as a luminosity reference to simplify the transfer of a one-time absolute calibration to different experimental setups.

  9. Absolute charge calibration of scintillating screens for relativistic electron detection

    NASA Astrophysics Data System (ADS)

    Buck, A.; Zeil, K.; Popp, A.; Schmid, K.; Jochmann, A.; Kraft, S. D.; Hidding, B.; Kudyakov, T.; Sears, C. M. S.; Veisz, L.; Karsch, S.; Pawelke, J.; Sauerbrey, R.; Cowan, T.; Krausz, F.; Schramm, U.

    2010-03-01

    We report on new charge calibrations and linearity tests with high-dynamic range for eight different scintillating screens typically used for the detection of relativistic electrons from laser-plasma based acceleration schemes. The absolute charge calibration was done with picosecond electron bunches at the ELBE linear accelerator in Dresden. The lower detection limit in our setup for the most sensitive scintillating screen (KODAK Biomax MS) was 10 fC/mm2. The screens showed a linear photon-to-charge dependency over several orders of magnitude. An onset of saturation effects starting around 10-100 pC/mm2 was found for some of the screens. Additionally, a constant light source was employed as a luminosity reference to simplify the transfer of a one-time absolute calibration to different experimental setups.

  10. Absolute calibration of vacuum ultraviolet spectrograph system for plasma diagnostics

    SciTech Connect

    Yoshikawa, M.; Kubota, Y.; Kobayashi, T.; Saito, M.; Numada, N.; Nakashima, Y.; Cho, T.; Koguchi, H.; Yagi, Y.; Yamaguchi, N.

    2004-10-01

    A space- and time-resolving vacuum ultraviolet (VUV) spectrograph system has been applied to diagnose impurity ion behavior in plasmas produced in the tandem mirror GAMMA 10 and the reversed field pinch TPE-RX. We have carried out ray tracing calculations for obtaining the characteristics of the VUV spectrograph and calibration experiments to measure the absolute sensitivities of the VUV spectrograph system for the wavelength range from 100 to 1100 Å. By changing the incident angle, from 50.6° to 51.4°, to the spectrograph whose nominal incident angle is 51°, we can change the observing spectral range of the VUV spectrograph. In this article, we show the ray tracing calculation results and absolute sensitivities when the angle of incidence into the VUV spectrograph is changed, and the results of VUV spectroscopic measurement in both GAMMA 10 and TPE-RX plasmas.

  11. Absolute phase effects on CPMG-type pulse sequences

    NASA Astrophysics Data System (ADS)

    Mandal, Soumyajit; Oh, Sangwon; Hürlimann, Martin D.

    2015-12-01

    We describe and analyze the effects of transients within radio-frequency (RF) pulses on multiple-pulse NMR measurements such as the well-known Carr-Purcell-Meiboom-Gill (CPMG) sequence. These transients are functions of the absolute RF phases at the beginning and end of the pulse, and are thus affected by the timing of the pulse sequence with respect to the period of the RF waveform. Changes in transients between refocusing pulses in CPMG-type sequences can result in signal decay, persistent oscillations, changes in echo shape, and other effects. We have explored such effects by performing experiments in two different low-frequency NMR systems. The first uses a conventional tuned-and-matched probe circuit, while the second uses an ultra-broadband un-tuned or non-resonant probe circuit. We show that there are distinct differences between the absolute phase effects in these two systems, and present simple models that explain these differences.

  12. Improved Absolute Approximation Ratios for Two-Dimensional Packing Problems

    NASA Astrophysics Data System (ADS)

    Harren, Rolf; van Stee, Rob

    We consider the two-dimensional bin packing and strip packing problem, where a list of rectangles has to be packed into a minimal number of rectangular bins or a strip of minimal height, respectively. All packings have to be non-overlapping and orthogonal, i.e., axis-parallel. Our algorithm for strip packing has an absolute approximation ratio of 1.9396 and is the first algorithm to break the approximation ratio of 2 which was established more than a decade ago. Moreover, we present a polynomial-time approximation scheme (PTAS) for strip packing where rotations by 90 degrees are permitted and an algorithm for two-dimensional bin packing with an absolute worst-case ratio of 2, which is optimal provided P ≠ NP.
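
    For contrast with the sophisticated 1.9396-ratio algorithm above, a classical baseline for strip packing is the Next-Fit Decreasing Height (NFDH) shelf heuristic, sketched below. This is not the paper's algorithm; it simply illustrates the problem setting (rectangles of width ≤ 1 packed into a unit-width strip).

```python
# Next-Fit Decreasing Height (NFDH) shelf heuristic for strip packing.
# Baseline sketch only; NOT the 1.9396-ratio algorithm of the paper.
# Rectangles are (width, height) tuples with width <= 1; the strip has width 1.
def nfdh_strip_pack(rects):
    rects = sorted(rects, key=lambda r: r[1], reverse=True)  # by height, desc
    shelves = []          # each shelf: [shelf_height, used_width]
    total_height = 0.0
    for w, h in rects:
        if shelves and shelves[-1][1] + w <= 1.0:
            shelves[-1][1] += w        # rectangle fits on the current shelf
        else:
            shelves.append([h, w])     # open a new shelf at this height
            total_height += h
    return total_height

rects = [(0.6, 0.5), (0.5, 0.4), (0.5, 0.3), (0.4, 0.3), (0.3, 0.2)]
print(nfdh_strip_pack(rects))  # packed strip height: 1.2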

  13. Two-stage model of African absolute motion during the last 30 million years

    NASA Astrophysics Data System (ADS)

    Pollitz, Fred F.

    1991-07-01

    The absolute motion of Africa (relative to the hotspots) for the past 30 My is modeled with two Euler vectors, with a change occurring at 6 Ma. Because of the high sensitivity of African absolute motions to errors in the absolute motions of the North America and Pacific plates, both the pre-6 Ma and post-6 Ma African absolute motions are determined simultaneously with North America and Pacific absolute motions for various epochs. Geologic data from the northern Atlantic and hotspot tracks from the African plate are used to augment previous data sets for the North America and Pacific plates. The difference between the pre-6 Ma and post-6 Ma absolute plate motions may be represented as a counterclockwise rotation about a pole at 48°S, 84°E, with angular velocity 0.085°/My. This change is supported by geologic evidence along a large portion of the African plate boundary, including the Red Sea and Gulf of Aden spreading systems, the Alpine deformation zone, and the central and southern mid-Atlantic Ridge. Although the change is modeled as one abrupt transition at 6 Ma, it was most likely a gradual change spanning the period 8-4 Ma. As a likely mechanism for the change, we favor strong asthenospheric return flow from the Afar hotspot towards the southwest; this could produce the uniform southwesterly shift in absolute motion which we have inferred as well as provide a mechanism for the opening of the East African Rift. Comparing the absolute motions of the North America and Pacific plates with earlier estimates, the pole positions are revised by up to 5° and the angular velocities are decreased by 10-20%.
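
    An Euler vector (pole latitude, longitude, angular rate) converts to a surface velocity via v = ω × r. The sketch below applies this to the difference rotation quoted above (pole 48°S, 84°E, 0.085°/My) to get the corresponding change in surface velocity at a sample point; the sample point (0°N, 35°E, near the East African Rift) is chosen here for illustration.

```python
import numpy as np

# Change in surface velocity implied by the difference Euler vector
# (pole 48S, 84E, 0.085 deg/My) at a sample point: v = omega x r.
R_EARTH_KM = 6371.0

def unit_vector(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

omega = np.radians(0.085) * unit_vector(-48.0, 84.0)  # rad/My
r = R_EARTH_KM * unit_vector(0.0, 35.0)               # position vector, km

v = np.cross(omega, r)  # km/My, numerically equal to mm/yr
print(f"|v| = {np.linalg.norm(v):.2f} mm/yr")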

  14. Usage tests of oak moss absolutes containing high and low levels of atranol and chloroatranol.

    PubMed

    Mowitz, Martin; Svedman, Cecilia; Zimerson, Erik; Bruze, Magnus

    2014-07-01

    Atranol and chloroatranol are strong contact allergens in oak moss absolute, a lichen extract used in perfumery. Fifteen subjects with contact allergy to oak moss absolute underwent a repeated open application test (ROAT) using solutions of an untreated oak moss absolute (sample A) and an oak moss absolute with reduced content of atranol and chloroatranol (sample B). All subjects were in addition patch-tested with serial dilutions of samples A and B. Statistically significantly more subjects reacted to sample A than to sample B in the patch tests. No corresponding difference was observed in the ROAT, though there was a significant difference in the time required to elicit a positive reaction. Still, the ROAT indicates that the use of a cosmetic product containing oak moss absolute with reduced levels of atranol and chloroatranol is capable of eliciting an allergic reaction in previously sensitised individuals. PMID:24287679

  15. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Radar error statistics of C-band and S-band that are recommended for use with the groundtracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and due to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.

  16. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).

  17. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
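
    The TER/LER distinction above can be made concrete in a few lines: under TER (Rescorla-Wagner style) every cue's weight moves by the total compound error, while under LER each cue's weight moves by its own cue-specific error. A minimal one-trial sketch with invented weights:

```python
# One compound trial with cues A and B and delivered outcome lambda = 1.
# TER: each weight moves by the TOTAL error (outcome minus the summed
# prediction of all present cues). LER: each weight moves by its own
# LOCAL error. Weights and learning rate are illustrative.
alpha = 0.3          # learning rate
lam = 1.0            # delivered outcome
w = {"A": 0.8, "B": 0.4}

ter = {c: w[c] + alpha * (lam - sum(w.values())) for c in w}
ler = {c: w[c] + alpha * (lam - w[c]) for c in w}

print("TER:", ter)   # total error = 1.0 - 1.2 = -0.2 -> both weights decrease
print("LER:", ler)   # each local error is positive -> both weights increase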

  18. Picoliter Well Array Chip-Based Digital Recombinase Polymerase Amplification for Absolute Quantification of Nucleic Acids

    PubMed Central

    Li, Zhao; Liu, Yong; Wei, Qingquan; Liu, Yuanjie; Liu, Wenwen; Zhang, Xuelian; Yu, Yude

    2016-01-01

    Absolute, precise quantification methods expand the scope of nucleic acids research and have many practical applications. Digital polymerase chain reaction (dPCR) is a powerful method for nucleic acid detection and absolute quantification. However, it requires thermal cycling and accurate temperature control, which are difficult in resource-limited conditions. Accordingly, isothermal methods, such as recombinase polymerase amplification (RPA), are more attractive. We developed a picoliter well array (PWA) chip with 27,000 consistently sized picoliter reactions (314 pL) for isothermal DNA quantification using digital RPA (dRPA) at 39°C. Sample loading using a scraping liquid blade was simple, fast, and required small reagent volumes (i.e., <20 μL). Passivating the chip surface using a methoxy-PEG-silane agent effectively eliminated cross-contamination during dRPA. Our creative optical design enabled wide-field fluorescence imaging in situ and both end-point and real-time analyses of picoliter wells in a 6-cm² area. It was not necessary to use scan shooting and stitch serial small images together. Using this method, we quantified serial dilutions of a Listeria monocytogenes gDNA stock solution from 9 × 10⁻¹ to 4 × 10⁻³ copies per well with an average error of less than 11% (N = 15). Overall dRPA-on-chip processing required less than 30 min, which was a 4-fold decrease compared to dPCR, requiring approximately 2 h. dRPA on the PWA chip provides a simple and highly sensitive method to quantify nucleic acids without thermal cycling or precise micropump/microvalve control. It has applications in fast field analysis and critical clinical diagnostics under resource-limited settings. PMID:27074005
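
    Digital assays such as dPCR and the dRPA chip above recover absolute concentration from the fraction of positive wells using Poisson statistics: if a fraction p of wells lights up, the mean template count per well is λ = -ln(1 - p). This is the standard digital-quantification relation, not a formula specific to this paper; the positive-well count below is hypothetical.

```python
import math

# Standard Poisson correction for digital assays: a fraction p of
# positive wells implies lambda = -ln(1 - p) template copies per well.
def copies_per_well(positive_wells, total_wells):
    p = positive_wells / total_wells
    return -math.log(1.0 - p)

# e.g. a hypothetical 10,000 positives out of the chip's 27,000 wells
lam = copies_per_well(10_000, 27_000)
print(f"{lam:.3f} copies per well")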

  20. Absolute partial photoionization cross sections of ozone.

    SciTech Connect

    Berkowitz, J.; Chemistry

    2008-04-01

    Despite the current concerns about ozone, absolute partial photoionization cross sections for this molecule in the vacuum ultraviolet (valence) region have been unavailable. By eclectic re-evaluation of old/new data and plausible assumptions, such cross sections have been assembled to fill this void.

  1. Solving Absolute Value Equations Algebraically and Geometrically

    ERIC Educational Resources Information Center

    Shiyuan, Wei

    2005-01-01

    The way in which students can improve their comprehension by understanding the geometrical meaning of algebraic equations or solving algebraic equation geometrically is described. Students can experiment with the conditions of the absolute value equation presented, for an interesting way to form an overall understanding of the concept.
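
    The geometrical meaning described here is easy to make concrete: |x − a| is the distance from x to a on the number line, so |x − a| = b picks out the points at distance b from a. A small illustrative sketch (the function name is assumed):

```python
def solve_abs_equation(a: float, b: float) -> list[float]:
    """Solve |x - a| = b geometrically: the solutions are the points at
    distance b from a, i.e. x = a - b and x = a + b; one solution if
    b == 0, and none if b < 0 (distance cannot be negative)."""
    if b < 0:
        return []
    if b == 0:
        return [a]
    return [a - b, a + b]
```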

  2. Teaching Absolute Value Inequalities to Mature Students

    ERIC Educational Resources Information Center

    Sierpinska, Anna; Bobos, Georgeana; Pruncut, Andreea

    2011-01-01

    This paper gives an account of a teaching experiment on absolute value inequalities, whose aim was to identify characteristics of an approach that would realize the potential of the topic to develop theoretical thinking in students enrolled in prerequisite mathematics courses at a large, urban North American university. The potential is…

  3. Increasing Capacity: Practice Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Dodds, Pennie; Donkin, Christopher; Brown, Scott D.; Heathcote, Andrew

    2011-01-01

    In most of the long history of the study of absolute identification--since Miller's (1956) seminal article--a severe limit on performance has been observed, and this limit has resisted improvement even by extensive practice. In a startling result, Rouder, Morey, Cowan, and Pfaltz (2004) found substantially improved performance with practice in the…

  4. On Relative and Absolute Conviction in Mathematics

    ERIC Educational Resources Information Center

    Weber, Keith; Mejia-Ramos, Juan Pablo

    2015-01-01

    Conviction is a central construct in mathematics education research on justification and proof. In this paper, we claim that it is important to distinguish between absolute conviction and relative conviction. We argue that researchers in mathematics education frequently have not done so and this has led to researchers making unwarranted claims…

  5. Absolute Points for Multiple Assignment Problems

    ERIC Educational Resources Information Center

    Adlakha, V.; Kowalski, K.

    2006-01-01

    An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…

  6. Nonequilibrium equalities in absolutely irreversible processes

    NASA Astrophysics Data System (ADS)

    Murashita, Yuto; Funo, Ken; Ueda, Masahito

    2015-03-01

    Nonequilibrium equalities have attracted considerable attention in the context of statistical mechanics and information thermodynamics. Integral nonequilibrium equalities reveal an ensemble property of the entropy production σ as ⟨e^(−σ)⟩ = 1. Although nonequilibrium equalities apply to rather general nonequilibrium situations, they break down in absolutely irreversible processes, where the forward-path probability vanishes and the entropy production diverges. We identify the mathematical origin of this inapplicability as the singularity of the probability measure. As a result, we generalize conventional integral nonequilibrium equalities to absolutely irreversible processes as ⟨e^(−σ)⟩ = 1 − λ_S, where λ_S is the probability of the singular part defined based on Lebesgue's decomposition theorem. The acquired equality contains two physical quantities related to irreversibility: σ characterizing ordinary irreversibility and λ_S describing absolute irreversibility. An inequality derived from the obtained equality demonstrates that absolute irreversibility leads to a fundamental lower bound on the entropy production. We demonstrate the validity of the obtained equality for a simple model.

  7. Precision absolute positional measurement of laser beams.

    PubMed

    Fitzsimons, Ewan D; Bogenstahl, Johanna; Hough, James; Killow, Christian J; Perreur-Lloyd, Michael; Robertson, David I; Ward, Henry

    2013-04-20

    We describe an instrument which, coupled with a suitable coordinate measuring machine, facilitates the absolute measurement within the machine frame of the propagation direction of a millimeter-scale laser beam to an accuracy of around ±4 μm in position and ±20 μrad in angle. PMID:23669658

  8. The NASTRAN Error Correction Information System (ECIS)

    NASA Technical Reports Server (NTRS)

    Rosser, D. C., Jr.; Rogers, J. L., Jr.

    1975-01-01

    A data management procedure, called Error Correction Information System (ECIS), is described. The purpose of this system is to implement the rapid transmittal of error information between the NASTRAN Systems Management Office (NSMO) and the NASTRAN user community. The features of ECIS and its operational status are summarized. The mode of operation for ECIS is compared to the previous error correction procedures. It is shown how the user community can have access to error information much more rapidly when using ECIS. Flow charts and time tables characterize the convenience and time saving features of ECIS.

  9. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
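
    The 16-bit CRC mentioned at the end can be sketched as follows. This is the common bitwise CRC-16-CCITT form (polynomial 0x1021, initial value 0xFFFF), offered as an illustration of the technique rather than the exact CCSDS-specified parameters, which should be taken from the applicable CCSDS recommendation:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16-CCITT: shift each message byte into a 16-bit
    register and reduce modulo the generator polynomial."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

    With these parameters, the standard check string b"123456789" yields 0x29B1, and any single corrupted bit changes the checksum, which is what makes the CRC useful for error detection.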

  10. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
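
    When the allocated component errors are treated as independent, a budget is typically rolled up by root-sum-square rather than linear addition. A minimal sketch with assumed component names and values (real budgets may also carry reserves and correlated terms):

```python
import math

def rss_rollup(allocations: dict[str, float]) -> float:
    """Combine independent component error allocations into a system-level
    estimate by root-sum-square, the usual rule for uncorrelated errors."""
    return math.sqrt(sum(v * v for v in allocations.values()))

# Illustrative allocations (units arbitrary, e.g. micrometers of wavefront error)
budget = {"thermal distortion": 3.0, "alignment": 2.0, "sensor noise": 1.5}
total = rss_rollup(budget)  # noticeably less than the 6.5 linear sum
```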

  12. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
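
    The binarization step described above can be sketched as follows, assuming an east-facing coastline so that winds blowing from the 0°–180° sector count as onshore (the sector choice and the simple agreement score are illustrative assumptions, not the CEM subalgorithms themselves):

```python
def binarize_onshore(wind_dir_deg, onshore_sector=(0.0, 180.0)):
    """Binarize meteorological wind directions (degrees the wind blows FROM)
    into the 0/1 field CEM compares: 1 = onshore, 0 = offshore."""
    lo, hi = onshore_sector
    return [1 if lo <= (d % 360.0) <= hi else 0 for d in wind_dir_deg]

def agreement(forecast_bits, observed_bits):
    """Fraction of grid points where forecast D and observed d agree."""
    hits = sum(1 for a, b in zip(forecast_bits, observed_bits) if a == b)
    return hits / len(forecast_bits)
```

    In the full algorithm these 0/1 fields are evaluated per grid cell (i, j) and time step n; the sketch flattens that to a single list for clarity.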

  13. Medication errors: an overview for clinicians.

    PubMed

    Wittich, Christopher M; Burkle, Christopher M; Lanier, William L

    2014-08-01

    Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. PMID:24981217

  14. Absolute Cavity Pyrgeometer to Measure the Absolute Outdoor Longwave Irradiance with Traceability to International System of Units, SI

    SciTech Connect

    Reda, I.; Zeng, J.; Scheuch, J.; Hanssen, L.; Wilthan, B.; Myers, D.; Stoffel, T.

    2012-03-01

    This article describes a method of measuring the absolute outdoor longwave irradiance using an absolute cavity pyrgeometer (ACP), U.S. Patent application no. 13/049,275. The ACP consists of a domeless thermopile pyrgeometer, gold-plated concentrator, temperature controller, and data acquisition. The dome was removed from the pyrgeometer to remove errors associated with dome transmittance and the dome correction factor. To avoid thermal convection and wind effect errors resulting from using a domeless thermopile, the gold-plated concentrator was placed above the thermopile. The concentrator is a dual compound parabolic concentrator (CPC) with 180° view angle to measure the outdoor incoming longwave irradiance from the atmosphere. The incoming irradiance is reflected from the specular gold surface of the CPC and concentrated on the 11 mm diameter of the pyrgeometer's blackened thermopile. The CPC's interior surface design and the resulting cavitation result in a throughput value that was characterized by the National Institute of Standards and Technology. The ACP was installed horizontally outdoors on an aluminum plate connected to the temperature controller to control the pyrgeometer's case temperature. The responsivity of the pyrgeometer's thermopile detector was determined by lowering the case temperature and calculating the rate of change of the thermopile output voltage versus the changing net irradiance. The responsivity is then used to calculate the absolute atmospheric longwave irradiance with an uncertainty estimate (U₉₅) of ±3.96 W m⁻² with traceability to the International System of Units, SI. The measured irradiance was compared with the irradiance measured by two pyrgeometers calibrated by the World Radiation Center with traceability to the Interim World Infrared Standard Group, WISG. A total of 408 readings were collected over three different nights. The calculated irradiance measured by the ACP was 1.5 W/m² lower than that measured by the two

  16. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  17. Absolute photoionization cross-section of the propargyl radical

    SciTech Connect

    Savee, John D.; Welz, Oliver; Taatjes, Craig A.; Osborn, David L.; Soorkia, Satchin; Selby, Talitha M.

    2012-04-07

    Using synchrotron-generated vacuum-ultraviolet radiation and multiplexed time-resolved photoionization mass spectrometry we have measured the absolute photoionization cross-section for the propargyl (C₃H₃) radical, σ_propargyl^ion(E), relative to the known absolute cross-section of the methyl (CH₃) radical. We generated a stoichiometric 1:1 ratio of C₃H₃ : CH₃ from 193 nm photolysis of two different C₄H₆ isomers (1-butyne and 1,3-butadiene). Photolysis of 1-butyne yielded values of σ_propargyl^ion(10.213 eV) = (26.1 ± 4.2) Mb and σ_propargyl^ion(10.413 eV) = (23.4 ± 3.2) Mb, whereas photolysis of 1,3-butadiene yielded values of σ_propargyl^ion(10.213 eV) = (23.6 ± 3.6) Mb and σ_propargyl^ion(10.413 eV) = (25.1 ± 3.5) Mb. These measurements place our relative photoionization cross-section spectrum for propargyl on an absolute scale between 8.6 and 10.5 eV. The cross-section derived from our results is approximately a factor of three larger than previous determinations.
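
    The ratio method used here places a relative photoionization signal on an absolute scale through a reference species of known cross-section; a minimal sketch under the 1:1 stoichiometry stated in the abstract (the function name and the numbers in the test are illustrative, not the measured values):

```python
def sigma_from_reference(signal_target: float, signal_ref: float,
                         sigma_ref_mb: float,
                         ratio_target_to_ref: float = 1.0) -> float:
    """Scale a relative ion signal to an absolute cross-section (Mb) using
    a reference radical of known cross-section, assuming the photolytic
    precursor delivers target and reference radicals in a known
    stoichiometric ratio (1:1 for C4H6 -> C3H3 + CH3)."""
    return sigma_ref_mb * (signal_target / signal_ref) / ratio_target_to_ref
```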

  18. Absolute testing of flats in sub-stitching interferometer by rotation-shift method

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Xu, Fuchao; Xie, Weimin; Li, Yun; Xing, Tingwen

    2015-09-01

    Most commercially available sub-aperture stitching interferometers measure the surface with a standard lens that produces a reference wavefront, so the precision of the interferometer is generally limited by that standard lens. Higher test accuracy can be achieved by removing the error of the reference surface with an absolute testing method. When the testing accuracy (repeatability and reproducibility) approaches 1 nm, factors beyond the reference surface also affect the measuring accuracy, such as the environment, zoom magnification, stitching precision, tooling and fixturing, the characteristics of the optical materials, and so on. We established a stitching system in a class-1000 cleanroom. The system includes a Zygo interferometer and a motion system with a Bilz active isolation system at level VC-F. We review the traditional absolute flat testing methods and focus on the rotation-shift method. Using the rotation-shift method, we obtain the profiles of both the reference surface and the test surface. The main difficulty of the rotation-shift method is tilt error; in the motion system, we control the tilt error to no more than 4 arc seconds to reduce it. To obtain higher testing accuracy, we analyze the influence of the environment on surface-shape measurement accuracy by recording the environmental error with Fluke test equipment.

  19. Absolute calibration of Saral/altiKa on Lake Issykkul from GPS field

    NASA Astrophysics Data System (ADS)

    Crétaux, Jean-Francois; Calmant, Stephane; Romanovsky, Vladimir; Bonnefond, Pascal; Tashbaeva, Saadat; Berge-Nguyen, Muriel; Maisongrande, Philippe

    2015-04-01

    Within the framework of the Jason-2 mission, a Cal/Val project including continental waters (rivers and lakes) was set up in 2007. It includes the installation of a permanent site (meteo station, limnigraphs, GPS reference point) and regular field campaigns over the whole lifetime of the satellite. Lake Issykkul in Kyrgyzstan was chosen as the site dedicated to lakes, following a preliminary project on this lake in 2004. It is funded by CNES. Over the last decade, more and more scientific studies have used satellite altimetry to monitor inland waters. However, as for ocean studies, linking time series from different missions requires accurate monitoring of the biases and drifts of each parameter contributing to the final estimate of the reflector height. Moreover, there is clear evidence that the calibration of satellite altimetry over the ocean does not apply to inland seas (e.g., corrections, retracking, geographical effects). Regional Cal/Val sites supply invaluable data to formally establish the error budget of altimetry over continental water bodies, in addition to the global mission bias and drift monitoring. Moreover, the variety of calibration sites for altimetry has to be enlarged in order to achieve a more global distribution and a more robust assessment of the altimetry system, and to check whether specific conditions lead to different estimates of the absolute bias of the instruments. Calibration over lake surfaces, for example, has interesting characteristics with respect to the ocean surface: waves and tides are generally low and, in short, the dynamic variability is much smaller than in the oceanic domain. Cal/Val activities in the oceanic domain have a long history and well-established protocols. Cal/Val activities on lakes are much more recent, but in turn they address other problems, such as the performance of the various tracking/retracking algorithms, and more globally they assess the quality of the geophysical corrections. This is achievable when measurements of

  20. A binary spelling interface with random errors.

    PubMed

    Perelmouter, J; Birbaumer, N

    2000-06-01

    An algorithm for the design of a spelling interface based on a modified Huffman algorithm is presented. This algorithm builds a full binary tree that maximizes the average probability of reaching the leaf where a required character is located when the choice at each node may be made with errors. A means to correct errors (a delete-function) and an optimization method for building this delete-function into the binary tree are also discussed. Such a spelling interface could be successfully applied to any menu-orientated alternative communication system in which a user (typically, a patient with a devastating neuromuscular handicap) is unable to express an intended single binary response with absolute reliability, whether through motor responses or through brain-computer interfaces. PMID:10896195
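
    The underlying Huffman construction can be sketched as follows. This is plain Huffman coding over assumed character frequencies; the paper's modification for error-prone binary choices and its delete-function are not reproduced here:

```python
import heapq
from itertools import count

def huffman_codes(freqs: dict[str, float]) -> dict[str, str]:
    """Build a full binary code tree over the character set: repeatedly
    merge the two least-frequent subtrees, prefixing '0'/'1' to the codes
    of each side. Frequent characters end up near the root (short paths,
    i.e. few binary choices for the user)."""
    tie = count()  # tie-breaker so the heap never compares dicts
    heap = [(f, next(tie), {c: ""}) for c, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f0, _, left = heapq.heappop(heap)
        f1, _, right = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in left.items()}
        merged.update({c: "1" + code for c, code in right.items()})
        heapq.heappush(heap, (f0 + f1, next(tie), merged))
    return heap[0][2]
```

    For example, with frequencies {"e": 0.5, "t": 0.3, "x": 0.2} the most frequent character "e" receives a one-step code while the others need two binary choices, which is what minimizes the expected number of selections per character.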