Science.gov

Sample records for absolute timing error

  1. Absolute Time Error Calibration of GPS Receivers Using Advanced GPS Simulators

    DTIC Science & Technology

    1997-12-01

29th Annual Precise Time and Time Interval (PTTI) Meeting. ABSOLUTE TIME ERROR CALIBRATION OF GPS RECEIVERS USING ADVANCED GPS SIMULATORS. E.D...DC 20375 USA. Abstract: Precise time transfer experiments using GPS with time stabilities under ten nanoseconds are commonly being reported within the time transfer community. Relative calibrations are done by measuring the time error of one GPS receiver versus a "known master reference receiver."

  2. Accurate absolute GPS positioning through satellite clock error estimation

    NASA Astrophysics Data System (ADS)

    Han, S.-C.; Kwon, J. H.; Jekeli, C.

    2001-05-01

    An algorithm for very accurate absolute positioning through Global Positioning System (GPS) satellite clock estimation has been developed. Using International GPS Service (IGS) precise orbits and measurements, GPS clock errors were estimated at 30-s intervals. Compared to values determined by the Jet Propulsion Laboratory, the agreement was at the level of about 0.1 ns (3 cm). The clock error estimates were then applied to an absolute positioning algorithm in both static and kinematic modes. For the static case, an IGS station was selected and the coordinates were estimated every 30 s. The estimated absolute position coordinates and the known values had a mean difference of up to 18 cm with standard deviation less than 2 cm. For the kinematic case, data obtained every second from a GPS buoy were tested and the result from the absolute positioning was compared to a differential GPS (DGPS) solution. The mean differences between the coordinates estimated by the two methods are less than 40 cm and the standard deviations are less than 25 cm. It was verified that this poorer standard deviation on 1-s position results is due to the clock error interpolation from 30-s estimates with Selective Availability (SA). After SA was turned off, higher-rate clock error estimates (such as 1 s) could be obtained by a simple interpolation with negligible corruption. Therefore, the proposed absolute positioning technique can be used to within a few centimeters' precision at any rate by estimating 30-s satellite clock errors and interpolating them.
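The abstract's closing point, that 30-s satellite clock error estimates can be interpolated to higher rates (e.g. 1 s) with negligible corruption once SA is off, amounts to simple interpolation between epochs. A minimal sketch with hypothetical clock values (the epochs and numbers below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical satellite clock error estimates at 30-s epochs (seconds).
t30 = np.arange(0, 121, 30)                     # 0, 30, ..., 120 s
clk30 = np.array([1.0e-8, 1.2e-8, 1.1e-8, 1.3e-8, 1.25e-8])

# Linearly interpolate to a 1-s grid, as done after SA was turned off.
t1 = np.arange(0, 121, 1)
clk1 = np.interp(t1, t30, clk30)

# The interpolant reproduces the 30-s estimates at their own epochs.
assert np.allclose(clk1[::30], clk30)
```

With SA active the clock dithering between epochs is not smooth, which is why the same interpolation degraded the 1-s kinematic results in the study.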

  3. Clock time is absolute and universal

    NASA Astrophysics Data System (ADS)

    Shen, Xinhang

    2015-09-01

A critical error is found in the Special Theory of Relativity (STR): mixing up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation on physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time; it can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the product of the abstract time and the frequency of oscillation. After a Lorentz transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, so the resulting product, i.e. the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of the Lorentz transformation. The Lorentz invariance of the displayed times of clocks can further prove, within the framework of STR, that our earth-based standard physical time is absolute, universal and independent of inertial reference frames, as confirmed both by the physical fact of the universal synchronization of clocks on the GPS satellites and clocks on the earth, and by the theoretical existence of the absolute and universal Galilean time in STR, which has proved that time dilation and space contraction are pure illusions of STR. The existence of the absolute and universal time in STR directly denies that the reference-frame-dependent abstract time of STR is the physical time; therefore, STR is wrong and all its predictions can never happen in the physical world.

  4. Sub-nanometer periodic nonlinearity error in absolute distance interferometers.

    PubMed

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

Periodic nonlinearity, which can result in errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus the main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  5. Absolute Plate Velocities from Seismic Anisotropy: Importance of Correlated Errors

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Zheng, L.; Kreemer, C.

    2014-12-01

The orientation of seismic anisotropy inferred beneath the interiors of plates may provide a means to estimate the motions of the plate relative to the deeper mantle. Here we analyze a global set of shear-wave splitting data to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. The errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2°) differs insignificantly from that for continental lithosphere (σ=21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4°) than for continental lithosphere (σ=14.7°). Two of the slowest-moving plates, Antarctica (vRMS=4 mm a-1, σ=29°) and Eurasia (vRMS=3 mm a-1, σ=33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈5 mm a-1 to result in seismic anisotropy useful for estimating plate motion.

  6. Absolute plate velocities from seismic anisotropy: Importance of correlated errors

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Gordon, Richard G.; Kreemer, Corné

    2014-09-01

    The errors in plate motion azimuths inferred from shear wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma-1 (95% confidence limits) right handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ = 19.2°) differs insignificantly from that for continental lithosphere (σ = 21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ = 7.4°) than for continental lithosphere (σ = 14.7°). Two of the slowest-moving plates, Antarctica (vRMS = 4 mm a-1, σ = 29°) and Eurasia (vRMS = 3 mm a-1, σ = 33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈ 5 mm a-1 to result in seismic anisotropy useful for estimating plate motion. The tendency of observed azimuths on the Arabia plate to be counterclockwise of plate motion may provide information about the direction and amplitude of superposed asthenospheric flow or about anisotropy in the lithospheric mantle.
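The quoted net lithospheric rotation of 0.25° Ma-1 can be translated into a maximum surface speed via v = ωR, which puts the rotation rate on the same mm/a scale as the plate speeds discussed in the abstract. This conversion is illustrative and not part of the paper:

```python
import math

omega_deg_per_Ma = 0.25      # quoted net lithospheric rotation rate
R_earth_mm = 6.371e9         # mean Earth radius in millimetres

# Convert to radians per year, then to the maximum surface speed,
# reached 90 degrees from the rotation pole: v = omega * R.
omega_rad_per_a = math.radians(omega_deg_per_Ma) / 1e6
v_max_mm_per_a = omega_rad_per_a * R_earth_mm
print(round(v_max_mm_per_a, 1))   # roughly 28 mm/a
```

So the net rotation, at its fastest point, is several times the ≈5 mm/a threshold the authors suggest for anisotropy to track plate motion.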

  7. Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles

    ERIC Educational Resources Information Center

    Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner

    2016-01-01

    This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…

  8. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.
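The phase and time figures quoted above are linked by the pulsar's rotation period: a lead expressed in periods converts to time by multiplication. A quick consistency check, assuming the Crab's approximate ~33.6 ms period (the exact ephemeris value is not given in the abstract):

```python
phase_lead = 0.01025        # X-ray leads radio, in units of the pulse period
crab_period_s = 0.0336      # approximate Crab pulsar period (assumed value)

time_lead_us = phase_lead * crab_period_s * 1e6
print(round(time_lead_us, 1))   # ~344 microseconds, matching the abstract
```

Distinguishing the two interpretations at the end of the abstract amounts to asking whether `phase_lead` or `time_lead_us` stays fixed as the period slowly changes.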

  9. Error analysis in newborn screening: can quotients support the absolute values?

    PubMed

    Arneth, Borros; Hintz, Martin

    2017-03-01

Newborn screening is performed using modern tandem mass spectrometry, which can simultaneously detect a variety of analytes, including several amino acids and fatty acids. Tandem mass spectrometry measures the diagnostic parameters as absolute concentrations and produces fragments which are used as markers of specific substances. Several prominent quotients of two absolutely measured concentrations can also be derived. In this study, we determined the precision of both the absolute concentrations and the derived quotients. First, the measurement errors of the absolute concentrations and of the ratios were determined in practice. Then, the Gaussian theory of error calculation was applied. Finally, these errors were compared with one another. The practical analytical accuracies of the quotients were significantly higher (e.g., coefficient of variation (CV) = 5.1% for the phenylalanine to tyrosine (Phe/Tyr) quotient and CV = 5.6% for the Fisher quotient) than the accuracies of the absolute measured concentrations (mean CVs = 12%). According to our results, the ratios are analytically correct and, from an analytical point of view, can support the absolute values in finding the correct diagnosis.
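The Gaussian error calculation mentioned here is standard propagation for a ratio Q = A/B: for independent errors the relative errors add in quadrature, while correlated errors partially cancel. A small sketch (the correlation value below is illustrative, not from the paper) showing why a quotient CV of ~5% from two ~12% measurements implies strongly correlated errors:

```python
import math

def cv_quotient(cv_a, cv_b, corr=0.0):
    """Gaussian error propagation for Q = A/B, given relative errors
    cv_a and cv_b and a correlation coefficient corr between A and B."""
    return math.sqrt(cv_a**2 + cv_b**2 - 2.0 * corr * cv_a * cv_b)

# Uncorrelated 12% CVs would give a noisier quotient (~17%) ...
print(round(cv_quotient(0.12, 0.12), 3))        # 0.17
# ... while strong positive correlation reproduces the observed ~5%.
print(round(cv_quotient(0.12, 0.12, 0.9), 3))   # 0.054
```

This is consistent with the study's finding: shared sources of error (sample preparation, ionization efficiency) affect numerator and denominator alike and drop out of the ratio.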

  10. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  11. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
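The idea of estimating absolute error by comparing solutions on grids of different resolution can be illustrated with the five-point scheme itself: since the discretization error is O(h^2), halving h should shrink the error by about 4, which is what a multi-grid error-control algorithm exploits. A self-contained sketch on a manufactured problem (this is a two-grid illustration of the principle, not the paper's three-grid algorithm):

```python
import numpy as np

def max_error(n):
    """Five-point finite-difference solution of -u_xx - u_yy = f on the
    unit square with u = 0 on the boundary and n interior points per side.
    Returns the max abs error against a known exact solution."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    # Manufactured problem: exact solution u = sin(pi x) sin(pi y)
    f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
    N = n * n
    A = np.zeros((N, N))
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, idx(ii, jj)] = -1.0
    u = np.linalg.solve(A / h**2, f.ravel()).reshape(n, n)
    return np.max(np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y)))

# Second-order convergence: halving h cuts the error by ~4, so the
# difference between two resolutions yields an absolute error estimate.
e_coarse, e_fine = max_error(15), max_error(31)
print(round(e_coarse / e_fine, 2))   # close to 4
```

In practice one uses the ratio to extrapolate the error of the fine-grid solution without knowing the exact answer (Richardson-style estimation).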

  12. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  13. Position error correction in absolute surface measurement based on a multi-angle averaging method

    NASA Astrophysics Data System (ADS)

    Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin

    2017-04-01

    We present a method for position error correction in absolute surface measurement based on a multi-angle averaging method. Differences in shear rotation measurements at overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solving of the estimation algorithm have been discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions can be minimized to determine the true values of the unknowns of Zernike polynomial coefficients and rotation angle. Experimental results show the validity of the method proposed.
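The core of the estimation step, recovering unknown per-measurement errors from differences observed where measurements overlap, can be reduced to a scalar toy problem: observations d = x_i - x_j over overlapping pairs, solved by least squares with one unknown fixed to remove the arbitrary common offset. This is a hypothetical simplification of the paper's method (which estimates Zernike coefficients and rotation angles), not a reproduction of it:

```python
import numpy as np

# Hypothetical per-measurement offsets to recover.
true_x = np.array([0.0, 0.12, -0.05, 0.30])
pairs = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]   # overlapping pairs

rng = np.random.default_rng(0)
d = np.array([true_x[i] - true_x[j] for i, j in pairs])
d += rng.normal(0.0, 1e-3, size=d.size)            # measurement noise

# Design matrix for d = x_i - x_j; fix x_0 = 0 (gauge constraint),
# since only relative offsets are observable.
A = np.zeros((len(pairs), len(true_x)))
for row, (i, j) in enumerate(pairs):
    A[row, i], A[row, j] = 1.0, -1.0
A = A[:, 1:]                                        # drop the fixed unknown
x_est = np.linalg.lstsq(A, d, rcond=None)[0]
print(np.max(np.abs(x_est - true_x[1:])))           # noise-level residual
```

The paper's cost functions generalize this: the unknowns become Zernike coefficients and rotation angles, and the "differences at overlapping areas" play the role of d.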

  14. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy sufficient for climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  15. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  16. Preliminary error budget for the reflected solar instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Astrophysics Data System (ADS)

    Thome, K.; Gubbels, T.; Barnes, R.

    2011-10-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI-traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables. The instrument suite includes emitted infrared spectrometers, global navigation receivers for radio occultation, and reflected solar spectrometers. The measurements will be acquired for a period of five years and will enable follow-on missions to extend the climate record over the decades needed to understand climate change. This work describes a preliminary error budget for the RS sensor. The RS sensor will retrieve at-sensor reflectance over the spectral range from 320 to 2300 nm with 500-m GIFOV and a 100-km swath width. The current design is based on an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 and 600-2300 nm. Reflectance is obtained from the ratio of measurements of radiance while viewing the earth's surface to measurements of irradiance while viewing the sun. The requirement for the RS instrument is that the reflectance must be traceable to SI standards at an absolute uncertainty <0.3%. The calibration approach to achieve the ambitious 0.3% absolute calibration uncertainty is predicated on a reliance on heritage hardware, reduction of sensor complexity, and adherence to detector-based calibration standards. The design above has been used to develop a preliminary error budget that meets the 0.3% absolute requirement. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and
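An error budget like the one described typically combines independent component uncertainties in root-sum-square fashion and compares the total against the requirement. The component names echo those listed in the abstract, but the numeric allocations below are purely illustrative placeholders, not CLARREO's actual budget:

```python
import math

# Hypothetical component uncertainties (percent, k=1), for illustration only.
components = {
    "solar/earth view geometry": 0.15,
    "attenuator knowledge":      0.15,
    "detector linearity":        0.10,
    "detector noise":            0.10,
}

# Independent error terms combine as a root sum of squares.
total = math.sqrt(sum(v**2 for v in components.values()))
print(round(total, 3))   # 0.255 -> under the <0.3% requirement
```

The RSS structure explains why a handful of ~0.1-0.15% terms can still meet a 0.3% total, and why any single term approaching 0.3% would dominate the budget.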

  17. Preliminary Error Budget for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; Gubbels, Timothy; Barnes, Robert

    2011-01-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to those while viewing the sun, relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration, including a link to absolute solar irradiance measurements. The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those in the IPCC Report. A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI-traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables, including: 1) Surface temperature and atmospheric temperature profile 2) Atmospheric water vapor profile 3) Far infrared water vapor greenhouse 4) Aerosol properties and anthropogenic aerosol direct radiative forcing 5) Total and spectral solar

  18. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...

  19. Oblique-incidence sounder measurements with absolute propagation delay timing

    SciTech Connect

    Daehler, M.

    1990-05-03

Timing from the Global Positioning System (GPS) has been applied to HF oblique incidence sounder measurements to produce ionograms whose propagation delay time scale is absolutely calibrated. Such a calibration is useful for interpreting ionograms in terms of the electron density true-height profile for the ionosphere responsible for the propagation. Use of the time variations in the shape of the electron density profile, in conjunction with an HF propagation model, is expected to provide better near-term (1-24 hour) HF propagation forecasts than are available from current updating systems, which use only the MUF. Such a capability may provide the basis for HF frequency management techniques which are more efficient than current methods. Absolute timing and other techniques applicable to automatic extraction of the electron-density profile from an ionogram will be discussed.

  20. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields.

  1. Demonstrating the error budget for the climate absolute radiance and refractivity observatory through solar irradiance measurements (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Thome, Kurtis J.; McCorkel, Joel; Angal, Amit

    2016-09-01

    The goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to provide high-accuracy data for evaluation of long-term climate change trends. Essential to the CLARREO project is demonstration of SI-traceable, reflected measurements that are a factor of 10 more accurate than current state-of-the-art sensors. The CLARREO approach relies on accurate, monochromatic absolute radiance calibration in the laboratory transferred to orbit via solar irradiance knowledge. The current work describes the results of field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) that is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. Recent measurements of absolute spectral solar irradiance using SOLARIS are presented. The ground-based SOLARIS data are corrected to top-of-atmosphere values using AERONET data collected within 5 km of the SOLARIS operation. The SOLARIS data are converted to absolute irradiance using laboratory calibrations based on the Goddard Laser for Absolute Measurement of Radiance (GLAMR). Results are compared to accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  2. Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification

    EPA Science Inventory

    Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...

  3. Formal Estimation of Errors in Computed Absolute Interaction Energies of Protein-ligand Complexes

    PubMed Central

    Faver, John C.; Benson, Mark L.; He, Xiao; Roberts, Benjamin P.; Wang, Bing; Marshall, Michael S.; Kennedy, Matthew R.; Sherrill, C. David; Merz, Kenneth M.

    2011-01-01

    A largely unsolved problem in computational biochemistry is the accurate prediction of binding affinities of small ligands to protein receptors. We present a detailed analysis of the systematic and random errors present in computational methods through the use of error probability density functions, specifically for computed interaction energies between chemical fragments comprising a protein-ligand complex. An HIV-II protease crystal structure with a bound ligand (indinavir) was chosen as a model protein-ligand complex. The complex was decomposed into twenty-one (21) interacting fragment pairs, which were studied using a number of computational methods. The chemically accurate complete basis set coupled cluster theory (CCSD(T)/CBS) interaction energies were used as reference values to generate our error estimates. In our analysis we observed significant systematic and random errors in most methods, which was surprising especially for parameterized classical and semiempirical quantum mechanical calculations. After propagating these fragment-based error estimates over the entire protein-ligand complex, our total error estimates for many methods are large compared to the experimentally determined free energy of binding. Thus, we conclude that statistical error analysis is a necessary addition to any scoring function attempting to produce reliable binding affinity predictions. PMID:21666841
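The propagation of fragment-based error estimates over a whole complex follows the usual statistical pattern: signed systematic errors accumulate linearly across fragments, while independent random components accumulate in quadrature. A minimal sketch with invented per-fragment numbers (the values below are illustrative, not the paper's data):

```python
import math

# Hypothetical per-fragment-pair error estimates (kcal/mol) for some method,
# relative to CCSD(T)/CBS reference interaction energies.
systematic = [0.4, -0.2, 0.3, 0.5, -0.1]   # mean (signed) errors per pair
random_sd  = [0.3,  0.4, 0.2, 0.5,  0.3]   # random-error standard deviations

# Systematic components add linearly over the fragment pairs;
# independent random components add in quadrature.
total_systematic = sum(systematic)
total_random = math.sqrt(sum(s**2 for s in random_sd))
print(total_systematic, round(total_random, 3))
```

Because the systematic part grows linearly with the number of fragment pairs (21 in the study), even modest per-fragment biases can exceed a typical binding free energy, which is the paper's central caution for scoring functions.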

  4. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    SciTech Connect

    Gustafson, William I.; Yu, Shaocai

    2012-10-23

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations of these metrics are only valid for datasets with positive means. This paper presents a methodology to use and interpret the metrics with datasets that have negative means. The updated formulations give identical results compared to the original formulations for the case of positive means, so researchers are encouraged to use the updated formulations going forward without introducing ambiguity.
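For reference, the original positive-mean formulations of the two metrics can be sketched as follows; the paper's generalization to negative means is not reproduced here, so this code is only valid when both datasets have positive means:

```python
import numpy as np

def nmbf(model, obs):
    """Normalized mean bias factor, original positive-mean formulation:
    symmetric for over- and underestimation."""
    m, o = np.mean(model), np.mean(obs)
    return m / o - 1.0 if m >= o else 1.0 - o / m

def nmaef(model, obs):
    """Normalized mean absolute error factor, positive-mean formulation."""
    m, o = np.asarray(model, float), np.asarray(obs, float)
    denom = np.sum(o) if m.mean() >= o.mean() else np.sum(m)
    return np.sum(np.abs(m - o)) / denom

model = [2.0, 4.0, 6.0]
obs = [1.0, 2.0, 3.0]
print(nmbf(model, obs))    # 1.0  (overestimation by a factor of 2)
print(nmaef(model, obs))   # 1.0
```

The symmetry is the selling point: underestimating by a factor of 2 gives NMBF = -1, the mirror image of the +1 above, whereas a simple ratio-based bias metric would not be symmetric.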

  5. Time dependent corrections to absolute gravity determinations in the establishment of modern gravity control

    NASA Astrophysics Data System (ADS)

    Dykowski, Przemyslaw; Krynski, Jan

    2015-04-01

    The establishment of modern gravity control with the use of exclusively absolute method of gravity determination has significant advantages as compared to the one established mostly with relative gravity measurements (e.g. accuracy, time efficiency). The newly modernized gravity control in Poland consists of 28 fundamental stations (laboratory) and 168 base stations (PBOG14 - located in the field). Gravity at the fundamental stations was surveyed with the FG5-230 gravimeter of the Warsaw University of Technology, and at the base stations - with the A10-020 gravimeter of the Institute of Geodesy and Cartography, Warsaw. This work concerns absolute gravity determinations at the base stations. Although free of common relative measurement errors (e.g. instrumental drift) and effects of network adjustment, absolute gravity determinations for the establishment of gravity control require advanced corrections due to time dependent factors, i.e. tidal and ocean loading corrections, atmospheric corrections and hydrological corrections that were not taken into account when establishing the previous gravity control in Poland. Currently available services and software allow to determine high accuracy and high temporal resolution corrections for atmospheric (based on digital weather models, e.g. ECMWF) and hydrological (based on hydrological models, e.g. GLDAS/Noah) gravitational and loading effects. These corrections are mostly used for processing observations with Superconducting Gravimeters in the Global Geodynamics Project. For the area of Poland the atmospheric correction based on weather models can differ from standard atmospheric correction by even ±2 µGal. The hydrological model shows the annual variability of ±8 µGal. In addition the standard tidal correction may differ from the one obtained from the local tidal model (based on tidal observations). Such difference at Borowa Gora Observatory reaches the level of ±1.5 µGal. Overall the sum of atmospheric and

  6. Multi-channel data acquisition system with absolute time synchronization

    NASA Astrophysics Data System (ADS)

    Włodarczyk, Przemysław; Pustelny, Szymon; Budker, Dmitry; Lipiński, Marcin

    2014-11-01

    We present a low-cost, stand-alone, global-time-synchronized data acquisition system. Our prototype records up to four analog signals with 16-bit resolution over selectable ranges at a maximum sampling rate of 1000 S/s. The system simultaneously acquires readouts of external sensors, e.g. a magnetometer or a thermometer. A complete data set, including a header containing a timestamp, is stored on a Secure Digital (SD) card or transmitted to a computer over Universal Serial Bus (USB). The estimated time accuracy of the data acquisition is better than ±200 ns. The device is intended for use in a global network of optical magnetometers (the Global Network of Optical Magnetometers for Exotic physics, GNOME), which aims to search for signals heralding physics beyond the Standard Model, such as those generated by ordinary spin coupling to exotic particles or by anomalous spin interactions.

  7. Photonic microwave signals with zeptosecond-level absolute timing noise

    NASA Astrophysics Data System (ADS)

    Xie, Xiaopeng; Bouchand, Romain; Nicolodi, Daniele; Giunta, Michele; Hänsel, Wolfgang; Lezius, Matthias; Joshi, Abhay; Datta, Shubhashish; Alexandre, Christophe; Lours, Michel; Tremblin, Pierre-Alain; Santarelli, Giorgio; Holzwarth, Ronald; Le Coq, Yann

    2017-01-01

    Photonic synthesis of radiofrequency (RF) waveforms revived the quest for unrivalled microwave purity because of its ability to convey the benefits of optics to the microwave world. In this work, we perform a high-fidelity transfer of frequency stability between an optical reference and a microwave signal via a low-noise fibre-based frequency comb and cutting-edge photodetection techniques. We demonstrate the generation of the purest microwave signal with a fractional frequency stability below 6.5 × 10^-16 at 1 s and a timing noise floor below 41 zs Hz^-1/2 (phase noise below -173 dBc Hz^-1 for a 12 GHz carrier). This outperforms existing sources and promises a new era for state-of-the-art microwave generation. The characterization is achieved through a heterodyne cross-correlation scheme with the lowermost detection noise. This unprecedented level of purity can impact domains such as radar systems, telecommunications and time-frequency metrology. The measurement methods developed here can benefit the characterization of a broad range of signals.

  8. System Measures Errors Between Time-Code Signals

    NASA Technical Reports Server (NTRS)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.

  9. Recovery of absolute phases for the fringe patterns of three selected wavelengths with improved anti-error capability

    NASA Astrophysics Data System (ADS)

    Long, Jiale; Xi, Jiangtao; Zhang, Jianmin; Zhu, Ming; Cheng, Wenqing; Li, Zhongwei; Shi, Yusheng

    2016-09-01

    In a recently published work, we proposed a technique to recover the absolute phase maps of fringe patterns with two selected fringe wavelengths. To achieve higher anti-error capability, that method requires fringe patterns with longer wavelengths; however, longer wavelengths may degrade the signal-to-noise ratio (SNR) of the surface measurement. In this paper, we propose a new approach to unwrap the phase maps from their wrapped versions based on fringes with three different wavelengths, which is characterized by improved anti-error capability and SNR. While the previous method works on two phase maps obtained from six-step phase-shifting profilometry (PSP) (thus requiring 12 fringe patterns), the proposed technique performs very well on three phase maps from three-step PSP, requiring only nine fringe patterns, and is hence more efficient. Moreover, the advantages of the two-wavelength method, namely simple implementation and flexibility in the use of fringe patterns, are preserved. Theoretical analysis and experimental results confirm the effectiveness of the proposed method.
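    The underlying two-wavelength idea that this work extends to three wavelengths can be sketched generically (a standard beat-wavelength unwrapping scheme, assuming the measured range stays within the equivalent wavelength; this is not the authors' three-wavelength algorithm, and all names are illustrative):

```python
import math

TWO_PI = 2.0 * math.pi

def unwrap_two_wavelength(phi1, phi2, lam1, lam2):
    """Recover the absolute phase of the finer fringe (wavelength lam1 < lam2)
    from two wrapped phases phi1, phi2 [rad].  Their difference is a beat
    phase with equivalent wavelength lam_eq = lam1*lam2 / (lam2 - lam1),
    assumed to cover the whole measured range; it fixes the integer
    fringe order k of phi1."""
    lam_eq = lam1 * lam2 / (lam2 - lam1)
    phi_beat = (phi1 - phi2) % TWO_PI                      # wrapped beat phase
    k = round((lam_eq / lam1 * phi_beat - phi1) / TWO_PI)  # fringe order
    return phi1 + TWO_PI * k
```

    The longer the equivalent wavelength, the larger the unambiguous range but the more noise-sensitive the fringe-order rounding, which is exactly the anti-error trade-off the abstract describes.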

  10. A new accuracy measure based on bounded relative error for time series forecasting

    PubMed Central

    Twycross, Jamie; Garibaldi, Jonathan M.

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
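    The UMBRAE computation summarized above can be sketched in a few lines (a hedged reconstruction from the published definition, with the naïve last-value forecast as a typical benchmark; function and variable names are my own, not the authors'):

```python
def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (sketch).

    BRAE_t = |e_t| / (|e_t| + |e*_t|) is bounded in [0, 1], which gives
    resistance to outliers; MBRAE is its mean, and UMBRAE = MBRAE / (1 - MBRAE)
    unscales it so that 1.0 means "as accurate as the benchmark".
    """
    e = [abs(a - f) for a, f in zip(actual, forecast)]        # forecast errors
    e_star = [abs(a - b) for a, b in zip(actual, benchmark)]  # benchmark errors
    brae = [ei / (ei + es) for ei, es in zip(e, e_star) if ei + es > 0]
    mbrae = sum(brae) / len(brae)
    return mbrae / (1.0 - mbrae)
```

    A forecast matching the benchmark everywhere scores exactly 1.0, a perfect forecast scores 0.0, and values below 1.0 indicate improvement over the benchmark.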

  11. A new accuracy measure based on bounded relative error for time series forecasting.

    PubMed

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.

  12. An integrated model of choices and response times in absolute identification.

    PubMed

    Brown, Scott D; Marley, A A J; Donkin, Christopher; Heathcote, Andrew

    2008-04-01

    Recent theoretical developments in the field of absolute identification have stressed differences between relative and absolute processes, that is, whether stimulus magnitudes are judged relative to a shorter term context provided by recently presented stimuli or a longer term context provided by the entire set of stimuli. The authors developed a model (SAMBA: selective attention, mapping, and ballistic accumulation) that integrates shorter and longer term memory processes and accounts for both the choices made and the associated response time distributions, including sequential effects in each. The model's predictions arise as a consequence of its architecture and require estimation of only a few parameters with values that are consistent across numerous data sets. The authors show that SAMBA provides a quantitative account of benchmark choice phenomena in classical absolute identification experiments and in contemporary data involving both choice and response time.

  13. Experimental errors and artifacts for a time-domain lumped capacitor dielectric spectrometer

    NASA Astrophysics Data System (ADS)

    Eadline, Douglas J.; Leidheiser, Henry, Jr.

    1986-05-01

    An analysis of possible experimental errors and artifacts for a time-domain dielectric spectrometer (frequency range 10^7-10^9 Hz) is performed using mathematical models. The spectrometer requires an incident and reflected pulse to be referenced in time and aligned in amplitude. Effects due to time misreferencing and amplitude misalignment are studied using a simple Teflon dielectric model. The calculated spectra for misreferencing and misalignment errors for a Teflon model are compared to real spectrometer data. Time misreferencing errors greater than 40 ps produce large errors at high frequencies and the absolute time reference can vary by one-half the sampling interval. Amplitude misalignment can create both false and forced convergence at the end of the pulses. Most importantly, misreference and misalignment errors may generate pseudodielectric effects. Other general models dealing with artifacts due to the solid nature of the sample were developed to include contact resistance, contact inductance, and fringe capacitance. The inclusion of contact resistance produced loss behavior that moved to higher frequencies with lower resistances. When contact inductance is included, the ɛ' values increase in the frequency range examined. Inclusion of fringe capacitance produced a slight rise in the ɛ' values, proportional to the sample thickness. The time-domain lumped capacitor dielectric spectrometer becomes a viable tool in studies of dielectric materials at high frequencies when attention is given to possible experimental errors and artifacts.

  14. Overproduction timing errors in expert dancers.

    PubMed

    Minvielle-Moncla, Joëlle; Audiffren, Michel; Macar, Françoise; Vallet, Cécile

    2008-07-01

    The authors investigated how expert dancers achieve accurate timing under various conditions. They designed the conditions to interfere with the dancers' attention to time and to test the explanation of the interference effect provided in the attentional model of time processing. Participants were 17 expert contemporary dancers who performed a freely chosen duration while walking and executing a bilateral cyclic arm movement over a given distance. The dancers reproduced that duration in different situations of interference. The process yielded temporal overproductions, validating the attentional model and extending its application to expert populations engaged in complex motor situations. The finding that the greatest overproduction occurred in the transfer-with-improvisation condition suggests that improvisation within a time deadline requires specific training.

  15. Overcoming time-integration errors in SINDA's FWDBCK solution routine

    NASA Technical Reports Server (NTRS)

    Skladany, J. T.; Costello, F. A.

    1984-01-01

    The FWDBCK time step, which is usually chosen intuitively to achieve adequate accuracy at reasonable computational costs, can in fact lead to large errors. NASA observed such errors in solving cryogenic problems on the COBE spacecraft, but a similar error is also demonstrated for a single node radiating to space. An algorithm has been developed for selecting the time step during the course of the simulation. The error incurred when the time derivative is replaced by the FWDBCK time difference can be estimated from the Taylor-Series expression for the temperature. The algorithm selects the time step to keep this error small. The efficacy of the method is demonstrated on the COBE and single-node problems.
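    The step-selection idea in this abstract, estimating the leading Taylor-series truncation term and sizing the step to keep it below a tolerance, can be illustrated generically on the single-node-radiating-to-space example (a sketch of the idea only, not the actual SINDA/FWDBCK algorithm; the capacitance and emissivity-area values are invented for illustration):

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def temp_rate(T, C=500.0, eps_A=0.1):
    """T' = f(T) for a single node radiating to space:
    C dT/dt = -sigma * (eps*A) * T^4.  C [J/K] and eps_A [m^2]
    are illustration values, not COBE parameters."""
    return -SIGMA * eps_A * T**4 / C

def choose_step(T, tol):
    """Size the time step h so that the leading Taylor-series
    truncation term (h^2 / 2) * |T''| stays below tol, using
    T'' = f'(T) * f(T) by the chain rule."""
    f = temp_rate(T)
    dfdT = -4.0 * SIGMA * 0.1 * T**3 / 500.0
    Tpp = abs(dfdT * f)
    return math.inf if Tpp == 0.0 else math.sqrt(2.0 * tol / Tpp)
```

    Because |T''| grows steeply with temperature, the admissible step shrinks when the node is hot and relaxes as it cools, which is the adaptive behavior the abstract describes.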

  16. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    PubMed Central

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs, and also in AP compared with RP musicians. The NM group was slower than the musicians in generating compensatory vocal reactions to feedback pitch perturbation, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  17. Clock error, jitter, phase error, and differential time of arrival in satellite communications

    NASA Astrophysics Data System (ADS)

    Sorace, Ron

    The maintenance of synchronization in satellite communication systems is critical in contemporary systems, since many signal processing and detection algorithms depend on ascertaining time references. Unfortunately, proper synchronism becomes more difficult to maintain at higher frequencies. Factors such as clock error or jitter, noise, and phase error at a coherent receiver may corrupt a transmitted signal and degrade synchronism at the terminations of a communication link. Further, in some systems an estimate of propagation delay is necessary, but this delay may vary stochastically with the range of the link. This paper presents a model of the components of synchronization error including a simple description of clock error and examination of recursive estimation of the propagation delay time for messages between elements in a satellite communication system. Attention is devoted to jitter, the sources of which are considered to be phase error in coherent reception and jitter in the clock itself.

  18. Peeling Away Timing Error in NetFlow Data

    NASA Astrophysics Data System (ADS)

    Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin

    In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70 ms, reinforcing the point that implementation matters when conducting research on network measurement data.

  19. Sub-femtosecond absolute timing jitter with a 10 GHz hybrid photonic-microwave oscillator

    NASA Astrophysics Data System (ADS)

    Fortier, T. M.; Nelson, C. W.; Hati, A.; Quinlan, F.; Taylor, J.; Jiang, H.; Chou, C. W.; Rosenband, T.; Lemke, N.; Ludlow, A.; Howe, D.; Oates, C. W.; Diddams, S. A.

    2012-06-01

    We present an optical-electronic approach to generating microwave signals with high spectral purity. By circumventing shot noise and operating near fundamental thermal limits, we demonstrate 10 GHz signals with an absolute timing jitter for a single hybrid oscillator of 420 attoseconds (1 Hz-5 GHz).

  20. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.

  1. Error Representation in Time For Compressible Flow Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    Time plays an essential role in most real-world fluid mechanics problems, e.g. turbulence, combustion, acoustic noise, moving geometries, blast waves, etc. Time-dependent calculations now dominate the computational landscape at the various NASA Research Centers, but the accuracy of these computations is often not well understood. In this presentation, we investigate error representation (and error control) for time-periodic problems as a prelude to investigating the feasibility of error control for stationary statistics and space-time averages. These statistics and averages (e.g. time-averaged lift and drag forces) are often the output quantities sought by engineers. For systems such as the Navier-Stokes equations, pointwise error estimates deteriorate rapidly with increasing Reynolds number, while statistics and averages may remain well behaved.

  2. Disentangling timing and amplitude errors in streamflow simulations

    NASA Astrophysics Data System (ADS)

    Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin

    2016-09-01

    This article introduces an improvement in the Series Distance (SD) approach for the improved discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow and periods of rise and recession in hydrological events. Within these periods, it determines the distance of two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e. the determination of an optimal level of generalization when comparing two hydrographs, a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart, and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude of errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. 
This suggests that the combined use of time and magnitude errors to
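    The "error dressing" concept can be illustrated in its simplest, magnitude-only form (akin to the benchmark model above, not SD's two-dimensional time-magnitude dressing; the decile choices and names are illustrative):

```python
import statistics

def dress_forecast(sim, past_errors):
    """Wrap a deterministic simulation in an empirical uncertainty band:
    add the 10th and 90th percentiles of the observed error distribution
    (observed minus simulated) to each simulated value, giving a nominal
    80% range.  Magnitude errors only; SD extends this idea to joint
    time-magnitude errors."""
    qs = statistics.quantiles(past_errors, n=10)  # deciles of past errors
    lo, hi = qs[0], qs[-1]                        # 10th / 90th percentile
    return [(s + lo, s + hi) for s in sim]
```

    The band is only as good as the error sample it is built from, which is why the article samples empirical error distributions by increasing variance contribution before dressing.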

  3. Absolute frequency measurement at the 10^-16 level based on the international atomic time

    NASA Astrophysics Data System (ADS)

    Hachisu, H.; Fujieda, M.; Kumagai, M.; Ido, T.

    2016-06-01

    Referring to International Atomic Time (TAI), we measured the absolute frequency of the 87Sr lattice clock with an uncertainty of 1.1 x 10^-15. Unless an optical clock is operated continuously over the five-day TAI grid, the dead-time uncertainty must be evaluated in order to use the available five-day average of the local frequency reference. We distributed intermittent measurements homogeneously over the five-day grid of TAI, by which the dead-time uncertainty was reduced to the low 10^-16 level. Three campaigns of five (or four) consecutive days of measurements yielded an absolute frequency of the 87Sr clock transition of 429 228 004 229 872.85 (47) Hz, where the systematic uncertainty of the 87Sr optical frequency standard amounts to 8.6 x 10^-17.

  4. Objective Error Criterion for Evaluation of Mapping Accuracy Based on Sensor Time-of-Flight Measurements.

    PubMed

    Barshan, Billur

    2008-12-15

    An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used to assess the goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as an absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
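    For reference, the Hausdorff metric that the proposed criterion is compared against reduces, for discrete point maps, to a few lines (the standard textbook definition, not the paper's code):

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two discrete point maps
    (sequences of (x, y) tuples): the worst-case nearest-neighbour
    distance, taken in both directions."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))
```

    Being a worst-case measure, the Hausdorff distance is dominated by a single outlier point, which is one motivation for alternatives such as the median error criterion mentioned above.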

  5. Absolute terahertz power measurement of a time-domain spectroscopy system.

    PubMed

    Globisch, Björn; Dietz, Roman J B; Göbel, Thorsten; Schell, Martin; Bohmeyer, Werner; Müller, Ralf; Steiger, Andreas

    2015-08-01

    We report on, to the best of our knowledge, the first absolute terahertz (THz) power measurement of a photoconductive emitter developed for time-domain spectroscopy (TDS). The broadband THz radiation emitted by a photoconductor optimized for the excitation with 1550-nm femtosecond pulses was measured by an ultrathin pyroelectric thin-film (UPTF) detector. We show that this detector has a spectrally flat transmission between 100 GHz and 5 THz due to special conductive electrodes on both sides of the UPTF. Its flat responsivity allows the calibration with a standard detector that is traceable to the International System of Units (SI) at the THz detector calibration facility of PTB. Absolute THz power in the range from below 1 μW to above 0.1 mW was measured.

  6. Perturbative approach to continuous-time quantum error correction

    NASA Astrophysics Data System (ADS)

    Ippoliti, Matteo; Mazza, Leonardo; Rizzi, Matteo; Giovannetti, Vittorio

    2015-04-01

    We present a discussion of the continuous-time quantum error correction introduced by J. P. Paz and W. H. Zurek [Proc. R. Soc. A 454, 355 (1998), 10.1098/rspa.1998.0165]. We study the general Lindbladian which describes the effects of both noise and error correction in the weak-noise (or strong-correction) regime through a perturbative expansion. We use this tool to derive quantitative aspects of the continuous-time dynamics both in general and through two illustrative examples: the three-qubit and five-qubit stabilizer codes, which can be independently solved by analytical and numerical methods and then used as benchmarks for the perturbative approach. The perturbatively accessible time frame features a short initial transient in which error correction is ineffective, followed by a slow decay of the information content consistent with the known facts about discrete-time error correction in the limit of fast operations. This behavior is explained in the two case studies through a geometric description of the continuous transformation of the state space induced by the combined action of noise and error correction.

  7. Characterizing Complex Time Series from the Scaling of Prediction Error.

    NASA Astrophysics Data System (ADS)

    Hinrichs, Brant Eric

    This thesis concerns characterizing complex time series from the scaling of prediction error. We use the global modeling technique of radial basis function approximation to build models from a state-space reconstruction of a time series that otherwise appears complicated or random (i.e. aperiodic, irregular). Prediction error as a function of prediction horizon is obtained from the model using the direct method. The relationship between the underlying dynamics of the time series and the logarithmic scaling of prediction error as a function of prediction horizon is investigated. We use this relationship to characterize the dynamics of both a model chaotic system and physical data from the optic tectum of an attentive pigeon exhibiting the important phenomenon of nonstationary neuronal oscillations in response to visual stimuli.

  8. Calculation of retention time tolerance windows with absolute confidence from shared liquid chromatographic retention data.

    PubMed

    Boswell, Paul G; Abate-Pella, Daniel; Hewitt, Joshua T

    2015-09-18

    Compound identification by liquid chromatography-mass spectrometry (LC-MS) is a tedious process, mainly because authentic standards must be run on a user's system before a potential identity can be confidently rejected on the basis of its retention time and mass spectral properties. Instead, it would be preferable to use shared retention time/index data to narrow down the identity, but shared data cannot be used to reject candidates with an absolute level of confidence because the data are strongly affected by differences between HPLC systems and experimental conditions. However, a technique called "retention projection" was recently shown to account for many of these differences. In this manuscript, we discuss an approach to calculate appropriate retention time tolerance windows for projected retention times, potentially making it possible to exclude candidates with an absolute level of confidence without needing authentic standards of each candidate on hand. Across a range of multi-segment gradients and flow rates run among seven different labs, the new approach calculated tolerance windows that were significantly more appropriate for each retention projection than global tolerance windows calculated for retention projections or linear retention indices. Though there were still some small differences between the labs that evidently were not taken into account, the calculated tolerance windows only needed to be relaxed by 50% to make them appropriate for all labs. Even then, 42% of the tolerance windows calculated in this study without standards were narrower than those required by WADA for positive identification, where standards must be run contemporaneously.

  9. Absolute quantification by droplet digital PCR versus analog real-time PCR

    PubMed Central

    Hindson, Christopher M; Chevillet, John R; Briggs, Hilary A; Gallichotte, Emily N; Ruf, Ingrid K; Hindson, Benjamin J; Vessella, Robert L; Tewari, Muneesh

    2014-01-01

    Nanoliter-sized droplet technology paired with digital PCR (ddPCR) holds promise for highly precise, absolute nucleic acid quantification. Our comparison of microRNA quantification by ddPCR and real-time PCR revealed greater precision (coefficients of variation decreased by 37–86%) and improved day-to-day reproducibility (by a factor of seven) of ddPCR but with comparable sensitivity. When we applied ddPCR to serum microRNA biomarker analysis, this translated to superior diagnostic performance for identifying individuals with cancer. PMID:23995387
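    The absolute quantification underlying ddPCR rests on Poisson statistics applied to droplet counts, which can be sketched as follows (standard ddPCR arithmetic, not code from the paper; the droplet volume is an assumed typical value):

```python
import math

def ddpcr_concentration(n_total, n_negative, droplet_volume_nl=0.85):
    """Absolute target concentration in copies/uL from droplet counts.
    With random partitioning, the mean copies per droplet is
    lambda = -ln(n_negative / n_total) (Poisson), so no standard
    curve is needed.  The 0.85 nL droplet volume is an assumed
    typical value, not taken from the paper."""
    lam = -math.log(n_negative / n_total)    # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)  # nL -> uL
```

    This Poisson correction is what makes the method "digital" yet quantitative: it accounts for droplets that receive more than one template copy.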

  10. The Impact of Medical Interpretation Method on Time and Errors

    PubMed Central

    Kapelusznik, Luciano; Prakash, Kavitha; Gonzalez, Javier; Orta, Lurmag Y.; Tseng, Chi-Hong; Changrani, Jyotsna

    2007-01-01

    Background Twenty-two million Americans have limited English proficiency. Interpreting for limited English proficient patients is intended to enhance communication and delivery of quality medical care. Objective Little is known about the impact of various interpreting methods on interpreting speed and errors. This investigation addresses this important gap. Design Four scripted clinical encounters were used to enable the comparison of equivalent clinical content. These scripts were run across four interpreting methods, including remote simultaneous, remote consecutive, proximate consecutive, and proximate ad hoc interpreting. The first 3 methods utilized professional, trained interpreters, whereas the ad hoc method utilized untrained staff. Measurements Audiotaped transcripts of the encounters were coded, using a prespecified algorithm to determine medical error and linguistic error, by coders blinded to the interpreting method. Encounters were also timed. Results Remote simultaneous medical interpreting (RSMI) encounters averaged 12.72 minutes vs 18.24 minutes for the next fastest mode (proximate ad hoc) (p = 0.002). There were 12 times more medical errors of moderate or greater clinical significance among utterances in non-RSMI encounters compared to RSMI encounters (p = 0.0002). Conclusions Although limited by the small number of interpreters involved, our study found that RSMI resulted in fewer medical errors and was faster than non-RSMI methods of interpreting. PMID:17957418

  11. 75 FR 15371 - Time Error Correction Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ... Energy Regulatory Commission 18 CFR Part 40 Time Error Correction Reliability Standard March 18, 2010... Correction Reliability Standard developed by the North American Electric Reliability Corporation (NERC) in order for NERC to develop several modifications to the proposed Reliability Standard. The...

  12. An Error Score Model for Time-Limit Tests

    ERIC Educational Resources Information Center

    Ven, A. H. G. S. van der

    1976-01-01

    A more generalized error model for time-limit tests is developed. Model estimates are derived for right-attempted and wrong-attempted correlations both within the same test and between different tests. A comparison is made between observed correlations and their model counterparts and a fair agreement is found between observed and expected…

  13. Heat conduction errors and time lag in cryogenic thermometer installations

    NASA Technical Reports Server (NTRS)

    Warshawsky, I.

    1973-01-01

    Installation practices are recommended that will increase rate of heat exchange between the thermometric sensing element and the cryogenic fluid and that will reduce the rate of undesired heat transfer to higher-temperature objects. Formulas and numerical data are given that help to estimate the magnitude of heat-conduction errors and of time lag in response.

  14. Real-Time Minimization of Tracking Error for Aircraft Systems

    NASA Technical Reports Server (NTRS)

    Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John

    2013-01-01

This technology provides a novel, stable, discrete-time adaptive law for flight control in a direct adaptive control (DAC) framework. The original control design is tuned for optimal performance in the absence of errors. Adaptive control works toward achieving nominal performance whenever the design has modeling uncertainties or errors, or when the vehicle undergoes a substantial flight configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion, and that the return of this component's value toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) model of aircraft operation may be changed.

  15. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
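The abstract's net systematic error can be applied as a simple multiplicative correction. A minimal sketch, assuming a hypothetical cup reading (only the -0.83% figure comes from the abstract; the measured value is invented):

```python
# The abstract's net systematic error says the cup reads 0.83% below the true
# current, so the true value is recovered by dividing out the deficit. The
# measured reading below is invented for illustration.
i_measured = 10.00e-9                 # A, hypothetical Faraday-cup reading
systematic = -0.0083                  # fractional error quoted in the abstract
i_true = i_measured / (1.0 + systematic)
```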

  16. Method of excess fractions with application to absolute distance metrology: wavelength selection and the effects of common error sources.

    PubMed

    Falaggis, Konstantinos; Towers, David P; Towers, Catherine E

    2012-09-20

Multiwavelength interferometry (MWI) is a well-established technique in the field of optical metrology. Previously, we have reported a theoretical analysis of the method of excess fractions that describes the mutual dependence of unambiguous measurement range, reliability, and the measurement wavelengths. In this paper, wavelength selection strategies are introduced that build on the theoretical description and maximize the reliability of the calculated fringe order for a given measurement range, number of wavelengths, and level of phase noise. Practical implementation issues for an MWI interferometer are analyzed theoretically. It is shown that dispersion compensation is best implemented by use of reference measurements around absolute zero in the interferometer. Furthermore, the effects of wavelength uncertainty allow the ultimate performance of an MWI interferometer to be estimated.
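The excess-fractions idea the abstract builds on can be illustrated with a brute-force two-wavelength sketch: the integer fringe orders are unknown, and the candidate length whose implied excess fraction best matches the second wavelength wins. All numbers below (wavelengths, search range, target length) are invented for the demo; the paper's contribution, optimal multi-wavelength selection under phase noise, is not reproduced.

```python
import numpy as np

# Two-wavelength excess-fractions sketch with noise-free "measurements".
lams = np.array([633e-9, 645e-9])        # m, assumed wavelengths
L_true = 17.3e-6                         # m, target length inside the range

frac = (L_true / lams) % 1.0             # measured excess (fractional) fringe orders

best = None
for m1 in range(int(30e-6 / lams[0]) + 1):      # candidate integer orders at lams[0]
    L_cand = (m1 + frac[0]) * lams[0]
    r = abs(((L_cand / lams[1]) % 1.0) - frac[1])
    r = min(r, 1.0 - r)                  # wrap-around distance between fractions
    if best is None or r < best[0]:
        best = (r, L_cand)
L_est = best[1]
```

With these two wavelengths the unambiguous range (the synthetic wavelength) is about 34 um, so the 30 um search window yields a unique answer.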

  17. Absolute frequency measurement with uncertainty below 1× 10^{-15} using International Atomic Time

    NASA Astrophysics Data System (ADS)

    Hachisu, Hidekazu; Petit, Gérard; Ido, Tetsuya

    2017-01-01

The absolute frequency of the ^{87}Sr clock transition measured in 2015 (Jpn J Appl Phys 54:112401, 2015) was reevaluated using an improved frequency link to the SI second. The scale interval of International Atomic Time (TAI), which we used as the reference, was calibrated over an evaluation interval of 5 days instead of the conventional 1-month interval regularly employed in Circular T. The calibration on a 5-day basis removed the uncertainty of assimilating the TAI scale of the 5-day mean to that of the 1-month mean. The reevaluation resulted in a total uncertainty at the 10^{-16} level, for the first time without local cesium fountains. Since there are presumably no correlations among the systematic shifts of cesium fountains worldwide, the measurement is not limited by the systematic uncertainty of any specific primary frequency standard.

  18. The Question of Absolute Space and Time Directions in Relation to Molecular Chirality, Parity Violation, and Biomolecular Homochirality

    SciTech Connect

    Quack, Martin

    2001-03-21

The questions of the absolute directions of space and time, or the “observability” of absolute time direction as well as absolute handedness (left or right), are related to the fundamental symmetries of physics C, P, T as well as their combinations, in particular CPT, and their violations, such as parity violation. At the same time there is a relation to certain still-open questions in chemistry concerning the fundamental physical-chemical principles of molecular chirality and in biochemistry concerning the selection of homochirality in evolution. In the lecture we shall introduce the concepts and then report new theoretical results from our work on parity violation in chiral molecules, showing order-of-magnitude increases with respect to previously accepted values. We discuss as well our current experimental efforts. We shall briefly mention the construction of an absolute molecular clock.

  20. Absolute measurements of night-time electron density using ISR gyro lines

    NASA Astrophysics Data System (ADS)

    Bhatt, Asti; Kelley, Michael; Nicolls, Michael; Sulzer, Michael

    2012-07-01

The gyro line in the incoherent scatter spectrum is the underused cousin of the more popular plasma line, because it is very weak during the day and stronger during the dawn and dusk hours. When the electron density is such that the electron plasma frequency drops below the electron gyro frequency, the gyro line frequency becomes proportional to the electron density. This occurs at a time when the plasma line is no longer detected and we have no other means of obtaining precise measurements of absolute electron density. In this paper, we present a linear equation for the gyro line frequency and measurements from the Arecibo radar in Puerto Rico, showing a comparison with plasma line data and derived electron densities.

  1. Alignment between seafloor spreading directions and absolute plate motions through time

    NASA Astrophysics Data System (ADS)

    Williams, Simon E.; Flament, Nicolas; Müller, R. Dietmar

    2016-02-01

    The history of seafloor spreading in the ocean basins provides a detailed record of relative motions between Earth's tectonic plates since Pangea breakup. Determining how tectonic plates have moved relative to the Earth's deep interior is more challenging. Recent studies of contemporary plate motions have demonstrated links between relative plate motion and absolute plate motion (APM), and with seismic anisotropy in the upper mantle. Here we explore the link between spreading directions and APM since the Early Cretaceous. We find a significant alignment between APM and spreading directions at mid-ocean ridges; however, the degree of alignment is influenced by geodynamic setting, and is strongest for mid-Atlantic spreading ridges between plates that are not directly influenced by time-varying slab pull. In the Pacific, significant mismatches between spreading and APM direction may relate to a major plate-mantle reorganization. We conclude that spreading fabric can be used to improve models of APM.
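Quantifying the alignment described above requires an angular misfit that treats directions as axial data (an azimuth and its 180-degree opposite describe the same spreading direction). A minimal sketch with invented azimuths, not the paper's reconstructions:

```python
import numpy as np

# Axial angular misfit between spreading directions and APM azimuths.
# Example azimuths are invented for illustration.
spreading = np.array([80.0, 95.0, 100.0, 265.0])   # degrees
apm       = np.array([85.0, 90.0, 110.0,  80.0])   # degrees

d = np.abs(spreading - apm) % 180.0
misfit = np.minimum(d, 180.0 - d)                  # axial difference, 0..90 deg
mean_misfit = misfit.mean()
```

Note the last pair (265 vs. 80 degrees) is only 5 degrees apart once axiality is accounted for.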

  2. A California statewide three-dimensional seismic velocity model from both absolute and differential times

    USGS Publications Warehouse

    Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.

    2010-01-01

We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP/VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE direction and 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.

  3. Absolute calibration method for nanosecond-resolved, time-streaked, fiber optic light collection, spectroscopy systems

    NASA Astrophysics Data System (ADS)

    Johnston, Mark D.; Oliver, Bryan V.; Droemer, Darryl W.; Frogget, Brent; Crain, Marlon D.; Maron, Yitzhak

    2012-08-01

This paper describes a convenient and accurate method to calibrate fast (<1 ns resolution) streaked, fiber optic light collection, spectroscopy systems. Such systems are inherently difficult to calibrate due to the lack of sufficiently intense, calibrated light sources. Such a system is used to collect spectral data on plasmas generated in electron beam diodes fielded on the RITS-6 accelerator (8-12 MV, 140-200 kA) at Sandia National Laboratories. On RITS, plasma light is collected through a small diameter (200 μm) optical fiber and recorded on a fast streak camera at the output of a 1 meter Czerny-Turner monochromator. For this paper, a 300 W xenon short arc lamp (Oriel Model 6258) was used as the calibration source. Since the radiance of the xenon arc varies from cathode to anode, just the area around the tip of the cathode ("hotspot") was imaged onto the fiber, to produce the highest intensity output. To compensate for chromatic aberrations, the signal was optimized at each wavelength measured. Output power was measured using 10 nm bandpass interference filters and a calibrated photodetector. These measurements give power at discrete wavelengths across the spectrum, and when linearly interpolated, provide a calibration curve for the lamp. The shape of the spectrum is determined by the collective response of the optics, monochromator, and streak tube across the spectral region of interest. The ratio of the spectral curve to the measured bandpass filter curve at each wavelength produces a correction factor (Q) curve. This curve is then applied to the experimental data and the resultant spectra are given in absolute intensity units (photons/sec/cm^2/steradian/nm). Error analysis shows this method to be accurate to within ±20%, which represents a high level of accuracy for this type of measurement.
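The calibration chain described in the abstract (discrete filter measurements, linear interpolation, a Q correction curve applied to raw data) can be sketched as follows. All wavelengths, powers, and the response roll-off are invented placeholders:

```python
import numpy as np

# Discrete bandpass-filter powers are interpolated into a lamp curve, a
# correction factor Q(lambda) is formed from the ratio of the calibrated
# curve to the system's recorded response, and raw data are converted to
# absolute units. All numbers are invented for illustration.
wl_cal = np.array([400.0, 450.0, 500.0, 550.0, 600.0])  # nm, filter centers
p_cal = np.array([1.0, 1.4, 1.8, 1.6, 1.2])             # calibrated lamp power (arb.)

wl = np.linspace(400.0, 600.0, 201)                     # analysis grid, nm
lamp = np.interp(wl, wl_cal, p_cal)                     # interpolated lamp curve
response = lamp * (0.5 + 0.002 * (wl - 400.0))          # what the system records
Q = lamp / response                                     # correction factor curve
raw = response * 3.0                                    # a mock experimental signal
calibrated = raw * Q                                    # spectrum in absolute units
```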

  4. Absolute calibration method for nanosecond-resolved, time-streaked, fiber optic light collection, spectroscopy systems.

    PubMed

    Johnston, Mark D; Oliver, Bryan V; Droemer, Darryl W; Frogget, Brent; Crain, Marlon D; Maron, Yitzhak

    2012-08-01

This paper describes a convenient and accurate method to calibrate fast (<1 ns resolution) streaked, fiber optic light collection, spectroscopy systems. Such systems are inherently difficult to calibrate due to the lack of sufficiently intense, calibrated light sources. Such a system is used to collect spectral data on plasmas generated in electron beam diodes fielded on the RITS-6 accelerator (8-12 MV, 140-200 kA) at Sandia National Laboratories. On RITS, plasma light is collected through a small diameter (200 μm) optical fiber and recorded on a fast streak camera at the output of a 1 meter Czerny-Turner monochromator. For this paper, a 300 W xenon short arc lamp (Oriel Model 6258) was used as the calibration source. Since the radiance of the xenon arc varies from cathode to anode, just the area around the tip of the cathode ("hotspot") was imaged onto the fiber, to produce the highest intensity output. To compensate for chromatic aberrations, the signal was optimized at each wavelength measured. Output power was measured using 10 nm bandpass interference filters and a calibrated photodetector. These measurements give power at discrete wavelengths across the spectrum, and when linearly interpolated, provide a calibration curve for the lamp. The shape of the spectrum is determined by the collective response of the optics, monochromator, and streak tube across the spectral region of interest. The ratio of the spectral curve to the measured bandpass filter curve at each wavelength produces a correction factor (Q) curve. This curve is then applied to the experimental data and the resultant spectra are given in absolute intensity units (photons/sec/cm^2/steradian/nm). Error analysis shows this method to be accurate to within ±20%, which represents a high level of accuracy for this type of measurement.

  5. An Integrated Model of Choices and Response Times in Absolute Identification

    ERIC Educational Resources Information Center

    Brown, Scott D.; Marley, A. A. J.; Donkin, Christopher; Heathcote, Andrew

    2008-01-01

    Recent theoretical developments in the field of absolute identification have stressed differences between relative and absolute processes, that is, whether stimulus magnitudes are judged relative to a shorter term context provided by recently presented stimuli or a longer term context provided by the entire set of stimuli. The authors developed a…

  6. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.

  7. Relationship between Brazilian airline pilot errors and time of day.

    PubMed

    de Mello, M T; Esteves, A M; Pires, M L N; Santos, D C; Bittencourt, L R A; Silva, R S; Tufik, S

    2008-12-01

Flight safety is one of the most important and frequently discussed issues in aviation. Recent accident inquiries have raised questions as to how the work of flight crews is organized and the extent to which these conditions may have been contributing factors to accidents. Fatigue is based on physiologic limitations, which are reflected in performance deficits. The purpose of the present study was to provide an analysis of the periods of the day in which pilots working for a commercial airline presented major errors. Errors made by 515 captains and 472 co-pilots were analyzed using data from flight operation quality assurance systems. To analyze the times of day (shifts) during which incidents occurred, we divided the 24-h light-dark cycle into four periods: morning, afternoon, night, and early morning. The differences in risk during the day were reported as the ratios of morning to afternoon, morning to night, and morning to early morning error rates. For the purposes of this research, level 3 events alone were taken into account, since these were the most serious, in which company operational limits were exceeded or established procedures were not followed. According to airline flight schedules, 35% of flights take place in the morning period, 32% in the afternoon, 26% at night, and 7% in the early morning. Data showed that the risk of errors increased by almost 50% in the early morning relative to the morning period (ratio of 1:1.46). For the afternoon period the ratio was 1:1.04, and for the night a ratio of 1:1.05 was found. These results showed that the early morning period represented a greater risk of attention problems and fatigue.
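The ratios above compare error rates, not raw counts: each period's errors must be normalized by its flight exposure before comparison. A sketch with invented counts chosen to land near the reported early-morning ratio of about 1:1.46:

```python
# Hypothetical counts illustrating the rate normalization; only the flight
# shares resemble the abstract, the error counts are invented.
flights = {"morning": 350, "afternoon": 320, "night": 260, "early_morning": 70}
errors  = {"morning": 100, "afternoon":  95, "night":  77, "early_morning": 29}

rate = {k: errors[k] / flights[k] for k in flights}
ratio_early = rate["early_morning"] / rate["morning"]   # morning as baseline
```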

  8. Reference optical phantoms for diffuse optical spectroscopy. Part 1--Error analysis of a time resolved transmittance characterization method.

    PubMed

    Bouchard, Jean-Pierre; Veilleux, Israël; Jedidi, Rym; Noiseux, Isabelle; Fortin, Michel; Mermut, Ozzy

    2010-05-24

Development, production quality control, and calibration of optical tissue-mimicking phantoms require a convenient and robust characterization method with known absolute accuracy. We present a solid phantom characterization technique based on time-resolved transmittance measurement of light through a relatively small phantom sample. The small size of the sample enables characterization of every material batch produced in a routine phantom production. Time-resolved transmittance data are pre-processed to correct for dark noise, sample thickness, and the instrument response function. Pre-processed data are then compared to a forward model based on the radiative transfer equation solved through Monte Carlo simulations, accurately taking into account the finite geometry of the sample. The computational burden of the Monte Carlo technique was alleviated by building a lookup table of pre-computed results and using interpolation to obtain modeled transmittance traces at intermediate values of the optical properties. Near-perfect fit residuals are obtained with a fit window using all data above 1% of the maximum value of the time-resolved transmittance trace. Absolute accuracy of the method is estimated through a thorough error analysis which takes into account the following contributions: measurement noise, system repeatability, instrument response function stability, sample thickness variation, refractive index inaccuracy, time-correlated single-photon-counting system time-base inaccuracy, and forward model inaccuracy. Two-sigma absolute error estimates of 0.01 cm^-1 (11.3%) and 0.67 cm^-1 (6.8%) are obtained for the absorption coefficient and reduced scattering coefficient, respectively.
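When the listed contributions are independent, a combined two-sigma estimate follows from root-sum-square combination. A sketch with invented one-sigma values, not the paper's actual budget:

```python
import numpy as np

# Independent 1-sigma error contributions are combined in quadrature and
# doubled to give a two-sigma estimate. All values are invented.
contrib = np.array([0.02, 0.035, 0.01, 0.015])   # fractional 1-sigma terms
two_sigma = 2.0 * np.sqrt(np.sum(contrib ** 2))
```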

  9. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  10. Absolute exponential stability of recurrent neural networks with Lipschitz-continuous activation functions and time delays.

    PubMed

    Cao, Jinde; Wang, Jun

    2004-04-01

This paper investigates the absolute exponential stability of a general class of delayed neural networks, which requires only that the activation functions be partially Lipschitz continuous and monotone nondecreasing, not necessarily differentiable or bounded. Three new sufficient conditions are derived to ascertain whether the equilibrium points of delayed neural networks with additively diagonally stable interconnection matrices are absolutely exponentially stable, using a delay Halanay-type inequality and a Lyapunov function. The stability criteria are also suitable for delayed optimization neural networks and delayed cellular neural networks whose activation functions are often nondifferentiable or unbounded. The results herein answer the question: if a neural network without delay is absolutely exponentially stable, then under what additional conditions is the corresponding network with delays also absolutely exponentially stable?
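The stability behavior at issue can be illustrated numerically. The sketch below Euler-integrates a small delayed network with the Lipschitz activation tanh and weights invented to be weak enough that the decay term dominates; it demonstrates convergence to the equilibrium, not the paper's algebraic criteria:

```python
import numpy as np

# Euler simulation of x'(t) = -x(t) + W f(x(t - tau)) + I with f = tanh.
# Small invented weights keep the system contractive, so the trajectory
# should settle to the unique equilibrium.
W = np.array([[0.2, -0.1],
              [0.1,  0.2]])
I = np.array([0.5, -0.3])
f = np.tanh
tau, dt, T = 0.5, 0.01, 30.0
d = int(tau / dt)                      # delay in steps
steps = int(T / dt)
x = np.zeros((steps + 1, 2))
x[:d + 1] = 1.0                        # constant initial history on [-tau, 0]
for k in range(d, steps):
    x[k + 1] = x[k] + dt * (-x[k] + W @ f(x[k - d]) + I)
```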

  11. Effect of Time Step On Atmospheric Model Systematic Errors

    NASA Astrophysics Data System (ADS)

    Williamson, D. L.

Semi-Lagrangian approximations are becoming more common in operational Numerical Weather Prediction models because of the efficiency allowed by their long time steps. The early work that demonstrated that semi-Lagrangian forecasts were comparable to Eulerian in accuracy was based on mid-latitude short-range forecasts which were dominated by dynamical processes. These indicated no significant loss of accuracy with semi-Lagrangian approximations and long time steps. Today, subgrid-scale parameterizations play a larger role in even short-range forecasts. While not ignored, the effect of a longer time step on the parameterizations has been less thoroughly studied. We present results from the NCAR CCM3 that indicate that the systematic errors in tropical precipitation patterns can depend on the time step. The actual dependency depends on the parameterization suite of the model. We identify the dependency in aqua-planet integrations. With the CCM3 parameterization suite, longer time steps result in double precipitation maxima straddling the SST maximum while shorter time steps result in a single precipitation maximum over the SST maximum. Other parameterization suites behave differently. The cause of the dependency will be discussed.

  12. Multi-Channel Optical Coherence Elastography Using Relative and Absolute Shear-Wave Time of Flight

    PubMed Central

    Elyas, Eli; Grimwood, Alex; Erler, Janine T.; Robinson, Simon P.; Cox, Thomas R.; Woods, Daniel; Clowes, Peter; De Luca, Ramona; Marinozzi, Franco; Fromageau, Jérémie; Bamber, Jeffrey C.

    2017-01-01

    Elastography, the imaging of elastic properties of soft tissues, is well developed for macroscopic clinical imaging of soft tissues and can provide useful information about various pathological processes which is complementary to that provided by the original modality. Scaling down of this technique should ply the field of cellular biology with valuable information with regard to elastic properties of cells and their environment. This paper evaluates the potential to develop such a tool by modifying a commercial optical coherence tomography (OCT) device to measure the speed of shear waves propagating in a three-dimensional (3D) medium. A needle, embedded in the gel, was excited to vibrate along its long axis and the displacement as a function of time and distance from the needle associated with the resulting shear waves was detected using four M-mode images acquired simultaneously using a commercial four-channel swept-source OCT system. Shear-wave time of arrival (TOA) was detected by tracking the axial OCT-speckle motion using cross-correlation methods. Shear-wave speed was then calculated from inter-channel differences of TOA for a single burst (the relative TOA method) and compared with the shear-wave speed determined from positional differences of TOA for a single channel over multiple bursts (the absolute TOA method). For homogeneous gels the relative method provided shear-wave speed with acceptable precision and accuracy when judged against the expected linear dependence of shear modulus on gelatine concentration (R2 = 0.95) and ultimate resolution capabilities limited by 184μm inter-channel distance. This overall approach shows promise for its eventual provision as a research tool in cancer cell biology. Further work is required to optimize parameters such as vibration frequency, burst length and amplitude, and to assess the lateral and axial resolutions of this type of device as well as to create 3D elastograms. PMID:28107368
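The relative-TOA step described above (cross-correlation delays between channels, then speed from position vs. arrival time) can be sketched on synthetic data. Only the 184 um inter-channel spacing comes from the abstract; the sample rate, speed, and pulse shape are invented:

```python
import numpy as np

# Synthetic shear-wave pulses arrive at four channels spaced dx apart;
# inter-channel delays are found by cross-correlation and speed by a linear
# fit of position against arrival time.
fs = 250e3                                   # samples/s (chosen so delays are whole samples)
dx = 184e-6                                  # m, inter-channel spacing
c_true = 2.0                                 # m/s, assumed shear-wave speed
t = np.arange(0, 5e-3, 1 / fs)
pos = np.arange(4) * dx
traces = np.array([np.exp(-(((t - 1e-3 - p / c_true) / 5e-5) ** 2)) for p in pos])

toa = []
for tr in traces:
    xc = np.correlate(tr, traces[0], mode="full")
    lag = np.argmax(xc) - (len(t) - 1)       # delay in samples vs. channel 0
    toa.append(lag / fs)
c_est = np.polyfit(toa, pos, 1)[0]           # slope of position vs. TOA = speed
```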

  13. SMN transcript levels in leukocytes of SMA patients determined by absolute real-time PCR

    PubMed Central

    Tiziano, Francesco Danilo; Pinto, Anna Maria; Fiori, Stefania; Lomastro, Rosa; Messina, Sonia; Bruno, Claudio; Pini, Antonella; Pane, Marika; D'Amico, Adele; Ghezzo, Alessandro; Bertini, Enrico; Mercuri, Eugenio; Neri, Giovanni; Brahe, Christina

    2010-01-01

    Spinal muscular atrophy (SMA) is an autosomal recessive neuromuscular disorder caused by homozygous mutations of the SMN1 gene. Three forms of SMA are recognized (type I–III) on the basis of clinical severity. All patients have at least one or more (usually 2–4) copies of a highly homologous gene (SMN2), which produces insufficient levels of functional SMN protein, because of alternative splicing of exon 7. Recently, evidence has been provided that SMN2 expression can be enhanced by pharmacological treatment. However, no reliable biomarkers are available to test the molecular efficacy of the treatments. At present, the only potential biomarker is the dosage of SMN products in peripheral blood. However, the demonstration that SMN full-length (SMN-fl) transcript levels are reduced in leukocytes of patients compared with controls remains elusive (except for type I). We have developed a novel assay based on absolute real-time PCR, which allows the quantification of SMN1-fl/SMN2-fl transcripts. For the first time, we have shown that SMN-fl levels are reduced in leukocytes of type II–III patients compared with controls. We also found that transcript levels are related to clinical severity as in type III patients SMN2-fl levels are significantly higher compared with type II and directly correlated with functional ability in type II patients and with age of onset in type III patients. Moreover, in haploidentical siblings with discordant phenotype, the less severely affected individuals showed significantly higher transcript levels. Our study shows that SMN2-fl dosage in leukocytes can be considered a reliable biomarker and can provide the rationale for SMN dosage in clinical trials. PMID:19603064
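Absolute real-time PCR quantification is commonly done against a standard curve: Ct values from serial dilutions of a standard define a line in Ct vs. log10(copies), and an unknown is read off the inverted line. A generic sketch with invented numbers, not the paper's assay details:

```python
import numpy as np

# Standard-curve absolute quantification: fit Ct = m*log10(copies) + b on
# serial dilutions, then invert for an unknown sample. Values are invented
# (slope ~ -3.3 Ct per decade corresponds to near-perfect PCR efficiency).
std_copies = np.array([1e3, 1e4, 1e5, 1e6])        # copies per reaction
std_ct = np.array([30.1, 26.8, 23.5, 20.2])        # invented Ct values

m, b = np.polyfit(np.log10(std_copies), std_ct, 1)
ct_unknown = 25.15
copies = 10.0 ** ((ct_unknown - b) / m)
```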

  14. Multi-Channel Optical Coherence Elastography Using Relative and Absolute Shear-Wave Time of Flight.

    PubMed

    Elyas, Eli; Grimwood, Alex; Erler, Janine T; Robinson, Simon P; Cox, Thomas R; Woods, Daniel; Clowes, Peter; De Luca, Ramona; Marinozzi, Franco; Fromageau, Jérémie; Bamber, Jeffrey C

    2017-01-01

    Elastography, the imaging of elastic properties of soft tissues, is well developed for macroscopic clinical imaging of soft tissues and can provide useful information about various pathological processes which is complementary to that provided by the original modality. Scaling down of this technique should ply the field of cellular biology with valuable information with regard to elastic properties of cells and their environment. This paper evaluates the potential to develop such a tool by modifying a commercial optical coherence tomography (OCT) device to measure the speed of shear waves propagating in a three-dimensional (3D) medium. A needle, embedded in the gel, was excited to vibrate along its long axis and the displacement as a function of time and distance from the needle associated with the resulting shear waves was detected using four M-mode images acquired simultaneously using a commercial four-channel swept-source OCT system. Shear-wave time of arrival (TOA) was detected by tracking the axial OCT-speckle motion using cross-correlation methods. Shear-wave speed was then calculated from inter-channel differences of TOA for a single burst (the relative TOA method) and compared with the shear-wave speed determined from positional differences of TOA for a single channel over multiple bursts (the absolute TOA method). For homogeneous gels the relative method provided shear-wave speed with acceptable precision and accuracy when judged against the expected linear dependence of shear modulus on gelatine concentration (R2 = 0.95) and ultimate resolution capabilities limited by 184μm inter-channel distance. This overall approach shows promise for its eventual provision as a research tool in cancer cell biology. Further work is required to optimize parameters such as vibration frequency, burst length and amplitude, and to assess the lateral and axial resolutions of this type of device as well as to create 3D elastograms.

  15. Voice Onset Time in Consonant Cluster Errors: Can Phonetic Accommodation Differentiate Cognitive from Motor Errors?

    ERIC Educational Resources Information Center

    Pouplier, Marianne; Marin, Stefania; Waltl, Susanne

    2014-01-01

    Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…

  16. ABSOLUTE TIMING OF THE CRAB PULSAR WITH THE INTEGRAL/SPI TELESCOPE

    SciTech Connect

    Molkov, S.; Jourdain, E.; Roques, J. P.

    2010-01-01

We have investigated the pulse shape evolution of the Crab pulsar emission in the hard X-ray domain of the electromagnetic spectrum. In particular, we have studied the alignment of the Crab pulsar phase profiles measured in the hard X-rays and in other wavebands. To obtain the hard X-ray pulse profiles, we have used six years (2003-2009, with a total exposure of about 4 Ms) of publicly available data of the SPI telescope on-board the International Gamma-Ray Astrophysics Laboratory observatory, folded with the pulsar time solution derived from the Jodrell Bank Crab Pulsar Monthly Ephemeris. We found that the main pulse in the hard X-ray 20-100 keV energy band leads the radio one by 8.18 ± 0.46 milliperiods in phase, or 275 ± 15 μs in time. Quoted errors represent only statistical uncertainties. Our systematic error is estimated to be approximately 40 μs and is mainly caused by the radio measurement uncertainties. In hard X-rays, the average distance between the main pulse and interpulse on the phase plane is 0.3989 ± 0.0009. To compare our findings in hard X-rays with the soft 2-20 keV X-ray band, we have used data of quasi-simultaneous Crab observations with the proportional counter array monitor on-board the Rossi X-Ray Timing Explorer mission. The time lag and pulse separation values measured in the 3-20 keV band are 0.00933 ± 0.00016 (corresponding to 310 ± 6 μs) and 0.40016 ± 0.00028 parts of the cycle, respectively. While the pulse separation values measured in soft X-rays and hard X-rays agree, the time lags are statistically different. Additional analysis shows that the delay between the radio and X-ray signals varies with energy in the 2-300 keV energy range. We explain this behavior as due to the superposition of two independent components responsible for the Crab pulsed emission in this energy band.
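The folding step mentioned above ("folded with the pulsar time solution") is epoch folding: photon arrival times are reduced to pulse phase at the known period and histogrammed into a profile. A generic sketch on simulated photons; the period is only roughly Crab-like, and the rates and pulse shape are invented. Real work would use the full Jodrell Bank ephemeris (period, derivatives, epoch), not a constant period:

```python
import numpy as np

# Epoch folding of simulated photon arrival times at a known period.
rng = np.random.default_rng(1)
P = 0.0336                                  # s, roughly the Crab period
t_bg = rng.uniform(0.0, 100.0, 50_000)      # unpulsed background photons
cycles = np.floor(rng.uniform(0.0, 100.0 / P, 20_000))
t_pulse = (cycles + rng.normal(0.30, 0.02, 20_000)) * P   # pulse at phase 0.30
times = np.concatenate([t_bg, t_pulse])

phase = (times / P) % 1.0                   # fold at the known period
profile, edges = np.histogram(phase, bins=50, range=(0.0, 1.0))
peak_phase = edges[np.argmax(profile)] + 0.5 * (edges[1] - edges[0])
```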

  17. Testing and error analysis of a real-time controller

    NASA Technical Reports Server (NTRS)

    Savolaine, C. G.

    1983-01-01

Inexpensive ways to organize and conduct system testing that were used on a real-time satellite network control system are outlined. This system contains roughly 50,000 lines of executable source code developed by a team of eight people. For a small investment of staff, the system was thoroughly tested, including automated regression testing, before field release. Detailed records were kept for fourteen months, during which several versions of the system were written. A separate testing group was not established, but testing itself was structured apart from the development process. The errors found during testing are examined by frequency per subsystem, by size and complexity, as well as by type. The code was released to the user in March, 1983. To date, only a few minor problems have been found with the system during its pre-service testing, and user acceptance has been good.

  18. New time-domain three-point error separation methods for measurement roundness and spindle error motion

    NASA Astrophysics Data System (ADS)

    Liu, Wenwen; Tao, Tingting; Zeng, Hao

    2016-10-01

Error separation is a key technology for online measurement of spindle radial error motion or artifact form error, such as roundness and cylindricity. Three time-domain three-point error separation methods are proposed, based on solving the minimum-norm solution of a set of linear equations. Three laser displacement sensors are used to collect a set of discrete measurements, from which a group of linear measurement equations is derived according to the criterion of prior separation of form (PSF), prior separation of spindle error motion (PSM), or synchronous separation of both form and spindle error motion (SSFM). The work discusses the correlations between the angles of the three sensors in the measuring system, the rank of the coefficient matrix in the measurement equations, and the harmonic distortions in the separation results; reveals the regularities of the first-order harmonic distortion; and recommends the applicable situations for each method. Theoretical analysis and extensive simulations show that SSFM is the most precise method because of its lower distortion.

  19. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...

  20. Absolute plate motion of Africa around Hawaii-Emperor bend time

    NASA Astrophysics Data System (ADS)

    Maher, S. M.; Wessel, P.; Müller, R. D.; Williams, S. E.; Harada, Y.

    2015-06-01

Numerous regional plate reorganizations and the coeval ages of the Hawaiian Emperor bend (HEB) and Louisville bend of 50-47 Ma have been interpreted as a possible global tectonic plate reorganization at ~chron 21 (47.9 Ma). Yet for a truly global event we would expect a contemporaneous change in Africa absolute plate motion (APM) reflected by physical evidence distributed on the Africa Plate. This evidence has been postulated to take the form of the Réunion-Mascarene bend which exhibits many HEB-like features, such as a large angular change close to ~chron 21. However, the Réunion hotspot trail has recently been interpreted as a sequence of continental fragments with incidental hotspot volcanism. Here we show that the alternative Réunion-Mascarene Plateau trail can also satisfy the age progressions and geometry of other hotspot trails on the Africa Plate. The implied motion, suggesting a pivoting of Africa from 67 to 50 Ma, could explain the apparent bifurcation of the Tristan hotspot chain, the age reversals seen along the Walvis Ridge, the sharp curve of the Canary trail, and the diffuse nature of the St. Helena chain. To test this hypothesis further we made a new Africa APM model that extends back to ~80 Ma using a modified version of the Hybrid Polygonal Finite Rotation Method. This method uses seamount chains and their associated hotspots as geometric constraints for the model, and seamount age dates to determine APM through time. While this model successfully explains many of the volcanic features, it implies an unrealistically fast global lithospheric net rotation, as well as improbable APM trajectories for many other plates, including the Americas, Eurasia and Australia. We contrast this speculative model with a more conventional model in which the Mascarene Plateau is excluded in favour of the Chagos-Laccadive Ridge rotated into the Africa reference frame. This second model implies more realistic net lithospheric rotation and far-field APMs, but

  1. Using Graphs for Fast Error Term Approximation of Time-varying Datasets

    SciTech Connect

    Nuber, C; LaMar, E C; Pascucci, V; Hamann, B; Joy, K I

    2003-02-27

We present a method for the efficient computation and storage of approximations of error tables used for error estimation of a region between different time steps in time-varying datasets. The error between two time steps is defined as the distance between the data of these time steps. Error tables are used to look up the error between different time steps of a time-varying dataset, especially when run-time error computation is expensive. However, even the generation of error tables itself can be expensive. For n time steps, the exact error look-up table (which stores the error values for all pairs of time steps in a matrix) has a memory complexity and pre-processing time complexity of O(n²), and O(1) for error retrieval. Our approximate error look-up table approach uses trees, where the leaf nodes represent original time steps, and interior nodes contain an average (or best-representative) of the children nodes. The error computed on an edge of a tree describes the distance between the two nodes on that edge. Evaluating the error between two different time steps requires traversing a path between the two leaf nodes, and accumulating the errors on the traversed edges. For n time steps, this scheme has a memory complexity and pre-processing time complexity of O(n log n), a significant improvement over the exact scheme; the error retrieval complexity is O(log n). As we do not need to calculate all possible n² error terms, our approach is a fast way to generate the approximation.
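The tree scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes Euclidean distance between time-step data vectors and a simple bottom-up pairing of adjacent nodes.

```python
import math

def dist(a, b):
    # error between two time steps: Euclidean distance of their data vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_tree(steps):
    # leaves are the original time steps; each interior node holds the
    # average of its children and each child remembers its edge error
    leaves = [{"data": list(s), "parent": None} for s in steps]
    level = leaves
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i:i + 2]
            avg = [sum(vals) / len(pair)
                   for vals in zip(*(n["data"] for n in pair))]
            node = {"data": avg, "parent": None}
            for child in pair:
                child["parent"] = (node, dist(avg, child["data"]))
            nxt.append(node)
        level = nxt
    return level[0], leaves

def approx_error(a, b):
    # accumulate edge errors along the tree path joining the two leaves
    seen, acc = {}, 0.0
    node = a
    while True:
        seen[id(node)] = acc
        if node["parent"] is None:
            break
        node, edge = node["parent"]
        acc += edge
    acc, node = 0.0, b
    while id(node) not in seen:
        node, edge = node["parent"]
        acc += edge
    return seen[id(node)] + acc
```

The path sum is only an approximation of the true pairwise error in general; that is the price paid for O(n log n) storage instead of the O(n²) exact table.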

  2. A statistical comparison of EEG time- and time-frequency domain representations of error processing.

    PubMed

    Munneke, Gert-Jan; Nap, Tanja S; Schippers, Eveline E; Cohen, Michael X

    2015-08-27

    Successful behavior relies on error detection and subsequent remedial adjustment of behavior. Researchers have identified two electrophysiological signatures of error processing: the time-domain error-related negativity (ERN), and the time-frequency domain increased power in the delta/theta frequency bands (~2-8 Hz). The relationship between these two signatures is not entirely clear: on the one hand they occur after the same type of event and with similar latency, but on the other hand, the time-domain ERP component contains only phase-locked activity whereas the time-frequency response additionally contains non-phase-locked dynamics. Here we examined the ERN and error-related delta/theta activity in relation to each other, focusing on within-subject analyses that utilize single-trial data. Using logistic regression, we constructed three statistical models in which the accuracy of each trial was predicted from the ERN, delta/theta power, or both. We found that both the ERN and delta/theta power worked roughly equally well as predictors of single-trial accuracy (~70% accurate prediction). Furthermore, a model including both measures provided a stronger overall prediction compared to either model alone. Based on these findings two conclusions are drawn: first, the phase-locked part of the EEG signal appears to be roughly as predictive of single-trial response accuracy as the non-phase-locked part; second, the single-trial ERP and delta/theta power contain both overlapping and independent information.
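The single-trial analysis can be illustrated with a tiny logistic-regression fit. This is a sketch on synthetic data; the plain gradient-descent fit and all names are ours, not the authors' pipeline, and the single feature stands in for predictors such as ERN amplitude or theta power.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    # logistic regression by batch gradient descent:
    # predict trial accuracy y (0/1) from per-trial EEG features X
    X = np.hstack([np.ones((len(X), 1)), X])   # bias column
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient step
    return w

def predict(w, X):
    # threshold the fitted probabilities at 0.5
    X = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

Fitting one model on the ERN feature, one on theta power, and one on both, then comparing their prediction accuracy, mirrors the three-model comparison the abstract describes.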

  3. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  4. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    DOE PAGES

    Waugh, C. J.; Rosenberg, M. J.; Zylstra, A. B.; ...

    2015-05-27

Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and laser megajoule.

  5. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    SciTech Connect

    Waugh, C. J.; Rosenberg, M. J.; Zylstra, A. B.; Frenje, J. A.; Seguin, F. H.; Petrasso, R. D.; Glebov, V. Yu.; Sangster, T. C.; Stoeckl, C.

    2015-05-27

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and laser megajoule.

  6. A method for in situ absolute DD yield calibration of neutron time-of-flight detectors on OMEGA using CR-39-based proton detectors

    SciTech Connect

    Waugh, C. J. Zylstra, A. B.; Frenje, J. A.; Séguin, F. H.; Petrasso, R. D.; Rosenberg, M. J.; Glebov, V. Yu.; Sangster, T. C.; Stoeckl, C.

    2015-05-15

    Neutron time of flight (nTOF) detectors are used routinely to measure the absolute DD neutron yield at OMEGA. To check the DD yield calibration of these detectors, originally calibrated using indium activation systems, which in turn were cross-calibrated to NOVA nTOF detectors in the early 1990s, a direct in situ calibration method using CR-39 range filter proton detectors has been successfully developed. By measuring DD neutron and proton yields from a series of exploding pusher implosions at OMEGA, a yield calibration coefficient of 1.09 ± 0.02 (relative to the previous coefficient) was determined for the 3m nTOF detector. In addition, comparison of these and other shots indicates that significant reduction in charged particle flux anisotropies is achieved when bang time occurs significantly (on the order of 500 ps) after the trailing edge of the laser pulse. This is an important observation as the main source of the yield calibration error is due to particle anisotropies caused by field effects. The results indicate that the CR-39-nTOF in situ calibration method can serve as a valuable technique for calibrating and reducing the uncertainty in the DD absolute yield calibration of nTOF detector systems on OMEGA, the National Ignition Facility, and laser megajoule.

  7. The primary motor cortex is associated with learning the absolute, but not relative, timing dimension of a task: A tDCS study.

    PubMed

    Apolinário-Souza, Tércio; Romano-Silva, Marco Aurélio; de Miranda, Débora Marques; Malloy-Diniz, Leandro Fernandes; Benda, Rodolfo Novellino; Ugrinowitsch, Herbert; Lage, Guilherme Menezes

    2016-06-01

    The functional role of the primary motor cortex (M1) in the production of movement parameters, such as length, direction and force, is well known; however, whether M1 is associated with the parametric adjustments in the absolute timing dimension of the task remains unknown. Previous studies have not applied tasks and analyses that could separate the absolute (variant) and relative (invariant) dimensions. We applied transcranial direct current stimulation (tDCS) to M1 before motor practice to facilitate motor learning. A sequential key-pressing task was practiced with two goals: learning the relative timing dimension and learning the absolute timing dimension. All effects of the stimulation of M1 were observed only in the absolute dimension of the task. Mainly, the stimulation was associated with better performance in the transfer test in the absolute dimension. Taken together, our results indicate that M1 is an important area for learning the absolute timing dimension of a motor sequence.

  8. Repeated quantum error correction on a continuously encoded qubit by real-time feedback.

    PubMed

    Cramer, J; Kalb, N; Rol, M A; Hensen, B; Blok, M S; Markham, M; Twitchen, D J; Hanson, R; Taminiau, T H

    2016-05-05

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.

  9. Series Distance - a metric for the quantification of hydrograph errors and forecast uncertainty, simultaneously for timing and magnitude

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Seibert, S.

    2013-12-01

Applying metrics to quantify the similarity or dissimilarity of hydrographs is a central task in hydrological modeling, used both in model calibration and the evaluation of simulations or forecasts. Motivated by the shortcomings of standard objective metrics such as the Root Mean Square Error (RMSE) or the Mean Absolute Peak Time Error (MAPTE) and the advantages of visual inspection as a powerful tool for simultaneous, case-specific and multi-criteria (yet subjective) evaluation, we will present an objective metric termed Series Distance (Ehret and Zehe, 2011), which is in close accordance with visual evaluation. The Series Distance quantifies the similarity of two hydrographs not as the sum of amplitude differences at similar points in time (as e.g. RMSE or the Nash-Sutcliffe efficiency do), but as the sum of space-time distances between hydrologically similar points of hydrograph pairs (e.g. observation and simulation) that indicate the same underlying hydrological process (e.g. event start, first half of the first rising limb, first peak, etc.). The challenge is to automatically identify hydrologically similar points in pairs of hydrographs, which includes identification of events in the hydrographs and distinction of relevant and non-relevant rise/fall segments within events. With Series Distance, amplitude and timing errors are calculated simultaneously but separately, i.e. it returns bivariate distributions of timing and amplitude errors. These bivariate error distributions can be applied to determine time-amplitude 'uncertainty clouds' around predictions or forecasts instead of solely magnitude-error based 'uncertainty ranges' based on e.g. RMSE error distributions. This has the potential to reduce, at equal levels of exceedance probability, the size of the uncertainty range around a prediction or forecast, as timing uncertainty is not falsely represented as amplitude uncertainty.
We will present the theory of Series Distance as
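The core pairing idea, computing separate timing and amplitude errors between "hydrologically similar" points, can be sketched as follows. This toy version assumes the matching rise/fall segments have already been identified (the hard part of the real method) and simply pairs points at equal fractional positions along each segment; names are ours.

```python
def series_distance_pairs(obs, sim, n=5):
    # obs and sim are one matched hydrograph segment each, as lists of
    # (time, value) points; points at equal fractional positions along
    # the two segments are treated as hydrologically similar and paired,
    # yielding separate timing and amplitude errors
    def at_fraction(seg, f):
        # linear interpolation at fractional position f in [0, 1]
        pos = f * (len(seg) - 1)
        i = int(pos)
        if i >= len(seg) - 1:
            return seg[-1]
        w = pos - i
        t = seg[i][0] * (1 - w) + seg[i + 1][0] * w
        v = seg[i][1] * (1 - w) + seg[i + 1][1] * w
        return (t, v)

    pairs = []
    for k in range(n):
        f = k / (n - 1)
        (to, vo) = at_fraction(obs, f)
        (ts, vs) = at_fraction(sim, f)
        pairs.append((ts - to, vs - vo))   # (timing error, amplitude error)
    return pairs
```

Collecting such pairs over many events yields the bivariate timing/amplitude error distribution the abstract refers to.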

  10. Error Correction for Foot Clearance in Real-Time Measurement

    NASA Astrophysics Data System (ADS)

    Wahab, Y.; Bakar, N. A.; Mazalan, M.

    2014-04-01

Mobility performance level, fall-related injuries, unrevealed disease, and aging stage can be detected through examination of the gait pattern. The gait pattern is normally directly related to the condition of the lower limbs, in addition to other significant factors; for that reason, the foot is the most important part for an in-situ gait analysis measurement system and thus directly affects the gait pattern. This paper reviews the development of an ultrasonic system with error correction, using an inertial measurement unit, for real-life measurement of foot clearance in gait analysis. The paper begins with the related literature, where the necessity of the measurement is introduced. The methodology section follows, covering the problem and the proposed solution. Next, the paper explains the experimental setup for error correction using the proposed instrumentation, and presents results and discussion. Finally, the paper outlines planned future work.

  11. Reduction of MRC error review time through the simplified and classified MRC result

    NASA Astrophysics Data System (ADS)

    Lee, Casper W.; Lin, Jason C.; Chen, Frank F.

    2009-04-01

Because Manufacturing Rule Check (MRC) error counts are very large, it has become difficult to review each point, and some design errors may be overlooked. It is necessary to reduce the number of errors to review and to improve the checking methods. This paper presents an error classification function and an auto-waive mechanism for reducing repeated MRC errors in the MRC report. With the auto-waive mechanism, the report omits an error point if it is the same as in the previous report, while the defect location output keeps all of the error points for Do Not Inspect Region (DNIR) reference (DNIR requires the customer's approval). Furthermore, it is possible to develop an auto-waive function to skip confirmed errors provided by the customer as a marking information table or a GDS/OASIS database. This paper also presents how these errors can be grouped to reduce checking time.

  12. Single trial time-frequency domain analysis of error processing in post-traumatic stress disorder.

    PubMed

    Clemans, Zachary A; El-Baz, Ayman S; Hollifield, Michael; Sokhadze, Estate M

    2012-09-13

Error processing studies in psychology and psychiatry are relatively common. Event-related potentials (ERPs) are often used as measures of error processing, two such response-locked ERPs being the error-related negativity (ERN) and the error-related positivity (Pe). The ERN and Pe occur following a committed error in reaction-time tasks as low-frequency (4-8 Hz) electroencephalographic (EEG) oscillations registered at the midline fronto-central sites. We created an alternative method for analyzing error processing using time-frequency analysis in the form of a wavelet transform. A study was conducted in which subjects with PTSD and healthy controls completed a forced-choice task. Single-trial EEG data from errors in the task were processed using a continuous wavelet transform. Coefficients from the transform that corresponded to the theta range were averaged to isolate a theta waveform in the time-frequency domain. Measures called the time-frequency ERN and Pe were obtained from these waveforms for five different channels and then averaged to obtain a single time-frequency ERN and Pe for each error trial. A comparison of the amplitude and latency of the time-frequency ERN and Pe between the PTSD and control groups was performed. A significant group effect was found on the amplitude of both measures. These results indicate that the developed single-trial time-frequency error analysis method is suitable for examining error processing in PTSD and possibly other psychiatric disorders.

  13. Method and apparatus for detecting timing errors in a system oscillator

    DOEpatents

    Gliebe, Ronald J.; Kramer, William R.

    1993-01-01

    A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
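In software terms, comparing an oscillator signal with a delayed copy of itself can be sketched as below. This is a toy digital analogue of the patented hardware circuit (delay line plus comparator driving an LED); the function name and sampling model are hypothetical.

```python
def detect_timing_errors(samples, period):
    # compare each oscillator sample with the sample one nominal period
    # earlier; any mismatch flags a timing error at that index
    errors = []
    for i in range(period, len(samples)):
        if samples[i] != samples[i - period]:
            errors.append(i)
    return errors
```

A clean periodic signal produces no flags; a glitch shows up as one or more mismatches against the delayed copy.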

  14. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  15. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  16. Precision errors, least significant change, and monitoring time interval in pediatric measurements of bone mineral density, body composition, and mechanostat parameters by GE lunar prodigy.

    PubMed

    Jaworski, Maciej; Pludowski, Pawel

    2013-01-01

The dual-energy X-ray absorptiometry (DXA) method is widely used in pediatrics to study bone density and body composition. However, there is a limit to how precisely DXA can estimate bone and body composition measures in children. The study aimed to (1) evaluate precision errors for bone mineral density, bone mass and bone area, body composition, and mechanostat parameters, (2) assess the relationships between precision errors and anthropometric parameters, and (3) calculate "least significant change" and "monitoring time interval" values for DXA measures in children of a wide age range (5-18 yr) using a GE Lunar Prodigy densitometer. Absolute precision error values differed between the thin and standard technical modes of DXA measurement and depended on age, body weight, and height. In contrast, relative precision error values expressed as percentages were similar for the thin and standard modes (except total body bone mineral density [TBBMD]) and were not related to anthropometric variables (except TBBMD). In conclusion, because percentage coefficient of variation values are stable across a wide age range, expressing precision error as a percentage, rather than in absolute terms, is convenient in the pediatric population.
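The abstract does not give formulas, but "least significant change" and "monitoring time interval" are conventionally computed as LSC = 2.77 × precision error (for a pair of measurements at 95% confidence, since 2.77 ≈ 1.96·√2) and MTI = LSC divided by the expected annual change. A sketch under that assumption:

```python
def least_significant_change(precision_error, z=1.96):
    # LSC for comparing two measurements at confidence level z:
    # z * sqrt(2) * precision error (~2.77x at 95% confidence);
    # precision_error may be in %CV or absolute units
    return z * 2 ** 0.5 * precision_error

def monitoring_time_interval(lsc, expected_annual_change):
    # years until a true change is expected to exceed the LSC
    return lsc / expected_annual_change
```

For example, a 1% precision error gives an LSC near 2.77%, and an expected change of ~1.39% per year then implies a monitoring interval of about two years.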

  17. Continuous monitoring of absolute cerebral blood flow by combining diffuse correlation spectroscopy and time-resolved near-infrared technology

    NASA Astrophysics Data System (ADS)

    Diop, Mamadou; Lee, Ting-Yim; St. Lawrence, Keith

    2011-02-01

Continuous bedside monitoring of cerebral blood flow (CBF) in patients recovering from brain injury could improve the detection of impaired substrate delivery, which can exacerbate injury and worsen outcome. Diffuse correlation spectroscopy (DCS) provides the ability to monitor perfusion changes continuously, but it is difficult to quantify absolute blood flow, leading to uncertainties as to whether or not CBF has fallen to ischemic levels. To continuously measure CBF, we propose to calibrate DCS data using a single time-point, time-resolved near-infrared (TR-NIR) technique for measuring absolute CBF. Experiments were conducted on newborn piglets in which CBF was increased by raising the arterial tension of CO₂ (40-62 mmHg) and decreased by carotid occlusion. For validation, values of CBF measured by TR-NIR were converted into blood flow changes and compared to CBF changes measured by DCS. A strong correlation between perfusion changes from the two techniques was revealed (slope = 0.98 and R² = 0.96), suggesting that a single time-point CBF measurement by TR-NIR can be used to convert continuous DCS data into units of CBF (ml/100g/min).
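The proposed calibration amounts to a single-point scaling of the relative DCS blood flow index by one absolute TR-NIR measurement, justified by the near-unity slope reported above. A minimal sketch (function and variable names ours):

```python
def calibrate_dcs(bfi_trace, bfi_at_calibration, cbf_trnir):
    # convert a relative DCS blood flow index (BFI) trace into absolute
    # CBF (ml/100g/min) using a single TR-NIR measurement taken at the
    # same moment as bfi_at_calibration, assuming a linear relation
    scale = cbf_trnir / bfi_at_calibration
    return [bfi * scale for bfi in bfi_trace]
```

After the one-time calibration, the continuous DCS trace is reported directly in absolute units.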

  18. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

The mathematical framework for a posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  19. Solid-state track recorder dosimetry device to measure absolute reaction rates and neutron fluence as a function of time

    DOEpatents

    Gold, Raymond; Roberts, James H.

    1989-01-01

    A solid state track recording type dosimeter is disclosed to measure the time dependence of the absolute fission rates of nuclides or neutron fluence over a period of time. In a primary species an inner recording drum is rotatably contained within an exterior housing drum that defines a series of collimating slit apertures overlying windows defined in the stationary drum through which radiation can enter. Film type solid state track recorders are positioned circumferentially about the surface of the internal recording drum to record such radiation or its secondary products during relative rotation of the two elements. In another species both the recording element and the aperture element assume the configuration of adjacent disks. Based on slit size of apertures and relative rotational velocity of the inner drum, radiation parameters within a test area may be measured as a function of time and spectra deduced therefrom.

  20. Distinguishing Error from Chaos in Ecological Time Series

    NASA Astrophysics Data System (ADS)

    Sugihara, George; Grenfell, Bryan; May, Robert M.

    1990-11-01

Over the years, there has been much discussion about the relative importance of environmental and biological factors in regulating natural populations. Often it is thought that environmental factors are associated with stochastic fluctuations in population density, and biological ones with deterministic regulation. We revisit these ideas in the light of recent work on chaos and nonlinear systems. We show that completely deterministic regulatory factors can lead to apparently random fluctuations in population density, and we then develop a new method (that can be applied to limited data sets) to make practical distinctions between apparently noisy dynamics produced by low-dimensional chaos and population variation that in fact derives from random (high-dimensional) noise, such as environmental stochasticity or sampling error. To show its practical use, the method is first applied to models where the dynamics are known. We then apply the method to several sets of real data, including newly analysed data on the incidence of measles in the United Kingdom. Here the additional problems of secular trends and spatial effects are explored. In particular, we find that on a city-by-city scale measles exhibits low-dimensional chaos (as has previously been found for measles in New York City), whereas on a larger, country-wide scale the dynamics appear as a noisy two-year cycle. In addition to shedding light on the basic dynamics of some nonlinear biological systems, this work dramatizes how the scale on which data is collected and analysed can affect the conclusions drawn.

  1. Real-time error detection but not error correction drives automatic visuomotor adaptation.

    PubMed

    Hinder, Mark R; Riek, Stephan; Tresilian, James R; de Rugy, Aymar; Carson, Richard G

    2010-03-01

    We investigated the role of visual feedback of task performance in visuomotor adaptation. Participants produced novel two degrees of freedom movements (elbow flexion-extension, forearm pronation-supination) to move a cursor towards visual targets. Following trials with no rotation, participants were exposed to a 60 degrees visuomotor rotation, before returning to the non-rotated condition. A colour cue on each trial permitted identification of the rotated/non-rotated contexts. Participants could not see their arm but received continuous and concurrent visual feedback (CF) of a cursor representing limb position or post-trial visual feedback (PF) representing the movement trajectory. Separate groups of participants who received CF were instructed that online modifications of their movements either were, or were not, permissible as a means of improving performance. Feedforward-mediated performance improvements occurred for both CF and PF groups in the rotated environment. Furthermore, for CF participants this adaptation occurred regardless of whether feedback modifications of motor commands were permissible. Upon re-exposure to the non-rotated environment participants in the CF, but not PF, groups exhibited post-training aftereffects, manifested as greater angular deviations from a straight initial trajectory, with respect to the pre-rotation trials. Accordingly, the nature of the performance improvements that occurred was dependent upon the timing of the visual feedback of task performance. Continuous visual feedback of task performance during task execution appears critical in realising automatic visuomotor adaptation through a recalibration of the visuomotor mapping that transforms visual inputs into appropriate motor commands.
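The 60° visuomotor rotation used in such paradigms is a plain 2-D rotation of the hand position about the start location before it is displayed as the cursor. A minimal sketch (not the authors' software; the origin stands in for the start location):

```python
import math

def rotate_cursor(x, y, deg=60.0):
    # show the participant the hand position (x, y) rotated by `deg`
    # degrees counterclockwise about the origin
    a = math.radians(deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

Adapting to the rotation means learning to aim 60° away from the target so that the rotated cursor lands on it; the aftereffect appears when the rotation is removed but the compensatory aim persists.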

  2. Ambient Temperature Changes and the Impact to Time Measurement Error

    NASA Astrophysics Data System (ADS)

    Ogrizovic, V.; Gucevic, J.; Delcev, S.

    2012-12-01

    Measurements in Geodetic Astronomy are mainly outdoors and performed at night, when the temperature often decreases very quickly. The time-keeping during a measuring session is provided by collecting UTC time ticks from a GPS receiver and transferring them to a laptop computer. An interrupt handler routine processes received UTC impulses in real time and calculates the clock parameters. The characteristics of the computer quartz clock are influenced by temperature changes of the environment. We exposed the laptop to different environmental temperature conditions and calculated the clock parameters for each environmental model. The results show that a laptop used for time-keeping in outdoor measurements should be kept in a stable temperature environment, at temperatures near 20 °C.
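    The kind of clock-parameter estimation described can be sketched as a least-squares fit of offset and drift of the computer clock against GPS UTC ticks (hypothetical data and a simple linear clock model of my own, not the authors' interrupt-handler routine):

```python
# Fit the clock model: local_time - utc_time = offset + drift * utc_time.
# A real system would log one (utc, local) pair per UTC second tick.

def fit_clock(utc, local):
    """Least-squares estimate of (offset, drift) from paired timestamps."""
    n = len(utc)
    mean_t = sum(utc) / n
    resid = [l - u for u, l in zip(utc, local)]
    mean_r = sum(resid) / n
    drift = sum((u - mean_t) * (r - mean_r) for u, r in zip(utc, resid)) \
            / sum((u - mean_t) ** 2 for u in utc)
    offset = mean_r - drift * mean_t
    return offset, drift

utc   = [0.0, 1.0, 2.0, 3.0, 4.0]
local = [0.010 + t * 1.00002 for t in utc]   # 10 ms offset, 20 ppm drift
off, d = fit_clock(utc, local)               # recovers ~0.01 s and ~2e-05
```

    Temperature dependence, the paper's subject, would show up as a drift estimate that changes between environmental conditions.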

  3. Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.

    PubMed

    Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith

    2013-09-01

    Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory.

  4. The sine wave protocol: decrease movement time without increasing errors.

    PubMed

    Boyle, Jason B; Kennedy, Deanna M; Wang, Chaoyi; Shea, Charles H

    2014-01-01

    Practice tracking a sine wave template has been shown (J. B. Boyle, D. Kennedy, & C. H. Shea, 2012) to greatly enhance performance on a difficult Fitts task of the same amplitude. The purpose of the experiment was to replicate this finding and determine whether enhancements related to the sine wave practice are specific to the amplitude experienced during that practice. Following sine wave or Fitts task practice with amplitudes of 16° or 24°, participants were tested under the conditions they had practiced (Test 1) and then all groups were tested under Fitts task conditions (Test 2; ID = 6, amplitude = 16°). Participants who practiced with the sine wave templates were able to move faster on Test 2, where a 16° amplitude Fitts task was used, than participants who had practiced either the 16° or 24° amplitude Fitts tasks. The movements produced by the sine groups on Test 2 were not only faster than those of the Fitts groups on Test 2, but dwell time was lower, with percent time to peak velocity and harmonicity higher, for the sine groups than for the Fitts groups. The decreased movement times for the sine groups on Test 2 were accomplished with hits or endpoint variability similar to that of the Fitts groups.

  5. Catchment-scale variability of absolute versus temporal anomaly soil moisture: Time-invariant part not always plays the leading role

    NASA Astrophysics Data System (ADS)

    Gao, Xiaodong; Zhao, Xining; Si, Bing Cheng; Brocca, Luca; Hu, Wei; Wu, Pute

    2015-10-01

    Recently, it has been recommended that characterizations of soil moisture spatiotemporal variability consider temporal soil moisture anomalies, because these behave differently from absolute soil moisture and are important in hydrological applications. Here we characterized soil moisture spatiotemporal variability in the Yuanzegou catchment (0.58 km2) on the Loess Plateau of China, considering both absolute soil moisture and temporal anomalies. The dataset contained soil moisture observations in the 0-80 cm layer between 2009 and 2011 at 78 sampling locations. The spatial variance of time-invariant temporal means was shown to be the primary contributor (61.7-76.2%) to the total variance, but the magnitude of this contribution was much lower than observed in large-scale studies. The seasonal variation in contribution can be attributed to differences in soil wetness conditions; lower contributions were found at intermediate wetness for the spatial variances of temporal means and temporal anomalies. Furthermore, the upward-convex relationship between the spatial variance and spatial mean of absolute soil moisture was mainly characterized by the covariance of temporal means and temporal anomalies. Time stability of absolute soil moisture and its components was analyzed using both the "accuracy" metric mean relative difference (MRD) and the "precision" metric variance of relative difference (VRD). When MRD was considered, time stability of absolute soil moisture primarily characterized time-invariant patterns. However, when VRD was used, the time stability of absolute soil moisture characterized only a small part of the time-invariant or time-variant pattern.

  6. Correlated errors in geodetic time series: Implications for time-dependent deformation

    USGS Publications Warehouse

    Langbein, J.; Johnson, H.

    1997-01-01

    In addition, the seasonal noise can be as large as 3 mm in amplitude but typically is less than 0.5 mm. Because of the presence of random-walk noise in these time series, modeling and interpretation of the geodetic data must account for this source of error. By way of example we show that estimating the time-varying strain tensor (a form of spatial averaging) from geodetic data having both random-walk and white noise error components results in seemingly significant variations in the rate of strain accumulation; spatial averaging does reduce the size of both noise components but not their relative influence on the resulting strain accumulation model. Copyright 1997 by the American Geophysical Union.
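    The random-walk-plus-white-noise error model underlying this analysis can be sketched as follows (illustrative parameters of my own, not values estimated from the geodetic data):

```python
import random

def simulate_series(n, white_sd=1.0, rw_sd=0.5, seed=1):
    """Correlated geodetic-style noise: y_i = rw_i + e_i, where the
    random walk satisfies rw_i = rw_{i-1} + w_i."""
    rng = random.Random(seed)
    rw, ys = 0.0, []
    for _ in range(n):
        rw += rng.gauss(0.0, rw_sd)               # random-walk increment
        ys.append(rw + rng.gauss(0.0, white_sd))  # plus white noise
    return ys

y = simulate_series(500)
# A naive rate estimate (ordinary least squares against the time index)
# treats all scatter as white noise and so understates the rate
# uncertainty whenever a random-walk component is present.
```

    Fitting a deterministic rate to such a series without modeling the random walk is exactly the pitfall the abstract warns about.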

  7. Neither One-Time Negative Screening Tests nor Negative Colposcopy Provides Absolute Reassurance against Cervical Cancer

    PubMed Central

    Castle, Philip E.; Rodríguez, Ana C.; Burk, Robert D.; Herrero, Rolando; Hildesheim, Allan; Solomon, Diane; Sherman, Mark E.; Jeronimo, Jose; Alfaro, Mario; Morales, Jorge; Guillén, Diego; Hutchinson, Martha L.; Wacholder, Sholom; Schiffman, Mark

    2009-01-01

    A population sample of 10,049 women living in Guanacaste, Costa Rica was recruited into a natural history of human papillomavirus (HPV) and cervical neoplasia study in 1993–4. At the enrollment visit, we applied multiple state-of-the-art cervical cancer screening methods to detect prevalent cervical cancer and to prevent subsequent cervical cancers by the timely detection and treatment of precancerous lesions. Women were screened at enrollment with 3 kinds of cytology (often reviewed by more than one pathologist), visual inspection, and Cervicography. Any positive screening test led to colposcopic referral and biopsy and/or excisional treatment of CIN2 or worse. We retrospectively tested stored specimens with an early HPV test (Hybrid Capture Tube Test) and for >40 HPV genotypes using a research PCR assay. We followed women typically 5–7 years and some up to 11 years. Nonetheless, sixteen cases of invasive cervical cancer were diagnosed during follow-up. Six cancer cases were failures at enrollment to detect abnormalities by cytology screening; three of the six were also negative at enrollment by sensitive HPV DNA testing. Seven cancers represent failures of colposcopy to diagnose cancer or a precancerous lesion in screen-positive women. Finally, three cases arose despite attempted excisional treatment of precancerous lesions. Based on this evidence, we suggest that no current secondary cervical cancer prevention technology, applied once in a previously under-screened population, is likely to be 100% efficacious in preventing incident diagnoses of invasive cervical cancer. PMID:19569231

  8. Real-time detection and elimination of nonorthogonality error in interference fringe processing

    SciTech Connect

    Hu Haijiang; Zhang Fengdeng

    2011-05-20

    In the measurement system of interference fringe, the nonorthogonality error is a main error source that influences the precision and accuracy of the measurement system. The detection and elimination of the error has been an important target. A novel method that only uses the cross-zero detection and the counting is proposed to detect and eliminate the nonorthogonality error in real time. This method can be simply realized by means of the digital logic device, because it does not invoke trigonometric functions and inverse trigonometric functions. And it can be widely used in the bidirectional subdivision systems of a Moiré fringe and other optical instruments.
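    The core of such cross-zero counting is direction-sensitive fringe counting from two quadrature signals; a simplified Python illustration of the idea (my own schematic, the paper implements it in digital logic):

```python
# Bidirectional fringe counting from two binary quadrature streams A and B
# (A leads B for forward motion). Count on rising edges of A; the level of
# B at that instant gives the direction. No trigonometric functions needed.

def count_fringes(a_bits, b_bits):
    """Return the net fringe count from two quadrature bit streams."""
    count = 0
    for i in range(1, len(a_bits)):
        if a_bits[i - 1] == 0 and a_bits[i] == 1:  # rising edge of A
            count += 1 if b_bits[i] == 0 else -1   # direction from B
    return count

# Forward motion: A leads B by 90 degrees.
a = [0, 1, 1, 0, 0, 1, 1, 0]
b = [0, 0, 1, 1, 0, 0, 1, 1]
net = count_fringes(a, b)   # two rising edges of A, both forward
```

    Because each decision needs only an edge detector and one level comparison, the scheme maps directly onto a small digital logic circuit.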

  10. Human error and time of occurrence in hazardous material events in mining and manufacturing.

    PubMed

    Ruckart, Perri Zeitz; Burgess, Paula A

    2007-04-11

    Human error has played a role in several large-scale hazardous materials events. To assess how human error and time of occurrence may have contributed to acute chemical releases, data from the Hazardous Substances Emergency Events Surveillance (HSEES) system for 1996-2003 were analyzed. Analyses were restricted to events in mining or manufacturing where human error was a contributing factor. The temporal distribution of releases was also evaluated to determine if the night shift impacted releases due to human error. Human error-related events in mining and manufacturing resulted in almost four times as many events with victims and almost three times as many events with evacuations compared with events in these industries where human error was not a contributing factor (10.3% versus 2.7% and 11.8% versus 4.5%, respectively). Time of occurrence of events attributable to human error in mining and manufacturing showed a widespread distribution for number of events, events with victims and evacuations, and hospitalizations and deaths, without apparent increased occurrence during the night shift. Utilizing human factor engineering in both front-end ergonomic design and retrospective incident investigation provides one potential systematic approach that may help minimize human error in workplace-related acute chemical releases and their resulting injuries.

  11. Error and timing analysis of multiple time-step integration methods for molecular dynamics

    NASA Astrophysics Data System (ADS)

    Han, Guowen; Deng, Yuefan; Glimm, James; Martyna, Glenn

    2007-02-01

    Molecular dynamics simulations of biomolecules performed using multiple time-step integration methods are hampered by resonance instabilities. We analyze the properties of a simple 1D linear system integrated with the symplectic reference system propagator MTS (r-RESPA) technique following earlier work by others. A closed form expression for the time step dependent Hamiltonian which corresponds to r-RESPA integration of the model is derived. This permits us to present an analytic formula for the dependence of the integration accuracy on short-range force cutoff range. A detailed analysis of the force decomposition for the standard Ewald summation method is then given as the Ewald method is a good candidate to achieve high scaling on modern massively parallel machines. We test the new analysis on a realistic system, a protein in water. Under Langevin dynamics with a weak friction coefficient ( ζ=1 ps) to maintain temperature control and using the SHAKE algorithm to freeze out high frequency vibrations, we show that the 5 fs resonance barrier present when all degrees of freedom are unconstrained is postponed to ≈12 fs. An iso-error boundary with respect to the short-range cutoff range and multiple time step size agrees well with the analytical results which are valid due to dominance of the high frequency modes in determining integrator accuracy. Using r-RESPA to treat the long range interactions results in a 6× increase in efficiency for the decomposition described in the text.
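    The r-RESPA splitting analyzed here can be sketched for a single particle; this is a minimal sketch of the reference-system propagator idea, with illustrative harmonic fast/slow forces rather than the paper's protein-in-water system:

```python
# Minimal r-RESPA (multiple time-step) integrator for one particle. The
# slow force is applied as a half-step impulse around an inner
# velocity-Verlet loop that integrates the fast force with dt_inner.

def respa_step(x, v, f_fast, f_slow, dt_outer, n_inner, m=1.0):
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m       # half kick from slow force
    for _ in range(n_inner):                  # inner velocity-Verlet loop
        v += 0.5 * dt_inner * f_fast(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m       # closing half kick
    return x, v

# Example: stiff spring (fast force) plus weak spring (slow force).
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -1.0 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, f_fast, f_slow, dt_outer=0.05, n_inner=10)
energy = 0.5 * v * v + 0.5 * 101.0 * x * x   # initial value: 50.5
```

    The outer step here is well below the resonance limit discussed in the abstract; pushing dt_outer toward half the fast period is what destabilizes impulse-MTS schemes.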

  12. Automatic Time Stepping with Global Error Control for Groundwater Flow Models

    SciTech Connect

    Tang, Guoping

    2008-09-01

    An automatic time stepping with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution and the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and the performance of the automatic time stepping scheme. Implementation of the scheme can lead to improvement in accuracy and efficiency for groundwater flow models.

  13. Repeated quantum error correction on a continuously encoded qubit by real-time feedback

    PubMed Central

    Cramer, J.; Kalb, N.; Rol, M. A.; Hensen, B.; Blok, M. S.; Markham, M.; Twitchen, D. J.; Hanson, R.; Taminiau, T. H.

    2016-01-01

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing. PMID:27146630

  14. Calibration of diffuse correlation spectroscopy with a time-resolved near-infrared technique to yield absolute cerebral blood flow measurements

    PubMed Central

    Diop, Mamadou; Verdecchia, Kyle; Lee, Ting-Yim; St Lawrence, Keith

    2011-01-01

    A primary focus of neurointensive care is the prevention of secondary brain injury, mainly caused by ischemia. A noninvasive bedside technique for continuous monitoring of cerebral blood flow (CBF) could improve patient management by detecting ischemia before brain injury occurs. A promising technique for this purpose is diffuse correlation spectroscopy (DCS) since it can continuously monitor relative perfusion changes in deep tissue. In this study, DCS was combined with a time-resolved near-infrared technique (TR-NIR) that can directly measure CBF using indocyanine green as a flow tracer. With this combination, the TR-NIR technique can be used to convert DCS data into absolute CBF measurements. The agreement between the two techniques was assessed by concurrent measurements of CBF changes in piglets. A strong correlation between CBF changes measured by TR-NIR and changes in the scaled diffusion coefficient measured by DCS was observed (R2 = 0.93) with a slope of 1.05 ± 0.06 and an intercept of 6.4 ± 4.3% (mean ± standard error). PMID:21750781

  15. Conditional probability distribution (CPD) method in temperature based death time estimation: Error propagation analysis.

    PubMed

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2014-05-01

    Bayesian estimation applied to temperature based death time estimation was recently introduced as the conditional probability distribution or CPD-method by Biermann and Potente. The CPD-method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In the light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed a paradox: in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation, whereas otherwise they will decrease. We therefore advise against using the CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95%-confidence intervals of the estimate still overlap the true death time interval.
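    The CPD idea, renormalizing a death-time estimate's distribution to the interval bounded by "last seen alive" and "found dead", can be sketched as follows (the normal-distribution form and all numbers are my illustrative assumptions, not the paper's model):

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def cpd_probability(a, b, last_alive, found_dead, mu, sigma):
    """P(death in [a, b] | death in [last_alive, found_dead]), for a
    temperature-based estimate ~ Normal(mu, sigma) truncated to the
    externally known interval. All times in hours on a common axis."""
    num = norm_cdf(b, mu, sigma) - norm_cdf(a, mu, sigma)
    den = norm_cdf(found_dead, mu, sigma) - norm_cdf(last_alive, mu, sigma)
    return num / den

# No-alibi interval [4, 6] h inside the true interval [0, 12] h,
# with a temperature-based estimate of 5 +/- 2 h:
p = cpd_probability(4.0, 6.0, 0.0, 12.0, mu=5.0, sigma=2.0)
```

    An error in mu propagates directly into p, which is the sensitivity the paper quantifies; shifting mu toward the interval boundary produces the paradoxical behavior the authors describe.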

  16. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal prediction system that reduces prediction error using a two-states-mapping-based time series neural network back-propagation (BP) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction; however, a residual error remains between the real value and the prediction result. We therefore designed a two-states neural network model that compensates for this residual error, with possible applications in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were handled satisfactorily by the two-states-mapping-based time series prediction model; in particular, for small sample sizes the time series predictions were more accurate than those of the standard MLP model.

  17. Deciphering relative timing of fabric development in granitoids with similar absolute ages based on AMS study (Dharwar Craton, South India)

    NASA Astrophysics Data System (ADS)

    Bhatt, Sandeep; Rana, Virendra; Mamtani, Manish A.

    2017-01-01

    Anisotropy of Magnetic Susceptibility (AMS) data are presented from the Koppal Granitoid (Dharwar Craton, South India), which has a U-Pb zircon age of 2528 ± 9 Ma. The magnetic fabric is oriented in NNE-SSW direction. This is parallel to the planar structures that developed during regional D3 deformation, but oblique to the NNW-SSE oriented magnetic foliation as well as field foliation (D1/D2 deformation) recorded in the country rock Peninsular Gneiss. Variation in the intensity of fabric within the granitoid is mapped. It is inferred that the emplacement of the Koppal Granitoid took place by ballooning and that fabric development within the pluton was syntectonic with regional D3. These results are compared with the time-relationship between emplacement/fabric development and regional deformation reported from the Mulgund Granite (2555 ± 6 Ma; U-Pb zircon), which is also located in the Dharwar Craton and is equivalent to the Koppal Granitoid in age. This granite is known to have been emplaced syntectonically with regional D1/D2 deformation, and is thus not related to the same deformation event as the Koppal Granitoid, despite their similar absolute ages. It is argued that in the study area, D3 is ≤2537 Ma, while D1/D2 is ≥2549 Ma in age. Thus, this study highlights the use of AMS in (a) deciphering the relative timing of regional deformation and emplacement of granitoids of equivalent age and (b) constraining the timing of regional superposed deformation events.

  18. Development of Real-Time Error Ellipses as an Indicator of Kalman Filter Performance.

    DTIC Science & Technology

    1984-03-01

    …often than 3 to 5 seconds. However, before the HP-86 can be considered feasible for real-time Kalman filter processing, more investigation is needed… Title: Development of Real-Time Error Ellipses as an Indicator of Kalman Filter Performance. Type of report and period covered: Master's Thesis, March 1984. Key words: error ellipsoids; Kalman filter; extended Kalman filter.

  19. Calibration method of the time synchronization error of many data acquisition nodes in the chained system

    NASA Astrophysics Data System (ADS)

    Jiang, Jia-jia; Duan, Fa-jie; Chen, Jin; Zhang, Chao; Wang, Kai; Chang, Zong-jie

    2012-08-01

    Time synchronization is very important in a distributed chained seismic acquisition system with a large number of data acquisition nodes (DANs). The time synchronization error has two causes. On the one hand, there is a large accumulated propagation delay when commands propagate from the analysis and control system to multiple distant DANs, which makes it impossible for different DANs to receive the same command synchronously. Unfortunately, the propagation delay of commands (PDCs) varies in different application environments. On the other hand, the phase jitter of both the master clock and the clock recovery phase-locked loop, which is designed to extract the timing signal, may also cause the time synchronization error. In this paper, in order to achieve accurate time synchronization, a novel calibration method is proposed which can align the PDCs of all of the DANs in real time and overcome the time synchronization error caused by the phase jitter. Firstly, we give a quantitative analysis of the time synchronization error caused by both the PDCs and the phase jitter. Secondly, we propose a back and forth model (BFM) and a transmission delay measurement method (TDMM) to overcome these difficulties. Furthermore, the BFM is designed as the hardware configuration to measure the PDCs and calibrate the time synchronization error. The TDMM is used to measure the PDCs accurately. Thirdly, in order to overcome the time synchronization error caused by the phase jitter, a compression and mapping algorithm (CMA) is presented. Finally, based on the proposed BFM, TDMM and CMA, a united calibration algorithm is developed to overcome the time synchronization error caused by both the PDCs and the phase jitter. The simulation experiment results show the effectiveness of the calibration method proposed in this paper.
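    The back-and-forth measurement resembles classic two-way time transfer; a generic sketch of the round-trip arithmetic (standard two-way timing algebra with hypothetical numbers, not the paper's exact BFM protocol):

```python
# Two-way exchange to measure the one-way propagation delay to a node.
# The controller sends at t1 (its clock); the node receives at t2 and
# replies at t3 (node clock); the controller receives at t4. Assuming a
# symmetric path, delay and the node's clock offset follow directly.

def round_trip(t1, t2, t3, t4):
    delay  = ((t4 - t1) - (t3 - t2)) / 2.0   # one-way propagation delay
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # node clock minus controller
    return delay, offset

# Hypothetical numbers (microseconds): true delay 5, node clock 2 fast,
# node turnaround time 10.
d, o = round_trip(t1=0.0, t2=7.0, t3=17.0, t4=20.0)
```

    Applying such a measurement per DAN is what lets a chained system align the accumulated command propagation delays; jitter suppression, as in the paper's CMA, is a separate filtering step.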

  20. Finite-time normal mode disturbances and error growth during Southern Hemisphere blocking

    NASA Astrophysics Data System (ADS)

    Wei, Mozheng; Frederiksen, Jorgen S.

    2005-01-01

    The structural organization of initially random errors evolving in a barotropic tangent linear model, with time-dependent basic states taken from analyses, is examined for cases of block development, maturation and decay in the Southern Hemisphere atmosphere during April, November, and December 1989. The statistics of 100 evolved errors are studied for six-day periods and compared with the growth and structures of fast growing normal modes and finite-time normal modes (FTNMs). The amplification factors of most initially random errors are slightly less than those of the fastest growing FTNM for the same time interval. During their evolution, the standard deviations of the error fields become concentrated in the regions of rapid dynamical development, particularly associated with developing and decaying blocks. We have calculated probability distributions and the mean and standard deviations of pattern correlations between each of the 100 evolved error fields and the five fastest growing FTNMs for the same time interval. The mean of the largest pattern correlation, taken over the five fastest growing FTNMs, increases with increasing time interval to a value close to 0.6 or larger after six days. FTNM 1 generally, but not always, gives the largest mean pattern correlation with error fields. Corresponding pattern correlations with the fast growing normal modes of the instantaneous basic state flow are significant but lower than with FTNMs. Mean pattern correlations with fast growing FTNMs increase further when the time interval is increased beyond six days.

  1. Some Sources of Error in the Transcription of Real Time in Spoken Discourse.

    ERIC Educational Resources Information Center

    O'Connell, Daniel C.; Kowal, Sabine

    1990-01-01

    Discusses such errors in transcribing real time in spoken discourse as inconsistent use of transcriptional conventions; use of transcriptional symbols with multiple meanings; measurement problems; some cross-purposes of real-time transcription; neglect of time between onset and offset of speech and silence transcription; and transcriptions that…

  2. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
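    The SIMEX idea, adding extra measurement error at increasing multiples λ, refitting at each level, and extrapolating back to λ = -1, can be sketched for a plain linear regression (an illustration of the generic procedure with simulated data, not the authors' MSM implementation; it assumes the error variance sigma_u² is known):

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.0, 0.5, 1.0, 1.5, 2.0),
                n_sim=300, seed=0):
    """SIMEX for simple linear regression: add extra error with variance
    lambda * sigma_u**2, refit, then extrapolate the mean slope back to
    lambda = -1 (the no-measurement-error limit)."""
    rng = np.random.default_rng(seed)
    x_obs = np.asarray(x_obs, dtype=float)
    y = np.asarray(y, dtype=float)
    mean_slopes = []
    for lam in lambdas:
        slopes = [np.polyfit(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u,
                                                x_obs.size), y, 1)[0]
                  for _ in range(n_sim)]
        mean_slopes.append(np.mean(slopes))
    coeffs = np.polyfit(lambdas, mean_slopes, 2)   # quadratic extrapolant
    return np.polyval(coeffs, -1.0)

# Simulated data: true slope 2, covariate measured with sd-0.5 error.
rng = np.random.default_rng(1)
x_true = rng.normal(0.0, 1.0, 2000)
y = 2.0 * x_true + rng.normal(0.0, 0.5, 2000)
x_obs = x_true + rng.normal(0.0, 0.5, 2000)
naive = np.polyfit(x_obs, y, 1)[0]          # attenuated toward zero
corrected = simex_slope(x_obs, y, sigma_u=0.5)
```

    In the MSM setting the same simulate-and-extrapolate loop is applied either to the outcome-model parameters (direct approach) or to the exposure model that produces the weights (indirect approach).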

  3. Absolute calibration of the Greenland time scale: implications for Antarctic time scales and for Δ14C

    NASA Astrophysics Data System (ADS)

    Shackleton, N. J.; Fairbanks, R. G.; Chiu, Tzu-chien; Parrenin, F.

    2004-07-01

    We propose a new age scale for the two ice cores (GRIP and GISP2) that were drilled at Greenland summit, based on accelerator mass spectrometry 14C dating of foraminifera in core MD95-2042 (Paleoceanography 15 (2000) 565), calibrated by means of recently obtained paired 14C and 230Th measurements on pristine corals (Marine radiocarbon calibration curve spanning 10,500 to 50,000 years BP based on paired 230Th/234U/238U and 14C dates on pristine corals; Geological Society of America Bulletin, 2003, submitted for publication). The record of core MD95-2042 can be correlated very precisely to the Greenland ice cores. Between 30 and 40 ka BP our scale is 1.4 ka older than the GRIP SS09sea time scale (Journal of Quaternary Science 16 (2001) 299). At the older end of Marine Isotope Stage 3 we use published 230Th dates from speleothems to calibrate the record. Using this scale we show a Δ14C record that is broadly consistent with the modelled record (Earth Planet. Sci. Lett. 200 (2002) 177) and with the data of Hughen et al. (Science 303 (2004) 202), but not consistent with the high values obtained by Beck et al. (Science 292 (2001) 2453) or by Voelker et al. (Radiocarbon 40 (1998) 517). We show how a set of age scales for the Antarctic ice cores can be derived that are both fully consistent with the Greenland scale and glaciologically reasonable.

  4. Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.

    PubMed

    Limongi, Roberto; Silva, Angélica M

    2016-11-01

    The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.

  5. Optimal Threshold and Time of Absolute Lymphocyte Count Assessment for Outcome Prediction after Bone Marrow Transplantation.

    PubMed

    Bayraktar, Ulas D; Milton, Denái R; Guindani, Michele; Rondon, Gabriela; Chen, Julianne; Al-Atrash, Gheath; Rezvani, Katayoun; Champlin, Richard; Ciurea, Stefan O

    2016-03-01

    The recovery pace of absolute lymphocyte count (ALC) is prognostic after hematopoietic stem cell transplantation. Previous studies have evaluated a wide range of ALC cutoffs and time points for predicting outcomes. We aimed to determine the optimal ALC value for outcome prediction after bone marrow transplantation (BMT). A total of 518 patients who underwent BMT for acute leukemia or myelodysplastic syndrome between 1999 and 2010 were divided into a training set and a test set to assess the prognostic value of ALC on days 30, 60, 90, 120, 180, as well as the first post-transplantation day of an ALC of 100, 200, 300, 400, 500, and 1000/μL. In the training set, the best predictor of overall survival (OS), relapse-free survival (RFS), and nonrelapse mortality (NRM) was ALC on day 60. In the entire patient cohort, multivariable analyses demonstrated significantly better OS, RFS, and NRM and lower incidence of graft-versus-host disease (GVHD) in patients with an ALC >300/μL on day 60 post-BMT, both including and excluding patients who developed GVHD before day 60. Among the patient-, disease-, and transplant-related factors assessed, only busulfan-based conditioning was significantly associated with higher ALC values on day 60 in both cohorts. The optimal ALC cutoff for predicting outcomes after BMT is 300/μL on day 60 post-transplantation.

  6. Tissue-specific Calibration of Real-time PCR Facilitates Absolute Quantification of Plasmid DNA in Biodistribution Studies

    PubMed Central

    Ho, Joan K; White, Paul J; Pouton, Colin W

    2016-01-01

    Analysis of the tissue distribution of plasmid DNA after administration of nonviral gene delivery systems is best accomplished using quantitative real-time polymerase chain reaction (qPCR), although published strategies do not allow determination of the absolute mass of plasmid delivered to different tissues. Generally, data is expressed as the mass of plasmid relative to the mass of genomic DNA (gDNA) in the sample. This strategy is adequate for comparisons of efficiency of delivery to a single site but it does not allow direct comparison of delivery to multiple tissues, as the mass of gDNA extracted per unit mass of each tissue is different. We show here that by constructing qPCR standard curves for each tissue it is possible to determine the dose of intact plasmid remaining in each tissue, which is a more useful parameter when comparing the fates of different formulations of DNA. We exemplify the use of this tissue-specific qPCR method by comparing the delivery of naked DNA, cationic DNA complexes, and neutral PEGylated DNA complexes after intramuscular injection. Generally, larger masses of intact plasmid were present 24 hours after injection of DNA complexes, and neutral complexes resulted in delivery of a larger mass of intact plasmid to the spleen. PMID:27701400
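A tissue-specific standard curve of this kind reduces to a linear fit of Ct against log10 copy number in each tissue background, then inversion of that fit for unknown samples. A minimal Python sketch of the arithmetic (function names and the example values are illustrative, not taken from the paper):

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct = slope * log10(copies) + intercept,
    built from serial dilutions of plasmid spiked into one tissue's DNA."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def ct_to_copies(ct, slope, intercept):
    """Invert the standard curve to get absolute plasmid copies from a Ct."""
    return 10 ** ((ct - intercept) / slope)
```

Fitting one such curve per tissue is what allows direct, absolute comparison of plasmid mass across tissues, since the per-tissue slope and intercept absorb the differing gDNA backgrounds.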

  7. Absolute Zero

    NASA Astrophysics Data System (ADS)

    Donnelly, Russell J.; Sheibley, D.; Belloni, M.; Stamper-Kurn, D.; Vinen, W. F.

    2006-12-01

    Absolute Zero is a two-hour PBS special attempting to bring to the general public some of the advances made in 400 years of thermodynamics. It is based on the book “Absolute Zero and the Conquest of Cold” by Tom Shachtman. Absolute Zero will call long-overdue attention to the remarkable strides that have been made in low-temperature physics, a field that has produced 27 Nobel Prizes. It will explore the ongoing interplay between science and technology through historical examples including refrigerators, ice machines, frozen foods, liquid oxygen and nitrogen as well as much colder fluids such as liquid hydrogen and liquid helium. A website has been established to promote the series: www.absolutezerocampaign.org. It contains information on the series, aimed primarily at students at the middle school level. There is a wealth of material here and we hope interested teachers will draw their students’ attention to this website and its substantial contents, which have been carefully vetted for accuracy.

  8. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    PubMed

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.
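A ridge polynomial unit sums products of linear "ridge" functions, and the error-feedback variant augments the network input with the previous forecast error. The following Python sketch shows a forward pass only; the two-term architecture, the sigmoid output, and the weights are illustrative assumptions, and the paper's training procedure is not reproduced:

```python
import math

def ridge_poly_forward(x, weights, biases):
    """Ridge polynomial unit: sum over orders i of the product of the
    first i ridge functions w_j . x + b_j, passed through a sigmoid."""
    total, prod = 0.0, 1.0
    for w, b in zip(weights, biases):
        prod *= sum(wi * xi for wi, xi in zip(w, x)) + b
        total += prod
    return 1.0 / (1.0 + math.exp(-total))

def forecast_with_error_feedback(series, weights, biases):
    """One-step-ahead forecasts where the input at time t is the previous
    observation plus the previous network error (the RPNN-EF idea)."""
    preds, err = [], 0.0
    for t in range(1, len(series)):
        x = [series[t - 1], err]       # last value + fed-back error
        y = ridge_poly_forward(x, weights, biases)
        preds.append(y)
        err = series[t] - y            # error used at the next step
    return preds
```

Substituting the error term for the usual output-feedback term is the only structural change relative to an output-recurrent ridge polynomial network.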

  9. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting

    PubMed Central

    Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927

  10. On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.

    2013-12-01

    Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate

  11. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  12. Unavoidable Errors: A Spatio-Temporal Analysis of Time-Course and Neural Sources of Evoked Potentials Associated with Error Processing in a Speeded Task

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2008-01-01

    The detection of errors is known to be associated with two successive neurophysiological components in EEG, with an early time-course following motor execution: the error-related negativity (ERN/Ne) and late positivity (Pe). The exact cognitive and physiological processes contributing to these two EEG components, as well as their functional…

  13. Interference peak detection based on FPGA for real-time absolute distance ranging with dual-comb lasers

    NASA Astrophysics Data System (ADS)

    Ni, Kai; Dong, Hao; Zhou, Qian; Xu, Mingfei; Li, Xinghui; Wu, Guanhao

    2015-08-01

    Absolute distance measurement using dual femtosecond comb lasers achieves high accuracy and fast measurement speed, which makes it increasingly attractive. The data processing flow consists of four steps: interference peak detection, fast Fourier transform (FFT), phase fitting, and compensation for the index of refraction. A real-time data processing system based on a Field-Programmable Gate Array (FPGA) for dual-comb ranging has been newly developed. This paper introduces the design and implementation of the interference peak detection algorithm in the FPGA using the Verilog language; this is the most complicated part of the system and an important guarantee of its precision and reliability. An adaptive sliding window is used to scan for peaks. During detection, the algorithm stores 16 sample data points as a detection unit and calculates the average of each unit. This average determines the vertical center height of the sliding window. The algorithm also estimates the noise intensity of each detection unit and averages the noise strength over 128 successive units. The noise average yields the signal-to-noise ratio of the current working environment, which is used to adjust the height of the sliding window. This adaptive sliding window helps eliminate fake peaks caused by noise. The whole design is pipelined, which improves the real-time throughput of the overall peak detection module. It runs at up to 140 MHz in the FPGA, and a peak can be detected within 16 clock cycles of its appearance.
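In software terms, the adaptive sliding window described above can be sketched as follows (Python rather than Verilog; the 16-sample unit and 128-unit noise average follow the abstract, while the scale factor `k` tying window height to the noise average is an assumption of this sketch):

```python
def detect_peaks(samples, unit=16, noise_units=128, k=4.0):
    """Adaptive-sliding-window peak detector sketch: the mean of each
    16-sample unit sets the window's vertical center, and a running noise
    average over up to 128 units sets its height."""
    peaks = []
    noise_history = []
    for u in range(len(samples) // unit):
        block = samples[u * unit:(u + 1) * unit]
        center = sum(block) / unit                        # window center
        noise = sum(abs(s - center) for s in block) / unit
        noise_history.append(noise)
        recent = noise_history[-noise_units:]
        height = k * (sum(recent) / len(recent))          # adaptive height
        for i, s in enumerate(block):
            if s - center > height:                       # escapes the window
                peaks.append(u * unit + i)
    return peaks
```

Because the window height tracks the estimated noise level, small ripples stay inside the window while a genuine interference peak stands clear of it, which is how fake peaks are rejected.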

  14. Leptin in Whales: Validation and Measurement of mRNA Expression by Absolute Quantitative Real-Time PCR

    PubMed Central

    Ball, Hope C.; Holmes, Robert K.; Londraville, Richard L.; Thewissen, Johannes G. M.; Duff, Robert Joel

    2013-01-01

    Leptin is the primary hormone in mammals that regulates adipose stores. Arctic adapted cetaceans maintain enormous adipose depots, suggesting possible modifications of leptin or receptor function. Determining expression of these genes is the first step to understanding the extreme physiology of these animals, and the uniqueness of these animals presents special challenges in estimating and comparing expression levels of mRNA transcripts. Here, we compare expression of two model genes, leptin and leptin-receptor gene-related product (OB-RGRP), using two quantitative real-time PCR (qPCR) methods: “relative” and “absolute”. To assess the expression of leptin and OB-RGRP in cetacean tissues, we first examined how relative expression of those genes might differ when normalized to four common endogenous control genes. We performed relative expression qPCR assays measuring the amplification of these two model target genes relative to amplification of 18S ribosomal RNA (18S), ubiquitously expressed transcript (Uxt), ribosomal protein 9 (Rs9) and ribosomal protein 15 (Rs15) endogenous controls. Results demonstrated significant differences in the expression of both genes when different control genes were employed; emphasizing a limitation of relative qPCR assays, especially in studies where differences in physiology and/or a lack of knowledge regarding levels and patterns of expression of common control genes may possibly affect data interpretation. To validate the absolute quantitative qPCR methods, we evaluated the effects of plasmid structure, the purity of the plasmid standard preparation and the influence of type of qPCR “background” material on qPCR amplification efficiencies and copy number determination of both model genes, in multiple tissues from one male bowhead whale. Results indicate that linear plasmids are more reliable than circular plasmid standards, no significant differences in copy number estimation based upon background material used, and

  15. Measurement of the Absolute Magnitude and Time Courses of Mitochondrial Membrane Potential in Primary and Clonal Pancreatic Beta-Cells.

    PubMed

    Gerencser, Akos A; Mookerjee, Shona A; Jastroch, Martin; Brand, Martin D

    2016-01-01

    The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions.

  16. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  17. Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras.

    PubMed

    He, Ying; Liang, Bin; Zou, Yu; He, Jin; Yang, Jun

    2017-01-05

    Time-of-Flight (ToF) cameras, a technology which has developed rapidly in recent years, are 3D imaging sensors providing a depth image as well as an amplitude image with a high frame rate. As a ToF camera is limited by the imaging conditions and external environment, its captured data are always subject to certain errors. This paper analyzes the influence of typical external distractions including material, color, distance, lighting, etc. on the depth error of ToF cameras. Our experiments indicated that factors such as lighting, color, material, and distance could cause different influences on the depth error of ToF cameras. However, since the forms of errors are uncertain, it's difficult to summarize them in a unified law. To further improve the measurement accuracy, this paper proposes an error correction method based on Particle Filter-Support Vector Machine (PF-SVM). Moreover, the experimental results showed that this method can effectively reduce the depth error of ToF cameras to 4.6 mm within its full measurement range (0.5-5 m).

  18. Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras

    PubMed Central

    He, Ying; Liang, Bin; Zou, Yu; He, Jin; Yang, Jun

    2017-01-01

    Time-of-Flight (ToF) cameras, a technology which has developed rapidly in recent years, are 3D imaging sensors providing a depth image as well as an amplitude image with a high frame rate. As a ToF camera is limited by the imaging conditions and external environment, its captured data are always subject to certain errors. This paper analyzes the influence of typical external distractions including material, color, distance, lighting, etc. on the depth error of ToF cameras. Our experiments indicated that factors such as lighting, color, material, and distance could cause different influences on the depth error of ToF cameras. However, since the forms of errors are uncertain, it’s difficult to summarize them in a unified law. To further improve the measurement accuracy, this paper proposes an error correction method based on Particle Filter-Support Vector Machine (PF-SVM). Moreover, the experimental results showed that this method can effectively reduce the depth error of ToF cameras to 4.6 mm within its full measurement range (0.5–5 m). PMID:28067767

  19. Structure and dating errors in the geologic time scale and periodicity in mass extinctions

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1989-01-01

    Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.

  20. Absolute Photometry

    NASA Astrophysics Data System (ADS)

    Hartig, George

    1990-12-01

    The absolute sensitivity of the FOS will be determined in SV by observing 2 stars at 3 epochs, first in 3 apertures (1.0", 0.5", and 0.3" circular) and then in 1 aperture (1.0" circular). In cycle 1, one star, BD+28D4211 will be observed in the 1.0" aperture to establish the stability of the sensitivity and flat field characteristics and improve the accuracy obtained in SV. This star will also be observed through the paired apertures since these are not calibrated in SV. The stars will be observed in most detector/grating combinations. The data will be averaged to form the inverse sensitivity functions required by RSDP.

  1. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk.

  2. 20 CFR 410.671 - Revision for error or other reason; time limitation generally.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Revision for error or other reason; time limitation generally. 410.671 Section 410.671 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL..., Other Determinations, Administrative Review, Finality of Decisions, and Representation of Parties §...

  3. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, that study, which did not account for measurement error, reports that more than half of the shipped samples tested had Legionella levels that changed up or down by one or more logs, a result the authors attribute to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on the interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of the 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding.
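The comparison described above reduces to computing, for each sample, the change in mean log10 colony count between promptly processed and held replicate sets, then summarizing those deltas as a root mean squared error and counting ≥1-log shifts. A minimal Python sketch (function names are illustrative, and the paper's exact variance-components model is not reproduced):

```python
import math

def log10_mean(counts):
    """Mean log10 colony count across replicate aliquots of one sample."""
    return sum(math.log10(c) for c in counts) / len(counts)

def holding_effect(prompt_reps, held_reps):
    """Per-sample log10 change between prompt and held replicate sets:
    returns the RMSE of the deltas and the number of >= 1-log shifts."""
    deltas = [log10_mean(h) - log10_mean(p)
              for p, h in zip(prompt_reps, held_reps)]
    rmse = math.sqrt(sum(d * d for d in deltas) / len(deltas))
    big_shifts = sum(1 for d in deltas if abs(d) >= 1.0)
    return rmse, big_shifts
```

Averaging over replicates before taking the difference is what separates a genuine holding-time effect from replicate-to-replicate measurement error.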

  4. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... months before it was discovered, the agency may exercise sound discretion in deciding whether to correct... a claim to correct any such error after that time, the agency may do so at its sound discretion. (c... employing agency provides the participant with good cause for requiring a longer period to decide the...

  5. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... months before it was discovered, the agency may exercise sound discretion in deciding whether to correct... a claim to correct any such error after that time, the agency may do so at its sound discretion. (c... employing agency provides the participant with good cause for requiring a longer period to decide the...

  6. 5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... months before it was discovered, the agency may exercise sound discretion in deciding whether to correct... a claim to correct any such error after that time, the agency may do so at its sound discretion. (c... employing agency provides the participant with good cause for requiring a longer period to decide the...

  7. Measurement of the Absolute Magnitude and Time Courses of Mitochondrial Membrane Potential in Primary and Clonal Pancreatic Beta-Cells

    PubMed Central

    Gerencser, Akos A.; Mookerjee, Shona A.; Jastroch, Martin; Brand, Martin D.

    2016-01-01

    The aim of this study was to simplify, improve and validate quantitative measurement of the mitochondrial membrane potential (ΔψM) in pancreatic β-cells. This built on our previously introduced calculation of the absolute magnitude of ΔψM in intact cells, using time-lapse imaging of the non-quench mode fluorescence of tetramethylrhodamine methyl ester and a bis-oxonol plasma membrane potential (ΔψP) indicator. ΔψM is a central mediator of glucose-stimulated insulin secretion in pancreatic β-cells. ΔψM is at the crossroads of cellular energy production and demand, therefore precise assay of its magnitude is a valuable tool to study how these processes interplay in insulin secretion. Dispersed islet cell cultures allowed cell type-specific, single-cell observations of cell-to-cell heterogeneity of ΔψM and ΔψP. Glucose addition caused hyperpolarization of ΔψM and depolarization of ΔψP. The hyperpolarization was a monophasic step increase, even in cells where the ΔψP depolarization was biphasic. The biphasic response of ΔψP was associated with a larger hyperpolarization of ΔψM than the monophasic response. Analysis of the relationships between ΔψP and ΔψM revealed that primary dispersed β-cells responded to glucose heterogeneously, driven by variable activation of energy metabolism. Sensitivity analysis of the calibration was consistent with β-cells having substantial cell-to-cell variations in amounts of mitochondria, and this was predicted not to impair the accuracy of determinations of relative changes in ΔψM and ΔψP. Finally, we demonstrate a significant problem with using an alternative ΔψM probe, rhodamine 123. In glucose-stimulated and oligomycin-inhibited β-cells the principles of the rhodamine 123 assay were breached, resulting in misleading conclusions. PMID:27404273

  8. Time lapse imaging of water content with geoelectrical methods: on the interest of working with absolute water content data

    NASA Astrophysics Data System (ADS)

    Dumont, Gaël; Pilawski, Tamara; Robert, Tanguy; Hermans, Thomas; Garré, Sarah; Nguyen, Frederic

    2016-04-01

    Electrical resistivity tomography (ERT) is a suitable method to estimate the water content of a waste material and to detect changes in water content. Various ERT profiles, both static and time-lapse, were acquired on a landfill during the Minerve project. In the literature, the relative change of resistivity (Δρ/ρ) is generally computed. For saline or heat tracer tests in the saturated zone, Δρ/ρ can easily be translated into changes of pore water conductivity or underground temperature (provided that the initial salinity or temperature conditions are homogeneous over the extent of the ERT panel). For water content changes in the vadose zone resulting from an infiltration event or injection experiment, many authors also work with Δρ/ρ or the relative change of water content Δθ/θ (linked to the change of resistivity through a single parameter, the Archie's law exponent "m"). This quantity is not influenced by the underground temperature and pore fluid conductivity (ρw) conditions, but it is influenced by the initial water content distribution. Therefore, one cannot tell whether a loss of the Δθ/θ signal marks the limit of the infiltration front or merely more humid initial conditions. Another approach to understanding the infiltration process is to assess the absolute change of water content (Δθ). This requires computing the water content of the waste directly from the resistivity data. For that purpose, we used petrophysical laws calibrated with laboratory experiments, together with our knowledge of the in situ temperature and pore fluid conductivity. We then investigated water content changes in the waste material after a rainfall event (Δθ = Δθ/θ · θ). This new observation is truly representative of the quantity of water infiltrated in the waste material. However, the uncertainty in the pore fluid conductivity value may influence the computed water content changes (Δθ=k*m√(ρw) ; where "m" is the Archie's law exponent
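The contrast between absolute and relative water content changes can be made concrete with a simplified Archie-type petrophysical relation. The sketch below assumes the reduced form ρ = ρw·θ^(−m) (the full Archie relation also carries a formation factor and saturation exponent, and the abstract's calibrated laws are not reproduced here); inverting it gives θ from resistivity, after which Δθ and Δθ/θ follow directly:

```python
def water_content(rho, rho_w, m):
    """Invert the simplified Archie relation rho = rho_w * theta**(-m),
    assuming temperature-corrected resistivities."""
    return (rho_w / rho) ** (1.0 / m)

def absolute_and_relative_change(rho_before, rho_after, rho_w, m):
    """Return (delta-theta, delta-theta/theta) between two ERT snapshots."""
    t0 = water_content(rho_before, rho_w, m)
    t1 = water_content(rho_after, rho_w, m)
    return t1 - t0, (t1 - t0) / t0
```

The absolute change Δθ requires knowing ρw and m, which is exactly why it is sensitive to the pore fluid conductivity uncertainty, whereas Δθ/θ hides that dependence at the cost of being confounded by the initial water content.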

  9. Time-domain study on reproducibility of laser-based soft-error simulation

    NASA Astrophysics Data System (ADS)

    Itsuji, Hiroaki; Kobayashi, Daisuke; Lourenco, Nelson E.; Hirose, Kazuyuki

    2017-04-01

    We study the soft-error issue: a circuit malfunction caused by ion-radiation-induced noise currents. We have developed a laser-based soft-error simulation system to emulate this noise and have evaluated its reproducibility in the time domain. We find that this system, which utilizes a two-photon absorption process, can reproduce the shape of ion-induced transient currents such as those assumed to be induced by neutrons at ground level. A technique for extracting the initial carrier structure inside the device is also presented.

  10. Research on Time-series Modeling and Filtering Methods for MEMS Gyroscope Random Drift Error

    NASA Astrophysics Data System (ADS)

    Wang, Xiao Yi; Meng, Xiu Yun

    2017-03-01

    The precision of a MEMS gyroscope is reduced by random drift error. This paper applies time series analysis to model the random drift error of a MEMS gyroscope. Based on the established model, a Kalman filter is employed to compensate for the error. To overcome the disadvantages of the conventional Kalman filter, the Sage-Husa adaptive filtering algorithm is utilized to improve the accuracy of the filtering results, and the orthogonality property of the innovation sequence is exploited to handle outliers. The results show that, compared with the conventional Kalman filter, the modified filter not only enhances filtering accuracy but also resists outliers, which ensures the stability of filtering and thus improves gyroscope performance.
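The combination of Sage-Husa adaptation and innovation-based outlier handling can be illustrated with a scalar filter. The sketch below is a generic example, not the paper's implementation: the AR(1) state model, the fading factor `b`, and the innovation gate `gate` are all assumptions of this sketch:

```python
def sage_husa_kf(measurements, a=0.95, q=1e-4, r0=0.01, b=0.95, gate=4.0):
    """Scalar Kalman filter with Sage-Husa adaptive measurement-noise
    estimation and an innovation gate that rejects outliers."""
    x, p, r = measurements[0], 1.0, r0
    out = [x]
    for k, z in enumerate(measurements[1:], start=1):
        xp = a * x                          # state prediction (AR(1) model)
        pp = a * a * p + q                  # covariance prediction
        innov = z - xp
        s = pp + r                          # innovation covariance
        if innov * innov > gate * gate * s:
            innov = 0.0                     # outlier: keep the prediction
        d = (1 - b) / (1 - b ** (k + 1))    # Sage-Husa fading factor
        r = max((1 - d) * r + d * (innov * innov - pp), 1e-8)
        kgain = pp / (pp + r)
        x = xp + kgain * innov
        p = (1 - kgain) * pp
        out.append(x)
    return out
```

Gating on the innovation covariance is one common way to use the statistics of the innovation sequence against outliers: a sample whose innovation is far outside its predicted spread is ignored rather than allowed to corrupt the state and noise estimates.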

  11. Real-time drift error compensation in a self-reference frequency-scanning fiber interferometer

    NASA Astrophysics Data System (ADS)

    Tao, Long; Liu, Zhigang; Zhang, Weibo; Liu, Zhe; Hong, Jun

    2017-01-01

    In order to eliminate fiber drift errors in a frequency-scanning fiber interferometer, we propose a self-reference frequency-scanning fiber interferometer composed of two fiber Michelson interferometers sharing common optical fiber paths. One interferometer, defined as the reference interferometer, is used to monitor the optical path length drift in real time and to establish a fixed measurement origin. The other is used as a measurement interferometer to acquire information from the target. Because the optical path differences of the reference and measurement interferometers measured by frequency-scanning interferometry include the same fiber drift errors, the errors can be eliminated by subtracting the former optical path difference from the latter. A prototype interferometer was developed in our research, and experimental results demonstrate its robustness and stability.

  12. Cortical delta activity reflects reward prediction error and related behavioral adjustments, but at different times.

    PubMed

    Cavanagh, James F

    2015-04-15

    Recent work has suggested that reward prediction errors elicit a positive voltage deflection in the scalp-recorded electroencephalogram (EEG); an event sometimes termed a reward positivity. However, a strong test of this proposed relationship remains to be defined. Other important questions remain unaddressed: such as the role of the reward positivity in predicting future behavioral adjustments that maximize reward. To answer these questions, a three-armed bandit task was used to investigate the role of positive prediction errors during trial-by-trial exploration and task-set based exploitation. The feedback-locked reward positivity was characterized by delta band activities, and these related EEG features scaled with the degree of a computationally derived positive prediction error. However, these phenomena were also dissociated: the computational model predicted exploitative action selection and related response time speeding whereas the feedback-locked EEG features did not. Compellingly, delta band dynamics time-locked to the subsequent bandit (the P3) successfully predicted these behaviors. These bandit-locked findings included an enhanced parietal to motor cortex delta phase lag that correlated with the degree of response time speeding, suggesting a mechanistic role for delta band activities in motivating action selection. This dissociation in feedback vs. bandit locked EEG signals is interpreted as a differentiation in hierarchically distinct types of prediction error, yielding novel predictions about these dissociable delta band phenomena during reinforcement learning and decision making.

  13. A neighbourhood analysis based technique for real-time error concealment in H.264 intra pictures

    NASA Astrophysics Data System (ADS)

    Beesley, Steven T. C.; Grecos, Christos; Edirisinghe, Eran

    2007-02-01

    H.264's extensive use of context-based adaptive binary arithmetic or variable-length coding makes streams highly susceptible to channel errors, a common occurrence over networks such as those used by mobile devices. Even a single bit error will cause a decoder to discard all stream data up to the next fixed-length resynchronisation point; in the worst case an entire slice is lost. In cases where retransmission and forward error concealment are not possible, a decoder should conceal any erroneous data in order to minimise the impact on the viewer. Stream errors can often be spotted early in the decode cycle of a macroblock, which, if aborted, provides unused processor cycles; these can instead be used to conceal errors at minimal cost, even as part of a real-time system. This paper demonstrates a technique that utilises Sobel convolution kernels to quickly analyse the neighbourhood surrounding erroneous macroblocks before performing a weighted multi-directional interpolation. This generates significantly improved statistical (PSNR) and visual (IEEE structural similarity) results when compared to the commonly used weighted pixel value averaging. Furthermore, it is also computationally scalable, both during analysis and concealment, achieving maximum performance from the spare processing power available.
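A toy sketch of the concealment idea follows. One-dimensional border gradients stand in for the paper's full Sobel neighbourhood analysis, and the directional weighting is illustrative only, not the published scheme:

```python
import numpy as np

def conceal_block(frame, r0, c0, n=8):
    """Conceal a lost n x n block by weighted bi-directional interpolation.

    Border-gradient magnitudes (a simplified stand-in for Sobel analysis)
    decide how much to trust vertical versus horizontal interpolation
    from the surrounding correctly decoded pixels."""
    f = frame.astype(float).copy()
    top, bottom = f[r0 - 1, c0:c0 + n], f[r0 + n, c0:c0 + n]
    left, right = f[r0:r0 + n, c0 - 1], f[r0:r0 + n, c0 + n]
    # Strong variation along the top/bottom rows implies vertical edges,
    # which vertical interpolation preserves better (and vice versa).
    g_rows = np.abs(np.diff(top)).sum() + np.abs(np.diff(bottom)).sum()
    g_cols = np.abs(np.diff(left)).sum() + np.abs(np.diff(right)).sum()
    total = g_rows + g_cols + 1e-9
    w_vert, w_horz = g_rows / total, g_cols / total
    for i in range(n):
        a = (n - i) / (n + 1.0)           # distance weight, top vs bottom
        for j in range(n):
            b = (n - j) / (n + 1.0)       # distance weight, left vs right
            vert = a * top[j] + (1 - a) * bottom[j]
            horz = b * left[i] + (1 - b) * right[i]
            f[r0 + i, c0 + j] = w_vert * vert + w_horz * horz
    return f

# A smooth horizontal ramp is reconstructed exactly by this scheme:
ramp = np.tile(np.arange(16.0), (16, 1))
damaged = ramp.copy()
damaged[4:12, 4:12] = 0.0                 # "lost" macroblock
restored = conceal_block(damaged, 4, 4, n=8)
```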

  14. Error estimation in multitemporal InSAR deformation time series, with application to Lanzarote, Canary Islands

    NASA Astrophysics Data System (ADS)

    GonzáLez, Pablo J.; FernáNdez, José

    2011-10-01

    Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation error, which is why multitemporal interferometric techniques, which use a series of interferograms, have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). The method uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors of the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. Deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
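The Monte Carlo propagation step can be illustrated on a hypothetical small-baseline network (the network geometry, noise levels, and weights below are invented for illustration and do not reproduce the paper's weighting scheme):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SB network: 4 acquisition dates, 5 interferograms.
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
n_incr = 3                                     # increments between 4 dates
A = np.zeros((len(pairs), n_incr))
for k, (i, j) in enumerate(pairs):
    A[k, i:j] = 1.0                            # interferogram spans dates i..j

true_incr = np.array([2.0, -1.0, 0.5])         # mm between consecutive dates
sigma = np.array([0.5, 0.4, 0.6, 0.8, 0.7])    # per-interferogram error (mm)
obs = A @ true_incr + rng.normal(0.0, sigma)

# Monte Carlo resampling: perturb the observations according to their
# errors, re-solve the weighted inversion, and take the spread of the
# resulting cumulative displacement series as its error estimate.
W = np.diag(1.0 / sigma)
series = []
for _ in range(2000):
    pert = obs + rng.normal(0.0, sigma)
    sol, *_ = np.linalg.lstsq(W @ A, W @ pert, rcond=None)
    series.append(np.cumsum(sol))              # cumulative displacement
series = np.asarray(series)
disp_std = series.std(axis=0)                  # per-epoch error estimate
```

The full method resamples with the interferograms' variance-covariance structure rather than independent noise, but the propagation principle is the same.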

  15. The impact of navigation satellite ephemeris error on common-view time transfer.

    PubMed

    Sun, Hongwei; Yuan, Haibo; Zhang, Hong

    2010-01-01

    The impact of navigation satellite ephemeris error on satellite common-view time transfer was analyzed. The impact varies depending on the elevation angle of a satellite as seen by a user and on the baseline distance between the 2 users. The extent of the impact was quantified for several elevations and different baselines. As an example, results from several common-view time transfer links in China via Compass satellites are given.

  16. Mitigation of Second-Order Ionospheric Error for Real-Time PPP Users in Europe

    NASA Astrophysics Data System (ADS)

    Abdelazeem, Mohamed

    2016-07-01

    Currently, the international global navigation satellite system (GNSS) real-time service (IGS-RTS) products are used extensively for real-time precise point positioning and ionosphere modeling applications. The major challenge of dual-frequency real-time precise point positioning (RT-PPP) is that the solution requires a relatively long time to converge to centimeter-level accuracy. This long convergence time results essentially from the un-modeled higher-order ionospheric errors. To overcome this challenge, a method for mitigating the second-order ionospheric delay, which represents the bulk of the higher-order ionospheric errors, is proposed for RT-PPP users in Europe. A real-time regional ionospheric model (RT-RIM) over Europe is developed using the IGS-RTS precise satellite orbit and clock products. GPS observations from a regional network consisting of 60 IGS and EUREF reference stations are processed using the Bernese 5.2 software package in order to extract the real-time vertical total electron content (RT-VTEC). The proposed RT-RIM has a spatial and temporal resolution of 1°×1° and 15 minutes, respectively. In order to investigate the effect of the second-order ionospheric delay on the RT-PPP solution, new GPS data sets from other reference stations are used. The examined stations are selected to represent different latitudes. The GPS observations are corrected for the second-order ionospheric errors using the extracted RT-VTEC values. In addition, the IGS-RTS precise orbit and clock products are used to account for the satellite orbit and clock errors, respectively. It is shown that the RT-PPP convergence time and positioning accuracy are improved when the second-order ionospheric delay is accounted for.

  17. Development of absolute quantification method for genotype-specific Babesia microti using real-time PCR and practical experimental tips of real-time PCR.

    PubMed

    Ohmori, Shiho; Nagano-Fujii, Motoko; Saito-Ito, Atsuko

    2016-10-01

    Babesia microti, a rodent Babesia known as the causative agent of the zoonosis human babesiosis, comprises several genotypes of the small subunit ribosomal RNA gene (SSUrDNA), and different genotypes have been suggested to differ in infectivity and pathogenicity to humans. We established a real-time PCR assay using SYBR Green I that allows specific detection and absolute quantification of each SSUrDNA type of B. microti among the four SSUrDNA types found in Japanese rodents, even in mixed infections. In this assay, four genotype-specific primer pairs targeting internal transcribed spacer 1 or 2 sequences were used. Each primer pair is highly specific for its homologous genotype DNA. The calibration curves of cycle threshold (Ct) values versus log DNA concentration for all four genotypes were linear over a 10^7-fold range of DNA concentrations, with correlation coefficients from 0.95 to 1 and sufficient amplification efficiencies from 90% to 110%. The standard curves for all four genotypes were unchanged even in the presence of heterologous DNA. In this paper, we describe how to establish and perform the genotype-specific real-time PCR and offer practical experimental tips.
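The calibration-curve check described above (linearity of Ct versus log concentration, and efficiency in the 90-110% band) can be reproduced with a minimal sketch; the dilution series below is hypothetical:

```python
import math

def qpcr_efficiency(log10_conc, ct):
    """Fit Ct = slope * log10(conc) + intercept by least squares and derive
    the amplification efficiency E = 10**(-1/slope) - 1, where E = 1.0
    corresponds to 100% efficiency (perfect doubling each cycle)."""
    n = len(ct)
    mx = sum(log10_conc) / n
    my = sum(ct) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_conc, ct))
    sxx = sum((x - mx) ** 2 for x in log10_conc)
    slope = sxy / sxx
    return 10.0 ** (-1.0 / slope) - 1.0

# Hypothetical 10-fold dilution series with perfect doubling per cycle:
concs = [0, 1, 2, 3, 4, 5, 6, 7]               # log10 relative concentration
cts = [35 - c * math.log2(10) for c in concs]  # Ct drops ~3.32 per decade
eff = qpcr_efficiency(concs, cts)              # -> 1.0 (100% efficiency)
```

A slope of about -3.32 cycles per decade corresponds to 100% efficiency; the 90-110% band quoted in the abstract corresponds to slopes of roughly -3.6 to -3.1.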

  18. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
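The two-measurement subtraction at the heart of the method can be sketched with synthetic phase maps (illustrative arrays only; the interferogram acquisition and phase-extraction steps are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
phi_aux = rng.normal(0.0, 0.05, shape)      # auxiliary-optic phase error
phi_flat = 0.02 * np.tile(np.linspace(-1.0, 1.0, 64), (64, 1))  # "true" flat error

phase_1 = phi_flat + phi_aux   # flat in the beam path: both errors combined
phase_2 = phi_aux              # flat removed, fibers recombined: auxiliary only
recovered = phase_1 - phase_2  # absolute flat error, auxiliary optic cancelled
```

The auxiliary-optic contribution cancels identically in the subtraction, which is exactly why the second, flat-free measurement makes the calibration absolute.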

  19. Error correction in short time steps during the application of quantum gates

    SciTech Connect

    Castro, L.A. de; Napolitano, R.D.J.

    2016-04-15

    We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to interaction with a noisy environment during quantum gates, without modifying the encoding used for memory qubits. Using a perturbative treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps intercalated with correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even in cases when the division of the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.

  20. Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.

  1. Separable responses to error, ambiguity, and reaction time in cingulo-opercular task control regions.

    PubMed

    Neta, Maital; Schlaggar, Bradley L; Petersen, Steven E

    2014-10-01

    The dorsal anterior cingulate (dACC), along with the closely affiliated anterior insula/frontal operculum, have been demonstrated to show three types of task control signals across a wide variety of tasks. One of these signals, a transient signal that is thought to represent performance feedback, shows greater activity to error than correct trials. Other work has found similar effects for uncertainty/ambiguity or conflict, though some argue that dACC activity is, instead, modulated primarily by other processes more reflected in reaction time. Here, we demonstrate that, rather than a single explanation, multiple information processing operations are crucial to characterizing the function of these brain regions, by comparing operations within a single paradigm. Participants performed two tasks in an fMRI experimental session: (1) deciding whether or not visually presented word pairs rhyme, and (2) rating auditorily presented single words as abstract or concrete. A pilot was used to identify ambiguous stimuli for both tasks (e.g., word pair: BASS/GRACE; single word: CHANGE). We found greater cingulo-opercular activity for errors and ambiguous trials than clear/correct trials, with a robust effect of reaction time. The effects of error and ambiguity remained when reaction time was regressed out, although the differences decreased. Further stepwise regression of response consensus (agreement across participants for each stimulus; a proxy for ambiguity) decreased differences between ambiguous and clear trials, but left error-related differences almost completely intact. These observations suggest that trial-wise responses in cingulo-opercular regions monitor multiple performance indices, including accuracy, ambiguity, and reaction time.

  2. SEPARABLE RESPONSES TO ERROR, AMBIGUITY, AND REACTION TIME IN CINGULO-OPERCULAR TASK CONTROL REGIONS

    PubMed Central

    Neta, Maital; Schlaggar, Bradley L.; Petersen, Steven E.

    2014-01-01

    The dorsal anterior cingulate (dACC), along with the closely affiliated anterior insula/frontal operculum, have been demonstrated to show three types of task control signals across a wide variety of tasks. One of these signals, a transient signal that is thought to represent performance feedback, shows greater activity to error than correct trials. Other work has found similar effects for uncertainty/ambiguity or conflict, though some argue that dACC activity is, instead, modulated primarily by other processes more reflected in reaction time. Here, we demonstrate that, rather than a single explanation, multiple information processing operations are crucial to characterizing the function of these brain regions, by comparing operations within a single paradigm. Participants performed two tasks in an fMRI experimental session: (1) deciding whether or not visually presented word pairs rhyme, and (2) rating auditorily presented single words as abstract or concrete. A pilot was used to identify ambiguous stimuli for both tasks (e.g., word pair: BASS/GRACE; single word: CHANGE). We found greater cingulo-opercular activity for errors and ambiguous trials than clear/correct trials, with a robust effect of reaction time. The effects of error and ambiguity remained when reaction time was regressed out, although the differences decreased. Further stepwise regression of response consensus (agreement across participants for each stimulus; a proxy for ambiguity) decreased differences between ambiguous and clear trials, but left error-related differences almost completely intact. These observations suggest that trial-wise responses in cingulo-opercular regions monitor multiple performance indices, including accuracy, ambiguity, and reaction time. PMID:24887509

  3. Error Analysis of the IGS repro2 Station Position Time Series

    NASA Astrophysics Data System (ADS)

    Rebischung, P.; Ray, J.; Benoist, C.; Metivier, L.; Altamimi, Z.

    2015-12-01

    Eight Analysis Centers (ACs) of the International GNSS Service (IGS) have completed a second reanalysis campaign (repro2) of the GNSS data collected by the IGS global tracking network back to 1994, using the latest available models and methodology. The AC repro2 contributions include in particular daily terrestrial frame solutions, for the first time with sub-weekly resolution over the full IGS history. The AC solutions, comprising positions for 1848 stations with daily polar motion coordinates, were combined to form the IGS contribution to the next release of the International Terrestrial Reference Frame (ITRF2014). Inter-AC position consistency is excellent, about 1.5 mm horizontal and 4 mm vertical. The resulting daily combined frames were then stacked into a long-term cumulative frame assuming generally linear motions, which constitutes the GNSS input to the ITRF2014 inter-technique combination. A special challenge involved identifying the many position discontinuities, averaging about 1.8 per station. A stacked periodogram of the station position residual time series from this long-term solution reveals a number of unexpected spectral lines (harmonics of the GPS draconitic year, fortnightly tidal lines) on top of a white+flicker background noise and strong seasonal variations. In this study, we will present results from station- and AC-specific analyses of the noise and periodic errors present in the IGS repro2 station position time series. So as to better understand their sources, and with a view to developing a spatio-temporal error model, we will focus in particular on the spatial distribution of the noise characteristics and of the periodic errors. By computing AC-specific long-term frames and analyzing the respective residual time series, we will additionally study how the characteristics of the noise and of the periodic errors depend on the adopted analysis strategy and reduction software.

  4. A model for the time-order error in contrast discrimination.

    PubMed

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A.

    2011-06-01

    Trials in a temporal two-interval forced-choice discrimination experiment consist of two sequential intervals presenting stimuli that differ from one another as to magnitude along some continuum. The observer must report in which interval the stimulus had a larger magnitude. The standard difference model from signal detection theory analyses posits that order of presentation should not affect the results of the comparison, a property known as the balance condition (J.-C. Falmagne, 1985, in Elements of Psychophysical Theory). But empirical data prove otherwise and consistently reveal what Fechner (1860/1966, in Elements of Psychophysics) called time-order errors, whereby the magnitude of the stimulus presented in one of the intervals is systematically underestimated relative to the other. Here we discuss sensory factors (temporary desensitization) and procedural glitches (short interstimulus or intertrial intervals and response bias) that might explain the time-order error, and we derive a formal model indicating how these factors make observed performance vary with presentation order despite a single underlying mechanism. Experimental results are also presented illustrating the conventional failure of the balance condition and testing the hypothesis that time-order errors result from contamination by the factors included in the model.

  5. Absolute x-ray and neutron calibration of CVD-diamond-based time-of-flight detectors for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Rosenthal, A.; Kabadi, N. V.; Sio, H.; Rinderknecht, H.; Gatu Johnson, M.; Frenje, J. A.; Seguin, F. H.; Petrasso, R. D.; Glebov, V.; Forrest, C.; Knauer, J.

    2016-10-01

    The particle-time-of-flight (pTOF) detector at the National Ignition Facility routinely measures proton and neutron nuclear bang-times in inertial confinement fusion (ICF) implosions. The active detector medium in pTOF is a chemical vapor deposition (CVD) diamond biased to 250 - 1500 V. This work discusses an absolute measurement of CVD diamond sensitivity to continuous neutrons and x-rays. Although the impulse response of the detector is regularly measured on a diagnostic timing shot, absolute sensitivity of the detector's response to neutrons and x-rays has not been fully established. X-ray, DD-n, and DT-n sources at the MIT HEDP Accelerator Facility provide continuous sources for testing. CVD diamond detectors are also fielded on OMEGA experiments to measure sensitivity to impulse DT-n. Implications for absolute neutron yield measurements at the NIF using pTOF detectors will be discussed. This work was supported in part by the U.S. DoE and LLNL.

  6. PULSAR TIMING ERRORS FROM ASYNCHRONOUS MULTI-FREQUENCY SAMPLING OF DISPERSION MEASURE VARIATIONS

    SciTech Connect

    Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Dolch, T.

    2015-03-10

    Free electrons in the interstellar medium cause frequency-dependent delays in pulse arrival times due to both scattering and dispersion. Multi-frequency measurements are used to estimate and remove dispersion delays. In this paper, we focus on the effect of any non-simultaneity of multi-frequency observations on dispersive delay estimation and removal. Interstellar density variations combined with changes in the line of sight from pulsar and observer motions cause dispersion measure (DM) variations with an approximately power-law power spectrum, augmented in some cases by linear trends. We simulate time series, estimate the magnitude and statistical properties of timing errors that result from non-simultaneous observations, and derive prescriptions for data acquisition that are needed in order to achieve a specified timing precision. For nearby, highly stable pulsars, measurements need to be simultaneous to within about one day in order for the timing error from asynchronous DM correction to be less than about 10 ns. We discuss how timing precision improves when increasing the number of dual-frequency observations used in DM estimation for a given epoch. For a Kolmogorov wavenumber spectrum, we find about a factor of two improvement in timing precision when increasing from two to three observations, but diminishing returns thereafter.
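The dispersion removal that the paper builds on can be sketched for two truly simultaneous observing frequencies using the standard cold-plasma delay t = t_inf + K·DM/f²; the pulsar parameters below are hypothetical:

```python
K = 4.148808e3  # dispersion constant: delay in s for DM in pc cm^-3, f in MHz

def infer_dm(t1_s, t2_s, f1_mhz, f2_mhz):
    """Infer the dispersion measure from arrival times of the same pulse at
    two frequencies: t = t_inf + K * DM / f**2, so the delay difference
    between the two bands isolates DM."""
    return (t1_s - t2_s) / (K * (f1_mhz ** -2 - f2_mhz ** -2))

# Round trip with a hypothetical DM of 30 pc cm^-3 at 430 and 1400 MHz:
dm_true = 30.0
t1 = K * dm_true / 430.0 ** 2     # ~0.67 s extra delay at the low band
t2 = K * dm_true / 1400.0 ** 2
dm_est = infer_dm(t1, t2, 430.0, 1400.0)
```

The paper's point is that when t1 and t2 are measured on different days, the DM itself has drifted between the epochs, so the inferred DM (and hence the dispersion correction) carries a stochastic error.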

  7. Real-Time Baseline Error Estimation and Correction for GNSS/Strong Motion Seismometer Integration

    NASA Astrophysics Data System (ADS)

    Li, C. Y. N.; Groves, P. D.; Ziebart, M. K.

    2014-12-01

    Accurate and rapid estimation of permanent surface displacement is required immediately after a slip event for earthquake monitoring or tsunami early warning. It is difficult to achieve the necessary accuracy and precision at high and low frequencies using GNSS or seismometry alone. GNSS and seismic sensors can be integrated to overcome the limitations of each. Kalman filter algorithms with displacement and velocity states have been developed to combine GNSS and accelerometer observations to obtain optimal displacement solutions. However, the sawtooth-like phenomena caused by the bias or tilting of the sensor decrease the accuracy of the displacement estimates. A three-dimensional Kalman filter algorithm with an additional baseline error state has been developed. An experiment with both a GNSS receiver and a strong motion seismometer mounted on a movable platform and subjected to known displacements was carried out. The results clearly show that the additional baseline error state enables the Kalman filter to estimate the instrument's sensor bias and tilt effects and correct the state estimates in real time. Furthermore, the proposed Kalman filter algorithm has been validated with data sets from the 2010 Mw 7.2 El Mayor-Cucapah Earthquake. The results indicate that the additional baseline error state can not only eliminate the linear and quadratic drifts but also reduce the sawtooth-like effects from the displacement solutions. The conventional zero-mean baseline-corrected results cannot show the permanent displacements after an earthquake; the two-state Kalman filter can only provide stable and optimal solutions if the strong motion seismometer has not been moved or tilted by the earthquake. Yet the proposed Kalman filter can achieve precise and accurate displacements by estimating and correcting for the baseline error at each epoch. The integration filters out noise-like distortions and thus improves the real-time detection and measurement capability.
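A minimal sketch of a Kalman filter with an additional baseline-error state follows. This is a simplified one-dimensional stand-in for the paper's three-dimensional filter; all noise parameters and the synthetic motion are hypothetical:

```python
import numpy as np

def fuse_gnss_accel(accel, gnss, dt, r_gnss=0.01, q_acc=0.1, q_bias=1e-6):
    """Three-state Kalman filter (displacement, velocity, baseline/bias
    error) fusing accelerometer propagation with GNSS displacement updates.
    The measured acceleration is modeled as true acceleration plus the
    slowly varying baseline error, which the filter estimates and removes."""
    x = np.zeros(3)                        # [displacement, velocity, bias]
    P = np.eye(3)
    F = np.array([[1.0, dt, -0.5 * dt**2],
                  [0.0, 1.0, -dt],
                  [0.0, 0.0, 1.0]])        # bias is subtracted from raw accel
    B = np.array([0.5 * dt**2, dt, 0.0])
    Q = np.diag([0.0, q_acc * dt, q_bias * dt])
    H = np.array([[1.0, 0.0, 0.0]])
    disp = []
    for a, z in zip(accel, gnss):
        x = F @ x + B * a                  # propagate with raw acceleration
        P = F @ P @ F.T + Q
        innov = z - H @ x                  # GNSS displacement innovation
        S = H @ P @ H.T + r_gnss
        K = P @ H.T / S
        x = x + (K * innov).ravel()
        P = (np.eye(3) - K @ H) @ P
        disp.append(x[0])
    return np.array(disp), x[2]

# Synthetic test: steady 0.1 m/s motion, accelerometer with 0.05 m/s^2 bias.
rng = np.random.default_rng(2)
dt, n = 0.01, 2000
true_disp = 0.1 * dt * np.arange(n)
accel = 0.05 + rng.normal(0.0, 0.02, n)    # true acceleration is zero
gnss = true_disp + rng.normal(0.0, 0.05, n)
disp, bias_est = fuse_gnss_accel(accel, gnss, dt)
```

Without the bias state, the double-integrated accelerometer drift would grow quadratically; estimating it at each epoch is what removes the linear and quadratic drifts described in the abstract.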

  8. Errors in visuo-haptic and haptic-haptic location matching are stable over long periods of time.

    PubMed

    Kuling, Irene A; Brenner, Eli; Smeets, Jeroen B J

    2016-05-01

    People make systematic errors when they move their unseen dominant hand to a visual target (visuo-haptic matching) or to their other unseen hand (haptic-haptic matching). Why they make such errors is still unknown. A key question in determining the reason is to what extent individual participants' errors are stable over time. To examine this, we developed a method to quantify the consistency. With this method, we studied the stability of systematic matching errors across time intervals of at least a month. Within this time period, individual subjects' matches were as consistent as one could expect on the basis of the variability in the individual participants' performance within each session. Thus individual participants make quite different systematic errors, but in similar circumstances they make the same errors across long periods of time.

  9. Timing will Tell: Constraining Pulsar Timing Errors in the Search for Gravitational Waves

    NASA Astrophysics Data System (ADS)

    Schwab, Ellianna; Ransom, Scott M.; NANOGrav

    2017-01-01

    Millisecond pulsars produce extremely precise, clock-like electromagnetic radiation pulses. Theoretically, noise in the arrival times (TOAs) of these individual pulses could be used to measure nanohertz-frequency gravitational waves. However, variability in the individual pulse shapes and TOAs due to intrinsic effects of the pulsar, known as pulsar jitter, can mask the noise caused by gravitational waves. We examine the effects of both brightness and time resolution on jitter in a sample of 10 millisecond pulsars observed by the NANOGrav collaboration regularly over an 11-year observation span. We find that several pulsars show quantifiable jitter on their brightest days while others do not, and that jitter grows more pronounced both in pulsars with a high signal-to-noise ratio and as the observer approaches the time resolution of a single millisecond pulse. We provide two methods of quantifying jitter to allow for comparison, both between observations of different pulsars and between observations of the same pulsar on different days.

  10. Impact of a time-dependent background error covariance matrix on air quality analysis

    NASA Astrophysics Data System (ADS)

    Jaumouillé, E.; Massart, S.; Piacentini, A.; Cariolle, D.; Peuch, V.-H.

    2012-09-01

    In this article we study the influence of different characteristics of our assimilation system on surface ozone analyses over Europe. Emphasis is placed on the evaluation of the background error covariance matrix (BECM). Data assimilation systems require a BECM in order to obtain an optimal representation of the physical state. A posteriori diagnostics are an efficient way to check the consistency of the used BECM. In this study we derived a diagnostic to estimate the BECM. On the other hand, an increasingly used approach to obtain such a covariance matrix is to estimate it from an ensemble of perturbed assimilation experiments. We applied this method, combined with variational assimilation, while analysing the surface ozone distribution over Europe. We first show that the resulting covariance matrix is strongly time (hourly and seasonally) and space dependent. We then built several configurations of the background error covariance matrix with none, one or two of its components derived from the ensemble estimation. We used each of these configurations to produce surface ozone analyses. All the analyses are compared between themselves and compared to assimilated data or data from independent validation stations. The configurations are very well correlated with the validation stations, but with varying regional and seasonal characteristics. The largest correlation is obtained with the experiments using time- and space-dependent correlation of the background errors. Results show that our assimilation process is efficient in bringing the model assimilations closer to the observations than the direct simulation, but we cannot conclude which BECM configuration is the best. The impact of the background error covariances configuration on four-day forecasts is also studied. Although mostly positive, the impact depends on the season and lasts longer during the winter season.

  11. Impact of a time-dependent background error covariance matrix on air quality analysis

    NASA Astrophysics Data System (ADS)

    Jaumouillé, E.; Massart, S.; Piacentini, A.; Cariolle, D.; Peuch, V.-H.

    2012-04-01

    In this article we study the influence of different characteristics of our assimilation system on the surface ozone analyses over Europe. Emphasis is placed on the evaluation of the background error covariance matrix (BECM). Data assimilation systems require a BECM in order to obtain an optimal representation of the physical state. A posteriori diagnostics are an efficient way to check the consistency of the used BECM. In this study we derived a diagnostic to estimate the BECM. On the other hand, an increasingly used approach to obtain such a covariance matrix is to estimate it from an ensemble of perturbed assimilation experiments. We applied this method, combined with variational assimilation, while analysing the surface ozone distribution over Europe. We first show that the resulting covariance matrix is strongly time (hourly and seasonally) and space dependent. We then built several configurations of the background error covariance matrix with none, one or two of its components derived from the ensemble estimation. We used each of these configurations to produce surface ozone analyses. All the analyses are compared between themselves and compared to assimilated data or data from independent validation stations. The configurations are very well correlated with the validation stations, but with varying regional and seasonal characteristics. The largest correlation is obtained with the experiments using time- and space-dependent correlation of the background errors. Results show that our assimilation process is efficient in bringing the model assimilations closer to the observations than the direct simulation, but we cannot conclude which BECM configuration is the best. The impact of the background error covariances configuration on four-day forecasts is also studied. Although mostly positive, the impact depends on the season and lasts longer during the winter season.

  12. Accounting for baseline differences and measurement error in the analysis of change over time.

    PubMed

    Braun, Julia; Held, Leonhard; Ledergerber, Bruno

    2014-01-15

    If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether coinfection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy.

  13. Adaptive correction method for an OCXO and investigation of analytical cumulative time error upper bound.

    PubMed

    Zhou, Hui; Kunz, Thomas; Schwartz, Howard

    2011-01-01

    Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit less accurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop that creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm performs significantly better than uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically, and the analytical and simulated upper bounds are compared. The results show that the analytical upper bound can serve as a practical guide for system designers.
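
    The recursive prediction error idea used for identifying the oscillator model can be illustrated with a scalar case: the sketch below fits a hypothetical linear frequency-drift model y(t) = a + b·t, one sample at a time, with a recursive least-squares update (the drift values and noise level are made up, not from the paper):

```python
import random

# Sketch of a recursive prediction-error (RLS) identification loop,
# fitting a hypothetical oscillator frequency-drift model
#   y(t) = a + b*t + noise
# one sample at a time. The true (a, b) below are illustrative.

random.seed(1)
a_true, b_true = 2.0e-8, 5.0e-10        # fractional frequency offset and drift
theta = [0.0, 0.0]                      # current estimates of (a, b)
P = [[1e6, 0.0], [0.0, 1e6]]            # large initial covariance

for t in range(200):
    phi = [1.0, float(t)]               # regressor
    y = a_true + b_true * t + random.gauss(0.0, 1e-10)
    # prediction error of the current model
    e = y - (theta[0] * phi[0] + theta[1] * phi[1])
    # gain K = P phi / (1 + phi^T P phi)
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    # update estimates and covariance: P <- P - K (P phi)^T
    theta = [theta[0] + K[0] * e, theta[1] + K[1] * e]
    P = [[P[i][j] - K[i] * Pphi[j] for j in range(2)] for i in range(2)]
# theta now holds estimates of the offset and drift used for correction.
```

    In the paper's setting, the identified parameters would drive the correction signal during holdover; here they simply converge to the assumed (a, b).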

  14. Dynamic time warping in phoneme modeling for fast pronunciation error detection.

    PubMed

    Miodonska, Zuzanna; Bugdol, Marcin D; Krecichwost, Michal

    2016-02-01

    This paper describes a novel approach to the detection of pronunciation errors, based on modeling well-pronounced and mispronounced phonemes with the Dynamic Time Warping (DTW) algorithm. Four approaches that make use of the DTW phoneme modeling were developed to detect pronunciation errors: Variations of the Word Structure (VoWS), Normalized Phoneme Distances Thresholding (NPDT), Furthest Segment Search (FSS) and Normalized Furthest Segment Search (NFSS). The performance evaluation of each module was carried out using a speech database of correctly and incorrectly pronounced words in the Polish language, with up to 10 patterns of every trained word from a set of 12 words having different phonetic structures. The performance of DTW modeling was compared to Hidden Markov Models (HMM) that were used for the same four approaches (VoWS, NPDT, FSS, NFSS). The average error rate (AER) was the lowest for DTW with NPDT (AER=0.287), which scored better than HMM with FSS (AER=0.473), the best result for HMM. The DTW modeling was faster than HMM for all four approaches. This technique can be used for computer-assisted pronunciation training systems that can work with a relatively small training speech corpus (less than 20 patterns per word) to support speech therapy at home.
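
    The DTW distance at the core of the four approaches is the classic dynamic-programming recursion; a minimal sketch, with made-up 1-D "feature" sequences standing in for real phoneme acoustics:

```python
# Minimal dynamic time warping (DTW) distance between two feature
# sequences, as used to compare a test phoneme against stored patterns.
# The sequences below are illustrative 1-D "features", not real MFCCs.

def dtw_distance(s, t):
    """Classic DP recursion: D[i][j] = cost(i,j) + min of three neighbors."""
    n, m = len(s), len(t)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

well_pronounced = [0.0, 1.0, 2.0, 1.0, 0.0]
test_utterance = [0.0, 1.0, 1.0, 2.0, 1.0, 0.0]   # same shape, stretched
d = dtw_distance(well_pronounced, test_utterance)  # 0.0: only time-warped
```

    A mispronunciation would instead show up as a nonzero distance to the well-pronounced pattern, which the thresholding approaches (e.g. NPDT) can then flag.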

  15. Sieve Estimation of Constant and Time-Varying Coefficients in Nonlinear Ordinary Differential Equation Models by Considering Both Numerical Error and Measurement Error.

    PubMed

    Xue, Hongqi; Miao, Hongyu; Wu, Hulin

    2010-01-01

    This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^(-1/(p∧4)), the numerical error is negligible compared to the measurement error. This result provides theoretical guidance for selecting the step size in numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as in the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics.
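
    The role of the p-order solver's step size can be seen in a tiny experiment: for a 4th-order Runge-Kutta scheme, halving the step should cut the numerical error by roughly 2^4 = 16. The test ODE below (dy/dt = -y) is illustrative, not from the paper:

```python
import math

# Sketch: numerical error of a p = 4 Runge-Kutta solver vs step size,
# illustrating why the step size controls whether numerical error is
# negligible next to measurement error. Test problem: dy/dt = -y, y(0) = 1.

def rk4_solve(f, y0, t_end, h):
    """Fixed-step classical RK4 integration from t = 0 to t_end."""
    y, t = y0, 0.0
    for _ in range(round(t_end / h)):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)                       # y(1) for dy/dt = -y
err_coarse = abs(rk4_solve(f, 1.0, 1.0, 0.1) - exact)
err_fine = abs(rk4_solve(f, 1.0, 1.0, 0.05) - exact)
# Global error is O(h^4), so this ratio should land near 2**4 = 16.
ratio = err_coarse / err_fine
```

    In the paper's setting, one shrinks h until this O(h^4) numerical error falls well below the measurement error, which is exactly the n^(-1/(p∧4)) rate condition.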


  17. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-02-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimation (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, where f is frequency. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0, since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
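
    The "data filter" view of power-law noise can be illustrated with the standard fractional-difference impulse response (a Hosking-style recursion): white noise convolved with h[0] = 1, h[k] = h[k-1]·(k - 1 + α/2)/k has a 1/f^α spectrum. The α values and series length below are illustrative:

```python
import random

# Sketch: generate power-law (1/f^alpha) noise by filtering white noise
# with the fractional-difference impulse response
#   h[0] = 1,  h[k] = h[k-1] * (k - 1 + alpha/2) / k
# Sanity check built into the recursion: for alpha = 2 every coefficient
# equals 1, so the filter reduces to a cumulative sum and the output is
# a random walk, as expected for index-2 power-law noise.

def powerlaw_filter(alpha, n):
    h = [1.0]
    for k in range(1, n):
        h.append(h[-1] * (k - 1 + alpha / 2.0) / k)
    return h

def powerlaw_noise(alpha, white):
    """Causal convolution of white noise with the power-law filter."""
    n = len(white)
    h = powerlaw_filter(alpha, n)
    return [sum(h[k] * white[i - k] for k in range(i + 1)) for i in range(n)]

random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(256)]
flicker = powerlaw_noise(1.0, w)       # alpha = 1: flicker noise
walk = powerlaw_noise(2.0, w)          # alpha = 2: random-walk noise
```

    Adding a filtered power-law series to a white series (rather than combining their covariances in quadrature) is the move that keeps the data covariance inversion simple in the approach described above.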

  18. Assessment of Systematic Measurement Errors for Acoustic Travel-Time Tomography of the Atmosphere

    DTIC Science & Technology

    2013-01-01

    (Fragmentary DTIC excerpt; figure captions and references garbled in extraction.) "...times obtained with Algorithm 3, the reconstructions become relatively accurate, see Figs. 6(g), 6(h), and 6(i). The magnitudes of all fields are..." Figure panels compare Temperature, u0 + u, and v0 + v against the reconstruction with the systematic errors estimated by Algorithm 3: (g) Temperature, (h) u0 + u, (i) v0 + v. Reference fragments: "...tomographic monitoring of the atmospheric surface layer," J. Atmos. Ocean. Technol. 11, 751-769 (1994); A. Ziemann, K. Arnold, and A. Raabe, "Acoustic..."

  19. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    SciTech Connect

    Kertzscher, Gustavo Andersen, Claus E.; Tanderup, Kari

    2014-05-15

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was

  20. Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors

    NASA Astrophysics Data System (ADS)

    Yan, Feifei; Chang, Wenge; Li, Xiangyang

    2015-12-01

    Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is the key technique of bistatic SAR (BiSAR) system, and raw data simulation is an effective tool for verifying the time and frequency synchronization techniques. According to the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in 2-D frequency domain and phase compensation in range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results of point targets and extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.

  1. Representation of layer-counted proxy records as probability densities on error-free time axes

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2016-04-01

    Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. 
On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the
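
    The accumulation of layer-counting errors described above behaves like a random walk in age: if each counted layer carries a small, independent miscount probability, the age uncertainty grows with depth roughly as the square root of the number of layers. A simulation sketch (the miscount probabilities are illustrative, not NGRIP values):

```python
import random
import statistics

# Sketch: accumulation of layer-counting errors with depth. Each layer
# is independently miscounted (missed or doubled) with a small,
# illustrative probability, so the age error performs a random walk and
# its spread grows with the number of layers counted.

def age_error(n_layers, p_miss=0.01, p_double=0.01):
    err = 0
    for _ in range(n_layers):
        u = random.random()
        if u < p_miss:
            err -= 1          # a real layer was not counted
        elif u < p_miss + p_double:
            err += 1          # one layer was counted twice
    return err

random.seed(42)
spread_short = statistics.stdev([age_error(1_000) for _ in range(500)])
spread_long = statistics.stdev([age_error(10_000) for _ in range(500)])
# Ten times as many layers -> spread grows by about sqrt(10) ~ 3.2,
# which is why the oldest parts of a layer-counted record are the
# hardest to align against other chronologies.
```

    This sqrt-of-depth growth is exactly the widening of the proxy probability densities that the abstract reports for the older parts of the NGRIP record.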

  2. Absolute-structure reports.

    PubMed

    Flack, Howard D

    2013-08-01

    All the 139 noncentrosymmetric crystal structures published in Acta Crystallographica Section C between January 2011 and November 2012 inclusive have been used as the basis of a detailed study of the reporting of absolute structure. These structure determinations cover a wide range of space groups, chemical composition and resonant-scattering contribution. Defining A and D as the average and difference of the intensities of Friedel opposites, their level of fit has been examined using 2AD and selected-D plots. It was found, regardless of the expected resonant-scattering contribution to Friedel opposites, that the Friedel-difference intensities are often dominated by random uncertainty and systematic error. An analysis of data collection strategy is provided. It is found that crystal-structure determinations resulting in a Flack parameter close to 0.5 may not necessarily be from crystals twinned by inversion. Friedifstat is shown to be a robust estimator of the resonant-scattering contribution to Friedel opposites, affected very little either by the particular space group of a structure or by the occupation of special positions. There is considerable confusion in the text of papers presenting achiral noncentrosymmetric crystal structures. Recommendations are provided for the optimal way of treating noncentrosymmetric crystal structures for which the experimenter has no interest in determining the absolute structure.

  3. Post-event human decision errors: operator action tree/time reliability correlation

    SciTech Connect

    Hall, R E; Fragola, J; Wreathall, J

    1982-11-01

    This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations.

  4. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    PubMed

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier.

  5. The impact of a closed‐loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before‐and‐after study

    PubMed Central

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-01-01

    Objectives To assess the impact of a closed‐loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before‐and‐after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed‐loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre‐intervention and 2.0% of 2353 orders afterwards (p<0.001; χ2 test). MAEs occurred in 7.0% of 1473 non‐intravenous doses pre‐intervention and 4.3% of 1139 afterwards (p = 0.005; χ2 test). Patient identity was not checked for 82.6% of 1344 doses pre‐intervention and 18.9% of 1291 afterwards (p<0.001; χ2 test). Medical staff required 15 s to prescribe a regular inpatient drug pre‐intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre‐intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed‐loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before
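
    The χ² comparisons reported above can be reproduced in outline. The sketch below applies the standard 2×2 chi-squared statistic to counts reconstructed approximately from the reported prescribing-error percentages (3.8% of 2450 orders before, 2.0% of 2353 after); the rounding to whole counts is ours:

```python
# Sketch: 2x2 chi-squared test of the kind used above to compare error
# rates before and after the intervention. Counts are reconstructed
# approximately from the reported percentages, so treat them as
# illustrative rather than the study's exact data.

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

errors_before, orders_before = 93, 2450    # ~3.8% of orders
errors_after, orders_after = 47, 2353      # ~2.0% of orders
stat = chi2_2x2(errors_before, orders_before - errors_before,
                errors_after, orders_after - errors_after)
# 10.83 is the 1-df critical value at p = 0.001, so stat > 10.83 is
# consistent with the reported p < 0.001 for prescribing errors.
```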

  6. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  7. Statistical analysis of error rate of large-scale single flux quantum logic circuit by considering fluctuation of timing parameters

    NASA Astrophysics Data System (ADS)

    Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki

    2016-11-01

    The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that the fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of the set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
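
    The link between timing margin and error rate can be sketched with a Gaussian model of the set-up/hold-time fluctuation: the per-operation error probability is the tail probability that the jitter exceeds the margin. The jitter sigma below is an assumed illustrative value, not a measurement from the paper:

```python
import math

# Sketch: logic error probability vs timing margin under a Gaussian
# model of set-up/hold-time fluctuation caused by thermal noise:
#   P(error) = P(jitter > margin) = 0.5 * erfc(margin / (sigma * sqrt(2)))
# The jitter sigma is an illustrative assumption.

def error_probability(margin_ps, sigma_ps):
    return 0.5 * math.erfc(margin_ps / (sigma_ps * math.sqrt(2.0)))

sigma = 1.0                                  # ps, assumed timing jitter
rates = {m: error_probability(m, sigma) for m in (0.0, 2.0, 4.0, 6.0)}
# With zero margin half of all transitions fail; each added sigma of
# margin suppresses the rate by orders of magnitude, which is why the
# margin must be sized for the whole 1-million-bit shift register.
```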

  8. Static Analysis of Run-Time Errors in Embedded Critical Parallel C Programs

    NASA Astrophysics Data System (ADS)

    Miné, Antoine

    We present a static analysis by Abstract Interpretation to check for run-time errors in parallel C programs. Following our work on Astrée, we focus on embedded critical programs without recursion nor dynamic memory allocation, but extend the analysis to a static set of threads. Our method iterates a slightly modified non-parallel analysis over each thread in turn, until thread interferences stabilize. We prove the soundness of the method with respect to a sequential consistent semantics and a reasonable weakly consistent memory semantics. We then show how to take into account mutual exclusion and thread priorities through partitioning over the scheduler state. We present preliminary experimental results analyzing a real program with our prototype, Thésée, and demonstrate the scalability of our approach.

  9. Verdict: Time-Dependent Density Functional Theory "Not Guilty" of Large Errors for Cyanines.

    PubMed

    Jacquemin, Denis; Zhao, Yan; Valero, Rosendo; Adamo, Carlo; Ciofini, Ilaria; Truhlar, Donald G

    2012-04-10

    We assess the accuracy of eight Minnesota density functionals (M05 through M08-SO) and two others (PBE and PBE0) for the prediction of electronic excitation energies of a family of four cyanine dyes. We find that time-dependent density functional theory (TDDFT) with the five most recent of these functionals (from M06-HF through M08-SO) is able to predict excitation energies for cyanine dyes within 0.10-0.36 eV accuracy with respect to the most accurate available Quantum Monte Carlo calculations, providing a comparable accuracy to the latest generation of CASPT2 calculations, which have errors of 0.16-0.34 eV. Therefore previous conclusions that TDDFT cannot treat cyanine dyes reasonably accurately must be revised.

  10. Time-dependent neural processing of auditory feedback during voice pitch error detection.

    PubMed

    Behroozmand, Roozbeh; Liu, Hanjun; Larson, Charles R

    2011-05-01

    The neural responses to sensory consequences of a self-produced motor act are suppressed compared with those in response to a similar but externally generated stimulus. Previous studies in the somatosensory and auditory systems have shown that the motor-induced suppression of the sensory mechanisms is sensitive to delays between the motor act and the onset of the stimulus. The present study investigated time-dependent neural processing of auditory feedback in response to self-produced vocalizations. ERPs were recorded in response to normal and pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the same vocalizations. The pitch-shifted stimulus was delivered to the subjects' auditory feedback after a randomly chosen time delay between the vocal onset and the stimulus presentation. Results showed that the neural responses to delayed feedback perturbations were significantly larger than those in response to the pitch-shifted stimulus occurring at vocal onset. Active vocalization was shown to enhance neural responsiveness to feedback alterations only for nonzero delays compared with passive listening to the playback. These findings indicated that the neural mechanisms of auditory feedback processing are sensitive to timing between the vocal motor commands and the incoming auditory feedback. Time-dependent neural processing of auditory feedback may be an important feature of the audio-vocal integration system that helps to improve the feedback-based monitoring and control of voice structure through vocal error detection and correction.

  11. Wind induced errors on solid precipitation measurements: an evaluation using time-dependent turbulence simulations

    NASA Astrophysics Data System (ADS)

    Colli, Matteo; Lanza, Luca Giovanni; Rasmussen, Roy; Mireille Thériault, Julie

    2014-05-01

    Among the different environmental sources of error for ground-based solid precipitation measurements, wind is chiefly responsible for a large reduction of the catching performance. This is due to the aerodynamic response of the gauge, which disturbs the originally undisturbed airflow and deforms the snowflake trajectories. Composite gauge/wind shield measuring configurations improve the collection efficiency (CE) at low wind speeds (Uw), but the performance achievable under severe airflow velocities and the role of turbulence still have to be explained. This work aims to assess the wind-induced errors of a Geonor T200B vibrating-wire gauge equipped with a single Alter shield, a common measuring system for solid precipitation that constitutes the R3 reference system in the ongoing WMO Solid Precipitation InterComparison Experiment (SPICE). The analysis is carried out by adopting advanced Computational Fluid Dynamics (CFD) tools for the numerical simulation of the turbulent airflow in the proximity of the catching section of the gauge. The airflow patterns were computed by running both time-dependent (Large Eddy Simulation) and time-independent (Reynolds-Averaged Navier-Stokes) simulations on the Yellowstone high-performance computing system of the National Center for Atmospheric Research. The evaluation of CE under different Uw conditions was obtained by running a Lagrangian model for the calculation of the snowflake trajectories, building on the simulated airflow patterns. Particular attention has been paid to the sensitivity of the trajectories to different snow particle sizes and water contents (corresponding to dry and wet snow). The results will be illustrated in comparative form between the different methodologies adopted and the existing in-field CE evaluations based on double-shield reference gauges.

  12. Characterization of Ambient Air Pollution Measurement Error in a Time-Series Health Study using a Geostatistical Simulation Approach.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Gass, Katherine; Strickland, Matthew J; Tolbert, Paige E

    2012-09-01

    In recent years, geostatistical modeling has been used to inform air pollution health studies. In this study, distributions of daily ambient concentrations were modeled over space and time for 12 air pollutants. Simulated pollutant fields were produced for a 6-year time period over the 20-county metropolitan Atlanta area using the Stanford Geostatistical Modeling Software (SGeMS). These simulations incorporate the temporal and spatial autocorrelation structure of ambient pollutants, as well as season and day-of-week temporal and spatial trends; these fields were considered to be the true ambient pollutant fields for the purposes of the simulations that followed. Simulated monitor data at the locations of actual monitors were then generated that contain error representative of instrument imprecision. From the simulated monitor data, four exposure metrics were calculated: central monitor and unweighted, population-weighted, and area-weighted averages. For each metric, the amount and type of error relative to the simulated pollutant fields are characterized and the impact of error on an epidemiologic time-series analysis is predicted. The amount of error, as indicated by a lack of spatial autocorrelation, is greater for primary pollutants than for secondary pollutants and is only moderately reduced by averaging across monitors; more error will result in less statistical power in the epidemiologic analysis. The type of error, as indicated by the correlations of error with the monitor data and with the true ambient concentration, varies with exposure metric, with error in the central monitor metric more of the classical type (i.e., independent of the true ambient concentration) and error in the spatial average metrics more of the Berkson type (i.e., independent of the monitor data). Error type will affect the bias in the health risk estimate, with bias toward the null and away from the null predicted depending on the exposure metric; population-weighting yielded the
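
    The classical/Berkson distinction drawn above can be sketched by simulation: classical error is independent of the true concentration, Berkson error is independent of the measured metric, and the correlation of the error with each quantity tells them apart. All concentration values below are illustrative:

```python
import random

# Sketch: classical vs Berkson measurement error, the distinction used
# above to predict bias direction. All concentrations are illustrative.
#   classical: measured = true + u,  u independent of the TRUE value
#   Berkson:   true = measured + u,  u independent of the MEASURED value

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(7)
n = 20000
# Classical: instrument noise is added to the true field.
true_c = [random.gauss(10.0, 2.0) for _ in range(n)]
u_c = [random.gauss(0.0, 1.0) for _ in range(n)]
meas_c = [t + u for t, u in zip(true_c, u_c)]
# Berkson: the true value scatters around the (e.g. area-averaged) metric.
meas_b = [random.gauss(10.0, 2.0) for _ in range(n)]
u_b = [random.gauss(0.0, 1.0) for _ in range(n)]
true_b = [m + u for m, u in zip(meas_b, u_b)]
# Classical error correlates with the measurement but not the truth;
# Berkson error correlates with the truth but not the measurement.
```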

  13. Sub-micron absolute distance measurements in sub-millisecond times with dual free-running femtosecond Er fiber-lasers.

    PubMed

    Liu, Tze-An; Newbury, Nathan R; Coddington, Ian

    2011-09-12

    We demonstrate a simplified dual-comb LIDAR setup for precision absolute ranging that can achieve a ranging precision of 2 μm in 140 μs acquisition time. With averaging, the precision drops below 1 μm at 0.8 ms and below 200 nm at 20 ms. The system can measure the distance to multiple targets with negligible dead zones and a ranging ambiguity of 1 meter. The system is much simpler than a previous coherent dual-comb LIDAR because the two combs are replaced by free-running, saturable-absorber-based femtosecond Er fiber lasers, rather than tightly phase-locked combs, with the entire time base provided by a single 10-digit frequency counter. Despite the simpler design, the system provides a factor of three improved performance over the previous coherent dual comb LIDAR system.
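
    The averaging behavior quoted above is consistent with white-noise-limited precision scaling as 1/√N. A minimal sketch, assuming purely white noise (which the abstract's quoted numbers roughly follow); the function name is illustrative:

    ```python
    import math

    def averaged_precision(single_shot_um: float, t_single: float, t_avg: float) -> float:
        """White-noise-limited ranging precision after averaging:
        sigma scales as 1/sqrt(N), with N = t_avg / t_single updates."""
        n = t_avg / t_single
        return single_shot_um / math.sqrt(n)

    # Single shot: 2 um in a 140 us acquisition (values from the abstract).
    print(averaged_precision(2.0, 140e-6, 0.8e-3))   # ~0.84 um: consistent with <1 um at 0.8 ms
    print(averaged_precision(2.0, 140e-6, 20e-3))    # ~0.17 um: consistent with <200 nm at 20 ms
    ```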

  14. Comprehensive panel of real-time TaqMan polymerase chain reaction assays for detection and absolute quantification of filoviruses, arenaviruses, and New World hantaviruses.

    PubMed

    Trombley, Adrienne R; Wachter, Leslie; Garrison, Jeffrey; Buckley-Beason, Valerie A; Jahrling, Jordan; Hensley, Lisa E; Schoepp, Randal J; Norwood, David A; Goba, Augustine; Fair, Joseph N; Kulesh, David A

    2010-05-01

    Viral hemorrhagic fever is caused by a diverse group of single-stranded, negative-sense or positive-sense RNA viruses belonging to the families Filoviridae (Ebola and Marburg), Arenaviridae (Lassa, Junin, Machupo, Sabia, and Guanarito), and Bunyaviridae (hantavirus). Disease characteristics in these families mark each with the potential to be used as a biological threat agent. Because other diseases have similar clinical symptoms, specific laboratory diagnostic tests are necessary to provide the differential diagnosis during outbreaks and for instituting acceptable quarantine procedures. We designed 48 TaqMan-based polymerase chain reaction (PCR) assays for specific and absolute quantitative detection of multiple hemorrhagic fever viruses. Forty-six assays were determined to be virus-specific, and two were designated as pan assays for Marburg virus. The limit of detection for the assays ranged from 10 to 0.001 plaque-forming units (PFU)/PCR. Although these real-time hemorrhagic fever virus assays are qualitative (presence of target), they are also quantitative: they measure a single DNA/RNA target sequence in an unknown sample and express the final result as an absolute value (e.g., viral load, PFU, or copies/mL) on the basis of the concentration of standard samples, and can be used in viral load, vaccine, and antiviral drug studies.
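
    Absolute quantification of this kind rests on a standard curve built from a dilution series of known standards. A hypothetical sketch, not the authors' pipeline: function names and example Ct values are illustrative, assuming ideal 100% PCR efficiency (slope ≈ -3.32):

    ```python
    import math

    def fit_standard_curve(concs, cts):
        """Least-squares fit of Ct = slope * log10(conc) + intercept
        from a dilution series of standards of known concentration."""
        xs = [math.log10(c) for c in concs]
        n = len(xs)
        mx = sum(xs) / n
        my = sum(cts) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, cts))
        slope = sxy / sxx
        return slope, my - slope * mx

    def quantify(ct, slope, intercept):
        """Invert the curve to report an absolute value (e.g., copies/uL)."""
        return 10 ** ((ct - intercept) / slope)

    # Ideal 10-fold dilution series; 100% efficiency gives slope ~ -3.32.
    standards = [1e6, 1e5, 1e4, 1e3]       # copies/uL (illustrative)
    cts = [15.0, 18.32, 21.64, 24.96]      # cycle thresholds (illustrative)
    slope, intercept = fit_standard_curve(standards, cts)
    print(round(quantify(20.0, slope, intercept)))  # unknown sample at Ct = 20
    ```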

  15. Real-time lossy compression of hyperspectral images using iterative error analysis on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sánchez, Sergio; Plaza, Antonio

    2012-06-01

    Hyperspectral image compression is an important task in remotely sensed Earth observation, as the dimensionality of this kind of image data is ever increasing. This calls for on-board compression in order to optimize the downlink connection when sending the data to Earth. A successful algorithm for lossy compression of remotely sensed hyperspectral data is the iterative error analysis (IEA) algorithm, which applies an iterative process that allows the amount of information loss and the compression ratio to be controlled through the number of iterations. This algorithm, which is based on spectral unmixing concepts, can be computationally expensive for hyperspectral images with high dimensionality. In this paper, we develop a new parallel implementation of the IEA algorithm for hyperspectral image compression on graphics processing units (GPUs). The proposed implementation is tested on several different GPUs from NVidia, and is shown to exhibit real-time performance in the analysis of Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) data sets collected over different locations. The proposed algorithm and its parallel GPU implementation represent a significant advance towards real-time onboard (lossy) compression of hyperspectral data, where the quality of the compression can also be adjusted in real time.
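
    The spectral-unmixing core of IEA can be sketched as follows. This is a minimal serial illustration on synthetic data, not the paper's GPU implementation, and it assumes unconstrained least-squares abundance estimation:

    ```python
    import numpy as np

    def iea_endmembers(pixels, n_iters):
        """Minimal sketch of iterative error analysis (IEA): repeatedly
        add the pixel worst explained by the current endmember set, so
        the reconstruction error shrinks with each iteration.
        pixels: (n_pixels, n_bands) hyperspectral data."""
        # Start from the pixel farthest from the data mean.
        mean = pixels.mean(axis=0)
        idx = [int(np.argmax(np.linalg.norm(pixels - mean, axis=1)))]
        for _ in range(n_iters - 1):
            E = pixels[idx].T                                  # (bands, k) endmembers
            A, *_ = np.linalg.lstsq(E, pixels.T, rcond=None)   # abundances
            resid = np.linalg.norm(pixels.T - E @ A, axis=0)   # per-pixel error
            idx.append(int(np.argmax(resid)))
        return idx

    rng = np.random.default_rng(0)
    # Synthetic scene: noisy mixtures of 3 underlying spectra.
    true = rng.uniform(0, 1, (3, 50))
    ab = rng.dirichlet(np.ones(3), 500)
    data = ab @ true + 0.001 * rng.normal(size=(500, 50))
    picked = iea_endmembers(data, 3)
    print(picked)  # indices of the 3 selected endmember pixels
    ```

    Storing the endmember matrix E and the abundance maps A in place of the full cube is what yields the lossy compression; more iterations mean more endmembers, less information loss, and a lower compression ratio.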

  16. In-flow real-time detection of spectrally encoded microgels for miRNA absolute quantification

    PubMed Central

    Dannhauser, David; Causa, Filippo; Cusano, Angela M.; Rossi, Domenico; Netti, Paolo A.

    2016-01-01

    We present an in-flow ultrasensitive fluorescence detection of microRNAs (miRNAs) using spectrally encoded microgels. We researched and employed a viscoelastic fluid to achieve an optimal alignment of microgels in a straight measurement channel and applied a simple and inexpensive microfluidic layout, allowing continuous fluorescence signal acquisitions with several emission wavelengths. In particular, we chose microgels endowed with fluorescent emitting molecules designed for multiplex spectral analysis of specific miRNA types. We analysed in a quasi-real-time manner circa 80 microgel particles a minute at sample volumes down to a few microliters, achieving a miRNA detection limit of 202 fM in microfluidic flow conditions. Such performance opens up new routes for biosensing applications of particles within microfluidic devices. PMID:27990216

  17. In-flow real-time detection of spectrally encoded microgels for miRNA absolute quantification.

    PubMed

    Dannhauser, David; Causa, Filippo; Battista, Edmondo; Cusano, Angela M; Rossi, Domenico; Netti, Paolo A

    2016-11-01

    We present an in-flow ultrasensitive fluorescence detection of microRNAs (miRNAs) using spectrally encoded microgels. We researched and employed a viscoelastic fluid to achieve an optimal alignment of microgels in a straight measurement channel and applied a simple and inexpensive microfluidic layout, allowing continuous fluorescence signal acquisitions with several emission wavelengths. In particular, we chose microgels endowed with fluorescent emitting molecules designed for multiplex spectral analysis of specific miRNA types. We analysed in a quasi-real-time manner circa 80 microgel particles a minute at sample volumes down to a few microliters, achieving a miRNA detection limit of 202 fM in microfluidic flow conditions. Such performance opens up new routes for biosensing applications of particles within microfluidic devices.

  18. Continuous Gravity Monitoring in South America with Superconducting and Absolute Gravimeters: More than 12 years time series at station TIGO/Concepcion (Chile)

    NASA Astrophysics Data System (ADS)

    Wziontek, Hartmut; Falk, Reinhard; Hase, Hayo; Böer, Armin; Güntner, Andreas; Wang, Rongjiang

    2016-04-01

    As part of the Transportable Integrated Geodetic Observatory (TIGO) of BKG, the superconducting gravimeter SG 038 was set up in December 2002 at station Concepcion, Chile, to record temporal gravity variations with highest precision. Since May 2006, the time series has been supported by weekly observations with the absolute gravimeter FG5-227, confirming the large seasonal variations of up to 30 μGal and establishing a gravity reference station in South America. With the move of the whole observatory to its new location near La Plata, Argentina, the series was terminated. Results of more than 12 years of almost continuous monitoring of gravity variations are presented. Seasonal variations are interpreted with respect to global and local water storage changes, and the impact of the magnitude 8.8 Maule earthquake of February 2010 is discussed.

  19. Continuous theta burst stimulation over the left pre-motor cortex affects sensorimotor timing accuracy and supraliminal error correction.

    PubMed

    Bijsterbosch, Janine D; Lee, Kwang-Hyuk; Dyson-Sutton, William; Barker, Anthony T; Woodruff, Peter W R

    2011-09-02

    Adjustments to movement in response to changes in our surroundings are common in everyday behavior. Previous research has suggested that the left pre-motor cortex (PMC) is specialized for the temporal control of movement and may play a role in temporal error correction. The aim of this study was to determine the role of the left PMC in sensorimotor timing and error correction using theta burst transcranial magnetic stimulation (TBS). In Experiment 1, subjects performed a sensorimotor synchronization task (SMS) with the left and the right hand before and after either continuous or intermittent TBS (cTBS or iTBS). Timing accuracy was assessed during synchronized finger tapping with a regular auditory pacing stimulus. Responses following perceivable local timing shifts in the pacing stimulus (phase shifts) were used to measure error correction. Suppression of the left PMC using cTBS decreased timing accuracy because subjects tapped further away from the pacing tones and tapping variability increased. In addition, error correction responses returned to baseline tap-tone asynchrony levels faster following negative shifts and no overcorrection occurred following positive shifts after cTBS. However, facilitation of the left PMC using iTBS did not affect timing accuracy or error correction performance. Experiment 2 revealed that error correction performance may change with practice, independent of TBS. These findings provide evidence for a role of the left PMC in both sensorimotor timing and error correction in both hands. We propose that the left PMC may be involved in voluntarily controlled phase correction responses to perceivable timing shifts.
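
    The error-correction behavior measured here is often summarized with a linear phase-correction model from the sensorimotor-synchronization literature. A sketch of that standard model, not the authors' fitted model; the correction gain and shift size below are illustrative:

    ```python
    def asynchrony_series(alpha, shift_ms, n_taps):
        """Linear phase-correction sketch: each tap corrects a fraction
        alpha of the current tap-tone asynchrony left by a phase shift
        in the pacing sequence, so the asynchrony decays geometrically."""
        a = shift_ms          # asynchrony introduced by the shift
        series = [a]
        for _ in range(n_taps - 1):
            a = (1 - alpha) * a     # partial correction on each tap
            series.append(a)
        return series

    # A -50 ms (negative) phase shift with 60% correction per tap:
    print([round(a, 1) for a in asynchrony_series(0.6, -50.0, 5)])
    # → [-50.0, -20.0, -8.0, -3.2, -1.3]
    ```

    In this picture, the cTBS result (faster return to baseline after negative shifts) corresponds to a change in the effective gain alpha.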

  20. Using a Novel Absolute Ontogenetic Age Determination Technique to Calculate the Timing of Tooth Eruption in the Saber-Toothed Cat, Smilodon fatalis

    PubMed Central

    Wysocki, M. Aleksander; Feranec, Robert S.; Tseng, Zhijie Jack; Bjornsson, Christopher S.

    2015-01-01

    Despite the superb fossil record of the saber-toothed cat, Smilodon fatalis, ontogenetic age determination for this and other ancient species remains a challenge. The present study utilizes a new technique, a combination of data from stable oxygen isotope analyses and micro-computed tomography, to establish the eruption rate for the permanent upper canines in Smilodon fatalis. The results imply an eruption rate of 6.0 millimeters per month, which is similar to a previously published average enamel growth rate of the S. fatalis upper canines (5.8 millimeters per month). Utilizing the upper canine growth rate, the upper canine eruption rate, and a previously published tooth replacement sequence, this study calculates absolute ontogenetic age ranges of tooth development and eruption in S. fatalis. The timing of tooth eruption is compared between S. fatalis and several extant conical-toothed felids, such as the African lion (Panthera leo). Results suggest that the permanent dentition of S. fatalis, except for the upper canines, was fully erupted by 14 to 22 months, and that the upper canines finished erupting at about 34 to 41 months. Based on these developmental age calculations, S. fatalis individuals less than 4 to 7 months of age were not typically preserved at Rancho La Brea. On the whole, S. fatalis appears to have had delayed dental development compared to dental development in similar-sized extant felids. This technique for absolute ontogenetic age determination can be replicated in other ancient species, including non-saber-toothed taxa, as long as the timing of growth initiation and growth rate can be determined for a specific feature, such as a tooth, and that growth period overlaps with the development of the other features under investigation. PMID:26132165
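
    The age calculation itself is simple arithmetic on the eruption rate. A sketch with hypothetical start-age and crown-travel values; only the 6.0 mm/month rate comes from the abstract:

    ```python
    def eruption_age_months(start_age_months, erupted_length_mm, rate_mm_per_month):
        """Age at which a tooth finishes erupting, given when eruption
        starts, how far the crown must travel, and the eruption rate."""
        return start_age_months + erupted_length_mm / rate_mm_per_month

    # Hypothetical illustration with the study's 6.0 mm/month rate:
    # a canine starting to erupt at 12 months and traveling 150 mm
    # would finish at 37 months, inside the reported 34-41 month window.
    print(eruption_age_months(12, 150, 6.0))  # → 37.0
    ```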

  1. SkyProbe: Real-Time Precision Monitoring in the Optical of the Absolute Atmospheric Absorption on the Telescope Science and Calibration Fields

    NASA Astrophysics Data System (ADS)

    Cuillandre, J.-C.; Magnier, E.; Sabin, D.; Mahoney, B.

    2016-05-01

    Mauna Kea is known for its pristine seeing conditions, but sky transparency can be an issue for science operations, since at least 25% of the observable (i.e. open dome) nights are not photometric, an effect mostly due to high-altitude cirrus. Since 2001, the original single-channel SkyProbe, mounted in parallel on the Canada-France-Hawaii Telescope (CFHT), has gathered one V-band exposure every minute during each observing night using a small CCD camera offering a very wide field of view (35 sq. deg.) encompassing the region pointed at by the telescope for science operations, with exposures long enough (40 seconds) to capture at least 100 stars of Hipparcos' Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). The measurement of the true atmospheric absorption is achieved within 2%, a key advantage over all-sky direct thermal-infrared imaging detection of clouds. The absolute measurement of the true atmospheric absorption by clouds and particulates affecting the data being gathered by the telescope's main science instrument has proven crucial for decision making in CFHT queued service observing (QSO), which today represents all of the telescope time. Also, science exposures taken in non-photometric conditions are automatically registered for a new observation at a later date, at 1/10th of the original exposure time in photometric conditions, to ensure a proper final absolute photometric calibration. Photometric standards are observed only when conditions are reported as perfectly stable by SkyProbe. The more recent dual-color system (simultaneous B and V bands) will offer a better characterization of the sky properties above Mauna Kea and should enable better detection of the thinnest cirrus (absorption down to 0.01 mag, or 1%).
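
    The absorption measurement can be sketched as the mean offset between instrumental and catalog magnitudes over the stars in the field. All numbers and the zero point below are illustrative assumptions, not SkyProbe's actual pipeline values:

    ```python
    import statistics

    def atmospheric_absorption(observed_mags, catalog_mags, zero_point):
        """Sketch of the measurement idea: the mean offset between
        instrumental and catalog magnitudes of many stars, after
        removing the instrument zero point, is the absorption by
        clouds and particulates, in magnitudes."""
        offsets = [o - c - zero_point for o, c in zip(observed_mags, catalog_mags)]
        return statistics.mean(offsets)

    # ~100 catalog stars; a photometric night gives ~0 mag absorption,
    # while thin cirrus shows up as a positive mean offset.
    catalog = [8.0 + 0.01 * i for i in range(100)]
    observed = [m + 25.0 + 0.15 for m in catalog]   # zero point 25.0, 0.15 mag of cloud
    print(round(atmospheric_absorption(observed, catalog, 25.0), 2))  # → 0.15
    ```

    Averaging over many stars is what brings the per-exposure measurement down to the ~2% level quoted above.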

  2. Feature Migration in Time: Reflection of Selective Attention on Speech Errors

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.

    2012-01-01

    This article describes an initial study of the effect of focused attention on phonological speech errors. In 3 experiments, participants recited 4-word tongue twisters and focused attention on 1 (or none) of the words. The attended word was singled out differently in each experiment; participants were under instructions to avoid errors on the…

  3. Detecting and Correcting Errors in Rapid Aiming Movements: Effects of Movement Time, Distance, and Velocity

    ERIC Educational Resources Information Center

    Sherwood, David E.

    2010-01-01

    According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback results in an error signal that serves as a basis for a correction. The main question addressed in the current study was how…

  4. Satellite-station time synchronization information based real-time orbit error monitoring and correction of navigation satellite in Beidou System

    NASA Astrophysics Data System (ADS)

    He, Feng; Zhou, ShanShi; Hu, XiaoGong; Zhou, JianHua; Liu, Li; Guo, Rui; Li, XiaoJie; Wu, Shan

    2014-07-01

    Satellite-station two-way time comparison is a typical design element of the Beidou System (BDS) that differs significantly from other satellite navigation systems. As a two-way time comparison method, BDS time synchronization is hardly influenced by satellite orbit error, atmospheric delay, tracking-station coordinate error, or measurement model error. Meanwhile, one-way time comparison can be realized through Multi-satellite Precision Orbit Determination (MPOD) using pseudo-range and carrier phase data from monitor receivers. It was demonstrated with the 3GEO/2IGSO constellation that the radial orbit error is reflected in the difference between two-way and one-way time comparison, which may offer a substitute for orbit evaluation by SLR. In this article, the relation between orbit error and the difference between two-way and one-way time comparison is illustrated for the whole BDS constellation. Given the all-weather, real-time operation mode of two-way time comparison, the orbit error can be quantifiably monitored in real time by comparing two-way and one-way time synchronization. In addition, the orbit error can be predicted and corrected over short time spans based on its periodic characteristic. Experiments with GEO and IGSO satellites show that the prediction accuracy of the signal in space can be obviously improved when the predicted orbit error is sent to users through the navigation message; the UERE, including terminal error, can then be reduced by 0.1 m to 0.4 m, while the average accuracy is improved by more than 27%. Though it remains hard to improve the accuracy of Precision Orbit Determination (POD) and orbit prediction because of the confined tracking network and the difficulties in dynamic model optimization, this paper proposes a practical method for orbit accuracy improvement based on two-way time comparison, which directly reflects the orbit error.
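
    The monitoring idea reduces to differencing the two kinds of clock comparison. A minimal sketch with illustrative values:

    ```python
    def radial_orbit_error(two_way_clock_ns, one_way_clock_ns):
        """Sketch of the monitoring idea: the two-way comparison is
        insensitive to orbit error, while the one-way (MPOD-based)
        clock estimate absorbs the radial orbit error as an apparent
        clock shift, so their difference, converted from time to
        distance, reflects the radial error."""
        c = 0.299792458   # meters per nanosecond
        return (one_way_clock_ns - two_way_clock_ns) * c

    # A 1.0 ns discrepancy corresponds to ~0.3 m of radial orbit error:
    print(round(radial_orbit_error(5.0, 6.0), 2))  # → 0.3
    ```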

  5. Correcting incompatible DN values and geometric errors in nighttime lights time series images

    SciTech Connect

    Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric L.

    2014-09-19

    The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activities at large scales. However, the existence of incompatible digital number (DN) values and geometric errors severely limits the application of nighttime light image data in multi-year quantitative research. In this study we extend and improve previous studies on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly and find that sum light (summed DN value of pixels in a nighttime light image) maintains apparent increase trends with relatively large GDP growth rates but does not increase or decrease with relatively small GDP growth rates. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, through analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced apparent nighttime lights development in 1992-1997 and 2001-2008, and that the US suffered from nighttime lights decay in large areas after 2001.
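
    The inter-calibration step is commonly implemented as a second-order regression against data over a stable reference region, a standard approach for DMSP-OLS. A sketch on synthetic data; the coefficients and DN values are illustrative, not the paper's fitted values:

    ```python
    import numpy as np

    def intercalibrate(dn_target, dn_reference):
        """Sketch of second-order-regression inter-calibration:
        fit DN_ref ~ c2*DN^2 + c1*DN + c0 over a reference region,
        then apply the model to adjust the whole target image."""
        c2, c1, c0 = np.polyfit(dn_target, dn_reference, deg=2)
        return lambda dn: c2 * dn ** 2 + c1 * dn + c0

    # Synthetic sensors: the reference reads as a quadratic function of
    # the target sensor's DN, so the fit recovers the mapping exactly.
    dn_tgt = np.linspace(0, 63, 64)
    dn_ref = 0.9 * dn_tgt + 0.001 * dn_tgt ** 2
    adjust = intercalibrate(dn_tgt, dn_ref)
    print(round(float(adjust(40.0)), 2))  # → 37.6 (= 0.9*40 + 0.001*40**2)
    ```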

  6. Time Base Error In Two-Headed Videodisc Player Introduced By Recorded Asymmetry

    NASA Astrophysics Data System (ADS)

    Isailovic, Jordan

    1988-06-01

    The initial aim of videodisc development was to produce a system that records audio/video information on a disc, replicates this disc accurately and inexpensively on plastic and, finally, plays the replicas on home television screens by means of a disc player attachment. What made the videodisc so interesting is the potential combination of three properties: very low cost in high-volume duplication, very high information density, and rapid access to any portion of a long recording. Presently, two major disc categories are well established: mass replica and write-once discs. Single-headed players are almost exclusively used today, but in some applications where access and interactivity are particularly important, two-headed players are either used or presently being considered. Some new problems are surfacing which are of little concern for single-headed players. One of these problems is a time base error (TBE) caused by recorded asymmetry, which will be discussed in this paper. Also, the influence of this phenomenon on medical images stored on the videodisc will be examined.

  7. An improved regularization method for estimating near real-time systematic errors suitable for medium-long GPS baseline solutions

    NASA Astrophysics Data System (ADS)

    Luo, X.; Ou, J.; Yuan, Y.; Gao, J.; Jin, X.; Zhang, K.; Xu, H.

    2008-08-01

    It is well known that the key problem in network-based real-time kinematic (RTK) positioning is the estimation of systematic errors in GPS observations, such as residual ionospheric delays, tropospheric delays, and orbit errors, particularly for medium-long baselines. Existing methods dealing with these systematic errors are either not applicable for real-time estimation or require additional observations in the computation. In both cases, the result is a difficulty in performing rapid positioning. We have developed a new strategy for estimating the systematic errors for near real-time applications. In this approach, only two epochs of observations are used each time to estimate the parameters. In order to overcome the severely ill-conditioned normal equation, the Tikhonov regularization method is used. We suggest that the regularized matrix be constructed by combining the a priori information of the known coordinates of the reference stations, followed by determination of the corresponding regularization parameter. A series of systematic error estimates can be obtained from a session of GPS observations, and the new process can assist in resolving the integer ambiguities of medium-long baselines and in constructing the virtual observations for the virtual reference station. A number of tests using three medium- to long-range baselines (from tens of kilometers to longer than 1000 kilometers) are used to validate the new approach. Test results indicate that the derived coordinates for the three baseline lengths agree at the level of several centimeters once the systematic errors are successfully removed. Our results demonstrate that the proposed method can effectively estimate systematic errors in near real-time for medium-long GPS baseline solutions.
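
    Two-epoch estimation leads to an ill-conditioned normal equation, which Tikhonov regularization stabilizes. A generic numerical sketch: the identity regularization matrix and the value of lam are chosen purely for illustration, whereas the paper constructs the regularized matrix from the known reference-station coordinates:

    ```python
    import numpy as np

    def tikhonov_solve(A, y, R, lam):
        """Tikhonov-regularized least squares for an ill-conditioned
        normal equation: x = (A^T A + lam*R)^(-1) A^T y."""
        return np.linalg.solve(A.T @ A + lam * R, A.T @ y)

    rng = np.random.default_rng(1)
    A = rng.normal(size=(8, 4))
    A[:, 3] = A[:, 2] + 1e-6 * rng.normal(size=8)   # near-collinear columns
    x_true = np.array([1.0, -2.0, 0.5, 0.5])
    y = A @ x_true + 1e-3 * rng.normal(size=8)       # noisy two-epoch-style observations
    x_reg = tikhonov_solve(A, y, np.eye(4), lam=1e-3)
    print(np.round(x_reg, 3))   # close to x_true despite the ill-conditioning
    ```

    Without the lam*R term, the near-collinear columns make the normal matrix effectively singular and the plain solution blows up along the degenerate direction.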

  8. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  9. Non-contrast 3D time-of-flight magnetic resonance angiography for visualization of intracranial aneurysms in patients with absolute contraindications to CT or MRI contrast.

    PubMed

    Yanamadala, Vijay; Sheth, Sameer A; Walcott, Brian P; Buchbinder, Bradley R; Buckley, Deidre; Ogilvy, Christopher S

    2013-08-01

    The preoperative evaluation of patients with intracranial aneurysms typically includes a contrast-enhanced vascular study, such as computed tomography angiography (CTA), magnetic resonance angiography (MRA), or digital subtraction angiography. However, there are numerous absolute and relative contraindications to the administration of imaging contrast agents, including pregnancy, severe contrast allergy, and renal insufficiency. Evaluation of patients with contrast contraindications thus presents a unique challenge. We identified three patients with absolute contrast contraindications who presented with intracranial aneurysms. One patient was pregnant, while the other two had previous severe anaphylactic reactions to iodinated contrast. Because of these contraindications to intravenous contrast, we performed non-contrast time-of-flight MRA with 3D reconstruction (TOF MRA with 3DR) with maximum intensity projections and volume renderings as part of the preoperative evaluation prior to successful open surgical clipping of the aneurysms. In the case of one paraclinoid aneurysm, a high-resolution non-contrast CT scan was also performed to assess the relationship of the aneurysm to the anterior clinoid process. TOF MRA with 3DR successfully identified the intracranial aneurysms and adequately depicted the surrounding microanatomy. Intraoperative findings were as predicted by the preoperative imaging studies. The aneurysms were successfully clip-obliterated, and the patients had uneventful post-operative courses. These cases demonstrate that non-contrast imaging is a viable modality to assess intracranial aneurysms as part of the surgical planning process in patients with contrast contraindications. TOF MRA with 3DR, in conjunction with high-resolution non-contrast CT when indicated, provides adequate visualization of the microanatomy of the aneurysm and surrounding structures.

  10. An evaluation and regional error modeling methodology for near-real-time satellite rainfall data over Australia

    NASA Astrophysics Data System (ADS)

    Pipunic, Robert C.; Ryu, Dongryeol; Costelloe, Justin F.; Su, Chun-Hsu

    2015-10-01

    In providing uniform spatial coverage, satellite-based rainfall estimates can potentially benefit hydrological modeling, particularly for flood prediction. Maximizing the value of information from such data requires knowledge of its error. The most recent Tropical Rainfall Measuring Mission (TRMM) 3B42RT (TRMM-RT) satellite product version 7 (v7) was used for examining evaluation procedures against in situ gauge data across mainland Australia at a daily time step, over a 9 year period. This provides insights into estimating uncertainty and informing quantitative error model development, with methodologies relevant to the recently operational Global Precipitation Measurement mission that builds upon the TRMM legacy. Important error characteristics highlighted for daily aggregated TRMM-RT v7 include increasing (negative) bias and error variance with increasing daily gauge totals and more reliability at detecting larger gauge totals with a probability of detection of <0.5 for rainfall < ~3 mm/d. Additionally, pixel location within clusters of spatially contiguous TRMM-RT v7 rainfall pixels (representing individual rain cloud masses) has predictive ability for false alarms. Differences between TRMM-RT v7 and gauge data have increasing (positive) bias and error variance with increasing TRMM-RT estimates. Difference errors binned within 10 mm/d increments of TRMM-RT v7 estimates highlighted negatively skewed error distributions for all bins, suitably approximated by the generalized extreme value distribution. An error model based on this distribution enables bias correction and definition of quantitative uncertainty bounds, which are expected to be valuable for hydrological modeling and/or merging with other rainfall products. These error characteristics are also an important benchmark for assessing if/how future satellite rainfall products have improved.
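
    The binning-based error characterization can be sketched as follows on synthetic data; plain per-bin moments are shown here, whereas the paper fits a generalized extreme value distribution to each bin:

    ```python
    import numpy as np

    def binned_error_stats(sat, gauge, bin_width=10.0):
        """Sketch of the characterization step: bin satellite-minus-gauge
        differences by the satellite estimate (10 mm/d increments) and
        report per-bin bias and spread."""
        diffs = sat - gauge
        bins = (sat // bin_width).astype(int)
        out = {}
        for b in np.unique(bins):
            d = diffs[bins == b]
            out[b] = (float(d.mean()), float(d.std()))
        return out

    rng = np.random.default_rng(2)
    gauge = rng.gamma(2.0, 5.0, 5000)                 # synthetic daily totals (mm/d)
    sat = gauge * 1.1 + rng.normal(0, 2.0, 5000)      # biased, noisy retrieval
    stats = binned_error_stats(sat, gauge)
    print({b: (round(m, 1), round(s, 1)) for b, (m, s) in stats.items()})
    ```

    With this construction the per-bin bias grows with the satellite estimate, mirroring the increasing positive bias the study reports for TRMM-RT v7.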

  11. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
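
    The sensitivity in question can be probed by perturbing dose times in a monoexponential superposition model. A sketch with illustrative parameters: dose, volume, and elimination rate are assumptions, chosen so the dosing interval is shorter than the half-life, as in the paper's scenario:

    ```python
    import math

    def conc(t, dose_times, dose=100.0, v=50.0, ke=0.03):
        """Monoexponential (one-compartment IV bolus) concentration via
        superposition of prior doses. ke = 0.03/h gives a ~23 h terminal
        half-life, longer than the 12 h dosing interval below."""
        return sum(dose / v * math.exp(-ke * (t - td))
                   for td in dose_times if td <= t)

    true_times = [0.0, 12.0, 24.0, 36.0]        # actual q12h dosing (h)
    noisy_times = [0.4, 11.2, 24.6, 35.5]       # patient-reported times
    t_obs = 40.0
    c_true = conc(t_obs, true_times)
    c_noisy = conc(t_obs, noisy_times)
    err_pct = 100 * abs(c_noisy - c_true) / c_true
    print(round(err_pct, 2))   # sub-1% concentration error at this sample
    ```

    With a long half-life relative to the interval, modest reporting errors (~1 h) perturb the predicted concentration only slightly, consistent with the ~10% ceiling on parameter-estimation inaccuracy the study reports.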

  12. Absolutely classical spin states

    NASA Astrophysics Data System (ADS)

    Bohnet-Waldraff, F.; Giraud, O.; Braun, D.

    2017-01-01

    We introduce the concept of "absolutely classical" spin states, in analogy to absolutely separable states of bipartite quantum systems. Absolutely classical states are states that remain classical (i.e., a convex sum of projectors on coherent states of a spin j ) under any unitary transformation applied to them. We investigate the maximal size of the ball of absolutely classical states centered on the maximally mixed state and derive a lower bound for its radius as a function of the total spin quantum number. We also obtain a numerical estimate of this maximal radius and compare it to the case of absolutely separable states.

  13. Impact of habitat-specific GPS positional error on detection of movement scales by first-passage time analysis.

    PubMed

    Williams, David M; Dechen Quinn, Amy; Porter, William F

    2012-01-01

    Advances in animal tracking technologies have reduced but not eliminated positional error. While aware of such inherent error, scientists often proceed with analyses that assume exact locations. The results of such analyses then represent one realization in a distribution of possible outcomes. Evaluating results within the context of that distribution can strengthen or weaken our confidence in conclusions drawn from the analysis in question. We evaluated the habitat-specific positional error of stationary GPS collars placed under a range of vegetation conditions that produced a gradient of canopy cover. We explored how variation of positional error in different vegetation cover types affects a researcher's ability to discern scales of movement in analyses of first-passage time for white-tailed deer (Odocoileus virginianus). We placed 11 GPS collars in 4 different vegetative canopy cover types classified as the proportion of cover above the collar (0-25%, 26-50%, 51-75%, and 76-100%). We simulated the effect of positional error on individual movement paths using cover-specific error distributions at each location. The different cover classes did not introduce any directional bias in positional observations (1 m≤mean≤6.51 m, 0.24≤p≤0.47), but the standard deviation of positional error of fixes increased significantly with increasing canopy cover class for the 0-25%, 26-50%, 51-75% classes (SD = 2.18 m, 3.07 m, and 4.61 m, respectively) and then leveled off in the 76-100% cover class (SD = 4.43 m). We then added cover-specific positional errors to individual deer movement paths and conducted first-passage time analyses on the noisy and original paths. First-passage time analyses were robust to habitat-specific error in a forest-agriculture landscape. For deer in a fragmented forest-agriculture environment, and species that move across similar geographic extents, we suggest that first-passage time analysis is robust with regard to positional errors.


  14. Estimation of fine-time synchronization error in FH FDMA satcom systems using the early-late filter technique

    NASA Astrophysics Data System (ADS)

    Mason, Lloyd J.

    1994-02-01

    The problem of fine-time, or epoch, synchronization on the uplink of a frequency hopped FDMA satellite system is examined. All user signals are dehopped by the same device in the satellite receiver. For each transmitted signal the hop transition time must be adjusted so that when the signal arrives at the receiver, it is closely aligned with the transition time of the dehopper. To accomplish this fine-time synchronization, the system is probed by sending a known signal for a number of hop periods, L. For receivers which use a Fourier transform processor (FTP), such as may be realized by SAW devices, the early-late filter method provides a means for processing the probe response so that an estimate of the time error can be made. The resulting time error estimate is then sent via the downlink to the transmitter where it is used to adjust the user hop transition time accordingly.
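
    A minimal sketch of the early-late idea for a rectangular probe pulse (hypothetical sample-level signal; the actual system operates on FTP probe responses): correlating slightly before and slightly after the assumed hop transition gives a difference proportional to the residual timing error near alignment.

```python
import numpy as np

def early_late_estimate(rx, n, guess, d):
    """Early-late discriminator for a rectangular probe pulse of n samples:
    correlate the received record at (guess - d) and (guess + d).  Near
    alignment (|offset| <= d) the difference equals 2*offset, so the
    residual timing error is (late - early) / 2."""
    ref = np.ones(n)
    early = np.dot(rx[guess - d: guess - d + n], ref)
    late = np.dot(rx[guess + d: guess + d + n], ref)
    return (late - early) / 2.0

# The known probe actually arrives 3 samples later than the receiver's guess.
rx = np.zeros(300)
rx[103:103 + 64] = 1.0
print(early_late_estimate(rx, 64, 100, 4))  # → 3.0
```

    In the system described, this estimate would be fed back over the downlink so the transmitter can advance or retard its hop transition time.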

  15. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for recovering the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute position in the Earth's geocentric reference system. For comparison/validation, GA results are compared to those obtained using the classical linearized least-squares scheme for determining the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., the number of input pseudo-ranges must be exactly four). The r.m.s. accuracy of the non-linear cost function reached by the latter method is typically ~10^-4 m^2, corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10^-5 m^2), even when only three instantaneous pseudo-ranges are available, such as after a loss of lock during a GPS survey. Tuned GA parameters used in the different simulations are N=1000 starting individuals, with Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests of the ability of GA to recover acceptable coordinates in the presence of significant noise levels are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre-variance atmospheric delays, and (2) the Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. By extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
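
    A toy GA of this flavour — elite selection, blend crossover, and a decaying mutation — applied to three noise-free pseudo-ranges. The geometry, search box, and operators are illustrative assumptions, not Ramillien's exact scheme; only the population size N=1000 and the ~30% mutation rate echo the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical geometry: receiver truth and three satellite positions (m).
truth = np.array([1_113_000.0, -4_842_000.0, 3_986_000.0])
sats = np.array([[15e6, 10e6, 20e6], [-12e6, 14e6, 18e6], [5e6, -20e6, 16e6]])
ranges = np.linalg.norm(sats - truth, axis=1)           # noise-free pseudo-ranges

def cost(pop):
    """Mean squared pseudo-range residual (m^2) for each candidate position."""
    r = np.linalg.norm(sats[None, :, :] - pop[:, None, :], axis=2)
    return np.mean((r - ranges) ** 2, axis=1)

# GA over a 2 km box around a coarse a-priori position (here centred on truth).
lo, hi = truth - 1000, truth + 1000
pop = rng.uniform(lo, hi, size=(1000, 3))
for gen in range(200):
    f = cost(pop)
    elite = pop[np.argsort(f)[:200]]                    # selection: keep best 20%
    parents = elite[rng.integers(0, 200, size=(1000, 2))]
    w = rng.uniform(size=(1000, 1))
    pop = w * parents[:, 0] + (1 - w) * parents[:, 1]   # blend crossover
    mut = rng.random(1000) < 0.3                        # mutation rate ~30%
    pop[mut] += rng.normal(0, 10 * 0.99 ** gen, size=(mut.sum(), 3))

best = pop[np.argmin(cost(pop))]
print(np.linalg.norm(best - truth))    # final position error (m)
```

    Because the GA only ever evaluates the cost function, it needs no linearization and runs unchanged with three or four pseudo-ranges.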

  16. Analysis of potential errors in real-time streamflow data and methods of data verification by digital computer

    USGS Publications Warehouse

    Lystrom, David J.

    1972-01-01

    Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.

  17. Time-Order Errors in Duration Judgment Are Independent of Spatial Positioning

    PubMed Central

    Harrison, Charlotte; Binetti, Nicola; Mareschal, Isabelle; Johnston, Alan

    2017-01-01

    Time-order errors (TOEs) occur when the discriminability between two stimuli is affected by the order in which they are presented. While TOEs have been studied since the 1860s, it is unknown whether the spatial properties of a stimulus affect this temporal phenomenon. In this experiment, we asked whether perceived duration, or duration discrimination, might be influenced by whether two intervals in a standard two-interval method-of-constant-stimuli paradigm were spatially overlapping in visual short-term memory. Two circular sinusoidal gratings (one standard and the other a comparison) were shown sequentially, and participants judged which of the two was presented for the longer duration. The test stimuli were either spatially overlapping (in different spatial frames) or separate. Stimulus order was randomized between trials. The standard stimulus lasted 600 ms, and the test stimulus had one of seven possible durations (between 300 and 900 ms). There were no overall significant differences between spatially overlapping and separate stimuli. However, in trials where the standard stimulus was presented second, TOEs were greater and participants were significantly less sensitive to differences in duration. TOEs were also greater in conditions involving a saccade. This suggests there is an intrinsic memory component to two-interval tasks, in that the information from the first interval has to be stored; this is more demanding when the standard is presented in the second interval. Overall, this study suggests that while temporal information may be encoded in some spatial form, it is not dependent on visual short-term memory. PMID:28337162

  18. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    SciTech Connect

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.

    2015-04-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors can occur independently of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, with the dorsally projecting SMN axons primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsally projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the earlier-born primary motoneurons (PMNs), we performed dual-labeling studies in which both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the level of nicotine and the developmental exposure window.

  19. Absolute quantitative real-time polymerase chain reaction for the measurement of human papillomavirus E7 mRNA in cervical cytobrush specimens

    PubMed Central

    Scheurer, Michael E; Dillon, Laura M; Chen, Zhuo; Follen, Michele; Adler-Storthz, Karen

    2007-01-01

    Background Few reports of the utilization of an accurate, cost-effective means for measuring HPV oncogene transcripts have been published. Several papers have reported the use of relative quantitation or more expensive Taqman methods. Here, we report a method of absolute quantitative real-time PCR utilizing SYBR-green fluorescence for the measurement of HPV E7 expression in cervical cytobrush specimens. Results The construction of a standard curve based on the serial dilution of an E7-containing plasmid was the key to accurately comparing measurements between cervical samples. The assay was highly reproducible with an overall coefficient of variation of 10.4%. Conclusion The use of highly reproducible and accurate SYBR-based real-time polymerase chain reaction (PCR) assays instead of Taqman-type assays allows low-cost, high-throughput analysis of viral mRNA expression. The development of such assays will help in refining current screening programs for HPV-related carcinomas. PMID:17407544
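
    The standard-curve arithmetic behind absolute quantification can be sketched as follows (illustrative Ct values, not the paper's data): a linear fit of Ct against log10 copy number gives the amplification efficiency and converts a sample's Ct into an absolute copy number.

```python
import numpy as np

# Hypothetical standard curve: Ct values from a 10-fold serial dilution
# of an E7-containing plasmid (copies per reaction).
copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2])
ct = np.array([13.1, 16.5, 19.9, 23.3, 26.7, 30.1])

# Linear fit: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1 / slope) - 1     # 1.0 would be perfect per-cycle doubling

def copies_from_ct(sample_ct):
    """Absolute copy number for a cervical-sample Ct read off the curve."""
    return 10 ** ((sample_ct - intercept) / slope)

print(round(efficiency, 3), copies_from_ct(21.6))
```

    Running the plasmid dilutions on every plate is what lets copy numbers be compared across samples in absolute rather than relative terms.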

  20. Sustained Attention is Associated with Error Processing Impairment: Evidence from Mental Fatigue Study in Four-Choice Reaction Time Task

    PubMed Central

    Xiao, Yi; Ma, Feng; Lv, Yixuan; Cai, Gui; Teng, Peng; Xu, FengGang; Chen, Shanguang

    2015-01-01

    Attention is important in error processing. Few studies have examined the link between sustained attention and error processing. In this study, we examined how error-related negativity (ERN) of a four-choice reaction time task was reduced in the mental fatigue condition and investigated the role of sustained attention in error processing. Forty-one recruited participants were divided into two groups. In the fatigue experiment group, 20 subjects performed a fatigue experiment and an additional continuous psychomotor vigilance test (PVT) for 1 h. In the normal experiment group, 21 subjects only performed the normal experimental procedures without the PVT test. Fatigue and sustained attention states were assessed with a questionnaire. Event-related potential results showed that ERN (p < 0.005) and peak (p < 0.05) mean amplitudes decreased in the fatigue experiment. ERN amplitudes were significantly associated with the attention and fatigue states in electrodes Fz, FC1, Cz, and FC2. These findings indicated that sustained attention was related to error processing and that decreased attention is likely the cause of error processing impairment. PMID:25756780

  1. Sustained attention is associated with error processing impairment: evidence from mental fatigue study in four-choice reaction time task.

    PubMed

    Xiao, Yi; Ma, Feng; Lv, Yixuan; Cai, Gui; Teng, Peng; Xu, FengGang; Chen, Shanguang

    2015-01-01

    Attention is important in error processing. Few studies have examined the link between sustained attention and error processing. In this study, we examined how error-related negativity (ERN) of a four-choice reaction time task was reduced in the mental fatigue condition and investigated the role of sustained attention in error processing. Forty-one recruited participants were divided into two groups. In the fatigue experiment group, 20 subjects performed a fatigue experiment and an additional continuous psychomotor vigilance test (PVT) for 1 h. In the normal experiment group, 21 subjects only performed the normal experimental procedures without the PVT test. Fatigue and sustained attention states were assessed with a questionnaire. Event-related potential results showed that ERN (p < 0.005) and peak (p < 0.05) mean amplitudes decreased in the fatigue experiment. ERN amplitudes were significantly associated with the attention and fatigue states in electrodes Fz, FC1, Cz, and FC2. These findings indicated that sustained attention was related to error processing and that decreased attention is likely the cause of error processing impairment.

  2. Retention time prediction in temperature-programmed, comprehensive two-dimensional gas chromatography: modeling and error assessment.

    PubMed

    Barcaru, Andrei; Anroedh-Sampat, Andjoe; Janssen, Hans-Gerd; Vivó-Truyols, Gabriel

    2014-11-14

    In this paper we present a model relating experimental factors (column lengths, diameters, film thicknesses, modulation times, pressures, and temperature programs) to retention times. Unfortunately, an analytical solution for retention in temperature-programmed GC × GC is impossible, making it necessary to perform a numerical integration. We present a computational physical model of GC × GC capable of predicting retention times in both dimensions with high accuracy. Once fitted (i.e., calibrated), the model is used to make predictions, which are always subject to error. The prediction can thus be expressed as a probability distribution of (predicted) retention times rather than a single (most likely) value. One of the most common problems when fitting unknown parameters to experimental data is overfitting. In order to detect overfitting and assess the error, the K-fold cross-validation technique was applied. Another technique of error assessment proposed in this article is error propagation using Jacobians. This method estimates the accuracy of the model from the partial derivatives of the retention-time prediction with respect to the fitted parameters (in this case the entropy and enthalpy of each component) under a given set of conditions. By treating the predictions of the model as intervals rather than precise values, it is possible to considerably increase the robustness of any optimization algorithm.
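
    The Jacobian-based error propagation can be sketched generically. Here a toy retention-time model stands in for the numerical GC × GC integration, and the parameter covariance values are illustrative; the pattern var(tr) ≈ J Σ Jᵀ is the point.

```python
import numpy as np

def predict_tr(params, temp):
    """Toy retention-time model standing in for the numerical GCxGC
    integration: tr depends on enthalpy/entropy-like fitted params."""
    dh, ds = params
    return np.exp(dh / temp - ds)

def jacobian(f, params, temp, eps=1e-6):
    """Numerical Jacobian of the prediction wrt the fitted parameters."""
    p = np.asarray(params, dtype=float)
    base = f(p, temp)
    J = np.empty(p.size)
    for i in range(p.size):
        q = p.copy()
        q[i] += eps
        J[i] = (f(q, temp) - base) / eps
    return J

params = np.array([2000.0, 3.0])       # fitted (calibrated) parameter values
cov = np.diag([25.0, 1e-4])            # parameter covariance from the fit
J = jacobian(predict_tr, params, temp=400.0)
var_tr = J @ cov @ J                   # first-order variance of the prediction
print(predict_tr(params, 400.0), np.sqrt(var_tr))
```

    The square root of `var_tr` gives the width of the predicted retention-time interval used in place of a point prediction.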

  3. Space-time structure and dynamics of the forecast error in a coastal circulation model of the Gulf of Lions

    NASA Astrophysics Data System (ADS)

    Auclair, Francis; Marsaleix, Patrick; De Mey, Pierre

    2003-02-01

    The probability density function (pdf) of forecast errors due to several possible error sources is investigated in a coastal ocean model driven by the atmosphere and a larger-scale ocean solution using an Ensemble (Monte Carlo) technique. An original method to generate dynamically adjusted perturbations of the slope current is proposed. The model is a high-resolution 3D primitive-equation model resolving topographic interactions, river runoff and wind forcing. The Monte Carlo approach deals with model and observation errors in a natural way and is particularly well adapted to non-linear coastal studies, since higher-order moments are implicitly retained in the covariance equation. Statistical assumptions are made on the uncertainties related to the various forcings (wind stress, open boundary conditions, etc.), to the initial state and to other model parameters, and randomly perturbed forecasts are carried out in accordance with the a priori error pdf. The evolution of these errors is then traced in space and time and the a posteriori error pdf can be explored. Third- and fourth-order moments of the pdf are computed to evaluate the normal or Gaussian behaviour of the distribution. The calculation of Central Empirical Orthogonal Functions (Ceofs) of the forecast Ensemble covariances eventually leads to a physical description of the model forecast error subspace in model state space. The time evolution of the projection of the Reference forecast onto the first Ceofs clearly shows the existence of specific model regimes associated with particular forcing conditions. The Ceofs basis is also an interesting candidate to define the Reduced Control Subspace for assimilation, and in particular to explore transitions in model state space. We applied the above methodology to study the penetration of the Liguro-Provençal Catalan Current over the shelf of the Gulf of Lions in the north-western Mediterranean, together with the discharge of the Rhône river. This region is indeed well
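
    The EOF computation on an ensemble reduces, in essence, to an SVD of the centred ensemble anomalies; a toy sketch with a hypothetical one-dimensional "model state" (not the 3D coastal model) shows the mechanics.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ensemble: 50 perturbed "forecasts" of a 100-point model state,
# standing in for the randomly perturbed coastal-model runs.
n_ens, n_state = 50, 100
signal = np.sin(np.linspace(0, 2 * np.pi, n_state))
ensemble = (signal
            + rng.normal(0, 1, (n_ens, 1)) * signal      # amplitude perturbations
            + rng.normal(0, 0.1, (n_ens, n_state)))      # small-scale noise

# Centre on the ensemble mean; the SVD of the anomalies yields the EOFs of
# the forecast-error covariance without ever forming the covariance matrix.
anom = ensemble - ensemble.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:3])   # the leading EOF captures the dominant error mode
```

    Projecting a forecast onto the rows of `vt` is the analogue of the projection onto the first Ceofs described above.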

  4. Time-dependent Drug–Drug Interaction Alerts in Care Provider Order Entry: Software May Inhibit Medication Error Reductions

    PubMed Central

    van der Sijs, Heleen; Lammers, Laureen; van den Tweel, Annemieke; Aarts, Jos; Berg, Marc; Vulto, Arnold; van Gelder, Teun

    2009-01-01

    Time-dependent drug–drug interactions (TDDIs) are drug combinations that result in a decreased drug effect due to coadministration of a second drug. Such interactions can be prevented by separately administering the drugs. This study attempted to reduce drug administration errors due to overridden TDDIs in a care provider order entry (CPOE) system. In four periods divided over two studies, logged TDDIs were investigated by reviewing the time intervals prescribed in the CPOE and recorded on the patient chart. The first study showed significant drug administration error reduction from 56.4 to 36.2% (p < 0.05), whereas the second study was not successful (46.7 and 45.2%; p > 0.05). Despite interventions, drug administration errors still occurred in more than one third of cases and prescribing errors in 79–87%. Probably the low alert specificity, the unclear alert information content, and the inability of the software to support safe and efficient TDDI alert handling all diminished correct prescribing, and consequently, insufficiently reduced drug administration errors. PMID:19717806

  5. Time-dependent drug-drug interaction alerts in care provider order entry: software may inhibit medication error reductions.

    PubMed

    van der Sijs, Heleen; Lammers, Laureen; van den Tweel, Annemieke; Aarts, Jos; Berg, Marc; Vulto, Arnold; van Gelder, Teun

    2009-01-01

    Time-dependent drug-drug interactions (TDDIs) are drug combinations that result in a decreased drug effect due to coadministration of a second drug. Such interactions can be prevented by separately administering the drugs. This study attempted to reduce drug administration errors due to overridden TDDIs in a care provider order entry (CPOE) system. In four periods divided over two studies, logged TDDIs were investigated by reviewing the time intervals prescribed in the CPOE and recorded on the patient chart. The first study showed significant drug administration error reduction from 56.4 to 36.2% (p<0.05), whereas the second study was not successful (46.7 and 45.2%; p>0.05). Despite interventions, drug administration errors still occurred in more than one third of cases and prescribing errors in 79-87%. Probably the low alert specificity, the unclear alert information content, and the inability of the software to support safe and efficient TDDI alert handling all diminished correct prescribing, and consequently, insufficiently reduced drug administration errors.

  6. Real-time modeling and online filtering of the stochastic error in a fiber optic current transducer

    NASA Astrophysics Data System (ADS)

    Wang, Lihui; Wei, Guangjin; Zhu, Yunan; Liu, Jian; Tian, Zhengqi

    2016-10-01

    The stochastic error characteristics of a fiber optic current transducer (FOCT) influence the relay protection, electric-energy metering, and other devices in the spacer layer. Real-time modeling and online filtering of the FOCT's stochastic error is therefore an effective route to improving the measurement accuracy of the FOCT. This paper first statistically pretreats and inspects the FOCT data. The model order is then set by the AIC principle to establish an ARMA(2,1) model, and the model's applicability is tested. Finally, a Kalman filter is adopted to reduce the noise in the FOCT data. The results of the experiment and the simulation demonstrate a notable decrease in the stochastic error after time-series modeling and Kalman filtering: the mean variance is decreased by two orders of magnitude, and all the stochastic error coefficients identified by the total variance method are reduced; the BI is decreased by 41.4%, the RRW by 67.5%, and the RR by 53.4%. Consequently, the method can effectively reduce the stochastic error and improve the measurement accuracy of the FOCT.
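
    The modeling-plus-filtering chain can be sketched with a hand-rolled Kalman filter on a simulated signal. The ARMA(2,1) coefficients and the constant "current" below are illustrative assumptions; the paper fits the model to real FOCT data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate FOCT output: a constant current plus ARMA(2,1) stochastic error.
phi1, phi2, theta1, sigma = 0.6, -0.2, 0.3, 0.05
n = 2000
w = rng.normal(0, sigma, n)
e = np.zeros(n)
for t in range(2, n):
    e[t] = phi1 * e[t - 1] + phi2 * e[t - 2] + w[t] + theta1 * w[t - 1]
true_current = 100.0
y = true_current + e

# State: [current, e_t, phi2*e_{t-1} + theta1*w_t]  (ARMA(2,1) in state space).
F = np.array([[1.0, 0.0, 0.0], [0.0, phi1, 1.0], [0.0, phi2, 0.0]])
G = np.array([0.0, 1.0, theta1])           # process-noise loading
H = np.array([1.0, 1.0, 0.0])              # measurement = current + error
Q = sigma**2 * np.outer(G, G)
x = np.array([y[0], 0.0, 0.0])
P = np.diag([1.0, sigma**2, sigma**2])
R = 1e-6                                   # y is observed essentially exactly
for obs in y:
    x, P = F @ x, F @ P @ F.T + Q          # predict
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (obs - H @ x)              # update
    P = P - np.outer(K, H @ P)
print(x[0])   # filtered current estimate, close to 100.0
```

    Because the ARMA structure tells the filter how the stochastic error evolves, the innovation is attributed to the error state rather than to the current, which is what suppresses the noise.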

  7. Errors in determination of soil water content using time-domain reflectometry caused by soil compaction around wave guides

    SciTech Connect

    Ghezzehei, T.A.

    2008-05-29

    Application of time domain reflectometry (TDR) in soil hydrology often involves the conversion of TDR-measured dielectric permittivity to water content using universal calibration equations (empirical or physically based). Deviations of soil-specific calibrations from the universal calibrations have been noted and are usually attributed to the peculiar composition of soil constituents, such as a high content of clay and/or organic matter. Although it is recognized that soil disturbance by TDR waveguides may have an impact on measurement errors, to our knowledge there has not been any quantification of this effect. In this paper, we introduce a method that estimates this error by combining two models: one that describes soil compaction around cylindrical objects and another that translates change in bulk density into evolution of soil water retention characteristics. Our analysis indicates that the compaction pattern depends on the mechanical properties of the soil at the time of installation. The relative error in water content measurement depends on the compaction pattern as well as the water content and water retention properties of the soil. Illustrative calculations based on measured soil mechanical and hydrologic properties from the literature indicate that the measurement error from using a standard three-prong TDR waveguide could be up to 10%. We also show that the error scales linearly with the ratio of rod radius to the interradius spacing.

  8. Continued Driving and Time to Transition to Nondriver Status through Error-Specific Driving Restrictions

    ERIC Educational Resources Information Center

    Freund, Barbara; Petrakos, Davithoula

    2008-01-01

    We developed driving restrictions that are linked to specific driving errors, allowing cognitively impaired individuals to continue to independently meet mobility needs while minimizing risk to themselves and others. The purpose of this project was to evaluate the efficacy and duration expectancy of these restrictions in promoting safe continued…

  9. Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series

    NASA Astrophysics Data System (ADS)

    Sugihara, George; May, Robert M.

    1990-04-01

    An approach is presented for making short-term predictions about the trajectories of chaotic dynamical systems. The method is applied to data on measles, chickenpox, and marine phytoplankton populations, to show how apparent noise associated with deterministic chaos can be distinguished from sampling error and other sources of externally induced environmental noise.

  10. An estimate of global absolute dynamic topography

    NASA Technical Reports Server (NTRS)

    Tai, C.-K.; Wunsch, C.

    1984-01-01

    The absolute dynamic topography of the world ocean is estimated from the largest scales to a short-wavelength cutoff of about 6700 km for the period July through September, 1978. The data base consisted of the time-averaged sea-surface topography determined by Seasat and geoid estimates made at the Goddard Space Flight Center. The issues are those of accuracy and resolution. Use of the altimetric surface as a geoid estimate beyond the short-wavelength cutoff reduces the spectral leakage in the estimated dynamic topography from erroneous small-scale geoid estimates without contaminating the low wavenumbers. Comparison of the result with a similarly filtered version of Levitus' (1982) historical average dynamic topography shows good qualitative agreement. There is quantitative disagreement, but it is within the estimated errors of both methods of calculation.

  11. Absolute becoming, relational becoming and the arrow of time: Some non-conventional remarks on the relationship between physics and metaphysics

    NASA Astrophysics Data System (ADS)

    Dorato, Mauro

    The literature on the compatibility between the time of our experience-characterized by passage or becoming-and time as is represented within spacetime theories has been affected by a persistent failure to get a clear grasp of the notion of becoming, both in its relation to an ontology of events "spread" in a four-dimensional manifold, and in relation to temporally asymmetric physical processes. In the first part of my paper I try to remedy this situation by offering what I consider a clear and faithful explication of becoming, valid independently of the particular spacetime setting in which we operate. Along the way, I will show why the metaphysical debate between the so-called "presentists" and "eternalists" is completely irrelevant to the question of becoming, as the debate itself is generated by a failure to distinguish between a tensed and a tenseless sense of "existence". After a much needed distinction between absolute and relational becoming, I then show in what sense classical (non-quantum) spacetime physics presupposes both types of becoming, for the simple reason that spacetime physics presupposes an ontology of (timelike-separated) events. As a consequence, not only does it turn out that using physics to try to provide empirical evidence for the existence of becoming amounts to putting the cart before the horse, but also that the order imposed by "the arrow of becoming" is more fundamental than any other physical arrow of time, despite the fact that becoming cannot be used to explain why entropy grows, or why retarded electromagnetic radiation prevails over advanced radiation.

  12. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris

    PubMed Central

    Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan

    2015-01-01

    Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the

  13. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris.

    PubMed

    Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan

    2015-07-22

    Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the
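
    The key observation — that the line-of-sight projection of an orbit error is indistinguishable from a clock bias in the range equation — is easy to verify numerically (hypothetical geometry):

```python
import numpy as np

# Hypothetical geometry: receiver and one satellite (ECEF, metres).
rcv = np.array([1_113_000.0, -4_842_000.0, 3_986_000.0])
sat = np.array([15e6, 10e6, 21e6])
orbit_err = np.array([2.0, -1.5, 1.0])      # broadcast-ephemeris orbit error

los = (sat - rcv) / np.linalg.norm(sat - rcv)
# The part of the orbit error projecting onto the line of sight acts exactly
# like a clock bias, so an estimated "clock" term can absorb it.
absorbed = los @ orbit_err                  # metres of equivalent range error

true_range = np.linalg.norm(sat - rcv)
bad_range = np.linalg.norm(sat + orbit_err - rcv)
residual = bad_range - true_range - absorbed
print(absorbed, residual)   # residual is sub-millimetre for metre-level errors
```

    Only the component of the orbit error perpendicular to the line of sight survives, and its effect on the position solution is further damped by averaging over a well-distributed satellite geometry.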

  14. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
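
    The calibration of an analytical TTD model to tracer data can be sketched with an exponential TTD and a brute-force fit; the input history and the 20-year mean age below are hypothetical.

```python
import numpy as np

def exp_ttd(tau, ages):
    """Exponential travel-time distribution with mean age tau (years)."""
    return np.exp(-ages / tau) / tau

years = np.arange(60.0)                     # calendar years since records begin
no3_in = np.minimum(0.1 * years, 4.0)       # hypothetical rising NO3- input

def conc_at_well(tau, t=59):
    """Tracer concentration at the well: the input history convolved
    with the (truncated, renormalised) TTD."""
    ages = np.arange(t + 1.0)
    g = exp_ttd(tau, ages)
    return np.sum(g * no3_in[t - ages.astype(int)]) / np.sum(g)

# "Observed" concentration generated with a mean age of 20 years,
# then recovered by a brute-force calibration over candidate mean ages.
obs = conc_at_well(20.0)
taus = np.arange(1.0, 50.0, 0.5)
fit = taus[np.argmin([(conc_at_well(tau) - obs) ** 2 for tau in taus])]
print(fit)   # → 20.0
```

    With a single tracer and a rising input, several TTD shapes can reproduce the same concentration, which is why the study stresses adding tracers with different input histories.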

  15. Analysis of PolSK based FSO system using wavelength and time diversity over strong atmospheric turbulence with pointing errors

    NASA Astrophysics Data System (ADS)

    Prabu, K.; Cheepalli, Shashidhar; Kumar, D. Sriram

    2014-08-01

    Free space optics (FSO), or wireless optical communication, is an evolving alternative to current radio frequency (RF) links owing to its high and secure data rates, large license-free bandwidth, ease of installation, and lower cost over shorter ranges. Because transmission is wireless and requires line-of-sight (LOS) propagation, these systems are strongly influenced by atmospheric conditions, and alignment problems give rise to pointing errors. In this paper, we treat atmospheric turbulence and pointing errors as the major limitations and address them by considering a polarization shift keying (PolSK) modulated FSO communication system with wavelength and time diversity. We derive closed-form expressions for the average bit error rate (BER) and outage probability, which are vital system performance metrics. Analytical results are presented for several practical cases.

  16. Calibration method of absolute orientation of camera optical axis

    NASA Astrophysics Data System (ADS)

    Xu, Yong; Guo, Pengyu; Zhang, Xiaohu; Ding, Shaowen; Su, Ang; Li, Lichun

    2013-08-01

    Camera calibration is one of the most basic and important processes in the optical measurement field. Generally, the objective of camera calibration is to estimate the internal and external parameters of the cameras, while the orientation error of the optical axis is not included. The orientation error of the optical axis is an important factor that seriously affects measuring precision in high-precision measurement, especially in distant aerospace measurements where the object distance is much longer than the focal length, which magnifies the orientation error by a factor of thousands. In order to eliminate the influence of the orientation error of the camera optical axis, the imaging model of the camera is analysed and established in this paper, and a calibration method is introduced. First, we analyse the causes of the optical axis error and its influence. Then, we build a model of the optical axis orientation error and an imaging model of the camera based on its practical physical meaning. Furthermore, we derive a bundle adjustment algorithm that computes the internal and external camera parameters and the absolute orientation of the camera optical axis simultaneously at high precision. In a numerical simulation, we solve for the camera parameters using the bundle adjustment optimization algorithm and then correct the image points with the calibration results according to the optical axis error model; the simulation results show that our calibration model is reliable, effective, and precise.

  17. An analysis of error propagation in AERMOD lateral dispersion using Round Hill II and Uttenweiller experiments in reduced averaging times.

    PubMed

    Hoinaski, Leonardo; Franco, Davide; de Melo Lisboa, Henrique

    2017-03-01

    Researchers have shown that most dispersion models, including the regulatory models recommended by the United States Environmental Protection Agency (AERMOD and CALPUFF), are unable to predict dispersion under complex situations. This article presents a novel evaluation of the propagation of errors in the lateral dispersion coefficient of AERMOD, with emphasis on estimates for averaging times under 10 min. The sources of uncertainty evaluated were the parameterization of lateral dispersion ([Formula: see text]), the standard deviation of lateral wind speed ([Formula: see text]), and the treatment of obstacle effects. The model's performance was tested against two field tracer experiments: Round Hill II and Uttenweiller. The results show that error propagated from the estimate of [Formula: see text] directly affects the determination of [Formula: see text], especially under Round Hill II experimental conditions. As averaging times are reduced, errors arise in the parameterization of [Formula: see text] even after assimilating observations of [Formula: see text], exposing errors in the Lagrangian time scale parameterization. The assessment of the model in the presence of obstacles shows that implementing a plume rise model enhancement algorithm can improve AERMOD's performance. However, these improvements are small when the obstacles have complex geometry, as at Uttenweiller.

  18. One-Class Classification-Based Real-Time Activity Error Detection in Smart Homes

    PubMed Central

    Das, Barnan; Cook, Diane J.; Krishnan, Narayanan C.; Schmitter-Edgecombe, Maureen

    2016-01-01

    Caring for individuals with dementia is frequently associated with extreme physical and emotional stress, which often leads to depression. Smart home technology and advances in machine learning techniques can provide innovative solutions to reduce caregiver burden. One key service that caregivers provide is prompting individuals with memory limitations to initiate and complete daily activities. We hypothesize that sensor technologies combined with machine learning techniques can automate the process of providing reminder-based interventions. The first step towards automated interventions is to detect when an individual faces difficulty with activities. We propose machine learning approaches based on one-class classification that learn normal activity patterns. When we apply these classifiers to activity patterns that were not seen before, the classifiers are able to detect activity errors, which represent potential prompt situations. We validate our approaches on smart home sensor data obtained from older adult participants, some of whom faced difficulties performing routine activities and thus committed errors. PMID:27746849
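The detection idea can be illustrated with a deliberately minimal one-class model: learn per-feature statistics from normal activity patterns only, then flag unseen patterns that deviate strongly. The study's classifiers are more sophisticated; the features and threshold below are invented for illustration.

```python
# Minimal one-class anomaly-detection sketch: fit per-feature mean and
# standard deviation on "normal" activity feature vectors, then flag unseen
# vectors whose z-score on any feature exceeds a threshold.
import math

def fit(normal_samples):
    n = len(normal_samples)
    dims = len(normal_samples[0])
    means = [sum(s[d] for s in normal_samples) / n for d in range(dims)]
    stds = []
    for d in range(dims):
        var = sum((s[d] - means[d]) ** 2 for s in normal_samples) / n
        stds.append(math.sqrt(var) or 1e-9)  # guard against zero variance
    return means, stds

def is_error(sample, model, threshold=3.0):
    means, stds = model
    return any(abs((x - m) / s) > threshold
               for x, m, s in zip(sample, means, stds))

# Hypothetical features: (activity duration in minutes, sensor event count)
normal = [(12.0, 30), (10.5, 28), (13.0, 33), (11.0, 29), (12.5, 31)]
model = fit(normal)
print(is_error((11.8, 30), model))  # typical pattern
print(is_error((45.0, 4), model))   # stalled activity -> prompt candidate
```

A flagged pattern corresponds to a potential prompt situation in the paper's terminology.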

  19. Adaptive Automation and Cue Invocation: The Effect of Cue Timing on Operator Error

    DTIC Science & Technology

    2013-05-01

    5. Parasuraman, R. (2000). Designing automation for human use: Empirical studies and quantitative models. Ergonomics, 43, 931-951. ... Prospective memory errors involve memory for intended actions that are planned to be performed at some designated point in the future [20]. ... (RESCHU) [21] was used in this study. A Navy pilot who is familiar with supervisory control tasks designed the RESCHU task and the task has been

  20. Directional errors of movements and their correction in a discrete tracking task. [pilot reaction time and sensorimotor performance

    NASA Technical Reports Server (NTRS)

    Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.

    1978-01-01

    Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.

  1. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
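The accumulation of rounding errors described here is easy to reproduce; the sketch below contrasts naive floating-point summation with Kahan (compensated) summation, a standard technique for limiting that accumulation:

```python
# Rounding-error accumulation, and one standard mitigation: Kahan
# (compensated) summation carries a correction term for the low-order bits
# lost in each addition.
def naive_sum(values):
    total = 0.0
    for v in values:
        total += v
    return total

def kahan_sum(values):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y  # recovers the part of y rounded away in t
        total = t
    return total

values = [0.1] * 1000        # 0.1 is not exactly representable in binary
print(naive_sum(values))     # slightly off 100 on IEEE-754 doubles
print(kahan_sum(values))     # compensated result, much closer to 100
```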

  2. Absolute High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-Time Aerial Video Imagery for Geo-referenced Orthophoto Registration

    NASA Astrophysics Data System (ADS)

    Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter

    This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to pair an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data combined with live video images from an onboard camera to register local video images against a priori registered orthophotos. This yields a precise, drift-free absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.

  3. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
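A toy version of the sampling experiment can be written in a few lines: generate an autocorrelated "true" rain series, sample it at a satellite-like revisit interval, and look at the spread of the monthly-mean errors. The stochastic model below is far simpler than the GATE-tuned model in the paper; all parameters are illustrative.

```python
# Toy sampling-error experiment: a "true" rain proxy evolves hour by hour,
# the satellite sees it only every `revisit` hours, and the monthly mean of
# the samples is compared with the true monthly mean over many months.
import random
import statistics

def simulate_month(revisit, hours=720, seed=None):
    rng = random.Random(seed)
    rain, rate = [], 0.0
    for _ in range(hours):
        # first-order autoregressive rain-rate proxy, clipped at zero
        rate = max(0.0, 0.9 * rate + rng.gauss(0.1, 0.5))
        rain.append(rate)
    truth = statistics.mean(rain)
    sampled = statistics.mean(rain[::revisit])
    return truth, sampled

errors = []
for month in range(200):
    truth, sampled = simulate_month(revisit=12, seed=month)
    if truth > 0:
        errors.append((sampled - truth) / truth)

rms = statistics.pstdev(errors)
print(round(rms, 3))  # spread of relative sampling errors across months
```

The paper's finding that the error distribution is nearly normal even for a skewed rain-rate distribution can be checked with the same machinery by histogramming `errors`.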

  4. Real-time prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and error analysis

    NASA Astrophysics Data System (ADS)

    BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.

    2013-09-01

    The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
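The FTLE fields underlying these structures can be computed as in the following sketch, which substitutes a steady saddle flow with a known analytic answer (FTLE = 1 for any integration time) for a forecast wind field:

```python
# Finite-time Lyapunov exponent (FTLE) sketch for the steady saddle flow
# u = (x, -y). Trajectories are advected with RK4, the flow-map gradient is
# estimated by central finite differences, and the FTLE is
# (1/T) * ln(largest singular value of the flow-map gradient).
import math

def velocity(x, y, t):
    return x, -y                      # stand-in for a forecast wind field

def advect(x, y, t0, T, steps=100):
    h = T / steps
    t = t0
    for _ in range(steps):
        k1 = velocity(x, y, t)
        k2 = velocity(x + h/2*k1[0], y + h/2*k1[1], t + h/2)
        k3 = velocity(x + h/2*k2[0], y + h/2*k2[1], t + h/2)
        k4 = velocity(x + h*k3[0], y + h*k3[1], t + h)
        x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
    return x, y

def ftle(x, y, t0, T, d=1e-4):
    # flow-map gradient over a small finite-difference stencil
    xp = advect(x + d, y, t0, T); xm = advect(x - d, y, t0, T)
    yp = advect(x, y + d, t0, T); ym = advect(x, y - d, t0, T)
    a = (xp[0] - xm[0]) / (2*d); b = (yp[0] - ym[0]) / (2*d)
    c = (xp[1] - xm[1]) / (2*d); e = (yp[1] - ym[1]) / (2*d)
    # largest eigenvalue of the 2x2 Cauchy-Green tensor F^T F
    p = a*a + c*c; q = a*b + c*e; r = b*b + e*e
    lam = 0.5 * (p + r + math.sqrt((p - r)**2 + 4*q*q))
    return math.log(math.sqrt(lam)) / T

print(round(ftle(0.5, 0.5, 0.0, T=2.0), 4))
```

In the forecasting setting of the paper, `velocity` would interpolate gridded wind forecasts, and ridges of the FTLE field computed forward (backward) in time mark repelling (attracting) LCSs.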

  5. Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors.

    PubMed

    Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle

    2016-11-12

    With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) that accounts for motion errors in OS-BFSAR imaging processing is presented. This method not only precisely handles the large spatial variances, serious range-azimuth coupling, and motion errors, but also greatly improves imaging efficiency compared with the direct time-domain algorithm (DTDA). In addition, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements for the polar grids, accounting for motion errors, to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built and the DTDA for OS-BFSAR imaging is described. Second, the polar grids of the subimages are defined and the subaperture imaging in the ETDA is derived. The sampling requirements for the polar grids are derived from a bandwidth point of view. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency.

  6. Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors

    PubMed Central

    Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle

    2016-01-01

    With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) that accounts for motion errors in OS-BFSAR imaging processing is presented. This method not only precisely handles the large spatial variances, serious range-azimuth coupling, and motion errors, but also greatly improves imaging efficiency compared with the direct time-domain algorithm (DTDA). In addition, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements for the polar grids, accounting for motion errors, to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built and the DTDA for OS-BFSAR imaging is described. Second, the polar grids of the subimages are defined and the subaperture imaging in the ETDA is derived. The sampling requirements for the polar grids are derived from a bandwidth point of view. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency. PMID:27845757

  7. Simplified formula for mean cycle-slip time of phase-locked loops with steady-state phase error.

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1972-01-01

    Previous work shows that the mean time from lock to a slipped cycle of a phase-locked loop is given by a certain double integral. Accurate numerical evaluation of this formula for the second-order loop is extremely vexing because the difference between exponentially large quantities is involved. The presented article demonstrates a method in which a much-reduced precision program can be used to obtain the mean first-cycle slip time for a loop of arbitrary degree tracking at a specified SNR and steady-state phase error. It also presents a simple approximate formula that is asymptotically tight at higher loop SNR.

  8. Development and validation of a liquid chromatography/electrospray ionization time-of-flight mass spectrometry method for relative and absolute quantification of steroidal alkaloids in Fritillaria species.

    PubMed

    Zhou, Jian-Liang; Li, Ping; Li, Hui-Jun; Jiang, Yan; Ren, Mei-Ting; Liu, Ying

    2008-01-04

    Steroidal alkaloids are naturally occurring nitrogen-containing compounds in many edible or medicinal plants, such as potato, tomato, Fritillaria and American hellebore, which possess a variety of toxicological and pharmacological effects on humans. The aim of this study is to explore the potential of liquid chromatography/electrospray ionization time-of-flight mass spectrometry (LC/ESI-TOF-MS) for the determination of these important alkaloids in plant matrices. The applicability of the method has been demonstrated on 26 naturally occurring steroidal alkaloids in Fritillaria species. Accurate mass measurements within 4 ppm error were obtained for all the alkaloids detected in the various plant matrices, allowing unequivocal identification of the target steroidal alkaloids. The bunching factor of the mass spectrometer, an important parameter that significantly affects the precision and accuracy of the quantitative method, was first optimized in this work, and satisfactory precision and linearity were achieved through that optimization. The RSD ranges of intra-day and inter-day variability for all alkaloids decreased markedly from 41.8-159% and 13.2-140% to 0.32-7.98% and 2.37-16.1%, respectively, when the bunching factor was changed from 1 to its optimized value of 3. Linearity of response over more than two orders of magnitude was also demonstrated (regression coefficient >0.99). LC/TOF-MS detection improved sensitivity compared with previously applied LC (or GC) methods, with limits of detection down to 0.0014-0.0335 microg/ml. The results in this paper illustrate the robustness and applicability of LC/TOF-MS for steroidal alkaloid analysis in plant samples. In addition, relative quantitative determination of steroidal alkaloids against one popular, commercially available analyte, verticinone, was also investigated in order to address the lack of standards in phytochemical analysis. The

  9. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of spatial time-frequency distributions (STFDs) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is carried out between the method under consideration and the one based on the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression for the direction of arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on algorithm performance. In particular, for low signal-to-noise ratio (SNR) and high signal-to-sensor-perturbation ratio (SPR), the STFD method gives better performance, while for high SNR at the same SPR both methods perform similarly.

  10. Enhanced multi-hop operation using hybrid optoelectronic router with time-to-live-based selective forward error correction.

    PubMed

    Nakahara, Tatsushi; Suzaki, Yasumasa; Urata, Ryohei; Segawa, Toru; Ishikawa, Hiroshi; Takahashi, Ryo

    2011-12-12

    Multi-hop operation is demonstrated with a prototype hybrid optoelectronic router for optical packet switched networks. The router is realized by combining key optical/optoelectronic device/sub-system technologies and complementary metal-oxide-semiconductor electronics. Using the hop count monitored via the time-to-live field in the packet label, the optoelectronic buffer of the router performs buffering with forward error correction selectively for packets degraded due to multiple hopping every N hops. Experimental results for 10-Gb/s optical packets confirm that the scheme can expand the number of hops while keeping the bit error rate low without the need for optical 3R regenerators at each node.
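The selective-FEC idea can be sketched as follows: decide from the TTL-derived hop count whether to apply FEC, here with a classic Hamming(7,4) single-error corrector standing in for whatever code the router actually uses (the abstract does not specify it at this level of detail, and `every_n_hops` is an invented parameter):

```python
# Toy TTL-based selective forward error correction: FEC is applied only when
# the hop count read from the packet's time-to-live field is a multiple of N.
# A Hamming(7,4) code corrects any single flipped bit per 7-bit block.
def needs_fec(hop_count, every_n_hops=3):
    return hop_count > 0 and hop_count % every_n_hops == 0

def hamming74_encode(d):          # d: 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):         # c: 7 received bits; fixes one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2*s2 + 4*s3   # 1-based position of the error, 0 if none
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # extract the 4 data bits

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                      # one bit corrupted in transit
print(hamming74_correct(code) == data)
print(needs_fec(3), needs_fec(4))
```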

  11. Comparison of error-amplification and haptic-guidance training techniques for learning of a timing-based motor task by healthy individuals.

    PubMed

    Milot, Marie-Hélène; Marchal-Crespo, Laura; Green, Christopher S; Cramer, Steven C; Reinkensmeyer, David J

    2010-03-01

    Performance errors drive motor learning for many tasks. Some researchers have suggested that reducing performance errors with haptic guidance can benefit learning by demonstrating correct movements, while others have suggested that artificially increasing errors will force faster and more complete learning. This study compared the effects of these two techniques--haptic guidance and error amplification--as healthy subjects learned to play a computerized pinball-like game. The game required learning to press a button, using wrist movement, at the correct time to make a flipper hit a falling ball to a randomly positioned target. Errors were decreased or increased using a robotic device that retarded or accelerated wrist movement, based on sensed movement initiation timing errors. After training with either error amplification or haptic guidance, subjects significantly reduced their timing errors and generalized learning to untrained targets. However, for a subset of more skilled subjects, training with amplified errors produced significantly greater learning than training with the reduced errors associated with haptic guidance, while for a subset of less skilled subjects, training with haptic guidance seemed to benefit learning more. These results suggest that both techniques help enhance performance of a timing task, but learning is optimized by matching the training technique to the subject's baseline skill level.

  12. Absolute Standards for Climate Measurements

    NASA Astrophysics Data System (ADS)

    Leckey, J.

    2016-10-01

    In a world of changing climate, political uncertainty, and ever-changing budgets, the benefit of measurements traceable to SI standards increases by the day. To truly resolve climate change trends on a decadal time scale, on-orbit measurements need to be referenced to something that is both absolute and unchanging. One such mission is the Climate Absolute Radiance and Refractivity Observatory (CLARREO), which will measure a variety of climate variables with unprecedented accuracy to definitively quantify climate change. In the CLARREO mission, we will utilize phase change cells in which a material is melted to calibrate the temperature of a blackbody that can then be observed by a spectrometer. A material's melting point is an unchanging physical constant that, through a series of transfers, can ultimately calibrate a spectrometer on an absolute scale. CLARREO consists of two primary instruments: an infrared (IR) spectrometer and a reflected solar (RS) spectrometer. The mission will carry orbiting radiometers with sufficient accuracy to calibrate other space-based instrumentation, thus transferring the absolute traceability. The status of various mission options will be presented.

  13. A Note on Standard Errors for Survival Curves in Discrete-Time Survival Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Sklar, Jeffrey C.

    2005-01-01

    Cox (1972) proposed a discrete-time survival model that is somewhat analogous to the proportional hazards model for continuous time. Efron (1988) showed that this model can be estimated using ordinary logistic regression software, and Singer and Willett (1993) provided a detailed illustration of a particularly flexible form of the model that…
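The discrete-time setup can be made concrete: expand each subject's history into person-period records, estimate the hazard in each period, and build the survival curve from the hazards. Fitting a logistic regression to the same person-period records, as Efron showed, generalizes this to covariates. The data below are invented.

```python
# Discrete-time survival sketch: person-period expansion, discrete hazard
# h(t) = (events in period t) / (subjects at risk in period t), and
# survival curve S(t) = product over j <= t of (1 - h(j)).
def person_periods(durations, events):
    """Expand (duration, event) pairs into (period, event_indicator) rows."""
    rows = []
    for dur, ev in zip(durations, events):
        for t in range(1, dur + 1):
            rows.append((t, 1 if (ev and t == dur) else 0))
    return rows

def survival_curve(durations, events):
    rows = person_periods(durations, events)
    periods = sorted({t for t, _ in rows})
    surv, s = {}, 1.0
    for t in periods:
        at_risk = sum(1 for p, _ in rows if p == t)
        failed = sum(e for p, e in rows if p == t)
        s *= 1.0 - failed / at_risk
        surv[t] = s
    return surv

# 5 subjects: duration in periods; 1 = event observed, 0 = censored
durations = [2, 3, 3, 4, 4]
events    = [1, 1, 0, 1, 0]
print(survival_curve(durations, events))
```

The standard errors discussed in the record attach to exactly these `S(t)` estimates.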

  14. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy

    SciTech Connect

    Andersen, Claus E.; Nielsen, Soeren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-15

    Purpose: The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Methods: Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with {sup 192}Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from {+-}5 to {+-}15 mm) were simulated in software in order to assess the ability of the system to detect errors. Results: For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when

  15. Absolute and relative blindsight.

    PubMed

    Balsdon, Tarryn; Azzopardi, Paul

    2015-03-01

    The concept of relative blindsight, referring to a difference in conscious awareness between conditions otherwise matched for performance, was introduced by Lau and Passingham (2006) as a way of identifying the neural correlates of consciousness (NCC) in fMRI experiments. By analogy, absolute blindsight refers to a difference between performance and awareness regardless of whether it is possible to match performance across conditions. Here, we address the question of whether relative and absolute blindsight in normal observers can be accounted for by response bias. In our replication of Lau and Passingham's experiment, the relative blindsight effect was abolished when performance was assessed by means of a bias-free 2AFC task or when the criterion for awareness was varied. Furthermore, there was no evidence of either relative or absolute blindsight when both performance and awareness were assessed with bias-free measures derived from confidence ratings using signal detection theory. This suggests that both relative and absolute blindsight in normal observers amount to no more than variations in response bias in the assessment of performance and awareness. Consideration of the properties of psychometric functions reveals a number of ways in which relative and absolute blindsight could arise trivially and elucidates a basis for the distinction between Type 1 and Type 2 blindsight.
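The bias-free measures referred to here come from signal detection theory. A minimal sketch of sensitivity d' and criterion c follows; the observers and rates are hypothetical.

```python
# Signal detection theory sketch: d' = z(hit rate) - z(false-alarm rate)
# separates sensitivity from response bias, while the criterion
# c = -(z(H) + z(FA)) / 2 captures the bias itself. A pure bias shift moves
# c but leaves d' unchanged, which is the sense in which d' is "bias-free".
from statistics import NormalDist

def z(p):
    return NormalDist().inv_cdf(p)

def dprime(hit_rate, fa_rate):
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    return -(z(hit_rate) + z(fa_rate)) / 2

# Two hypothetical observers with (approximately) equal sensitivity but
# different bias: a neutral responder and a liberal responder.
print(round(dprime(0.69, 0.31), 2), round(criterion(0.69, 0.31), 2))
print(round(dprime(0.84, 0.50), 2), round(criterion(0.84, 0.50), 2))
```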

  16. Measuring Software Timing Errors in the Presentation of Visual Stimuli in Cognitive Neuroscience Experiments

    PubMed Central

    Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego; Matute, Helena

    2014-01-01

    Because of the features provided by an abundance of specialized experimental software packages, personal computers have become prominent and powerful tools in cognitive research. Most of these programs have mechanisms to control the precision and accuracy with which visual stimuli are presented as well as the response times. However, external factors, often related to the technology used to display the visual information, can have a noticeable impact on the actual performance and may be easily overlooked by researchers. The aim of this study is to measure the precision and accuracy of the timing mechanisms of some of the most popular software packages used in a typical laboratory scenario in order to assess whether presentation times configured by researchers do not differ from measured times more than what is expected due to the hardware limitations. Despite the apparent precision and accuracy of the results, important issues related to timing setups in the presentation of visual stimuli were found, and they should be taken into account by researchers in their experiments. PMID:24409318
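One display-related error source such studies must account for can be modeled directly: a 60 Hz display changes the image only at vsync, so requested durations are quantized to whole refresh intervals. Real systems add scheduling jitter on top of this sketch.

```python
# Vsync quantization: a display refreshing at 60 Hz can only swap the image
# at refresh boundaries, so a requested stimulus duration is rounded up to a
# whole number of frames; this alone puts a floor on presentation-timing
# error before any software-induced jitter is considered.
import math

REFRESH_HZ = 60
FRAME_MS = 1000.0 / REFRESH_HZ  # ~16.67 ms per refresh

def presented_duration(requested_ms):
    """Duration actually shown: requested time rounded up to full frames."""
    frames = max(1, math.ceil(requested_ms / FRAME_MS))
    return frames * FRAME_MS

for requested in (10, 50, 100):
    actual = presented_duration(requested)
    print(requested, round(actual, 2), round(actual - requested, 2))
```

Requesting a 10 ms stimulus on such a display therefore yields one full frame, a roughly 67% overshoot, which is why the study recommends verifying configured times against measured ones.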

  17. Absolute neutrino mass scale

    NASA Astrophysics Data System (ADS)

    Capelli, Silvia; Di Bari, Pasquale

    2013-04-01

    Neutrino oscillation experiments firmly established non-vanishing neutrino masses, a result that can be regarded as a strong motivation to extend the Standard Model. In spite of being the lightest massive particles, neutrinos likely represent an important bridge to new physics at very high energies and offer new opportunities to address some of the current cosmological puzzles, such as the matter-antimatter asymmetry of the Universe and Dark Matter. In this context, the determination of the absolute neutrino mass scale is a key issue within modern High Energy Physics. The talks in this parallel session describe the current exciting experimental activity aimed at determining the absolute neutrino mass scale and offer an overview of a few models beyond the Standard Model that have been proposed to explain the neutrino masses, giving a prediction for the absolute neutrino mass scale and solving the cosmological puzzles.

  18. Error analysis of real time and post processed orbit determination of GFO using GPS tracking

    NASA Technical Reports Server (NTRS)

    Schreiner, William S.

    1991-01-01

    The goal of the Navy's GEOSAT Follow-On (GFO) mission is to map the topography of the world's oceans in both real time (operational) and post processed modes. Currently, the best candidate for supplying the required orbit accuracy is the Global Positioning System (GPS). The purpose of this fellowship was to determine the expected orbit accuracy for GFO in both the real time and post-processed modes when using GPS tracking. This report presents the work completed through the ending date of the fellowship.

  19. To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard

    1998-01-01

    This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.
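
    The norm computation the abstract describes — difference the step response to approximate the impulse response, then take singular values of a Hankel matrix — can be sketched as follows. This is a generic illustration of the technique under simple assumptions (stable discrete-time system, uniformly sampled step data), not the paper's exact procedure.

    ```python
    import numpy as np

    def hankel_norm_from_step(step, n=20):
        """Estimate the Hankel norm of a stable discrete-time system from
        step-response samples: the impulse response is the first difference
        of the step response, and the Hankel norm is the largest singular
        value of the Hankel matrix built from those samples."""
        h = np.diff(np.asarray(step, dtype=float))  # impulse response h[1], h[2], ...
        H = np.array([[h[i + j] for j in range(n)] for i in range(n)])
        return np.linalg.svd(H, compute_uv=False)[0]

    # First-order example y[k+1] = a*y[k] + (1-a)*u[k]; its step response is
    # 1 - a**k, and its exact Hankel norm is (1-a)/(1-a**2) = 2/3 for a = 0.5.
    a = 0.5
    step = [1 - a ** k for k in range(64)]
    norm = hankel_norm_from_step(step)
    print(norm)
    ```

    Because the example system is first order, the Hankel matrix is numerically rank one and the estimate converges quickly as more samples are included.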

  20. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path found. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
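
    A rough Python sketch of the kind of traversal described above, resolving only the trailing symlink component and recording each intermediate link (unlike `os.path.realpath`, which also resolves symlinks in intermediate directories; the function name is illustrative):

    ```python
    import os
    import tempfile

    def absolute_path_chain(path):
        """Follow the trailing symlink component step by step, recording
        each intermediate link along the chain."""
        chain = [os.path.abspath(path)]
        seen = set()
        while os.path.islink(chain[-1]):
            if chain[-1] in seen:                  # guard against symlink loops
                raise OSError("symlink loop at " + chain[-1])
            seen.add(chain[-1])
            target = os.readlink(chain[-1])
            # A relative target is resolved against the directory of the link
            chain.append(os.path.normpath(
                os.path.join(os.path.dirname(chain[-1]), target)))
        return chain

    # Demo: file.txt <- link1 <- link2, then walk the chain from link2
    d = tempfile.mkdtemp()
    real = os.path.join(d, "file.txt")
    open(real, "w").close()
    os.symlink(real, os.path.join(d, "link1"))
    os.symlink(os.path.join(d, "link1"), os.path.join(d, "link2"))
    print(absolute_path_chain(os.path.join(d, "link2")))
    ```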

  1. An error-resilient approach for real-time packet communications by HF-channel diversity

    NASA Astrophysics Data System (ADS)

    Navarro, Antonio; Rodrigues, Rui; Angeja, Joao; Tavares, Joao; Carvalho, Luis; Perdigao, Fernando

    2004-08-01

    This paper evaluates the performance of a high frequency (HF) wireless network for transporting packet multimedia services. Beyond allowing civil/amateur communications, HF bands are also used for long-distance wireless military communications. Therefore, our work is based on the NATO link- and physical-layer standards STANAG 5066 and STANAG 4539, respectively. At each HF channel, a typical transmission bandwidth is about 3 kHz, with a resulting throughput bit rate of up to 12800 bps. This very low bit rate by itself imposes serious challenges for reliable and low-delay real-time multimedia communications. Thus, this paper discusses the performance of a real-time communication system designed to allow end-to-end communication through "best effort" networks. With HF channel diversity, the packet loss percentage, averaged over three channel conditions, is decreased by 16% in the channel SNR range from 0 to 45 dB.

  2. Time-variable Earth's albedo model characteristics and applications to satellite sampling errors

    NASA Technical Reports Server (NTRS)

    Bartman, F. L.

    1981-01-01

    Characteristics of the time-variable Earth albedo model are described. With the cloud-cover multiplying factor adjusted to produce a global annual average albedo of 30.3 percent, the global annual average cloud cover is 45.5 percent. Global annual average sunlit cloud cover is 48.5 percent; nighttime cloud cover is 42.7 percent. Month-to-month global average albedo is almost sinusoidal, with maxima in June and December and minima in April and October. Month-to-month variation of sunlit cloud cover is similar, but not in all details. The diurnal variation of global albedo is greatest from November to March; the corresponding variation of sunlit cloud cover is greatest from May to October. Annual average zonal albedos and monthly average zonal albedos are in good agreement with satellite-measured values, with notable differences in the polar regions in some months and at 15 deg S. The albedo of some 10 deg by 10 deg areas of the Earth versus zenith angle is described. Satellite albedo measurement sampling effects are described in local time and in Greenwich mean time.

  3. Estimating Absolute Site Effects

    SciTech Connect

    Malagnini, L; Mayeda, K M; Akinci, A; Bragato, P L

    2004-07-15

    The authors use previously determined direct-wave attenuation functions as well as stable, coda-derived source excitation spectra to isolate the absolute S-wave site effect for the horizontal and vertical components of weak ground motion. They used selected stations in the seismic network of the eastern Alps and find the following: (1) all ''hard rock'' sites exhibited deamplification phenomena due to absorption at frequencies ranging between 0.5 and 12 Hz (the available bandwidth), on both the horizontal and vertical components; (2) ''hard rock'' site transfer functions showed large variability at high frequencies; (3) vertical-motion site transfer functions show strong frequency dependence; (4) H/V spectral ratios do not reproduce the characteristics of the true horizontal site transfer functions; and (5) traditional, relative site terms obtained by using reference ''rock sites'' can be misleading in inferring the behavior of true site transfer functions, since most rock sites have non-flat responses due to shallow heterogeneities resulting from varying degrees of weathering. They also use their stable source spectra to estimate total radiated seismic energy and compare against previous results. They find that the earthquakes in this region exhibit non-constant dynamic stress-drop scaling, which gives further support for a fundamental difference in rupture dynamics between small and large earthquakes. To correct the vertical and horizontal S-wave spectra for attenuation, they used detailed regional attenuation functions derived by Malagnini et al. (2002), who determined frequency-dependent geometrical spreading and Q for the region. These corrections account for the gross path effects (i.e., all distance-dependent effects), although the source and site effects are still present in the distance-corrected spectra.
The main goal of this study is to isolate the absolute site effect (as a function of frequency) by removing the source spectrum (moment-rate spectrum) from

  4. Comparing response time, errors, and satisfaction between text-based and graphical user interfaces during nursing order tasks.

    PubMed

    Staggers, N; Kobus, D

    2000-01-01

    Despite the general adoption of graphical user interfaces (GUIs) in health care, few empirical data document the impact of this move on system users. This study compares two distinctly different user interfaces, a legacy text-based interface and a prototype graphical interface, for differences in nurses' response time (RT), errors, and satisfaction when the interfaces are used in the performance of computerized nursing order tasks. In a medical center on the East Coast of the United States, 98 randomly selected male and female nurses completed 40 tasks using each interface. Nurses completed four different types of order tasks (create, activate, modify, and discontinue). Using a repeated-measures and Latin square design, the study was counterbalanced for tasks, interface types, and blocks of trials. Overall, nurses had significantly faster response times (P < 0.0001) and fewer errors (P < 0.0001) using the prototype GUI than the text-based interface. The GUI was also rated significantly higher for satisfaction than the text system, and the GUI was faster to learn (P < 0.0001). Therefore, the results indicated that the use of a prototype GUI for nursing orders significantly enhances user performance and satisfaction. Consideration should be given to redesigning older user interfaces to create more modern ones by using human factors principles and input from user-centered focus groups. Future work should examine prospective nursing interfaces for highly complex interactions in computer-based patient records, detail the severity of errors made online, and explore designs to optimize interactions in life-critical systems.

  5. Statistical error analysis in CCD time-resolved photometry with applications to variable stars and quasars

    NASA Technical Reports Server (NTRS)

    Howell, Steve B.; Warnock, Archibald, III; Mitchell, Kenneth J.

    1988-01-01

    Differential photometric time series obtained from CCD frames are tested for intrinsic variability using a newly developed analysis of variance technique. In general, the objects used for differential photometry will not all be of equal magnitude, so the techniques derived here explicitly correct for differences in the measured variances due to photon statistics. Other random-noise terms are also considered. The technique tests for the presence of intrinsic variability without regard to its random or periodic nature. It is then applied to observations of the variable stars ZZ Ceti and US 943 and the active extragalactic objects OQ 530, US 211, US 844, LB 9743, and OJ 287.
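
    The photon-statistics correction described above can be illustrated with a toy variance-ratio test on synthetic data (not the paper's exact analysis-of-variance statistic): the observed variance of a differential light curve is compared with the variance expected from photon noise alone, so that a fainter comparison star's larger photon scatter does not masquerade as intrinsic variability.

    ```python
    import numpy as np

    def variability_ratio(diff_mags, photon_sigma):
        """Ratio of observed scatter to the scatter expected from photon
        statistics alone; values well above 1 suggest intrinsic variability.
        `photon_sigma` holds the per-frame photon-noise errors in magnitudes."""
        observed_var = np.var(diff_mags, ddof=1)
        expected_var = np.mean(np.square(photon_sigma))
        return observed_var / expected_var

    rng = np.random.default_rng(1)
    sigma = np.full(100, 0.01)                       # 10 mmag photon noise per frame
    quiet = rng.normal(0.0, 0.01, size=100)          # non-variable comparison star
    variable = quiet + 0.05 * np.sin(np.linspace(0, 6, 100))  # intrinsic signal added
    r_quiet = variability_ratio(quiet, sigma)
    r_var = variability_ratio(variable, sigma)
    print(r_quiet, r_var)
    ```

    Note the test is agnostic about whether the excess variance is periodic or random, matching the abstract's point.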

  6. [The error analysis and experimental verification of laser radar spectrum detection and terahertz time domain spectroscopy].

    PubMed

    Liu, Wen-Tao; Li, Jing-Wen; Sun, Zhi-Hui

    2010-03-01

    Terahertz waves (THz, T-rays) lie between the far-infrared and microwave regions of the electromagnetic spectrum, with frequencies from 0.1 to 10 THz. Many chemical agents and explosives show characteristic spectral features in the terahertz range. Compared with conventional methods of detecting a variety of threats, such as weapons and chemical agents, THz radiation is low-frequency and non-ionizing, and does not give rise to safety concerns. The present paper summarizes the latest progress in the application of terahertz time-domain spectroscopy (THz-TDS) to chemical agents and explosives. A device for laser radar detection and real-time spectrum measurement was designed, which measures the laser spectrum on the basis of Fourier optics and optical signal processing. A wedge interferometer was used as the beam splitter to remove the background light, detect the laser, and measure the spectrum. The results indicate that 10 ns laser radar pulses can be detected; many factors affecting the experiments are also discussed. The combination of laser radar spectrum detection, THz-TDS, modern pattern recognition, and signal processing technology is the developing trend for remote detection of chemical agents and explosives.

  7. Measurement of absolute frequency of continuous-wave terahertz radiation in real time using a free-running, dual-wavelength mode-locked, erbium-doped fibre laser

    NASA Astrophysics Data System (ADS)

    Hu, Guoqing; Mizuguchi, Tatsuya; Zhao, Xin; Minamikawa, Takeo; Mizuno, Takahiko; Yang, Yuli; Li, Cui; Bai, Ming; Zheng, Zheng; Yasui, Takeshi

    2017-02-01

    A single, free-running, dual-wavelength mode-locked, erbium-doped fibre laser was exploited to measure the absolute frequency of continuous-wave terahertz (CW-THz) radiation in real time using dual THz combs of photo-carriers (dual PC-THz combs). Two independent mode-locked laser beams with different wavelengths and different repetition frequencies were generated from this laser and were used to generate dual PC-THz combs having different frequency spacings in photoconductive antennae. Based on the dual PC-THz combs, the absolute frequency of CW-THz radiation was determined with a relative precision of 1.2 × 10^-9 and a relative accuracy of 1.4 × 10^-9 at a sampling rate of 100 Hz. Real-time determination of the absolute frequency of CW-THz radiation varying over a few tens of GHz was also demonstrated. Use of a single dual-wavelength mode-locked fibre laser, in place of dual mode-locked lasers, greatly reduced the size, complexity, and cost of the measurement system while maintaining the real-time capability and high measurement precision.

  8. Measurement of absolute frequency of continuous-wave terahertz radiation in real time using a free-running, dual-wavelength mode-locked, erbium-doped fibre laser

    PubMed Central

    Hu, Guoqing; Mizuguchi, Tatsuya; Zhao, Xin; Minamikawa, Takeo; Mizuno, Takahiko; Yang, Yuli; Li, Cui; Bai, Ming; Zheng, Zheng; Yasui, Takeshi

    2017-01-01

    A single, free-running, dual-wavelength mode-locked, erbium-doped fibre laser was exploited to measure the absolute frequency of continuous-wave terahertz (CW-THz) radiation in real time using dual THz combs of photo-carriers (dual PC-THz combs). Two independent mode-locked laser beams with different wavelengths and different repetition frequencies were generated from this laser and were used to generate dual PC-THz combs having different frequency spacings in photoconductive antennae. Based on the dual PC-THz combs, the absolute frequency of CW-THz radiation was determined with a relative precision of 1.2 × 10^-9 and a relative accuracy of 1.4 × 10^-9 at a sampling rate of 100 Hz. Real-time determination of the absolute frequency of CW-THz radiation varying over a few tens of GHz was also demonstrated. Use of a single dual-wavelength mode-locked fibre laser, in place of dual mode-locked lasers, greatly reduced the size, complexity, and cost of the measurement system while maintaining the real-time capability and high measurement precision. PMID:28186148
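
    The mode-number ambiguity that dual combs with different spacings resolve can be illustrated with toy numbers (MHz-scale values chosen for readability, not the paper's actual THz frequencies or repetition rates): each comb fixes only the distance from the CW line to its nearest mode, and requiring consistency with two different spacings singles out the absolute frequency within a coarse prior window.

    ```python
    def resolve_cw_frequency(fb1, fb2, fr1, fr2, f_min, f_max, tol=0.1):
        """Find frequencies in [f_min, f_max] consistent with beat notes fb1
        and fb2 measured against two combs of spacing fr1 and fr2. Each beat
        fixes only the distance to the nearest comb mode; two different
        spacings remove the mode-number ambiguity (all values in MHz)."""
        hits = []
        n1 = int(f_min // fr1)
        while n1 * fr1 <= f_max + fr1:
            for s1 in (+1, -1):
                f = n1 * fr1 + s1 * fb1            # candidate from comb 1
                if f_min <= f <= f_max:
                    n2 = round(f / fr2)            # nearest mode of comb 2
                    if min(abs(n2 * fr2 + s2 * fb2 - f) for s2 in (+1, -1)) < tol:
                        hits.append(f)
            n1 += 1
        return hits

    fr1, fr2 = 100.0, 100.3                        # comb spacings (MHz)
    f_true = 25013.0                               # "unknown" CW frequency to recover
    fb1 = abs(f_true - round(f_true / fr1) * fr1)  # beat against comb 1: 13.0
    fb2 = abs(f_true - round(f_true / fr2) * fr2)  # beat against comb 2: 38.3
    hits = resolve_cw_frequency(fb1, fb2, fr1, fr2, 23000.0, 27000.0)
    print(hits)
    ```

    With a single comb, every mode index (and beat sign) would be an equally valid answer; the second spacing collapses the candidate set to one frequency inside the window.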

  9. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    SciTech Connect

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A

  10. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
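
    The core idea of comparing range profiles across slow time to estimate a bulk range displacement can be sketched with a simple cross-correlation peak search (an illustrative toy at integer-bin resolution, not the patented algorithm):

    ```python
    import numpy as np

    def profile_shift(profile_a, profile_b):
        """Estimate the integer-bin range shift between two magnitude range
        profiles by locating the peak of their cross-correlation."""
        a = np.abs(profile_a) - np.mean(np.abs(profile_a))
        b = np.abs(profile_b) - np.mean(np.abs(profile_b))
        corr = np.correlate(b, a, mode="full")
        return int(np.argmax(corr)) - (len(a) - 1)

    # A toy scene whose profile moves by 3 range bins between slow-time pulses
    scene = np.zeros(64)
    scene[20] = 5.0
    scene[35] = 2.0
    shifted = np.roll(scene, 3)
    shift = profile_shift(scene, shifted)
    print(shift)
    ```

    Once such shifts are estimated across slow time, the corresponding phase and frequency corrections can be applied to the uncompressed data before compression, as the abstract describes.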

  11. Audibility of dispersion error in room acoustic finite-difference time-domain simulation in the presence of absorption of air.

    PubMed

    Saarelma, Jukka; Savioja, Lauri

    2016-12-01

    The finite-difference time-domain method has gained increasing interest for room acoustic prediction use. A well-known limitation of the method is a frequency and direction dependent dispersion error. In this study, the audibility of dispersion error in the presence of air absorption is measured. The results indicate that the dispersion error in the worst-case direction of the studied scheme gets masked by the air absorption at a phase velocity error percentage of 0.28% at the frequency of 20 kHz.
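
    The frequency-dependent dispersion error mentioned above comes from the discrete dispersion relation of the scheme. For the 1-D analogue of the standard Yee/leapfrog scheme it can be evaluated directly; this is a sketch under that assumption, and the study's scheme and worst-case direction may differ.

    ```python
    import numpy as np

    def phase_velocity_error(ppw, courant=0.5):
        """Relative phase-velocity error of the 1-D leapfrog scheme at `ppw`
        points per wavelength with Courant number S = c*dt/dx, from the
        discrete dispersion relation sin(w*dt/2) = S * sin(k*dx/2)."""
        k_dx = 2 * np.pi / ppw                       # k * dx
        w_dt = 2 * np.arcsin(courant * np.sin(k_dx / 2))
        vp_over_c = w_dt / (courant * k_dx)          # (w/k) / c
        return abs(1.0 - vp_over_c)

    # The error shrinks as the wave is sampled more finely in space
    for ppw in (5, 10, 20):
        print(ppw, phase_velocity_error(ppw))
    ```

    The percentage error quoted in the abstract (0.28%) corresponds to this kind of phase-velocity deviation evaluated in the scheme's worst-case propagation direction.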

  12. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    EPA Science Inventory

    BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  13. Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series

    PubMed Central

    Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki

    2015-01-01

    Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system. PMID:25779909

  14. Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series

    NASA Astrophysics Data System (ADS)

    Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki

    2015-03-01

    Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system.

  15. A real-time error-free color-correction facility for digital consumers

    NASA Astrophysics Data System (ADS)

    Shaw, Rodney

    2008-01-01

    It has been well known since the earliest days of color photography that color balance in general, and facial reproduction (flesh tones) in particular, are of dominant interest to the consumer, and significant research resources have been expended in satisfying this need. The general problem is a difficult one, spanning the factors that govern perception and personal preference, the physics and chemistry of color reproduction, and the wide field of color measurement, specification, and analysis. However, with the advent of digital photography and its widespread acceptance in the consumer market, and with the possibility of a much greater degree of individual control over color reproduction, the field is taking on a new consumer-driven impetus, and the provision of user facilities for preferred color choice now constitutes an intense field of research. In addition, thanks to the conveniences of digital technology, the collection of large databases and statistics relating to individual color preferences has now become a relatively straightforward operation. Using a consumer-preference approach of this type, we have developed a user-friendly facility whereby unskilled consumers may manipulate the color of their personal digital images according to their preferred choice. By virtue of its ease of operation and the real-time nature of the color-correction transforms, this facility can readily be inserted anywhere a consumer interacts with a digital image, from camera, printer, or scanner to web or photo kiosk. Here the underlying scientific principles are explored in detail and related to the practical color-preference outcomes. Examples are given of the application to the correction of images with unsatisfactory color balance, especially flesh tones and faces, and the nature of the consumer controls and their corresponding image transformations is explored.

  16. Developing control charts to review and monitor medication errors.

    PubMed

    Ciminera, J L; Lease, M P

    1992-03-01

    There is a need to monitor reported medication errors in a hospital setting. Because the quantity of errors varies due to external reporting, quantifying the data is extremely difficult. Typically, these errors are reviewed using classification systems that often have wide variations in the numbers per class per month. The authors recommend the use of control charts to review historical data and to monitor future data. The procedure they have adopted is a modification of schemes using absolute (i.e., positive) values of successive differences to estimate the standard deviation when only single incidence values are available in time rather than sample averages, and when many successive differences may be zero.
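
    The successive-difference scheme described above is the basis of a standard individuals/moving-range control chart. A minimal sketch with hypothetical monthly counts (the classical constants: sigma is estimated as MR-bar / 1.128, so the 3-sigma limits use the familiar 2.66 factor; the authors' modification for many zero differences is not reproduced here):

    ```python
    import statistics

    def individuals_control_limits(values):
        """Individuals (X) chart limits from the average moving range.
        sigma is estimated as MR-bar / d2 with d2 = 1.128 for spans of 2,
        so the 3-sigma limits are x-bar +/- 2.66 * MR-bar."""
        moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
        mr_bar = statistics.mean(moving_ranges)
        x_bar = statistics.mean(values)
        return x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar

    # Hypothetical monthly counts of one class of reported medication errors
    monthly_errors = [4, 6, 5, 7, 3, 5, 6, 4, 5, 8]
    lcl, ucl = individuals_control_limits(monthly_errors)
    print(lcl, ucl)
    ```

    For counts, a negative lower limit is conventionally clipped to zero; a future month falling above the upper limit would flag a change worth investigating rather than routine variation.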

  17. Measurement of absolute concentrations of individual compounds in metabolite mixtures by gradient-selective time-zero 1H-13C HSQC with two concentration references and fast maximum likelihood reconstruction analysis.

    PubMed

    Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L

    2011-12-15

    Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
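
    The two-reference calibration step can be sketched in a few lines (hypothetical peak volumes and concentrations; in the real workflow the HSQC(0) volumes are first obtained by extrapolating the repetition time to zero):

    ```python
    def concentration_from_volume(volume, ref_low, ref_high):
        """Convert an HSQC(0)-style peak volume to an absolute concentration
        using the line through two internal-reference (volume, concentration)
        pairs at the low and high ends of the calibration range."""
        v1, c1 = ref_low
        v2, c2 = ref_high
        slope = (c2 - c1) / (v2 - v1)
        return c1 + slope * (volume - v1)

    # Hypothetical references: a DSS-like standard at 0.5 mM and a
    # MES-like standard at 50 mM (volumes in arbitrary integration units)
    dss = (1.2e4, 0.5)
    mes = (1.2e6, 50.0)
    conc = concentration_from_volume(6.0e5, dss, mes)
    print(conc)
    ```

    Anchoring the line at both ends of the range is what makes the fit better defined than a single-reference proportionality, as the abstract notes.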

  18. Analysis of absolute flatness testing in sub-stitching interferometer

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Xu, Fuchao; Xie, Weimin; Xing, Tingwen

    2016-09-01

    Sub-aperture stitching is an effective way to extend the lateral and vertical dynamic range of a conventional interferometer. High test accuracy can be achieved by removing the error of the reference surface with an absolute testing method. When the testing accuracy (repeatability and reproducibility) approaches 1 nm, factors other than the reference surface also affect the measuring accuracy, such as the environment, zoom magnification, stitching precision, tooling and fixtures, the characteristics of the optical materials, and so on. In a class-1000 cleanroom, we establish a well-controlled environment with good long-term stability: the temperature is held at 22 °C +/- 0.02 °C, and the humidity and noise are controlled within set ranges. We establish a stitching system in this cleanroom. A vibration testing system is used to monitor vibration, and an air-pressure testing system is also used. In the motion system, we keep the tilt error below 4 arcseconds to reduce its contribution; the angle error can be tested by an autocollimator and a double-grating reading head.

  19. Time-resolved absolute measurements by electro-optic effect of giant electromagnetic pulses due to laser-plasma interaction in nanosecond regime

    PubMed Central

    Consoli, F.; De Angelis, R.; Duvillaret, L.; Andreoli, P. L.; Cipriani, M.; Cristofari, G.; Di Giorgio, G.; Ingenito, F.; Verona, C.

    2016-01-01

    We describe the first electro-optical absolute measurements of electromagnetic pulses (EMPs) generated by laser-plasma interaction in the nanosecond regime. Laser intensities are inertial-confinement-fusion (ICF) relevant and the wavelength is 1054 nm. These are the first direct EMP amplitude measurements with the detector rather close to, and in direct view of, the plasma. A maximum field of 261 kV/m was measured, two orders of magnitude higher than previous measurements by conductive probes on nanosecond-regime lasers with much higher energy. The analysis of measurements and of particle-in-cell simulations indicates that signals match the emission of charged particles detected in the same experiment, and suggests that anisotropic particle emission from the target, X-ray photoionization, and charge implantation on surfaces directly exposed to the plasma could be important EMP contributions. The significant information achieved on EMP features and sources is crucial for future laser-plasma acceleration and inertial-confinement-fusion facilities and for the use of EMPs as an effective plasma diagnostic. It also opens the way to remarkable applications of laser-plasma interaction as an intense source of RF-microwaves for studies on materials and devices, EMP radiation hardening, and electromagnetic compatibility. The demonstrated effectiveness of electric-field detection by the electro-optic effect in the laser-plasma context leads to great potential for the characterization of laser-plasma interaction and of the generated terahertz radiation. PMID:27301704

  20. Time-resolved absolute measurements by electro-optic effect of giant electromagnetic pulses due to laser-plasma interaction in nanosecond regime

    NASA Astrophysics Data System (ADS)

    Consoli, F.; de Angelis, R.; Duvillaret, L.; Andreoli, P. L.; Cipriani, M.; Cristofari, G.; di Giorgio, G.; Ingenito, F.; Verona, C.

    2016-06-01

    We describe the first electro-optical absolute measurements of electromagnetic pulses (EMPs) generated by laser-plasma interaction in the nanosecond regime. Laser intensities are inertial-confinement-fusion (ICF) relevant and the wavelength is 1054 nm. These are the first direct EMP amplitude measurements with the detector rather close to, and in direct view of, the plasma. A maximum field of 261 kV/m was measured, two orders of magnitude higher than previous measurements by conductive probes on nanosecond-regime lasers with much higher energy. The analysis of measurements and of particle-in-cell simulations indicates that signals match the emission of charged particles detected in the same experiment, and suggests that anisotropic particle emission from the target, X-ray photoionization, and charge implantation on surfaces directly exposed to the plasma could be important EMP contributions. The significant information achieved on EMP features and sources is crucial for future laser-plasma acceleration and inertial-confinement-fusion facilities and for the use of EMPs as an effective plasma diagnostic. It also opens the way to remarkable applications of laser-plasma interaction as an intense source of RF-microwaves for studies on materials and devices, EMP radiation hardening, and electromagnetic compatibility. The demonstrated effectiveness of electric-field detection by the electro-optic effect in the laser-plasma context leads to great potential for the characterization of laser-plasma interaction and of the generated terahertz radiation.

  1. Electronic Absolute Cartesian Autocollimator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2006-01-01

    An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. The direction of the displacement is perpendicular to the axis about which the

  2. Absolute airborne gravimetry

    NASA Astrophysics Data System (ADS)

    Baumann, Henri

    This work is a feasibility study of a first-stage prototype airborne absolute gravimeter system. In contrast to relative systems, which use spring gravimeters, the measurements acquired by absolute systems are uncorrelated, and the instrument does not suffer from problems such as instrumental drift, the frequency response of the spring, and possible variation of the calibration factor. The major problem we had to resolve was reducing the influence of the non-gravitational accelerations included in the measurements. We studied two different approaches: direct mechanical filtering and post-processing digital compensation. The first part of the work describes in detail the different passive mechanical vibration filters, which were studied and tested in the laboratory and later in a moving small truck. For these tests, as well as for the airborne measurements, an FG5-L absolute gravimeter from Micro-G Ltd was used together with a Litton-200 inertial navigation system, an EpiSensor vertical accelerometer, and GPS receivers for positioning. These tests showed that only the use of an optical table gives acceptable results; however, it cannot compensate for the effects of the accelerations of the drag-free chamber. The second part describes the data-processing strategy, which is based on modeling the perturbing accelerations by means of GPS, EpiSensor and INS data. The third part describes the airborne experiment in detail, from the mounting in the aircraft and the data processing to the various problems encountered while evaluating the quality and accuracy of the results. The data-processing section explains the steps leading from the raw apparent gravity data and the trajectories to the estimation of the true gravity. 
A comparison between the estimated airborne data and those obtained by upward continuation of ground data to flight altitude shows that airborne absolute gravimetry is feasible and
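
    The digital-compensation idea above can be sketched numerically. The following is a minimal illustration with synthetic numbers (not the thesis' data): the gravimeter senses apparent gravity g + h''(t), where h''(t) is the aircraft's vertical acceleration, and h''(t) is estimated from GPS altitudes by double differentiation and subtracted.

```python
# Minimal sketch (hypothetical numbers) of post-processing compensation in
# airborne gravimetry: subtract GPS-derived vertical acceleration from the
# apparent-gravity reading.
import math

G_TRUE = 9.81          # m/s^2, synthetic "true" gravity
DT = 0.1               # s, GPS sampling interval
N = 601

# Synthetic GPS altitude: slow sinusoidal aircraft motion (5 m amplitude)
h = [1000.0 + 5.0 * math.sin(0.4 * DT * i) for i in range(N)]
# Synthetic gravimeter reading: true gravity plus vertical acceleration
accel_true = [-5.0 * 0.4 ** 2 * math.sin(0.4 * DT * i) for i in range(N)]
apparent = [G_TRUE + a for a in accel_true]

# Second central difference of GPS altitude approximates h''(t)
accel_gps = [(h[i - 1] - 2.0 * h[i] + h[i + 1]) / DT ** 2 for i in range(1, N - 1)]

# Compensated gravity estimate, averaged over the interior samples
g_est = sum(apparent[i] - accel_gps[i - 1] for i in range(1, N - 1)) / (N - 2)
print(g_est)  # close to 9.81
```

In practice the GPS-derived acceleration is far noisier than this, which is why the thesis combines GPS with INS and accelerometer data.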

  3. Reiterative deconvolution: New technique for time of flight estimation errors reduction in case of close proximity of two reflections.

    PubMed

    Svilainis, Linas; Lukoseviciute, Kristina; Liaukonis, Dobilas

    2017-04-01

    ToF estimation errors for two reflections have been analyzed. A case of high signal-to-noise ratio was assumed, for which accuracy on the order of a few nanoseconds can be achieved. It was shown that additional bias errors are introduced into the ToF estimator when another reflection is in close temporal proximity to the reflection of interest. The research demonstrates that iterative deconvolution does not improve the accuracy significantly. A novel technique, reiterative deconvolution, is suggested to address this issue. Simulation and experimental performance evaluations are presented. With the new technique, the bias error diminishes quickly with every reiteration and falls below the other errors after a few reiterations.
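
    The bias mechanism can be reproduced with a small simulation (synthetic pulse in sample units, not the authors' signals): a matched-filter ToF estimate with sub-sample parabolic interpolation is essentially exact for an isolated echo but is pulled off the true arrival time when a second echo overlaps it.

```python
# Sketch of ToF bias from an overlapping second reflection, assuming a
# matched-filter estimator (the abstract's estimators may differ).
import math

def pulse(x):
    """Gaussian-windowed sine burst a few tens of samples wide."""
    return math.exp(-(x / 4.0) ** 2) * math.sin(2.0 * math.pi * 0.2 * x)

def echo_signal(n, t2):
    """First echo at sample 100, a weaker second echo at sample t2."""
    return pulse(n - 100.0) + 0.6 * pulse(n - t2)

def matched_tof(t2, length=512, half=20):
    s = [echo_signal(n, t2) for n in range(length)]
    tpl = [(x, pulse(float(x))) for x in range(-half, half + 1)]
    # Matched filter: correlate the signal with the known pulse shape
    c = [sum(s[k + x] * p for x, p in tpl if 0 <= k + x < length)
         for k in range(length)]
    k0 = max(range(1, length - 1), key=lambda k: c[k])
    # Three-point parabolic interpolation for a sub-sample ToF estimate
    denom = c[k0 - 1] - 2.0 * c[k0] + c[k0 + 1]
    return k0 + 0.5 * (c[k0 - 1] - c[k0 + 1]) / denom

tof_far = matched_tof(300.0)    # second echo far away: essentially unbiased
tof_close = matched_tof(108.0)  # second echo overlapping: biased estimate
print(tof_far, tof_close)
```

The reiterative deconvolution of the paper aims to remove exactly this kind of overlap-induced bias.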

  4. A hardware error estimate for floating-point computations

    NASA Astrophysics Data System (ADS)

    Lang, Tomás; Bruguera, Javier D.

    2008-08-01

    We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an estimation of the accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large errors due to roundoff occur in some computations, producing inaccurate results. However, usually these large errors occur only for some values of the data, so that the result is accurate in most executions. As a consequence, the computation of an estimate of the error during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the solution is to perform an error analysis. However, this analysis is complex or impossible in some cases, and it produces a worst-case error bound. The proposed approach is to keep with each value an estimate of its error, which is computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), the computation of the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from the same accuracy problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot always be achieved with an inexact estimate, we aim to assure the first property always and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However
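
    A software analogue of the scheme can be written in a few lines (addition only; the class name and demo values are ours, not the paper's): each value carries a signed error estimate, updated as the propagated error of the operands plus the generated rounding error, which for double-precision addition can be recovered exactly with Knuth's two-sum trick.

```python
# Software sketch of "keep with each value an estimate of its error":
# propagated error + generated rounding error, for addition only.
import math

class Tracked:
    """A float paired with a signed roundoff-error estimate (not a bound)."""
    def __init__(self, value, err=0.0):
        self.value = value  # stored double-precision value
        self.err = err      # estimate of (exact - stored)

    def __add__(self, other):
        s = self.value + other.value
        # Knuth's two-sum: recover the exact rounding error of this addition
        t = s - self.value
        generated = (self.value - (s - t)) + (other.value - t)
        # Propagated errors of the operands plus the generated error
        return Tracked(s, self.err + other.err + generated)

# Accumulate 0.1 a thousand times: the naive sum drifts away from the exact
# sum of the summands, and the running error estimate tracks that drift.
total = Tracked(0.0)
for _ in range(1000):
    total = total + Tracked(0.1)
print(total.value, total.err, total.value + total.err)
```

Because the per-operation residuals are signed, errors of opposite sign cancel in the estimate, just as the abstract describes.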

  5. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantization errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
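
    The effect is easy to reproduce. Below is a minimal pure-Python sketch with a synthetic RR series; SRN is computed here as the ApEn tolerance r divided by the resolution, which is our reading of the abstract's figure of merit, not necessarily the paper's exact definition.

```python
# Sketch: ApEn of the same RR series at fine (1 ms) vs coarse (8 ms)
# resolution, plus an SRN-style figure of merit (assumed to be r/resolution).
import math
import random

def apen(u, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a sequence (pure Python, O(N^2))."""
    if r is None:
        mean = sum(u) / len(u)
        r = 0.2 * (sum((x - mean) ** 2 for x in u) / len(u)) ** 0.5
    def phi(mm):
        n = len(u) - mm + 1
        vecs = [u[i:i + mm] for i in range(n)]
        return sum(
            math.log(sum(1 for w in vecs
                         if max(abs(a - b) for a, b in zip(v, w)) <= r) / n)
            for v in vecs) / n
    return phi(m) - phi(m + 1)

def quantize(u, res):
    """Round each sample to the nearest multiple of the resolution."""
    return [round(x / res) * res for x in u]

random.seed(0)
rr = [0.8 + 0.05 * math.sin(0.1 * i) + random.gauss(0.0, 0.02)
      for i in range(150)]  # synthetic RR intervals, seconds

mean = sum(rr) / len(rr)
sd = (sum((x - mean) ** 2 for x in rr) / len(rr)) ** 0.5
tol = 0.2 * sd                 # ApEn neighborhood r
srn_fine = tol / 0.001         # 1 ms resolution: SRN well above 1
srn_coarse = tol / 0.008       # 8 ms resolution: SRN near 1, risky regime
apen_fine = apen(quantize(rr, 0.001))
apen_coarse = apen(quantize(rr, 0.008))
print(srn_fine, srn_coarse, apen_fine, apen_coarse)
```

The coarsely quantized series yields a visibly different ApEn value, which is the bias the paper quantifies.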

  6. Identifying Autocorrelation Generated by Various Error Processes in Interrupted Time-Series Regression Designs: A Comparison of AR1 and Portmanteau Tests

    ERIC Educational Resources Information Center

    Huitema, Bradley E.; McKean, Joseph W.

    2007-01-01

    Regression models used in the analysis of interrupted time-series designs assume statistically independent errors. Four methods of evaluating this assumption are the Durbin-Watson (D-W), Huitema-McKean (H-M), Box-Pierce (B-P), and Ljung-Box (L-B) tests. These tests were compared with respect to Type I error and power under a wide variety of error…

  7. Absolute Equilibrium Entropy

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1997-01-01

    The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.

  8. Absolute multilateration between spheres

    NASA Astrophysics Data System (ADS)

    Muelaner, Jody; Wadsworth, William; Azini, Maria; Mullineux, Glen; Hughes, Ben; Reichold, Armin

    2017-04-01

    Environmental effects typically limit the accuracy of large-scale coordinate measurements in applications such as aircraft production and particle accelerator alignment. This paper presents an initial design for a novel measurement technique, with analysis and simulation showing that it could overcome the environmental limitations to provide a step change in large-scale coordinate measurement accuracy. Referred to as absolute multilateration between spheres (AMS), it involves using absolute distance interferometry to directly measure the distances between pairs of plain steel spheres. A large portion of each sphere remains accessible as a reference datum, while the laser path can be shielded from environmental disturbances. As a single scale bar this can provide accurate scale information to be used for instrument verification or network measurement scaling. Since spheres can be simultaneously measured from multiple directions, it also allows highly accurate multilateration-based coordinate measurements to act as a large-scale datum structure for localized measurements, or to be integrated within assembly tooling, coordinate measurement machines or robotic machinery. Analysis and simulation show that AMS can be self-aligned to achieve a theoretical combined standard uncertainty of approximately 0.49 µm for the independent uncertainties of an individual 1 m scale bar. It is also shown that, combined with a 1 µm m-1 standard uncertainty in the central reference system, this could result in coordinate standard uncertainty magnitudes of 42 µm over a slender 1 m by 20 m network. This would be a sufficient step change in accuracy to enable next-generation aerospace structures with natural laminar flow and part-to-part interchangeability.

  9. On the estimation errors of KM and V from time-course experiments using the Michaelis-Menten equation.

    PubMed

    Stroberg, Wylie; Schnell, Santiago

    2016-12-01

    The conditions under which the Michaelis-Menten equation accurately captures the steady-state kinetics of a simple enzyme-catalyzed reaction are contrasted with the conditions under which the same equation can be used to estimate parameters, KM and V, from progress curve data. Validity of the underlying assumptions leading to the Michaelis-Menten equation is shown to be necessary, but not sufficient, to guarantee accurate estimation of KM and V. Detailed error analysis and numerical "experiments" show the experimental conditions required for the independent estimation of both KM and V from progress curves. A timescale, tQ, measuring the portion of the time course over which the progress curve exhibits substantial curvature provides a novel criterion for accurate estimation of KM and V from a progress curve experiment. It is found that, if the initial substrate concentration is of the same order of magnitude as KM, the estimated values of KM and V will correspond to their true values, calculated from the microscopic rate constants of the corresponding mass-action system, only so long as the initial enzyme concentration is less than KM.
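
    The estimation problem can be sketched as follows (illustrative code, not the authors'): generate a noiseless synthetic progress curve from known parameters by integrating dS/dt = -V*S/(KM + S) with small Euler steps, then recover KM and V by least squares over a small parameter grid.

```python
# Sketch of progress-curve fitting for Michaelis-Menten kinetics,
# with S0 chosen near KM (the regime the abstract recommends).

def progress_curve(s0, v, km, dt=1e-3, n_steps=10000, sample_every=200):
    """Euler-integrate dS/dt = -V*S/(KM+S); return S sampled every few steps."""
    s, out = s0, []
    for step in range(n_steps):
        if step % sample_every == 0:
            out.append(s)
        s -= dt * v * s / (km + s)
    return out

# "True" parameters for the synthetic experiment
V_TRUE, KM_TRUE, S0 = 1.0, 2.0, 2.5
data = progress_curve(S0, V_TRUE, KM_TRUE)

# Least-squares grid search over candidate (V, KM) pairs
best = None
for v in (0.5, 1.0, 1.5):
    for km in (1.0, 2.0, 4.0):
        model = progress_curve(S0, v, km)
        sse = sum((a - b) ** 2 for a, b in zip(data, model))
        if best is None or sse < best[0]:
            best = (sse, v, km)
print(best)  # SSE is zero at (V, KM) = (1.0, 2.0)
```

With noisy data and a progress curve that lacks substantial curvature (small tQ), many (V, KM) pairs fit nearly equally well, which is the identifiability problem the paper analyzes.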

  10. The generalized STAR(1,1) modeling with time correlated errors to red-chili weekly prices of some traditional markets in Bandung, West Java

    NASA Astrophysics Data System (ADS)

    Nisa Fadlilah F., I.; Mukhaiyar, Utriweni; Fahmi, Fauzia

    2015-12-01

    Observations at a given location may be linearly influenced by previous observations at that location and at neighboring locations, which can be analyzed with a generalized STAR(1,1) model. In this paper, secondary data on weekly red-chili prices at five main traditional markets in Bandung are used as a case study. The purpose of the GSTAR(1,1) model is to forecast future red-chili prices at those markets. The model is identified through the sample space-time ACF and space-time PACF, and the model parameters are estimated by the least-squares method. Theoretically, the assumption of independent errors simplifies the parameter estimation problem. In practice, however, that assumption is hard to satisfy, since the errors may be correlated with one another. In the red-chili price modeling, the process is considered to have time-correlated errors, i.e. a martingale-difference process, rather than errors following a normal distribution. Here, we run simulations to investigate the behavior of the error assumptions. Although some results show that the errors do not always follow a martingale-difference process, this does not degrade the ability of the GSTAR(1,1) model to forecast the red-chili prices at those five markets.

  11. Sounding rocket measurement of the absolute solar EUV flux utilizing a silicon photodiode

    SciTech Connect

    Ogawa, H.S.; McMullin, D.; Judge, D.L.; Canfield, L.R.

    1990-04-01

    A newly developed stable and high quantum efficiency silicon photodiode was used to obtain an accurate measurement of the integrated absolute magnitude of the solar extreme ultraviolet photon flux in the spectral region between 50 and 800 Å. The detector was flown aboard a solar-pointing sounding rocket launched from White Sands Missile Range in New Mexico on October 24, 1988. The adjusted daily 10.7-cm solar radio flux and sunspot number were 168.4 and 121, respectively. The unattenuated absolute value of the solar EUV flux at 1 AU in the specified wavelength region was 6.81 × 10^10 photons cm^-2 s^-1. Based on a nominal probable error of 7% for National Institute of Standards and Technology detector efficiency measurements in the 50- to 500-Å region (5% on longer wavelength measurements between 500 and 1216 Å), and based on experimental errors associated with the rocket instrumentation and analysis, a conservative total error estimate of ~14% is assigned to the absolute integral solar flux obtained.

  12. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    SciTech Connect

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D

    2015-06-15

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. 
Physicists should be cautious, particularly

  13. Dose error from deviation of dwell time and source position for high dose-rate 192Ir in remote afterloading system

    PubMed Central

    Okamoto, Hiroyuki; Aikawa, Ako; Wakita, Akihisa; Yoshio, Kotaro; Murakami, Naoya; Nakamura, Satoshi; Hamada, Minoru; Abe, Yoshihisa; Itami, Jun

    2014-01-01

    The influence of deviations in dwell times and source positions for 192Ir HDR-RALS was investigated. The potential dose errors for various kinds of brachytherapy procedures were evaluated. The deviations of dwell time ΔT of a 192Ir HDR source for the various dwell times were measured with a well-type ionization chamber. The deviations of source position ΔP were measured with two methods. One is to measure the actual source position using a check ruler device. The other is to analyze peak distances from radiographic film irradiated with a 20 mm gap between the dwell positions. The composite dose errors were calculated using a Gaussian distribution with ΔT and ΔP as 1σ of the measurements. Dose errors depend on the dwell time and the distance from the point of interest to the dwell position. To evaluate the dose error in clinical practice, dwell times and point-of-interest distances were obtained from actual treatment plans involving cylinder, tandem-ovoid, tandem-ovoid with interstitial needles, multiple interstitial needles, and surface-mold applicators. The ΔT and ΔP were 32 ms (maximum for the various dwell times) and 0.12 mm (ruler) or 0.11 mm (radiographic film). The multiple interstitial needles represented the highest dose error, 2%, while the others represented less than approximately 1%. The potential dose error due to dwell time and source position deviations can depend on the kind of brachytherapy technique. Of all the techniques, multiple interstitial needles are the most susceptible. PMID:24566719
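
    The composition of the two error sources can be sketched with first-order Gaussian propagation, assuming a simple point-source dose model D ∝ T/r² (an approximation of ours; the paper uses the actual treatment-plan geometry): the relative dose error is sqrt((σ_T/T)² + (2σ_r/r)²).

```python
# Back-of-the-envelope sketch of composing dwell-time and source-position
# deviations into a relative dose error, assuming dose ~ T / r^2.
import math

SIGMA_T = 0.032   # s, dwell-time deviation (1 sigma, from the abstract)
SIGMA_R = 0.12    # mm, source-position deviation (1 sigma)

def relative_dose_error(dwell_time_s, distance_mm):
    """1-sigma relative dose error for a single dwell position."""
    term_t = SIGMA_T / dwell_time_s        # d(ln D)/dT = 1/T
    term_r = 2.0 * SIGMA_R / distance_mm   # |d(ln D)/dr| = 2/r
    return math.sqrt(term_t ** 2 + term_r ** 2)

# Short dwell times and points close to the source are the worst case,
# consistent with the abstract's finding for multiple interstitial needles.
print(relative_dose_error(2.0, 5.0))    # short dwell, close point
print(relative_dose_error(20.0, 20.0))  # long dwell, distant point
```

The dwell times and distances here are hypothetical; only ΔT and ΔP come from the abstract.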

  14. Absolute measurement of length with nanometric resolution

    NASA Astrophysics Data System (ADS)

    Apostol, D.; Garoi, F.; Timcu, A.; Damian, V.; Logofatu, P. C.; Nascov, V.

    2005-08-01

    Laser interferometer displacement measuring transducers have a well-defined traceability route to the definition of the meter. The laser interferometer is the de facto length scale for applications in micro and nano technologies. However, its physical unit, half a wavelength, is too large for nanometric resolution. The lack of reproducibility of fringe interpolation, the usual technique for improving resolution, can be avoided using the principles of absolute distance measurement. Absolute distance refers to the use of interferometric techniques for determining the position of an object without the necessity of measuring continuous displacements between points. The interference pattern produced by two point-like coherent sources is fitted to a geometric model so as to determine the longitudinal location of the target by minimizing least-squares errors. The longitudinal coordinate of the target was measured with accuracy better than 1 nm, over a target position range of 0.4 μm.

  15. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.

  16. Absolute timing of sulfide and gold mineralization: A comparison of Re-Os molybdenite and Ar-Ar mica methods from the Tintina Gold Belt, Alaska

    USGS Publications Warehouse

    Selby, D.; Creaser, R.A.; Hart, C.J.R.; Rombach, C.S.; Thompson, J.F.H.; Smith, M.T.; Bakke, A.A.; Goldfarb, R.J.

    2002-01-01

    New Re-Os molybdenite dates from two lode gold deposits of the Tintina Gold Belt, Alaska, provide direct timing constraints for sulfide and gold mineralization. At Fort Knox, the Re-Os molybdenite date is identical to the U-Pb zircon age for the host intrusion, supporting an intrusive-related origin for the deposit. However, 40Ar/39Ar dates from hydrothermal and igneous mica are considerably younger. At the Pogo deposit, Re-Os molybdenite dates are also much older than 40Ar/39Ar dates from hydrothermal mica, but dissimilar to the age of local granites. These age relationships indicate that the Re-Os molybdenite method records the timing of sulfide and gold mineralization, whereas much younger 40Ar/39Ar dates are affected by post-ore thermal events, slow cooling, and/or systemic analytical effects. The results of this study complement a growing body of evidence to indicate that the Re-Os chronometer in molybdenite can be an accurate and robust tool for establishing timing relations in ore systems.

  17. Development of defined microbial population standards using fluorescence activated cell sorting for the absolute quantification of S. aureus using real-time PCR.

    PubMed

    Martinon, Alice; Cronin, Ultan P; Wilkinson, Martin G

    2012-01-01

    In this article, four types of standards were assessed in a SYBR Green-based real-time PCR procedure for the quantification of Staphylococcus aureus (S. aureus) in DNA samples. The standards were purified S. aureus genomic DNA (type A), circular plasmid DNA containing a thermonuclease (nuc) gene fragment (type B), and DNA extracted from defined populations of S. aureus cells generated by Fluorescence Activated Cell Sorting (FACS) technology with (type C) or without (type D) purification of DNA by boiling. The optimal efficiency of 2.016 was obtained on Roche LightCycler® 4.1 software for type C standards, whereas the lowest efficiency (1.682) corresponded to type D standards. Type C standards appeared to be more suitable for quantitative real-time PCR because defined populations were used for the construction of standard curves. Overall, the Fieller confidence interval algorithm may be improved for replicates having a low standard deviation in cycle threshold values, such as those found for type B and C standards. The stabilities of diluted PCR standards stored at -20°C were compared after 0, 7, 14 and 30 days and were lower for type A or C standards than for type B standards. However, FACS-generated standards may be useful for bacterial quantification in real-time PCR assays once optimal storage and temperature conditions are defined.
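
    For context, the standard-curve arithmetic behind such absolute qPCR quantification can be sketched as follows (dilution numbers are illustrative, not the paper's data): fit Ct against log10(copies) by least squares, derive the amplification efficiency E = 10^(-1/slope), and invert the line to quantify unknowns.

```python
# Sketch of qPCR standard-curve quantification with synthetic data.
import math

# Hypothetical dilution series: known copy numbers and measured Ct values
# (these Ct values correspond to a perfect doubling per cycle).
copies = [1e7, 1e6, 1e5, 1e4, 1e3]
ct = [14.0, 17.321928, 20.643856, 23.965784, 27.287712]

# Ordinary least-squares fit of Ct = intercept + slope * log10(copies)
x = [math.log10(c) for c in copies]
n = len(x)
mx, my = sum(x) / n, sum(ct) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(x, ct))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx

# Amplification efficiency: 2.0 means a perfect doubling per cycle
efficiency = 10.0 ** (-1.0 / slope)

def quantify(ct_unknown):
    """Absolute copy number of an unknown sample from its Ct value."""
    return 10.0 ** ((ct_unknown - intercept) / slope)

print(round(efficiency, 3))
print(quantify(20.643856))  # about 1e5 copies
```

The efficiencies of 2.016 and 1.682 reported in the abstract are exactly this E, derived from the slopes of the four standard curves.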

  18. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    2015-12-01

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  19. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  20. Diagnosis of human herpes virus 1 and 2 (HHV-1 and HHV-2): use of a synthetic standard curve for absolute quantification by real time polymerase chain reaction

    PubMed Central

    Lima, Lyana Rodrigues Pinto; da Silva, Amanda Perse; Schmidt-Chanasit, Jonas; de Paula, Vanessa Salete

    2017-01-01

    The use of quantitative real time polymerase chain reaction (qPCR) for herpesvirus detection has improved the sensitivity and specificity of diagnosis, as it is able to detect shedding episodes in the absence of clinical lesions and diagnose clinical specimens that have low viral loads. With an aim to improve the detection and quantification of herpesvirus by qPCR, synthetic standard curves for human herpesvirus 1 and 2 (HHV-1 and HHV-2) targeting regions gD and gG, respectively, were designed and evaluated. The results show that synthetic curves can replace DNA standard curves in diagnostic herpes qPCR. PMID:28225902

  3. An absolute radius scale for Saturn's rings

    NASA Technical Reports Server (NTRS)

    Nicholson, Philip D.; Cooke, Maren L.; Pelton, Emily

    1990-01-01

    Radio and stellar occultation observations of Saturn's rings made by the Voyager spacecraft are discussed. The data reveal systematic discrepancies of almost 10 km in some parts of the rings, limiting some of the investigations. A revised solution for Saturn's rotation pole has been proposed which removes the discrepancies between the stellar and radio occultation profiles. Corrections to previously published radii vary from -2 to -10 km for the radio occultation, and from +5 to -6 km for the stellar occultation. An examination of spiral density waves in the outer A Ring indicates that the revised absolute radii are in error by no more than 2 km.

  4. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  5. Comparison of haptic guidance and error amplification robotic trainings for the learning of a timing-based motor task by healthy seniors

    PubMed Central

    Bouchard, Amy E.; Corriveau, Hélène; Milot, Marie-Hélène

    2015-01-01

    With age, a decline in the temporal aspects of movement is observed, such as longer movement execution times and decreased timing accuracy. Robotic training can represent an interesting approach to help improve movement timing among the elderly. Two types of robotic training, haptic guidance (HG; demonstrating the correct movement for better movement planning and execution) and error amplification (EA; exaggerating movement errors for faster and more complete learning), have been used successfully to boost timing accuracy in young healthy subjects. For healthy seniors, only HG training has been used so far, with significant and positive timing gains. The goal of the study was to evaluate and compare the impact of HG and EA robotic training on the improvement of seniors' movement timing. Thirty-two healthy seniors (mean age 68 ± 4 years) learned to play a pinball-like game by triggering a one-degree-of-freedom hand robot at the proper time to make a flipper move and direct a falling ball toward a randomly positioned target. During HG and EA training, the subjects' timing errors were decreased and increased, respectively, based on their errors in initiating a movement. Results showed that only HG training benefited learning, but the improvement did not generalize to untrained targets. Also, age had no influence on the efficacy of HG training: the oldest subjects did not benefit more from it than the younger senior subjects. Using HG to teach the correct timing of movement seems to be a good strategy to improve motor learning for the elderly, as it is for younger people. However, more studies are needed to assess the long-term impact of HG robotic training on improvement in movement timing. PMID:25873868

  6. Real-time RT-PCR for detection, identification and absolute quantification of viral haemorrhagic septicaemia virus using different types of standards.

    PubMed

    Lopez-Vazquez, C; Bandín, I; Dopazo, C P

    2015-05-21

    In the present study, two systems of real-time RT-PCR, one based on SYBR Green and the other on TaqMan, were designed to detect strains from any genotype of viral haemorrhagic septicaemia virus (VHSV), with high sensitivity and repeatability/reproducibility. In addition, the method was optimized for quantitative purposes (qRT-PCR), and standard curves with different types of reference templates were constructed and compared. Specificity was tested against 26 isolates from 4 genotypes. The sensitivity of the procedures was first tested against cell culture isolation, obtaining a limit of detection (LD) of 100 TCID50 ml-1 (100-fold below the LD using cell culture), at a threshold cycle value (Ct) of 36. Sensitivity was also evaluated using RNA from crude (LD = 1 fg; 160 genome copies) and purified virus (100 ag; 16 copies), plasmid DNA (2 copies) and RNA transcript (15 copies). No differences between both chemistries were observed in sensitivity and dynamic range. To evaluate repeatability and reproducibility, all experiments were performed in triplicate and on 3 different days, by workers with different levels of experience, obtaining Ct values with coefficients of variation always <5. This fact, together with the high efficiency and R2 values of the standard curves, encouraged us to analyse the reliability of the method for viral quantification. The results not only demonstrated that the procedure can be used for detection, identification and quantification of this virus, but also demonstrated a clear correlation between the regression lines obtained with different standards, which will help scientists to compare sensitivity results between different studies.

  7. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B. ); Quimby, D.C. )

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is the FELEX free-electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast- and slow-scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, and displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of performance versus error level for cases with multiple seeds illustrates the variation attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and these may be ameliorated by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
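The seed-to-seed spread mentioned above can be illustrated with a toy Monte Carlo: scan the error level, evaluate a surrogate gain model for several random seeds, and report the mean and spread at each level. The `relative_gain` model below is an invented stand-in for the FELEX calculations, purely to show the bookkeeping of such a scan:

```python
import random

def relative_gain(error_level, seed, n_periods=100):
    """Toy surrogate for gain degradation: a random walk of trajectory
    offsets whose step size is the field-error level; gain falls off
    with the rms wander. (Invented model, not the FELEX physics.)"""
    rng = random.Random(seed)
    offset, sum_sq = 0.0, 0.0
    for _ in range(n_periods):
        offset += rng.gauss(0.0, error_level)
        sum_sq += offset ** 2
    rms_wander = (sum_sq / n_periods) ** 0.5
    return 1.0 / (1.0 + rms_wander ** 2)

# Scan the error level with several seeds to expose the stochastic spread
for level in (0.0, 0.01, 0.02, 0.05):
    gains = [relative_gain(level, seed) for seed in range(5)]
    print(f"{level:0.2f}  mean={sum(gains) / len(gains):0.3f}  "
          f"spread={max(gains) - min(gains):0.3f}")
```

Plotting mean and spread against error level is the "simultaneous display" the abstract describes, from which a tolerance can be read off.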

  8. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  9. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE(ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  10. Absolute and Convective Instability of a Liquid Jet

    NASA Technical Reports Server (NTRS)

    Lin, S. P.; Hudman, M.; Chen, J. N.

    1999-01-01

    The existence of absolute instability in a liquid jet has been predicted for some time. The disturbance grows in time and propagates both upstream and downstream in an absolutely unstable liquid jet. The image of absolute instability was captured in the NASA 2.2 s drop tower and is reported here. The transition from convective to absolute instability is observed experimentally, and the experimental results are compared with theoretical predictions of the transition Weber number as a function of the Reynolds number. The role of interfacial shear relative to all other relevant forces that cause the onset of jet breakup is explained.

  11. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4, and a holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emission can be measured in MST plasmas; in combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.
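An absolutely calibrated wavelength reference turns a measured line shift directly into a line-of-sight velocity via the non-relativistic Doppler formula v = c·Δλ/λ. A sketch; the rest wavelength below is an assumed illustrative value for the C+5 line, not taken from the abstract:

```python
C = 2.99792458e8  # speed of light, m/s

def doppler_velocity(lambda_obs_nm, lambda_rest_nm):
    """Line-of-sight velocity from the non-relativistic Doppler shift,
    v = c * (lambda_obs - lambda_rest) / lambda_rest; positive = receding."""
    return C * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# Assumed rest wavelength near 343.4 nm (illustrative only); a 0.023 nm
# red shift then corresponds to roughly 20 km/s.
LAMBDA_REST = 343.37
print(doppler_velocity(LAMBDA_REST + 0.023, LAMBDA_REST) / 1e3)  # km/s
```

The arithmetic makes clear why the calibration must resolve shifts of hundredths of a nanometre: km/s-scale flows move the line by only parts in 10^5.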

  12. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.

  13. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required

  14. Cryogenic, Absolute, High Pressure Sensor

    NASA Technical Reports Server (NTRS)

    Chapman, John J. (Inventor); Shams, Qamar A. (Inventor); Powers, William T. (Inventor)

    2001-01-01

    A pressure sensor is provided for cryogenic, high-pressure applications. A highly doped silicon piezoresistive pressure sensor is bonded to a silicon substrate in an absolute pressure sensing configuration. The absolute pressure sensor is bonded to an aluminum nitride substrate; aluminum nitride has an appropriate coefficient of thermal expansion for use with highly doped silicon at cryogenic temperatures. A group of sensors, either two sensors on two substrates or four sensors on a single substrate, is packaged in a pressure vessel.

  15. A multi-centennial time series of well-constrained ΔR values for the Irish Sea derived using absolutely-dated shell samples from the mollusc Arctica islandica

    NASA Astrophysics Data System (ADS)

    Butler, P. G.; Scourse, J. D.; Richardson, C. A.; Wanamaker, A. D., Jr.

    2009-04-01

    Determinations of the local correction (ΔR) to the globally averaged marine radiocarbon reservoir age are often isolated in space and time, derived from heterogeneous sources and constrained by significant uncertainties. Although time series of ΔR at single sites can be obtained from sediment cores, these are subject to multiple uncertainties related to sedimentation rates, bioturbation and interspecific variations in the source of radiocarbon in the analysed samples. Coral records provide better resolution, but these are available only for tropical locations. It is shown here that the shell of the long-lived bivalve mollusc Arctica islandica can be used as a source of high-resolution time series of absolutely dated marine radiocarbon determinations for the shelf seas surrounding the North Atlantic Ocean. Annual growth increments in the shell can be crossdated and chronologies constructed in precise analogy with tree-ring dating. Because the calendar dates of the samples are known, ΔR can be determined with high precision and accuracy, and because all the samples are from the same species, the time series of ΔR values possesses a high degree of internal consistency. Presented here is a multi-centennial (AD 1593 - AD 1933) time series of 31 ΔR values for a site in the Irish Sea close to the Isle of Man. The mean value of ΔR (-62 14C yrs) does not change significantly during this period, but increased variability is apparent before AD 1750.
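For an absolutely dated shell sample, ΔR is simply the measured conventional radiocarbon age minus the modelled marine reservoir age at the known calendar date. A sketch with made-up ages chosen to reproduce the reported Irish Sea mean:

```python
def delta_r(sample_14c_age_yr, marine_model_age_yr):
    """Local reservoir correction: the measured conventional 14C age of an
    absolutely dated marine sample minus the marine model age at the same
    calendar year."""
    return sample_14c_age_yr - marine_model_age_yr

# Hypothetical shell increment crossdated to a known calendar year:
# a measured 14C age of 450 yr BP against a modelled marine age of 512 yr BP
print(delta_r(450, 512))  # -62, matching the reported Irish Sea mean
```

The crossdating is what removes the calendar-age uncertainty that dominates sediment-core ΔR estimates.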

  16. Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment

    NASA Astrophysics Data System (ADS)

    Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.

    2016-11-01

    This paper reports five years of real-time soft error rate experimentation conducted with the same setup, at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in bulk 40 nm CMOS technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux was continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
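Real-time SER results of this kind are conventionally expressed in FIT (failures per 10^9 device-hours), often normalized per Mbit of memory. A sketch of that bookkeeping; the bit count matches the abstract, but the error counts below are invented:

```python
def soft_error_rate_fit(n_errors, n_bits, hours, per_mbit=True):
    """Soft error rate in FIT (failures per 1e9 device-hours), optionally
    normalized per Mbit of memory under test."""
    fit = n_errors / hours * 1e9
    if per_mbit:
        fit /= n_bits / 1e6
    return fit

# Hypothetical tallies for 7 Gbit under test (error counts are invented)
BITS = 7e9
print(round(soft_error_rate_fit(350, BITS, hours=3 * 8766), 1))  # altitude years
print(round(soft_error_rate_fit(90, BITS, hours=2 * 8766), 1))   # sea-level years
```

Comparing the two normalized rates is what exposes the altitude dependence of the neutron component.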

  17. Medication Errors

    MedlinePlus


  18. Absolute Gravity Measurements with the FG5#215 in Czech Republic, Slovakia and Hungary

    NASA Astrophysics Data System (ADS)

    Pálinkás, V.; Kostelecký, J.; Lederer, M.

    2009-04-01

    Since 2001, the absolute gravimeter FG5#215 has been used for the modernization of national gravity networks in the Czech Republic, Slovakia and Hungary. Altogether 37 absolute sites were measured at least once. At 29 of these sites, absolute gravity had been determined prior to the FG5#215 by other accurate absolute gravimeters (FG5 or JILA-g). Differences between the gravity results, which reach up to 25 µGal, are caused by random and systematic measurement errors, variations in environmental effects (mainly hydrological) and by geodynamics. The set of differences is analyzed for potential hydrological effects based on global hydrology models and for systematic errors of instrumental origin. Systematic instrumental errors are evaluated in the context of international comparison measurements of absolute gravimeters in Sèvres and Walferdange, organized by the Bureau International des Poids et Mesures and the European Center for Geodynamics and Seismology, respectively.

  19. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference places a hydrostatic pressure gradient along the interface tube: the pressure at the bottom of the tube is higher than at the top due to the weight of the tube's column of air. Tubes at higher pressures exhibit larger absolute errors due to the higher air density. This effect is well documented but has generally been taken into account only for large elevation differences. With error analysis techniques, the loss in accuracy from elevation can be easily quantified, and correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
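The correction itself is just the hydrostatic term ρgh, with the air density taken at the measured pressure, which is why high-pressure tubes show larger absolute errors. A sketch under an isothermal ideal-gas assumption:

```python
G = 9.80665          # standard gravity, m/s^2
R_AIR = 287.05       # specific gas constant of dry air, J/(kg*K)

def elevation_correction_pa(p_tap_pa, height_m, temp_k=293.15):
    """Hydrostatic correction for a sensor mounted `height_m` below its
    pressure tap: the tube's air column adds rho * g * h, with the density
    rho taken from the ideal gas law at the measured pressure
    (isothermal approximation)."""
    rho = p_tap_pa / (R_AIR * temp_k)
    return rho * G * height_m

# A 3 m elevation difference at ~1 atm biases the reading by ~35 Pa;
# the same tube at 10 atm shows a ten-fold larger absolute error.
print(round(elevation_correction_pa(101325.0, 3.0), 1))
print(round(elevation_correction_pa(1013250.0, 3.0), 1))
```

Because the density scales linearly with pressure in this approximation, the correction can be applied as a pressure-dependent factor rather than a fixed offset.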

  20. Prospects for the Moon as an SI-Traceable Absolute Spectroradiometric Standard for Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Cramer, C. E.; Stone, T. C.; Lykke, K.; Woodward, J. T.

    2015-12-01

    The Earth's Moon has many physical properties that make it suitable for use as a reference light source for radiometric calibration of remote sensing satellite instruments. Lunar calibration has been successfully applied to many imagers in orbit, including both MODIS instruments and NPP-VIIRS, using the USGS ROLO model to predict the reference exoatmospheric lunar irradiance. Sensor response trending was developed for SeaWiFS with a relative accuracy better than 0.1% per year using lunar calibration techniques. However, the Moon is rarely used as an absolute reference for on-orbit calibration, primarily due to uncertainties of 5%-10% in the ROLO model's absolute scale. This limitation lies only with the models: the Moon itself is radiometrically stable, and development of a high-accuracy absolute lunar reference is inherently feasible. A program has been undertaken by NIST to collect absolute measurements of the lunar spectral irradiance with absolute accuracy <1% (k=2), traceable to SI radiometric units. Initial Moon observations were acquired from the Whipple Observatory on Mt. Hopkins, Arizona, elevation 2367 m, with continuous spectral coverage from 380 nm to 1040 nm at ~3 nm resolution. The lunar spectrometer acquired calibration measurements several times each observing night by pointing to a calibrated integrating sphere source. The lunar spectral irradiance at the top of the atmosphere was derived from a time series of ground-based measurements by a Langley analysis that incorporated measured atmospheric conditions and ROLO model predictions for the change in irradiance resulting from the changing Sun-Moon-Observer geometry throughout each night. Two nights were selected for further study. An extensive error analysis, which includes instrument calibration and atmospheric correction terms, shows a combined standard uncertainty under 1% over most of the spectral range. Comparison of these two nights' spectral irradiance measurements with predictions
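The Langley analysis mentioned above amounts to fitting ln(signal) against airmass and extrapolating to zero airmass; the intercept gives the top-of-atmosphere signal and the slope the optical depth. A sketch on synthetic data (illustrative values only, not the Whipple measurements):

```python
import math

def langley_extrapolate(airmasses, signals):
    """Least-squares fit of ln(signal) = ln(S0) - tau * airmass; returns the
    extrapolated top-of-atmosphere signal S0 and the optical depth tau."""
    ys = [math.log(s) for s in signals]
    n = len(airmasses)
    mx, my = sum(airmasses) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(airmasses, ys))
             / sum((x - mx) ** 2 for x in airmasses))
    return math.exp(my - slope * mx), -slope

# Synthetic clear-night series generated with S0 = 100 and tau = 0.25
airmasses = [1.2, 1.5, 2.0, 2.5, 3.0]
signals = [100.0 * math.exp(-0.25 * m) for m in airmasses]
s0, tau = langley_extrapolate(airmasses, signals)
print(round(s0, 2), round(tau, 3))  # recovers 100.0 and 0.25
```

In the real analysis the source itself changes with Sun-Moon-Observer geometry over the night, which is why ROLO model predictions are folded in before the fit.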

  1. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a non-target computer are discussed. These error detection techniques were a major factor in finding the primary cause of error in 98% of over 500 system dumps.

  2. Pseudo-absolute quantitative analysis using gas chromatography - Vacuum ultraviolet spectroscopy - A tutorial.

    PubMed

    Bai, Ling; Smuts, Jonathan; Walsh, Phillip; Qiu, Changling; McNair, Harold M; Schug, Kevin A

    2017-02-08

    The vacuum ultraviolet detector (VUV) is a new non-destructive, mass-sensitive detector for gas chromatography that continuously and rapidly collects full-wavelength-range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120-240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise, sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internally standardized approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method.
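Pseudo-absolute quantification follows from Beer-Lambert: with a known absorption cross section and path length, the absorbance gives the number density in the flow cell, and integrating over the chromatographic peak gives the amount on column. A sketch with invented numbers; the cross-section value and peak shape are assumptions, and the commercial detector's actual normalization may differ:

```python
import math

AVOGADRO = 6.02214076e23

def analyte_mass_ng(absorbances, dt_s, flow_ml_s, sigma_cm2, path_cm, molar_mass):
    """Beer-Lambert gives the number density in the flow cell,
    n = A * ln(10) / (sigma * L); integrating n times the volumetric flow
    over the peak yields molecules, hence mass, with no calibration standard."""
    molecules = 0.0
    for a in absorbances:
        n_per_cm3 = a * math.log(10) / (sigma_cm2 * path_cm)
        molecules += n_per_cm3 * flow_ml_s * dt_s  # 1 mL = 1 cm^3
    return molecules / AVOGADRO * molar_mass * 1e9

# Hypothetical benzene peak (triangular absorbance trace, invented cross section)
peak = [0.0, 0.02, 0.04, 0.06, 0.04, 0.02, 0.0]
mass = analyte_mass_ng(peak, dt_s=0.1, flow_ml_s=0.05,
                       sigma_cm2=3e-17, path_cm=1.0, molar_mass=78.11)
print(round(mass, 2))  # ng of benzene through the cell
```

Comparing such a calibration-free estimate with the nominally injected amount is how the abstract's sample-introduction losses and gains were diagnosed.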

  3. A Mobile Device App to Reduce Time to Drug Delivery and Medication Errors During Simulated Pediatric Cardiopulmonary Resuscitation: A Randomized Controlled Trial

    PubMed Central

    Combescure, Christophe; Lacroix, Laurence; Haddad, Kevin; Sanchez, Oliver; Gervaix, Alain; Lovis, Christian; Manzano, Sergio

    2017-01-01

    Background During pediatric cardiopulmonary resuscitation (CPR), vasoactive drug preparation for continuous infusion is both complex and time-consuming, placing children at higher risk than adults for medication errors. Following an evidence-based, ergonomic-driven approach, we developed a mobile device app called Pediatric Accurate Medication in Emergency Situations (PedAMINES), intended to guide caregivers step by step from preparation to delivery of drugs requiring continuous infusion. Objective The aim of our study was to determine whether the use of PedAMINES reduces drug preparation time (TDP) and time to delivery (TDD; primary outcome), as well as medication errors (secondary outcomes), when compared with conventional preparation methods. Methods The study was a randomized controlled crossover trial with 2 parallel groups comparing PedAMINES with a conventional, internationally used drug infusion rate table in the preparation of continuous drug infusions. We used a simulation-based pediatric CPR cardiac arrest scenario with a high-fidelity manikin in the shock room of a tertiary care pediatric emergency department. After epinephrine-induced return of spontaneous circulation, pediatric emergency nurses were first asked to prepare a continuous infusion of dopamine, using either PedAMINES (intervention group) or the infusion table (control group), and second, a continuous infusion of norepinephrine after crossing over to the other method. The primary outcome was the elapsed time in seconds, in each allocation group, from the oral prescription by the physician to TDD by the nurse. TDD included TDP. The secondary outcome was the medication dosage error rate during the sequence from drug preparation to drug injection. Results A total of 20 nurses were randomized into 2 groups. During the first study period, mean TDP while using PedAMINES and conventional preparation methods was 128.1 s (95% CI 102-154) and 308.1 s (95% CI 216-400), respectively (180 s reduction, P=.002). Mean

  4. Database application for absolute spectrophotometry

    NASA Astrophysics Data System (ADS)

    Bochkov, Valery V.; Shumko, Sergiy

    2002-12-01

    A 32-bit database application with a multidocument interface for Windows has been developed to calculate absolute energy distributions of observed spectra. The underlying database contains wavelength-calibrated observed spectra that have already passed through apparatus reductions such as flat-fielding and subtraction of background and apparatus noise. Absolute energy distributions of observed spectra are defined on a unique scale by registering them simultaneously with an artificial intensity standard. Observations of a sequence of spectrophotometric standards are used to define the absolute energy of the artificial standard and to determine the optical extinction at selected times. An FFT algorithm implemented in the application allows convolution (or deconvolution) of spectra with a user-defined PSF. The object-oriented interface was created using C++ libraries. A client/server model with Windows Socket functionality based on the TCP/IP protocol is used; the application supports Dynamic Data Exchange conversations in server mode and uses Microsoft Exchange communication facilities.

  5. Absolute Radiometric Calibration of EUNIS-06

    NASA Technical Reports Server (NTRS)

    Thomas, R. J.; Rabin, D. M.; Kent, B. J.; Paustian, W.

    2007-01-01

    The Extreme-Ultraviolet Normal-Incidence Spectrometer (EUNIS) is a sounding-rocket payload that obtains imaged high-resolution spectra of individual solar features, providing information about the Sun's corona and upper transition region. Shortly after its successful initial flight last year, a complete end-to-end calibration was carried out to determine the instrument's absolute radiometric response over its longwave bandpass of 300-370 Å. The measurements were done at the Rutherford Appleton Laboratory (RAL) in England, using the same vacuum facility and EUV radiation source used in the pre-flight calibrations of both SOHO/CDS and Hinode/EIS, as well as in three post-flight calibrations of our SERTS sounding rocket payload, the precursor to EUNIS. The unique radiation source provided by the Physikalisch-Technische Bundesanstalt (PTB) had been calibrated to an absolute accuracy of 7% (1-sigma) at 12 wavelengths covering our bandpass, directly against the Berlin electron storage ring BESSY, which is itself a primary radiometric source standard. Scans of the EUNIS aperture were made to determine the instrument's absolute spectral sensitivity to ±25%, considering all sources of error, and demonstrate that EUNIS-06 was the most sensitive solar EUV spectrometer yet flown. The results will be matched against prior calibrations, which relied on combining measurements of individual optical components and on comparisons with theoretically predicted 'insensitive' line ratios. Coordinated observations made during the EUNIS-06 flight by SOHO/CDS and EIT will allow re-calibrations of those instruments as well. In addition, future EUNIS flights will provide similar calibration updates for TRACE, Hinode/EIS, and STEREO/SECCHI/EUVI.

  6. Absolute Humidity and the Seasonality of Influenza (Invited)

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.

    2010-12-01

    Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.

  7. Absolute classification with unsupervised clustering

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, D. A.

    1992-01-01

    An absolute classification algorithm is proposed in which the class definition through training samples or otherwise is required only for a particular class of interest. The absolute classification is considered as a problem of unsupervised clustering when one cluster is known initially. The definitions and statistics of the other classes are automatically developed through the weighted unsupervised clustering procedure, which is developed to keep the cluster corresponding to the class of interest from losing its identity as the class of interest. Once all the classes are developed, a conventional relative classifier such as the maximum-likelihood classifier is used in the classification.
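The idea of starting unsupervised clustering with one cluster pinned to the class of interest can be sketched with a seeded k-means, a simplified stand-in for the paper's weighted procedure (all data and names below are illustrative):

```python
import random

def seeded_kmeans(points, known_class_points, k, iters=20):
    """k-means in which one centroid is initialized from training samples of
    the single class of interest; the remaining centroids are chosen
    farthest-first. A toy stand-in for the weighted unsupervised clustering."""
    dim = len(points[0])
    mean = lambda pts: tuple(sum(p[d] for p in pts) / len(pts) for d in range(dim))
    dist2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    centroids = [mean(known_class_points)]
    while len(centroids) < k:  # seed the rest at the most distant points
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids

# Two toy blobs; the class of interest is known only through three samples
rng = random.Random(1)
blob_a = [(rng.gauss(0.0, 0.3), rng.gauss(0.0, 0.3)) for _ in range(50)]
blob_b = [(rng.gauss(5.0, 0.3), rng.gauss(5.0, 0.3)) for _ in range(50)]
cents = seeded_kmeans(blob_a + blob_b, blob_a[:3], k=2)
print([tuple(round(v, 1) for v in c) for c in cents])  # first centroid stays near the seeded class
```

The paper's weighting additionally keeps the seeded cluster from drifting away from the class of interest during the iterations, which plain k-means does not guarantee.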

  8. Space-time data fusion under error in computer model output: an application to modeling air quality.

    PubMed

    Berrocal, Veronica J; Gelfand, Alan E; Holland, David M

    2012-09-01

    We provide methods that can be used to obtain more accurate environmental exposure assessment. In particular, we propose two modeling approaches to combine monitoring data at point level with numerical model output at grid cell level, yielding improved prediction of ambient exposure at point level. Extending our earlier downscaler model (Berrocal, V. J., Gelfand, A. E., and Holland, D. M. (2010b). A spatio-temporal downscaler for outputs from numerical models. Journal of Agricultural, Biological and Environmental Statistics 15, 176-197), these new models are intended to address two potential concerns with the model output. One recognizes that there may be useful information in the outputs for grid cells that are neighbors of the one in which the location lies. The second acknowledges potential spatial misalignment between a station and its putatively associated grid cell. The first model is a Gaussian Markov random field smoothed downscaler that relates monitoring station data and computer model output via the introduction of a latent Gaussian Markov random field linked to both sources of data. The second model is a smoothed downscaler with spatially varying random weights defined through a latent Gaussian process and an exponential kernel function, that yields, at each site, a new variable on which the monitoring station data is regressed with a spatial linear model. We applied both methods to daily ozone concentration data for the Eastern US during the summer months of June, July and August 2001, obtaining, respectively, a 5% and a 15% predictive gain in overall predictive mean square error over our earlier downscaler model (Berrocal et al., 2010b). Perhaps more importantly, the predictive gain is greater at hold-out sites that are far from monitoring sites.

  9. An absolute measure for a key currency

    NASA Astrophysics Data System (ADS)

    Oya, Shunsuke; Aihara, Kazuyuki; Hirata, Yoshito

    It is generally considered that the US dollar and the euro are the key currencies in the world and in Europe, respectively. However, there is no absolute, general measure for a key currency. Here, we investigate the 24-hour periodicity of foreign exchange markets using a recurrence plot, and define an absolute measure for a key currency based on the strength of that periodicity. Moreover, we analyze the time evolution of this measure. The results show that the credibility of the US dollar has not decreased significantly since the Lehman shock, when Lehman Brothers went bankrupt and shook the economic markets, and has even increased relative to that of the euro and the Japanese yen.

  10. Effects of ambient air pollution measurement error on health effect estimates in time-series studies: a simulation-based analysis.

    PubMed

    Strickland, Matthew J; Gass, Katherine M; Goldman, Gretchen T; Mulholland, James A

    2015-01-01

    In this study, we investigated bias caused by spatial variability and spatial heterogeneity in outdoor air-pollutant concentrations, instrument imprecision, and choice of daily pollutant metric on risk ratio (RR) estimates obtained from a Poisson time-series analysis. Daily concentrations for 12 pollutants were simulated for Atlanta, Georgia, at 5 km resolution during a 6-year period. Viewing these as being representative of the true concentrations, a population-level pollutant health effect (RR) was specified, and daily counts of health events were simulated. Error representative of instrument imprecision was added to the simulated concentrations at the locations of fixed site monitors in Atlanta, and these mismeasured values were combined to create three different city-wide daily metrics (central monitor, unweighted average, and population-weighted average). Given our assumptions, the median bias in the RR per unit increase in concentration was found to be lowest for the population-weighted average metric. Although the Berkson component of error caused bias away from the null in the log-linear models, the net bias due to measurement error tended to be towards the null. The relative differences in bias among the metrics were lessened, although not eliminated, by scaling results to interquartile range increases in concentration.
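The attenuation that classical instrument imprecision induces in a Poisson time-series RR estimate can be reproduced in a few lines: simulate counts from a true exposure, refit with a noisy exposure, and compare slopes. This toy example shows only the classical-error component, not the study's full spatial simulation; all parameter values are invented:

```python
import math, random

def rpois(lam, rng):
    """Knuth's Poisson sampler (adequate for the modest rates used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fit_poisson(xs, ys, iters=25):
    """Newton-Raphson fit of log E[y] = b0 + b1 * x (Poisson regression)."""
    b0, b1 = math.log(sum(ys) / len(ys)), 0.0
    for _ in range(iters):
        mus = [math.exp(b0 + b1 * x) for x in xs]
        g0 = sum(y - m for y, m in zip(ys, mus))
        g1 = sum((y - m) * x for y, m, x in zip(ys, mus, xs))
        h00 = sum(mus)
        h01 = sum(m * x for m, x in zip(mus, xs))
        h11 = sum(m * x * x for m, x in zip(mus, xs))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

rng = random.Random(42)
TRUE_BETA = 0.05                                      # log-RR per unit exposure
true_x = [10 + 3 * rng.gauss(0, 1) for _ in range(2000)]
counts = [rpois(math.exp(2.0 + TRUE_BETA * x), rng) for x in true_x]
noisy_x = [x + 3 * rng.gauss(0, 1) for x in true_x]   # classical instrument error

_, b_true = fit_poisson(true_x, counts)
_, b_noisy = fit_poisson(noisy_x, counts)
print(round(b_true, 3), round(b_noisy, 3))  # the noisy fit is attenuated toward 0
```

With equal true-exposure and error variances, the classical-error slope is attenuated by roughly a factor of two, consistent with the net bias toward the null reported above; Berkson-type error would need a different simulation in which the true exposure scatters around the measured one.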

  11. Early-time observations of gamma-ray burst error boxes with the Livermore optical transient imaging system

    SciTech Connect

    Williams, G G

    2000-08-01

    Despite the enormous wealth of gamma-ray burst (GRB) data collected over the past several years, the physical mechanism which causes these extremely powerful phenomena is still unknown. Simultaneous and early-time optical observations of GRBs will likely make a great contribution to our understanding. LOTIS is a robotic wide field-of-view telescope dedicated to the search for prompt and early-time optical afterglows from gamma-ray bursts. LOTIS began routine operations in October 1996 and since that time has responded to over 145 gamma-ray burst triggers. Although LOTIS has not yet detected prompt optical emission from a GRB, its upper limits have provided constraints on the theoretical emission mechanisms. Super-LOTIS, also a robotic wide field-of-view telescope, can detect emission 100 times fainter than LOTIS is capable of detecting. Routine observations from Steward Observatory's Kitt Peak Station will begin in the immediate future. During engineering test runs under bright skies from the grounds of Lawrence Livermore National Laboratory, Super-LOTIS provided its first upper limits on the early-time optical afterglow of GRBs. This dissertation provides a summary of the results from LOTIS and Super-LOTIS through the time of writing. Plans for future studies with both systems are also presented.

  12. Absolute transition probabilities of phosphorus.

    NASA Technical Reports Server (NTRS)

    Miller, M. H.; Roig, R. A.; Bengtson, R. D.

    1971-01-01

    A gas-driven shock tube was used to measure the absolute strengths of 21 P I lines and 126 P II lines (from 3300 to 6900 A). Accuracy for prominent, isolated neutral and ionic lines is estimated to be 28 to 40% and 18 to 30%, respectively. The data and the corresponding theoretical predictions are examined for conformity with the sum rules.

  13. Relativistic Absolutism in Moral Education.

    ERIC Educational Resources Information Center

    Vogt, W. Paul

    1982-01-01

    Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)

  14. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    PubMed

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
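
    The window-selection step of the proposed algorithm can be sketched as follows. Assuming per-window-size estimates of the (negative) time delay bias error and the (positive) random error are already in hand, the chosen window is the one where the two most nearly cancel. The window sizes and error values below are invented for illustration.

```python
import numpy as np

# Hypothetical per-window-size error estimates (arbitrary units):
# delay bias error pulls the rate estimate down (negative values),
# random error pulls it up (positive values). Illustrative numbers only.
window_sizes = np.array([64, 128, 256, 512, 1024, 2048])
delay_bias_error = np.array([-0.2, -0.45, -0.9, -1.8, -3.5, -7.0])
random_error = np.array([8.0, 4.1, 2.0, 1.1, 0.5, 0.25])

# Pick the window size at which the errors most nearly cancel,
# i.e. where |bias + random| is smallest.
net_error = delay_bias_error + random_error
best = window_sizes[np.argmin(np.abs(net_error))]
print(best)
```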

  15. An integrated error estimation and lag-aware data assimilation scheme for real-time flood forecasting

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The performance of conventional filtering methods can be degraded by ignoring the time lag between soil moisture and discharge response when discharge observations are assimilated into streamflow modelling. This has led to the ongoing development of more optimal ways to implement sequential data ass...

  16. Constrained Least Absolute Deviation Neural Networks

    PubMed Central

    Wang, Zhishun; Peterson, Bradley S.

    2008-01-01

    It is well known that the least absolute deviation (LAD) criterion, or L1-norm, used for estimation of parameters is characterized by robustness, i.e., the estimated parameters are totally resistant (insensitive) to large changes in the sampled data. This is an extremely useful feature, especially when the sampled data are known to be contaminated by occasionally occurring outliers or by spiky noise. In our previous works, we proposed the least absolute deviation neural network (LADNN) to solve unconstrained LAD problems. Theoretical proofs and numerical simulations have shown that the LADNN is Lyapunov-stable and can globally converge to the exact solution of a given unconstrained LAD problem. We have also demonstrated its excellent application value in time-delay estimation. More generally, a practical LAD application problem may contain some linear constraints, such as a set of equalities and/or inequalities; this is called the constrained LAD problem, and the unconstrained LAD can be considered a special form of it. In this paper, we present a new neural network called the constrained least absolute deviation neural network (CLADNN) to solve general constrained LAD problems. Theoretical proofs and numerical simulations demonstrate that the proposed CLADNN is Lyapunov stable and globally converges to the exact solution of a given constrained LAD problem, independent of initial values. The numerical simulations also illustrate that the proposed CLADNN can be used to robustly estimate parameters for nonlinear curve fitting, which is extensively used in signal and image processing. PMID:18269958

  17. LEMming: A Linear Error Model to Normalize Parallel Quantitative Real-Time PCR (qPCR) Data as an Alternative to Reference Gene Based Methods

    PubMed Central

    Feuer, Ronny; Vlaic, Sebastian; Arlt, Janine; Sawodny, Oliver; Dahmen, Uta; Zanger, Ulrich M.; Thomas, Maria

    2015-01-01

    Background Gene expression analysis is an essential part of biological and medical investigations. Quantitative real-time PCR (qPCR) is characterized by excellent sensitivity, dynamic range, and reproducibility, and is still regarded as the gold standard for quantifying transcript abundance. Parallelization of qPCR, such as by the microfluidic Taqman Fluidigm Biomark Platform, enables evaluation of multiple transcripts in samples treated under various conditions. Despite advanced technologies, correct evaluation of the measurements remains challenging. The most widely used methods for evaluating or calculating gene expression data include geNorm and ΔΔCt, respectively. They rely on one or several stable reference genes (RGs) for normalization, thus potentially causing biased results. We therefore applied multivariable regression with a tailored error model to overcome the necessity of stable RGs. Results We developed an RG-independent data normalization approach based on a tailored linear error model for parallel qPCR data, called LEMming. It uses the assumption that the mean Ct values within samples of similarly treated groups are equal. Performance of LEMming was evaluated in three data sets with different stability patterns of RGs and compared to the results of geNorm normalization. Data set 1 showed that both methods give similar results if stable RGs are available. Data set 2 included RGs which are stable according to geNorm criteria, but became differentially expressed in normalized data evaluated by a t-test. geNorm-normalized data showed an effect of a shifted mean per gene per condition whereas LEMming-normalized data did not. Comparing the decrease of standard deviation from raw data to geNorm and to LEMming, the latter was superior. In data set 3, stable RGs were available according to geNorm's calculated average expression stability and pairwise variation, but t-tests of raw data contradicted this. Normalization with RGs resulted in distorted data contradicting
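
    The paper's core assumption (equal mean Ct across samples of similarly treated groups) can be illustrated with a deliberately simplified sketch: centring each sample on its own mean Ct removes a sample-wide shift without using any reference gene. This toy version ignores LEMming's full error model; the Ct values are made up.

```python
import numpy as np

# Toy Ct matrix: rows = genes, columns = samples of one treatment group.
# Sample 3 carries a +4-cycle technical shift (synthetic values).
ct = np.array([
    [20.0, 21.0, 25.0],
    [24.0, 25.0, 29.0],
    [28.0, 29.0, 33.0],
])

# Centre each sample (column) on its own mean Ct, enforcing the
# equal-mean assumption without any reference gene.
normalized = ct - ct.mean(axis=0)
print(normalized)
```

    After centring, every column has mean zero and the sample-wide shift is gone, while the gene-to-gene differences within each sample are preserved.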

  18. Absolute measurement of the extreme UV solar flux

    NASA Technical Reports Server (NTRS)

    Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.

    1984-01-01

    A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575-A region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 A), normalized to unit solar distance, was found to be (5.71 ± 0.42) x 10^10 photons per sq cm per sec.

  19. Regional absolute conductivity reconstruction using projected current density in MREIT.

    PubMed

    Sajib, Saurav Z K; Kim, Hyung Joong; Kwon, Oh In; Woo, Eung Je

    2012-09-21

    Magnetic resonance electrical impedance tomography (MREIT) is a non-invasive technique for imaging the internal conductivity distribution in tissue within an MRI scanner, utilizing the magnetic flux density, which is introduced when a current is injected into the tissue from external electrodes. This magnetic flux alters the MRI signal, so that appropriate reconstruction can provide a map of the additional z-component of the magnetic field (B(z)) as well as the internal current density distribution that created it. To extract the internal electrical properties of the subject, including the conductivity and/or the current density distribution, MREIT techniques use the relationship between the external injection current and the z-component of the magnetic flux density B = (B(x), B(y), B(z)). The tissue studied typically contains defective regions, regions with a low MRI signal and/or low MRI signal-to-noise-ratio, due to the low density of nuclear magnetic resonance spins, short T(2) or T*(2) relaxation times, as well as regions with very low electrical conductivity, through which very little current traverses. These defective regions provide noisy B(z) data, which can severely degrade the overall reconstructed conductivity distribution. Injecting two independent currents through surface electrodes, this paper proposes a new direct method to reconstruct a regional absolute isotropic conductivity distribution in a region of interest (ROI) while avoiding the defective regions. First, the proposed method reconstructs the contrast of conductivity using the transversal J-substitution algorithm, which blocks the propagation of severe accumulated noise from the defective region to the ROI. Second, the proposed method reconstructs the regional projected current density using the relationships between the internal current density, which stems from a current injection on the surface, and the measured B(z) data. Combining the contrast conductivity distribution in the entire

  20. Mathematical Model for Absolute Magnetic Measuring Systems in Industrial Applications

    NASA Astrophysics Data System (ADS)

    Fügenschuh, Armin; Fügenschuh, Marzena; Ludszuweit, Marina; Mojsic, Aleksandar; Sokół, Joanna

    2015-09-01

    Scales for measuring systems are based on either incremental or absolute measuring methods. Incremental scales need to initialize a measurement cycle at a reference point; from there, the position is computed by counting increments of a periodic graduation. Absolute methods do not need reference points, since the position can be read directly from the scale. The positions on complete scales are commonly encoded using two incremental tracks with different graduations. We present a new method for absolute measuring that uses only one track for position encoding, down to the micrometre range. Instead of the common perpendicular magnetic areas, we use a pattern of trapezoidal magnetic areas to store more complex information. For positioning, we use the magnetic field, in which every position is characterized by a set of values measured by a Hall sensor array. We implement a method for reconstructing absolute positions from the set of unique measured values. We compare two patterns with respect to uniqueness, accuracy, stability and robustness of positioning. We discuss how stability and robustness are influenced by different errors during measurement in real applications and how those errors can be compensated.

  1. Comparison of Orbit-Based and Time-Offset-Based Geometric Correction Models for SAR Satellite Imagery Based on Error Simulation

    PubMed Central

    Hong, Seunghwan; Choi, Yoonjo; Park, Ilsuk; Sohn, Hong-Gyoo

    2017-01-01

    Geometric correction of SAR satellite imagery is the process of adjusting the model parameters that define the relationship between ground and image coordinates. To achieve sub-pixel geolocation accuracy, the adoption of an appropriate geometric correction model and parameters is important. Various geometric correction models have been developed and applied, but it is still difficult for general users to adopt a suitable geometric correction model with sufficient precision. In this regard, this paper evaluated the orbit-based and time-offset-based models with an error simulation. To evaluate the geometric correction models, Radarsat-1 images, which have large errors in satellite orbit information, and TerraSAR-X images, which have a reportedly high accuracy in satellite orbit and sensor information, were utilized. For Radarsat-1 imagery, the geometric correction model based on the satellite position parameters performed better than the model based on time-offset parameters. In the case of the TerraSAR-X imagery, the two geometric correction models had similar performance and could ensure sub-pixel geolocation accuracy. PMID:28106729

  2. Comparison of Orbit-Based and Time-Offset-Based Geometric Correction Models for SAR Satellite Imagery Based on Error Simulation.

    PubMed

    Hong, Seunghwan; Choi, Yoonjo; Park, Ilsuk; Sohn, Hong-Gyoo

    2017-01-17

    Geometric correction of SAR satellite imagery is the process of adjusting the model parameters that define the relationship between ground and image coordinates. To achieve sub-pixel geolocation accuracy, the adoption of an appropriate geometric correction model and parameters is important. Various geometric correction models have been developed and applied, but it is still difficult for general users to adopt a suitable geometric correction model with sufficient precision. In this regard, this paper evaluated the orbit-based and time-offset-based models with an error simulation. To evaluate the geometric correction models, Radarsat-1 images, which have large errors in satellite orbit information, and TerraSAR-X images, which have a reportedly high accuracy in satellite orbit and sensor information, were utilized. For Radarsat-1 imagery, the geometric correction model based on the satellite position parameters performed better than the model based on time-offset parameters. In the case of the TerraSAR-X imagery, the two geometric correction models had similar performance and could ensure sub-pixel geolocation accuracy.

  3. The Absolute Spectrum Polarimeter (ASP)

    NASA Technical Reports Server (NTRS)

    Kogut, A. J.

    2010-01-01

    The Absolute Spectrum Polarimeter (ASP) is an Explorer-class mission to map the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10^(-3) and Compton distortion y < 10^(-6). We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  4. Physics of negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Abraham, Eitan; Penrose, Oliver

    2017-01-01

    Negative absolute temperatures were introduced into experimental physics by Purcell and Pound, who successfully applied this concept to nuclear spins; nevertheless, the concept has proved controversial: a recent article aroused considerable interest by its claim, based on a classical entropy formula (the "volume entropy") due to Gibbs, that negative temperatures violated basic principles of statistical thermodynamics. Here we give a thermodynamic analysis that confirms the negative-temperature interpretation of the Purcell-Pound experiments. We also examine the principal arguments that have been advanced against the negative temperature concept; we find that these arguments are not logically compelling, and moreover that the underlying "volume" entropy formula leads to predictions inconsistent with existing experimental results on nuclear spins. We conclude that, despite the counterarguments, negative absolute temperatures make good theoretical sense and did occur in the experiments designed to produce them.

  5. Optomechanics for absolute rotation detection

    NASA Astrophysics Data System (ADS)

    Davuluri, Sankar

    2016-07-01

    In this article, we present an application of optomechanical cavity for the absolute rotation detection. The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the optomechanical cavity. The change in the cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to the frequency shift in the cavity mode is measured at the interferometer output to estimate the angular velocity of absolute rotation. We derived an analytic expression to estimate the minimum detectable rotation rate in our scheme for a given optomechanical cavity. Temperature dependence of the rotation detection sensitivity is studied.

  6. Tracking time-varying causality and directionality of information flow using an error reduction ratio test with applications to electroencephalography data.

    PubMed

    Zhao, Yifan; Billings, Steve A; Wei, Hualiang; Sarrigiannis, Ptolemaios G

    2012-11-01

    This paper introduces an error reduction ratio-causality (ERR-causality) test that can be used to detect and track causal relationships between two signals. In comparison to the traditional Granger method, one significant advantage of the new ERR-causality test is that it can effectively detect the time-varying direction of linear or nonlinear causality between two signals without fitting a complete model. Another important advantage is that the ERR-causality test can detect both the direction of interactions and estimate the relative time shift between the two signals. Numerical examples are provided to illustrate the effectiveness of the new method together with the determination of the causality between electroencephalograph signals from different cortical sites for patients during an epileptic seizure.

  7. The Motor-Cognitive Model of Motor Imagery: Evidence From Timing Errors in Simulated Reaching and Grasping.

    PubMed

    Glover, Scott; Baran, Marek

    2017-04-03

    Motor imagery represents an important but theoretically underdeveloped area of research in psychology. The motor-cognitive model of motor imagery was presented, and contrasted with the currently prevalent view, the functional equivalence model. In 3 experiments, the predictions of the two models were pitted against each other through manipulations of task precision and the introduction of an interference task, while comparing their effects on overt actions and motor imagery. In Experiments 1a and 1b, the motor-cognitive model predicted an effect of precision whereby motor imagery would overestimate simulated movement times when a grasping action involved a high level of precision; this prediction was upheld. In Experiment 2, the motor-cognitive model predicted that an interference task would slow motor imagery to a much greater extent than it would overt actions; this prediction was also upheld. Experiment 3 showed that the effects observed in the previous experiments could not be due to failures to match the motor imagery and overt action tasks. None of the above results were explainable by either a strong version of the functional equivalence model, or any reasonable adaptations thereof. It was concluded that the motor-cognitive model may represent a theoretically viable advance in the understanding of motor imagery.

  8. Identification of defects and strain error estimation for bending steel beams using time domain Brillouin distributed optical fiber sensors

    NASA Astrophysics Data System (ADS)

    Bernini, R.; Fraldi, M.; Minardo, A.; Minutolo, V.; Carannante, F.; Nunziante, L.; Zeni, L.

    2006-04-01

    In recent years, the use of distributed optical fiber sensors based on the Brillouin scattering effect has been proposed for measurements of strain in beams. Several works have pointed out the practical difficulties of this kind of measurement, connected to both theoretical and experimental problems, e.g. mechanical characterization of optical fibers, decay of strains in the protective coatings, spatial resolution of the Brillouin scattering, brittleness of the glass core, elastic-plastic response of the polymeric jackets, end effects, and the different responses of the fiber to dilatation and contraction. Dealing with each of the above problems still requires a great research effort. However, recent literature shows that distributed optical fiber measurement techniques are extremely useful for finding qualitative responses in terms of strains. Indeed, in spite of the above-mentioned uncertainties, the great advantage of distributed strain measurement remains evident for the safety assessment of large structures, such as bridges, tunnels, dams and pipelines, over their whole lifetimes. In view of this, the present paper proposes the detection of defects or damage in bending beams using distributed optical fiber sensors, in a method based on time domain stimulated Brillouin scattering. In particular, laboratory tests were carried out to measure the strain profile along a steel beam. Two tests were performed: the first on an intact steel beam, the second on a damaged beam. Comparison between these two tests allows the position of the defect to be detected and bounds on its size to be established. Finally, the quality and accuracy of the measurements are discussed, and a sensitivity analysis of the strain readings, taking into account the bonding conditions at the interface between the structure and the fiber, is carried out by means of a parametric numerical simulation.

  9. Absolute Priority for a Vehicle in VANET

    NASA Astrophysics Data System (ADS)

    Shirani, Rostam; Hendessi, Faramarz; Montazeri, Mohammad Ali; Sheikh Zefreh, Mohammad

    In today's world, traffic jams waste hundreds of hours of our lives. This has led many researchers to try to resolve the problem with the idea of Intelligent Transportation Systems. For some applications, such as a travelling ambulance, even a one-second reduction in delay is important. In this paper, we propose a completely infrastructure-less approach for finding the shortest path and controlling traffic lights to provide absolute priority for an emergency vehicle. We use the idea of vehicular ad-hoc networking to reduce the imposed travelling time. We then simulate our proposed protocol and compare it with a centrally controlled traffic light system.

  10. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
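
    A much-simplified, single-drop version of the least-squares idea can be sketched as follows: in a linear gradient field the free-fall trajectory acquires t^3 and t^4 terms, so fitting those columns alongside 1, t and t^2 recovers g (and, in principle, the gradient) by ordinary least squares. This is an illustrative sketch with synthetic, noise-free data, not the paper's combined multi-drop solution with system-response terms.

```python
import numpy as np

# Free fall in a linear gravity gradient: z'' = g + gamma*z gives,
# to first order in gamma (with the gamma*z0 part absorbed into g),
#   z(t) = z0 + v0*t + g*t**2/2 + gamma*(v0*t**3/6 + g*t**4/24).
# Synthetic noise-free drop data:
g_true, gamma = 9.81, 3.086e-6   # gamma ~ free-air gradient, in 1/s^2
z0, v0 = 0.0, 0.1
t = np.linspace(0.0, 0.2, 700)
z = z0 + v0*t + g_true*t**2/2 + gamma*(v0*t**3/6 + g_true*t**4/24)

# Ordinary least squares over the five trajectory terms; the t^3 and
# t^4 columns carry the gradient information.
A = np.column_stack([np.ones_like(t), t, t**2/2, t**3/6, t**4/24])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
g_est = coef[2]
print(round(g_est, 4))
```

    With real, noisy drops the t^2 and t^4 columns are nearly collinear over a short fall, which is why the paper combines all drops from a site occupation into one solution and reports convergence difficulties.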

  11. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

    A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives, SSDs. By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate, BER, before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital-still camera and high-speed memory card applications with a dual channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8 channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of the user data to the parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with the conventional ECC with the fixed large 32 KByte codeword, the proposed scheme achieves a lower power consumption by introducing the "best-effort" type operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte or 2 KByte is used, realizing 98% lower power consumption. At the life-end of the SSD, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
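
    The transition policy described above can be sketched as a simple lookup: as the monitored error count grows, step up to the next larger codeword. The threshold values below are invented for illustration; the paper derives the actual transition points from the acceptable raw BER of the NAND device.

```python
# Illustrative sketch of the dynamic codeword transition idea.
# Codeword sizes follow the abstract; the error-count thresholds
# are hypothetical placeholders, not the paper's derived values.
CODEWORD_SIZES = [512, 1024, 2048, 4096, 8192, 16384, 32768]  # bytes

def select_codeword(errors_per_512b: int,
                    thresholds=(2, 3, 4, 5, 6, 7)) -> int:
    """Return the smallest codeword (bytes) whose threshold still
    covers the currently observed per-sector error count."""
    for size, limit in zip(CODEWORD_SIZES, thresholds):
        if errors_per_512b <= limit:
            return size
    return CODEWORD_SIZES[-1]   # worn-out device: strongest ECC

print(select_codeword(1))   # fresh device: weak, low-power ECC
print(select_codeword(10))  # life-end device: 32 KByte codeword
```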

  12. Standardization of the cumulative absolute velocity

    SciTech Connect

    O'Hara, T.F.; Jacobson, J.P. )

    1991-12-01

    EPRI NP-5930, "A Criterion for Determining Exceedance of the Operating Basis Earthquake," was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. During the review of that report, it was noted that the calculation of CAV could be confounded by long-duration time history records containing low (nondamaging) accelerations. It is therefore necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparison between future CAV calculations and the adjusted CAV threshold value, obtained by applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended standardization is to window the CAV calculation on a second-by-second basis for a given time history, so that a one-second interval contributes only if the absolute acceleration exceeds 0.025g at some time within it. The earthquake records used in EPRI NP-5930 have been reanalyzed on this basis, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
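
    The windowed calculation can be sketched directly from the description above: split the record into one-second windows and integrate |a| only over windows in which the acceleration exceeds 0.025g at least once. The acceleration record below is synthetic; the threshold and windowing follow the abstract.

```python
import numpy as np

def standardized_cav(accel_g, dt, threshold=0.025):
    """Standardized CAV (g-sec): integrate |a| only over one-second
    windows where |a| exceeds the 0.025g threshold at least once."""
    samples_per_sec = int(round(1.0 / dt))
    total = 0.0
    for start in range(0, len(accel_g), samples_per_sec):
        window = np.abs(np.asarray(accel_g[start:start + samples_per_sec]))
        if window.max() > threshold:
            total += window.sum() * dt  # rectangle-rule integral of |a|
        # windows that never exceed the threshold contribute nothing
    return total

# One damaging second followed by a long quiet tail (synthetic data):
# the nondamaging tail does not inflate the standardized CAV.
dt = 0.01
strong = [0.1] * 100   # 1 s above threshold
quiet = [0.01] * 500   # 5 s below threshold
print(round(standardized_cav(strong + quiet, dt), 3))
```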

  13. Absolute calibration of optical tweezers

    SciTech Connect

    Viana, N.B.; Mazolli, A.; Maia Neto, P.A.; Nussenzveig, H.M.; Rocha, M.S.; Mesquita, O.N.

    2006-03-27

    As a step toward absolute calibration of optical tweezers, a first-principles theory of trapping forces with no adjustable parameters, corrected for spherical aberration, is experimentally tested. Employing two very different setups, we find generally very good agreement for the transverse trap stiffness as a function of microsphere radius for a broad range of radii, including the values employed in practice, and at different sample chamber depths. The domain of validity of the WKB ('geometrical optics') approximation to the theory is verified. Theoretical predictions for the trapping threshold, peak position, depth variation, multiple equilibria, and 'jump' effects are also confirmed.

  14. Investigation of the effects of correlated measurement errors in time series analysis techniques applied to nuclear material accountancy data. [Program COVAR

    SciTech Connect

    Pike, D.H.; Morrison, G.W.; Downing, D.J.

    1982-04-01

    It has been shown in previous work that the Kalman Filter and Linear Smoother produce optimal estimates of inventory and loss from a material balance area. However, the standard Kalman Filter/Linear Smoother formulation assumes no correlation between inventory measurement errors, nor does it allow for serial correlation in these measurement errors. The purpose of this report is to extend the previous results by relaxing these assumptions to allow for correlated measurement errors. The results show how to account for correlated measurement errors in the linear system model of the Kalman Filter/Linear Smoother. An algorithm is also included for calculating the required error covariance matrices.
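
    One standard device for accommodating serially correlated measurement errors in this setting is state augmentation: model the AR(1) error process as part of the state so the augmented system again has white noise. The scalar toy model below illustrates that device under invented parameters; it is not the report's material-balance formulation or its COVAR program.

```python
import numpy as np

# Toy model: inventory level x is a random walk, and the measurement
# error e is AR(1) (serially correlated). Augmenting the state with e
# lets an ordinary Kalman filter handle the correlation.
rng = np.random.default_rng(1)
phi, q, r = 0.8, 0.01, 0.04    # AR coeff, process var, error-driving var
n = 400

x = np.cumsum(rng.normal(0, np.sqrt(q), n))   # true inventory level
e = np.zeros(n)
for k in range(1, n):
    e[k] = phi * e[k-1] + rng.normal(0, np.sqrt(r))
y = x + e                                     # correlated measurements

# Augmented model: s = [x, e], F = diag(1, phi), observation y = [1 1] s.
F = np.array([[1.0, 0.0], [0.0, phi]])
Q = np.diag([q, r])
H = np.array([[1.0, 1.0]])
s, P = np.zeros(2), np.eye(2)
est = np.zeros(n)
for k in range(n):
    s = F @ s                       # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T                 # innovation variance (white noise)
    K = (P @ H.T) / S               # Kalman gain
    s = s + (K * (y[k] - H @ s)).ravel()
    P = (np.eye(2) - K @ H) @ P
    est[k] = s[0]                   # inventory estimate

rmse = np.sqrt(np.mean((est[100:] - x[100:])**2))
print(rmse < 0.5)
```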

  15. Forecasting peak daily ozone levels: part 2--A regression with time series errors model having a principal component trigger to forecast 1999 and 2002 ozone levels.

    PubMed

    Liu, Pao-Wen Grace; Johnson, Richard

    2003-12-01

    A modified time series approach, a Box-Jenkins regression with time series errors (RTSE) model plus a principal component (PC) trigger, has been developed to forecast peak daily 1-hr ozone (O3) in real time at the University of Wisconsin-Milwaukee North (UWM-N) during 1999 and 2002. The PC trigger acts as a predictor variable in the RTSE model. It tries to answer the question: will the next day be a possible high O3 day? To answer this question, three PC trigger rules were developed: (1) Hi-Low Checklist, (2) Discriminant Function Approach I, and (3) Discriminant Function Approach II. Also, a pure RTSE model without including the PC trigger and a persistence approach were tested for comparison. The RTSE model with DFA I successfully forecast the only two 1-hr federal exceedances (124 ppb), one in 1999 and one in 2002. In terms of the O3 100-ppb exceedances, 60-80% of the incorrect forecasts occurred with incorrect PC decisions. A few others may have been caused by unexpected O3-weather relations. When the three approaches used UWM-N data to forecast a 100-ppb exceedance somewhere in the Milwaukee, WI, metropolitan area, their performance was dramatically improved: the false alarm rate was reduced from 0.89 to 0.44, and the probability of detection was increased from 0.71 to 0.88.

  16. Absolute Density Calibration Cell for Laser Induced Fluorescence Erosion Rate Measurements

    NASA Technical Reports Server (NTRS)

    Domonkos, Matthew T.; Stevens, Richard E.

    2001-01-01

    Flight qualification of ion thrusters typically requires testing on the order of 10,000 hours. Extensive knowledge of wear mechanisms and rates is necessary to establish design confidence prior to long duration tests. Consequently, real-time erosion rate measurements offer the potential both to reduce development costs and to enhance knowledge of the dependency of component wear on operating conditions. Several previous studies have used laser-induced fluorescence (LIF) to measure real-time, in situ erosion rates of ion thruster accelerator grids. Those studies provided only relative measurements of the erosion rate. In the present investigation, a molybdenum tube was resistively heated such that the evaporation rate yielded densities within the tube on the order of those expected from accelerator grid erosion. This work examines the suitability of the density cell as an absolute calibration source for LIF measurements, and the intrinsic error was evaluated.

  17. Absolute calibration of forces in optical tweezers

    NASA Astrophysics Data System (ADS)

    Dutra, R. S.; Viana, N. B.; Maia Neto, P. A.; Nussenzveig, H. M.

    2014-07-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past 15 years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spot, adapting frequently employed video microscopy techniques. Combined with interface spherical aberration, it reveals a previously unknown window of instability for trapping. Comparison with experimental data leads to an overall agreement within error bars, with no fitting, for a broad range of microsphere radii, from the Rayleigh regime to the ray optics one, for different polarizations and trapping heights, including all commonly employed parameter domains. Besides signaling full first-principles theoretical understanding of optical tweezers operation, the results may lead to improved instrument design and control over experiments, as well as to an extended domain of applicability, allowing reliable force measurements, in principle, from femtonewtons to nanonewtons.

  18. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.
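
    The root-sum-square (RSS) summary used in the analysis above combines independent error contributions as the square root of the sum of their squares. A minimal sketch, with hypothetical numbers:

```python
# Combine individual consider-parameter and measurement-noise error
# contributions as a root-sum-square (RSS). Contributions are assumed
# independent; the values below are made up for illustration.
def rss(contributions):
    return sum(c * c for c in contributions) ** 0.5

total = rss([3.0, 4.0])  # two position-error contributions, e.g. in meters
```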

  19. Cosmology with negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2016-08-01

    Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950's (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] has rekindled interest in negative temperatures and hinted at a possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < -1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.

  20. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  1. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  2. Absolute Electron Extraction Efficiency of Liquid Xenon

    NASA Astrophysics Data System (ADS)

    Kamdin, Katayun; Mizrachi, Eli; Morad, James; Sorensen, Peter

    2016-03-01

    Dual phase liquid/gas xenon time projection chambers (TPCs) currently set the world's most sensitive limits on weakly interacting massive particles (WIMPs), a favored dark matter candidate. These detectors rely on extracting electrons from liquid xenon into gaseous xenon, where they produce proportional scintillation. The proportional scintillation from the extracted electrons serves to internally amplify the WIMP signal; even a single extracted electron is detectable. Credible dark matter searches can proceed with electron extraction efficiency (EEE) lower than 100%. However, electrons systematically left at the liquid/gas boundary are a concern. Possible effects include spontaneous single or multi-electron proportional scintillation signals in the gas, or charging of the liquid/gas interface or detector materials. Understanding EEE is consequently a serious concern for this class of rare event search detectors. Previous EEE measurements have mostly been relative, not absolute, assuming efficiency plateaus at 100%. I will present an absolute EEE measurement with a small liquid/gas xenon TPC test bed located at Lawrence Berkeley National Laboratory.

  3. Upscaled CTAB-based DNA extraction and real-time PCR assays for Fusarium culmorum and F. graminearum DNA in plant material with reduced sampling error.

    PubMed

    Brandfass, Christoph; Karlovsky, Petr

    2008-11-01

    Fusarium graminearum Schwabe (Gibberella zeae Schwein. Petch.) and F. culmorum W.G. Smith are major mycotoxin producers in small-grain cereals afflicted with Fusarium head blight (FHB). Real-time PCR (qPCR) is the method of choice for species-specific, quantitative estimation of fungal biomass in plant tissue. We demonstrated that increasing the amount of plant material used for DNA extraction to 0.5-1.0 g considerably reduced sampling error and improved the reproducibility of DNA yield. The costs of DNA extraction at different scales and with different methods (commercial kits versus cetyltrimethylammonium bromide-based protocol) and qPCR systems (doubly labeled hybridization probes versus SYBR Green) were compared. A cost-effective protocol for the quantification of F. graminearum and F. culmorum DNA in wheat grain and maize stalk debris based on DNA extraction from 0.5-1.0 g material and real-time PCR with SYBR Green fluorescence detection was developed.

  4. Real-Time Correction of Rigid-Body-Motion-Induced Phase Errors for Diffusion-Weighted Steady State Free Precession Imaging

    PubMed Central

    O’Halloran, R; Aksoy, M; Aboussouan, E; Peterson, E; Van, A; Bammer, R

    2014-01-01

    Purpose Diffusion contrast in diffusion-weighted steady state free precession MRI is generated through the constructive addition of signal from many coherence pathways. Motion-induced phase causes destructive interference which results in loss of signal magnitude and diffusion contrast. In this work, a 3D navigator-based real-time correction of the rigid-body-motion-induced phase errors is developed for diffusion-weighted steady state free precession MRI. Methods The efficacy of the real-time prospective correction method in preserving phase coherence of the steady-state is tested in 3D phantom experiments and 3D scans of healthy human subjects. Results In nearly all experiments, the signal magnitude in images obtained with proposed prospective correction was higher than the signal magnitude in images obtained with no correction. In the human subjects the mean magnitude signal in the data was up to 30 percent higher with prospective motion correction than without. Prospective correction never resulted in a decrease in mean signal magnitude in either the data or in the images. Conclusions The proposed prospective motion correction method is shown to preserve the phase coherence of the steady state in diffusion-weighted steady state free precession MRI, thus mitigating signal magnitude losses that would confound the desired diffusion contrast. PMID:24715414

  5. An ultrasonic system for measurement of absolute myocardial thickness using a single transducer.

    PubMed

    Pitsillides, K F; Longhurst, J C

    1995-03-01

    We have developed an ultrasonic instrument that can measure absolute regional myocardial wall motion throughout the cardiac cycle using a single epicardial piezoelectric transducer. The ultrasound methods currently in place for measuring myocardial wall thickness are the transit-time sonomicrometer (TTS) and, more recently, the Doppler echo displacement method; both have inherent disadvantages. To address the need for an instrument that can measure absolute dimensions of the myocardial wall at any depth, an ultrasonic single-crystal sonomicrometer (SCS) system was developed. This system can identify and track the boundary of the endocardial muscle-blood interface. With this instrument, it is possible to obtain, from a single epicardial transducer, measurements of myocardial wall motion calibrated in absolute dimensional units. The operating principles of the proposed myocardial dimension measurement system are as follows. A short-duration ultrasonic burst at a frequency of 10 MHz is transmitted from the piezoelectric transducer. Reflected echoes are sampled at two distinct time intervals to generate reference and interface sample volumes. During steady state, the two sample volumes are adjusted so that the reference volume remains entirely within the myocardium, whereas half of the interface sample volume is located within the myocardium. After amplification and filtering, the true root mean square values of both signals are compared and an error signal is generated. A closed-loop circuit uses the integrated error signal to continuously adjust the position of the two sample volumes. We have compared our system in vitro against a known signal and in vivo against the two-crystal TTS system during control, suppression (ischemia), and enhancement (isoproterenol) of myocardial function. Results were obtained in vitro for accuracy (> 99%), signal linearity (r = 0.99), and frequency response to heart rates > 450 beats/min, and in vivo data were
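
    The range gating implied by the sampling scheme above follows the usual pulse-echo relation: an echo arriving t seconds after the burst comes from depth d = c*t/2. A back-of-envelope sketch, assuming the conventional soft-tissue sound speed of about 1540 m/s (a textbook value, not a figure from this abstract):

```python
# Depth of the gated sample volume from the echo delay, d = c*t/2.
c_tissue = 1540.0  # m/s, assumed speed of sound in soft tissue

def gate_depth_mm(delay_us):
    # Round-trip delay in microseconds -> one-way depth in millimeters
    return c_tissue * (delay_us * 1e-6) / 2.0 * 1000.0

# A ~13 us gate delay corresponds to an echo from ~10 mm deep.
depth = gate_depth_mm(13.0)
```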

  6. Absolute Plate Velocities from Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Kreemer, Corné; Zheng, Lin; Gordon, Richard

    2015-04-01

    The orientation of seismic anisotropy inferred beneath plate interiors may provide a means to estimate the motions of the plate relative to the sub-asthenospheric mantle. Here we analyze two global sets of shear-wave splitting data, that of Kreemer [2009] and an updated and expanded data set, to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. We also explore the effect of using geologically current plate velocities (i.e., the MORVEL set of angular velocities [DeMets et al. 2010]) compared with geodetically current plate velocities (i.e., the GSRM v1.2 angular velocities [Kreemer et al. 2014]). We demonstrate that the errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. The SKS-MORVEL absolute plate angular velocities (based on the Kreemer [2009] data set) are determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2° ) differs insignificantly from that for continental lithosphere (σ=21.6° ). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4° ) than for continental

  7. The absolute radiometric calibration of the advanced very high resolution radiometer

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Teillet, P. M.; Ding, Y.

    1988-01-01

    The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.

  8. Automated absolute phase retrieval in across-track interferometry

    NASA Technical Reports Server (NTRS)

    Madsen, Soren N.; Zebker, Howard A.

    1992-01-01

    Discussed is a key element in the processing of topographic radar maps acquired by the NASA/JPL airborne synthetic aperture radar configured as an across-track interferometer (TOPSAR). TOPSAR utilizes a single transmit and two receive antennas; the three-dimensional target location is determined by triangulation based on a known baseline and two measured slant ranges. The slant range difference is determined very accurately from the phase difference between the signals received by the two antennas. This phase is measured modulo 2pi, whereas it is the absolute phase which relates directly to the difference in slant range. It is shown that splitting the range bandwidth into two subbands in the processor and processing each individually allows the absolute phase to be determined. The underlying principles and system errors which must be considered are discussed, together with the implementation and results from processing data acquired during the summer of 1991.
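
    The split-spectrum idea above can be illustrated numerically. The interferometric phase at band-center frequency f is phi = 2*pi*f*tau (mod 2*pi) for a delay tau; the difference of the phases from two subbands separated by df behaves like a phase at the much lower frequency df, so it remains unambiguous over a much larger delay range. The frequencies and delay below are our own toy numbers, not TOPSAR system parameters.

```python
import numpy as np

# Two subband center frequencies (assumed values) and a true delay
# that wraps many times at either carrier alone.
f1, f2 = 5.25e9, 5.35e9               # Hz
tau = 3.2e-9                          # s, unknown to the decoder
phi1 = (2*np.pi*f1*tau) % (2*np.pi)   # wrapped phase, subband 1
phi2 = (2*np.pi*f2*tau) % (2*np.pi)   # wrapped phase, subband 2
dphi = (phi2 - phi1) % (2*np.pi)      # beat phase at df = f2 - f1
tau_est = dphi / (2*np.pi*(f2 - f1))  # absolute delay, valid while tau < 1/df
```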

  9. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
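
    A maximum-likelihood fit of this kind can be sketched in a few lines. The toy model below is our own construction (not the paper's algorithm, which also handles selection biases and censorship): stars share a true absolute magnitude M, and the likelihood is evaluated on the parallax scale, so negative measured parallaxes contribute naturally rather than being discarded.

```python
import numpy as np

# Each star has apparent magnitude m_i and measured parallax p_i (mas)
# with Gaussian error sigma_p. Predicted parallax follows from the
# distance modulus: d = 10**((m - M + 5)/5) pc, p = 1000/d mas.
def neg_log_like(M, m, p_obs, sigma_p):
    p_pred = 1000.0 / 10**((m - M + 5.0) / 5.0)
    return np.sum(0.5 * ((p_obs - p_pred) / sigma_p) ** 2)

rng = np.random.default_rng(1)
M_true, n = 4.0, 200
d = rng.uniform(50.0, 500.0, n)                 # true distances, pc
m = M_true + 5.0 * np.log10(d) - 5.0            # apparent magnitudes
p_obs = 1000.0 / d + rng.normal(0.0, 2.0, n)    # noisy parallaxes (some < 0)

# Grid-search MLE over candidate absolute magnitudes
grid = np.linspace(2.0, 6.0, 401)
M_hat = grid[np.argmin([neg_log_like(M, m, p_obs, 2.0) for M in grid])]
```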

  10. Single-track absolute position encoding method based on spatial frequency of stripes

    NASA Astrophysics Data System (ADS)

    Xiang, Xiansong; Lu, Yancong; Wei, Chunlong; Zhou, Changhe

    2014-11-01

    A new method of single-track absolute position encoding based on the spatial frequency of stripes is proposed. Instead of using pseudorandom-sequence arranged stripes as in conventional approaches, this encoding method stores the location information in the frequency space of the stripes: the spatial frequency of the stripes varies with, and thereby indicates, position. The encoding has strong fault tolerance against single-stripe detection errors. The method can be applied to absolute linear encoders, absolute photoelectric angle encoders, or two-dimensional absolute linear encoders. The measuring apparatus comprises a CCD image sensor and a microscope system, and the frequency code is decoded with an FFT-based algorithm. This method should be highly interesting for practical applications requiring absolute position encoding.
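
    The decoding principle can be sketched in a toy form (our own construction, not the paper's implementation): the stripe pattern seen in the sensor window has a local spatial frequency that encodes position, and an FFT peak recovers that frequency.

```python
import numpy as np

N = 256  # pixels in the image-sensor window

def make_window(freq_cycles):
    # Sinusoidal stripe pattern with `freq_cycles` cycles per window
    x = np.arange(N)
    return 0.5 + 0.5 * np.cos(2 * np.pi * freq_cycles * x / N)

def decode_frequency(window):
    # Dominant non-DC spatial frequency, in cycles per window
    spec = np.abs(np.fft.rfft(window - window.mean()))
    return int(np.argmax(spec))

# If position p is mapped to stripe frequency 20 + p, decoding recovers p:
decoded = [decode_frequency(make_window(20 + p)) - 20 for p in (0, 5, 17)]
```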

  11. Measured and modelled absolute gravity in Greenland

    NASA Astrophysics Data System (ADS)

    Nielsen, E.; Forsberg, R.; Strykowski, G.

    2012-12-01

    Present-day changes in the ice volume in glaciated areas like Greenland change the load on the Earth, and the lithosphere responds elastically to this change. The Earth also responds to changes in the ice volume over a millennial time scale. This response is due to the viscous properties of the mantle and is known as Glacial Isostatic Adjustment (GIA). Both signals are present in Global Positioning System (GPS) and absolute gravity (AG) measurements, and they introduce an uncertainty in mass balance estimates calculated from these data types. It is possible to separate the two signals if both gravity and GPS time series are available. DTU Space acquired an A10 absolute gravimeter in 2008. One purpose of this instrument is to establish AG time series in Greenland, and the first measurements were conducted in 2009. Since then, 18 different Greenland GPS Network (GNET) stations have been visited, and six of these have been visited more than once. The gravity signal consists of three components: the elastic signal, the viscous signal, and the direct attraction from the ice masses. All of these signals can be modelled using various techniques. The viscous signal is modelled by solving the Sea Level Equation with an appropriate ice history and Earth model; the free code SELEN is used for this. The elastic signal is modelled as a convolution of the elastic Green's function for gravity with a model of present-day ice mass changes. The direct attraction is simply the Newtonian attraction and is calculated as such. Here we present the preliminary results of the AG measurements in Greenland, together with modelled estimates of the direct attraction and the elastic and viscous signals.

  12. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

    Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims at designing a practical error detection algorithm using electroencephalogram signals that can be integrated in an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per-subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve, ranges from 0.75 to 0.98.
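
    The reported metric, area under the ROC curve, can be computed directly from single-trial scores via the Mann-Whitney formulation: the probability that a randomly chosen error trial scores higher than a randomly chosen correct trial (ties count half). A minimal sketch with made-up scores:

```python
# ROC AUC from two sets of detector scores (Mann-Whitney formulation).
def auc(error_scores, correct_scores):
    wins = 0.0
    for e in error_scores:
        for c in correct_scores:
            wins += 1.0 if e > c else (0.5 if e == c else 0.0)
    return wins / (len(error_scores) * len(correct_scores))

perfect = auc([0.9, 0.8], [0.1, 0.2])  # fully separated classes
chance = auc([0.5], [0.5])             # indistinguishable classes
```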

  13. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we find that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover, we show that for a majority of functions, access to general nonsignaling resources boosts the success probability by a factor of two relative to classical resources, provided the number of outputs is large enough. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  14. WE-AB-BRA-04: Evaluation of the Tumor Registration Error in Biopsy Procedures Performed Under Real Time PET/CT Guidance

    SciTech Connect

    Fanchon, L; Apte, A; Dzyubak, O; Mageras, G; Yorke, E; Solomon, S; Kirov, A; Visvikis, D; Hatt, M

    2015-06-15

    Purpose: PET/CT guidance is used for biopsies of metabolically active lesions that are not well seen on CT alone, and for targeting the metabolically active tissue in tumor ablations. It has also been shown that PET/CT guided biopsies provide an opportunity to verify the location of the lesion border at the place of needle insertion. However, the error in needle placement with respect to the metabolically active region may be affected by motion between the PET/CT scan performed at the start of the procedure and the CT scan performed with the needle in place, and this error has not been previously quantified. Methods: Specimens from 31 PET/CT guided biopsies were investigated and correlated to the intraoperative PET scan under an IRB-approved, HIPAA-compliant protocol. For 4 of the cases in which larger motion was suspected, a second PET scan was obtained with the needle in place. The CT and PET images obtained before and after the needle insertion were used to calculate the displacement of the voxels along the needle path. CTpost was registered to CTpre using free-form deformable registration and then fused with PETpre. The shifts between the PET image contours (42% of SUVmax) for PETpre and PETpost were obtained at the needle position. Results: For these extreme cases, the displacement of the CT voxels along the needle path ranged from 2.9 to 8 mm with a mean of 5 mm. The shift of the PET image segmentation contours (42% of SUVmax) at the needle position ranged from 2.3 to 7 mm between the two scans. Conclusion: Evaluation of the mis-registration between the CT with the needle in place and the pre-biopsy PET can be obtained using deformable registration of the respective CT scans and can be used to indicate the need for a second PET in real time. This work is supported in part by a grant from Biospace Lab, S.A.
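
    The 42%-of-SUVmax contour used above is a fixed-threshold segmentation: every voxel at or above 42% of the lesion maximum is inside the contour. A minimal sketch on a made-up array standing in for SUV values:

```python
import numpy as np

# Fixed-threshold PET segmentation at 42% of SUVmax.
def segment_at_42pct(suv):
    return suv >= 0.42 * suv.max()

suv = np.array([[0.1, 0.5, 0.2],
                [0.6, 1.0, 0.7],
                [0.2, 0.4, 0.1]])
mask = segment_at_42pct(suv)  # True inside the 42% contour
```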

  15. The EM-POGO: A simple, absolute velocity profiler

    NASA Astrophysics Data System (ADS)

    Terker, S. R.; Sanford, T. B.; Dunlap, J. H.; Girton, J. B.

    2013-01-01

    Electromagnetic current instrumentation has been added to the Bathy Systems, Inc. POGO transport sondes to produce a free-falling absolute velocity profiler called EM-POGO. The POGO is a free-fall profiler that measures a depth-averaged velocity using GPS fixes at the beginning and end of a round trip to the ocean floor (or a preset depth). The EM-POGO adds a velocity profile determined from measurements of motionally induced electric fields generated by the ocean current moving through the vertical component of the Earth's magnetic field. In addition to providing information about the vertical structure of the velocity, the depth-dependent measurements improve transport measurements by correcting for the non-constant fall rate. Neglecting the variable fall rate results in errors of O(1 cm s-1). The transition from POGO to EM-POGO included electrically isolating the POGO and electric-field-measuring circuits, installing a functional GPS receiver, finding a pressure case that provided an optimal balance among crush depth, price, and size, and incorporating the electrodes, electrode collar, and the circuitry required for the electric field measurement. The first EM-POGO sea trial was in July 1999. In August 2006 a refurbished EM-POGO collected 15 absolute velocity profiles; relative and absolute velocity uncertainty was ~1 cm s-1 and 0.5-5 cm s-1, respectively, at a vertical resolution of 25 m. Absolute velocity from the EM-POGO compared to shipboard ADCP measurements differed by ~1-2 cm s-1, comparable to the uncertainty in absolute velocity from the ADCP. The EM-POGO is thus a low-cost, easy to deploy and recover, and accurate velocity profiler.
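
    The fall-rate correction mentioned above matters because a GPS-fix pair measures a time-weighted average of the current: depths where the instrument falls slowly contribute more. The toy profile and fall-rate below are our own made-up functions, chosen only to show the sign and order of the effect.

```python
import numpy as np

# Depth grid, a sheared current profile, and a fall rate that
# increases with depth (all hypothetical).
z = np.linspace(0.0, 1000.0, 101)        # depth, m
v = 0.10 + 0.05 * np.exp(-z / 200.0)     # current speed, m/s
w = 1.0 + z / 1000.0                     # fall rate, m/s

dt = np.gradient(z) / w                  # time spent in each depth bin
v_true = np.sum(v * dt) / np.sum(dt)     # time-weighted average (what GPS fixes see)
v_naive = v.mean()                       # constant-fall-rate assumption

# Here the bias is a few mm/s; stronger shear or fall-rate variation
# pushes it toward the O(1 cm/s) level the abstract cites.
bias = v_true - v_naive
```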

  16. Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system

    NASA Astrophysics Data System (ADS)

    Baltzer, M. M.; Craig, D.; Den Hartog, D. J.; Nishizawa, T.; Nornberg, M. D.

    2016-11-01

    An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method which used a time-averaged measurement along a chord believed to have zero net Doppler shift.
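
    The quoted wavelength and velocity accuracies are consistent via the first-order Doppler relation v = c * Δλ/λ; a 0.003 nm uncertainty on the 343.4 nm line corresponds to roughly 2.6 km/s, in line with the stated 3 km/s:

```python
# Doppler velocity accuracy implied by the wavelength accuracy.
c = 2.998e8        # speed of light, m/s
lam = 343.4e-9     # C VI line wavelength, m
dlam = 0.003e-9    # calibration accuracy, m
v = c * dlam / lam # velocity accuracy, m/s (~2.6 km/s)
```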

  17. Assessment of absolute added correlative coding in optical intensity modulation and direct detection channels

    NASA Astrophysics Data System (ADS)

    Dong-Nhat, Nguyen; Elsherif, Mohamed A.; Malekmohammadi, Amin

    2016-06-01

    The performance of absolute added correlative coding (AACC) modulation format with direct detection has been numerically and analytically reported, targeting metro data center interconnects. Hereby, the focus lies on the performance of the bit error rate, noise contributions, spectral efficiency, and chromatic dispersion tolerance. The signal space model of AACC, where the average electrical and optical power expressions are derived for the first time, is also delineated. The proposed modulation format was also compared to other well-known signaling, such as on-off-keying (OOK) and four-level pulse-amplitude modulation, at the same bit rate in a directly modulated vertical-cavity surface-emitting laser-based transmission system. The comparison results show a clear advantage of AACC in achieving longer fiber delivery distance due to the higher dispersion tolerance.

  18. Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system.

    PubMed

    Baltzer, M M; Craig, D; Den Hartog, D J; Nishizawa, T; Nornberg, M D

    2016-11-01

    An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method which used a time-averaged measurement along a chord believed to have zero net Doppler shift.
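
    The quoted accuracy figures follow from the nonrelativistic Doppler relation v = c·Δλ/λ. A minimal check (the function name and rounding are ours, not the paper's; the wavelength values are taken from the abstract above):

```python
# Doppler relation: a wavelength error dlam on a line at lam0
# corresponds to a velocity error v = c * dlam / lam0.
C = 2.998e8  # speed of light, m/s

def doppler_velocity(dlam_nm, lam0_nm):
    """Velocity (m/s) corresponding to a wavelength shift dlam at lam0."""
    return C * dlam_nm / lam0_nm

# Calibration accurate to 0.003 nm on the C VI line at 343.4 nm:
v = doppler_velocity(0.003, 343.4)
print(f"{v / 1e3:.1f} km/s")  # about 2.6 km/s, i.e. ~3 km/s as quoted
```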

  19. Time-order errors and standard-position effects in duration discrimination: An experimental study and an analysis by the sensation-weighting model.

    PubMed

    Hellström, Åke; Rammsayer, Thomas H

    2015-10-01

    Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström's sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St-Co, Co-St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St-Co than for Co-St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.

  20. a Portable Apparatus for Absolute Measurements of the Earth's Gravity.

    NASA Astrophysics Data System (ADS)

    Zumberge, Mark Andrew

    We have developed a new, portable apparatus for making absolute measurements of the acceleration due to the earth's gravity. We use the method of interferometrically determining the acceleration of a freely falling corner-cube prism. The falling object is surrounded by a chamber which is driven vertically inside a fixed vacuum chamber. This falling chamber is servoed to track the falling corner-cube to shield it from drag due to background gas. In addition, the drag-free falling chamber removes the need for a magnetic release, shields the falling object from electrostatic forces, and provides a means of both gently arresting the falling object and quickly returning it to its start position, to allow rapid acquisition of data. A synthesized long-period isolation device reduces the noise due to seismic oscillations. A new type of Zeeman laser is used as the light source in the interferometer, and its wavelength is compared with that of an iodine-stabilized laser. The times of occurrence of 45 interference fringes are measured to within 0.2 ns over a 20 cm drop and are fit to a quadratic by an on-line minicomputer. 150 drops can be made in ten minutes, resulting in a value of g having a precision of 3 to 6 parts in 10^9. Systematic errors have been determined to be less than 5 parts in 10^9 through extensive tests. Three months of gravity data have been obtained with a reproducibility ranging from 5 to 10 parts in 10^9. The apparatus has been designed to be easily portable. Field measurements are planned for the immediate future. An accuracy of 6 parts in 10^9 corresponds to a height sensitivity of 2 cm. Vertical motions in the earth's crust and tectonic density changes that may precede earthquakes are to be investigated using this apparatus.
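
    The fringe-to-g reduction described above can be sketched as a least-squares quadratic fit of drop distance versus fringe time, with g read off as twice the leading coefficient. A synthetic, noise-free illustration (the fringe spacing here is scaled so 45 fringes span the 20 cm drop, purely for illustration; it is not the instrument's half-wavelength increment):

```python
import numpy as np

g_true = 9.81                        # m/s^2, synthetic value
z = np.arange(1, 46) * (0.20 / 45)   # 45 fringe positions spanning a 20 cm drop
t = np.sqrt(2 * z / g_true)          # ideal free-fall times of those positions

# Fit z(t) = z0 + v0*t + (g/2)*t^2; np.polyfit returns the highest degree first.
coeffs = np.polyfit(t, z, 2)
g_est = 2 * coeffs[0]
print(f"g = {g_est:.6f} m/s^2")
```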

  1. Internal Correction Of Errors In A DRAM

    NASA Technical Reports Server (NTRS)

    Zoutendyk, John A.; Watson, R. Kevin; Schwartz, Harvey R.; Nevill, Leland R.; Hasnain, Zille

    1989-01-01

    An error-correcting Hamming code is built into the circuit. A 256 K dynamic random-access memory (DRAM) circuit incorporates a Hamming error-correcting code in its layout. The feature provides faster detection and correction of errors at a lower cost in equipment, operating time, and software. The on-chip error-correcting feature also makes the new DRAM less susceptible to single-event upsets.
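
    The abstract does not specify the code layout; the classic single-error-correcting Hamming(7,4) scheme sketched below illustrates the principle, with the parity-check syndrome directly indexing the flipped bit:

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.

    Codeword positions 1..7: parity bits at 1, 2, 4; data bits at 3, 5, 6, 7.
    """
    c = [0] * 8                      # index 0 unused; positions 1..7
    c[3], c[5], c[6], c[7] = data
    c[1] = c[3] ^ c[5] ^ c[7]        # covers positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]        # covers positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]        # covers positions with bit 2 set
    return c[1:]

def hamming74_correct(codeword):
    """Correct a single bit error and return the 4 data bits."""
    c = [0] + list(codeword)
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1
                | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
    if syndrome:                     # the syndrome is the 1-based error position
        c[syndrome] ^= 1
    return [c[3], c[5], c[6], c[7]]

data = [1, 0, 1, 1]
received = hamming74_encode(data)
received[4] ^= 1                     # a single-event upset flips one bit
print(hamming74_correct(received) == data)   # the error is corrected
```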

  2. The use of X-ray crystallography to determine absolute configuration.

    PubMed

    Flack, H D; Bernardinelli, G

    2008-05-15

    Essential background on the determination of absolute configuration by way of single-crystal X-ray diffraction (XRD) is presented. The use and limitations of an internal chiral reference are described. The physical model underlying the Flack parameter is explained. Absolute structure and absolute configuration are defined and their similarities and differences are highlighted. The necessary conditions on the Flack parameter for satisfactory absolute-structure determination are detailed. The symmetry and purity conditions for absolute-configuration determination are discussed. The physical basis of resonant scattering is briefly presented and the insights obtained from a complete derivation of a Bijvoet intensity ratio by way of the mean-square Friedel difference are set out. The requirements on least-squares refinement are emphasized. The topics of right-handed axes, XRD intensity measurement, software, crystal-structure evaluation, errors in crystal structures, and compatibility of data in their relation to absolute-configuration determination are described. Characterization of the compounds and crystals by the physicochemical measurement of optical rotation, CD spectra, and enantioselective chromatography is presented. Some simple and some complex examples of absolute-configuration determination using combined XRD and CD measurements, using XRD and enantioselective chromatography, and in multiply-twinned crystals clarify the technique. The review concludes with comments on absolute-configuration determination from light-atom structures.

  3. Absolute bioavailability of quinine formulations in Nigeria.

    PubMed

    Babalola, C P; Bolaji, O O; Ogunbona, F A; Ezeomah, E

    2004-09-01

    This study compared the absolute bioavailability of quinine sulphate as a capsule and as a tablet against an intravenous (i.v.) infusion of the drug in twelve male volunteers. Six of the volunteers received the intravenous infusion over 4 h as well as the capsule formulation of the drug in a cross-over manner, while the other six received the tablet formulation. Blood samples were taken at predetermined time intervals and plasma was analysed for quinine (QN) using a reversed-phase HPLC method. QN was rapidly absorbed after the two oral formulations, with an average t(max) of 2.67 h for both capsule and tablet. The mean elimination half-lives of QN from the i.v. and oral dosage forms varied between 10 and 13.5 h and were not statistically different (P > 0.05). In contrast, the maximum plasma concentration (C(max)) and area under the curve (AUC) from the capsule were comparable to those from i.v. administration (P > 0.05), while these values were markedly higher than those from the tablet formulation (P < 0.05). Therapeutic QN plasma levels were not achieved with the tablet formulation. The absolute bioavailability (F) was 73% (C.I., 53.3-92.4%) for the capsule and 39% (C.I., 21.7-56.6%) for the tablet, and the difference was significant (P < 0.05). The subtherapeutic levels obtained from the tablet form used in this study may cause treatment failure during malaria, and caution should be taken when predictions are made from results obtained from different formulations of QN.
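
    Absolute bioavailability is the dose-normalized oral AUC divided by the dose-normalized i.v. AUC. A sketch with hypothetical AUC and dose values, chosen only to reproduce the reported capsule figure (they are not the study's data):

```python
def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """F = dose-normalized oral AUC over dose-normalized i.v. AUC."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)

# Hypothetical values (equal doses; the AUC units cancel):
F_capsule = absolute_bioavailability(auc_oral=36.5, dose_oral=600,
                                     auc_iv=50.0, dose_iv=600)
print(f"F = {F_capsule:.0%}")  # 73%, matching the capsule result above
```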

  4. Absolute irradiance of the Moon for on-orbit calibration

    USGS Publications Warehouse

    Stone, T.C.; Kieffer, H.H.; ,

    2002-01-01

    The recognized need for on-orbit calibration of remote sensing imaging instruments drives the ROLO project effort to characterize the Moon for use as an absolute radiance source. For over 5 years the ground-based ROLO telescopes have acquired spatially-resolved lunar images in 23 VNIR (Moon diameter ≈500 pixels) and 9 SWIR (≈250 pixels) passbands at phase angles within ±90 degrees. A numerical model for lunar irradiance has been developed which fits hundreds of ROLO images in each band, corrected for atmospheric extinction and calibrated to absolute radiance, then integrated to irradiance. The band-coupled extinction algorithm uses absorption spectra of several gases and aerosols derived from MODTRAN to fit time-dependent component abundances to nightly observations of standard stars. The absolute radiance scale is based upon independent telescopic measurements of the star Vega. The fitting process yields uncertainties in lunar relative irradiance over small ranges of phase angle and the full range of lunar libration well under 0.5%. A larger source of uncertainty enters in the absolute solar spectral irradiance, especially in the SWIR, where solar models disagree by up to 6%. Results of ROLO model direct comparisons to spacecraft observations demonstrate the ability of the technique to track sensor responsivity drifts to sub-percent precision. Intercomparisons among instruments provide key insights into both calibration issues and the absolute scale for lunar irradiance.

  5. STS-9 Shuttle glow - Ram angle effect and absolute intensities

    NASA Technical Reports Server (NTRS)

    Swenson, G. R.; Mende, S. B.; Clifton, K. S.

    1986-01-01

    Visible imagery from Space Shuttle mission STS-9 (Spacelab 1) has been analyzed for the ram angle effect and the absolute intensity of glow. The data are compared with earlier measurements and the anomalous high intensities at large ram angles are confirmed. Absolute intensities of the ram glow on the shuttle tile, at 6563 A, are observed to be about 20 times more intense than those measured on the AE-E spacecraft. Implications of these observations for an existing theory of glow involving NO2 are presented.

  6. Absolute configurations of zingiberenols isolated from ginger (Zingiber officinale) rhizomes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The sesquiterpene alcohol zingiberenol, or 1,10-bisaboladien-3-ol, was isolated some time ago from ginger, Zingiber officinale, rhizomes, but its absolute configuration had not been determined. With three chiral centers present in the molecule, zingiberenol can exist in eight stereoisomeric forms. ...

  7. Urey: to measure the absolute age of Mars

    NASA Technical Reports Server (NTRS)

    Randolph, J. E.; Plescia, J.; Bar-Cohen, Y.; Bartlett, P.; Bickler, D.; Carlson, R.; Carr, G.; Fong, M.; Gronroos, H.; Guske, P. J.; Herring, M.; Javadi, H.; Johnson, D. W.; Larson, T.; Malaviarachchi, K.; Sherrit, S.; Stride, S.; Trebi-Ollennu, A.; Warwick, R.

    2003-01-01

    UREY, a proposed NASA Mars Scout mission, will for the first time measure the absolute age of an identified igneous rock formation on Mars. By extension to relatively older and younger rock formations dated by remote sensing, these results will enable a new and better understanding of Martian geologic history.

  8. Mechanism for an absolute parametric instability of an inhomogeneous plasma

    NASA Astrophysics Data System (ADS)

    Arkhipenko, V. I.; Budnikov, V. N.; Gusakov, E. Z.; Romanchuk, I. A.; Simonchik, L. V.

    1984-05-01

    The structure of plasma oscillations in a region of parametric spatial amplification has been studied experimentally for the first time. A new mechanism for an absolute parametric instability has been observed. This mechanism operates when a pump wave with a spatial structure more complicated than a plane wave propagates through a plasma which is inhomogeneous along more than one dimension.

  9. Absolute optical metrology : nanometers to kilometers

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.

    2005-01-01

    We provide an overview of developments in the field of high-accuracy absolute optical metrology, with emphasis on space-based applications. Specific work on the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor is described, along with novel applications of the sensor.

  10. ON A SUFFICIENT CONDITION FOR ABSOLUTE CONTINUITY.

    DTIC Science & Technology

    The formulation of a condition which yields absolute continuity when combined with continuity and bounded variation is the problem considered in the...Briefly, the formulation is achieved through a discussion which develops a proof by contradiction of a sufficiency theorem for absolute continuity which uses in its hypothesis the condition of continuity and bounded variation.
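
    For context, the standard definition at issue (not taken from the report itself) makes clear why an extra condition is needed: absolute continuity is strictly stronger than continuity plus bounded variation.

```latex
% Definition: f is absolutely continuous on [a,b] if for every
% \varepsilon > 0 there exists \delta > 0 such that, for every finite
% collection of pairwise disjoint intervals (x_i, y_i) \subset [a,b],
\sum_i (y_i - x_i) < \delta
  \;\Longrightarrow\;
  \sum_i \lvert f(y_i) - f(x_i) \rvert < \varepsilon .
% Absolute continuity implies continuity and bounded variation, but the
% converse fails: the Cantor function is continuous and of bounded
% variation on [0,1] yet not absolutely continuous.
```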

  11. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
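
    The proposed effect size replaces the standard-deviation denominator of Cohen's d with the mean absolute deviation. A minimal sketch (using the control group's MAD as denominator is one common variant, assumed here; it may differ in detail from the paper's formulation):

```python
def mean_abs_dev(xs):
    """Mean absolute deviation about the arithmetic mean."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def mad_effect_size(treatment, control):
    """Difference in group means divided by the control group's MAD."""
    mt = sum(treatment) / len(treatment)
    mc = sum(control) / len(control)
    return (mt - mc) / mean_abs_dev(control)

control = [10, 12, 11, 9, 13]
treatment = [12, 14, 13, 11, 15]
print(mad_effect_size(treatment, control))   # 2 / 1.2 ≈ 1.67
```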

  12. Monolithically integrated absolute frequency comb laser system

    SciTech Connect

    Wanke, Michael C.

    2016-07-12

    Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.

  13. The relative and absolute reliability of center of pressure trajectory during gait initiation in older adults.

    PubMed

    Khanmohammadi, Roya; Talebian, Saeed; Hadian, Mohammad Reza; Olyaei, Gholamreza; Bagheri, Hossein

    2017-02-01

    For scientific acceptance of a parameter, its psychometric properties, such as reliability, validity, and responsiveness, play critical roles. Therefore, this study was conducted to estimate how many trials are required to obtain a reliable center of pressure (COP) parameter during gait initiation (GI) and to investigate the effect of the number of trials on relative and absolute reliability. Twenty older adults participated in the study. Subjects began stepping over the force platform in response to an auditory stimulus. Ten trials were collected in one session. The displacement, velocity, and mean and median frequency of the COP in the mediolateral (ML) and anteroposterior (AP) directions were evaluated. Relative reliability was determined using the intraclass correlation coefficient (ICC), and absolute reliability was evaluated using the standard error of measurement (SEM) and minimal detectable change (MDC95). The results revealed that, depending on the parameter, one to five trials should be averaged to ensure excellent reliability. Moreover, ICC, SEM% and MDC95% values were between 0.39-0.89, 4.84-41.5% and 13.4-115% for a single trial and 0.86-0.99, 1.74-19.7% and 4.83-54.7% for ten trials averaged, respectively. The ML and AP COP displacement in the locomotor phase showed the highest relative reliability, while the ML and AP median frequency in the locomotor phase showed the highest absolute reliability. In general, the results showed that the COP-related parameters in the time and frequency domains, based on an average of five trials, provide reliable outcome measures for the evaluation of dynamic postural control in older adults.
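
    SEM and MDC95 follow from the ICC and the between-trial standard deviation by the standard formulas SEM = SD·√(1−ICC) and MDC95 = 1.96·√2·SEM. A sketch with hypothetical inputs (not the study's raw data):

```python
import math

def sem(sd, icc):
    """Standard error of measurement."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence."""
    return 1.96 * math.sqrt(2) * sem_value

sd, icc = 4.0, 0.90            # hypothetical between-trial SD (mm) and ICC
s = sem(sd, icc)
print(f"SEM = {s:.2f}, MDC95 = {mdc95(s):.2f}")
```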

  14. Absolute instability of the Gaussian wake profile

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Aggarwal, Arun K.

    1987-01-01

    Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.

  15. The Absolute Gravimeter FG5 - Adjustment and Residual Data Evaluation

    NASA Astrophysics Data System (ADS)

    Orlob, M.; Braun, A.; Henton, J.; Courtier, N.; Liard, J.

    2009-05-01

    The most widely used method of direct terrestrial gravity determination employs a ballistic absolute gravimeter. Today, the FG5 (Micro-g LaCoste; Lafayette, CO) is the most common free-fall absolute gravimeter. It uses a Michelson-type interferometer to determine the absolute gravity value with accuracies up to one part-per-billion of g. Furthermore, absolute gravimeter measurements can be used to assist in the validation and interpretation of temporal variations of the global gravity field, e.g. from the GRACE mission. In addition, absolute gravimetry allows for monitoring gravity changes which are caused by subsurface mass redistributions and/or vertical displacements. In this study, adjustment software was developed and applied to the raw data sets of FG5#106 and FG5#236, made available by Natural Resources Canada. Both data sets were collected at the same time and place, which allows an intercomparison of the instruments' performance. The adjustment software was validated against the official FG5 software package developed by Micro-g LaCoste. In order to identify potential environmental or instrumental disturbances in the observed time series, a Lomb-Scargle periodogram analysis was employed. The absolute gravimeter FG5 is particularly sensitive to low frequencies between 0 and 3 Hz; hence, the focus of the analysis is to detect signals in the band of 0-100 Hz. An artificial signal was added to the measurements for demonstration purposes. Both the performance of the adjustment software and the Lomb-Scargle analysis will be discussed.
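
    A Lomb-Scargle periodogram of the kind used to screen the residuals can be written directly from its classical definition; the sketch below recovers an artificial tone from unevenly sampled synthetic data (illustrative data, not the FG5 series):

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for unevenly sampled data."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 20, 300))          # uneven sampling times
y = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(300)
freqs = np.linspace(0.1, 5.0, 491)            # 0.01 Hz grid
f_peak = freqs[np.argmax(lomb_scargle(t, y, freqs))]
print(f"peak at {f_peak:.2f} Hz")             # recovers the 1.5 Hz tone
```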

  16. Absolute length measurement using manually decided stereo correspondence for endoscopy

    NASA Astrophysics Data System (ADS)

    Sasaki, M.; Koishi, T.; Nakaguchi, T.; Tsumura, N.; Miyake, Y.

    2009-02-01

    In recent years, various kinds of endoscopes have been developed and widely used for endoscopic biopsy, endoscopic operations, and endoscopy. The size of an inflammatory lesion is important in determining a method of medical treatment, but it is not easy to measure the absolute size of lesions such as ulcers, cancers, and polyps from an endoscopic image. Measuring the size of such lesions during endoscopy is therefore required. In this paper, we propose a new method to measure the absolute straight-line length between two arbitrary points, based on photogrammetry, using an endoscope with a magnetic tracking sensor that gives the camera position and angle. In this method, the stereo-corresponding points between two endoscopic images are determined by the endoscopist without any apparatus for projection or computation of stereo correspondences; the absolute length can then be calculated on the basis of photogrammetry. An evaluation experiment using a checkerboard showed that the measurement errors are less than 2% of the target length when the baseline is sufficiently long.

  17. On the influence of the rotation of a corner cube reflector in absolute gravimetry

    NASA Astrophysics Data System (ADS)

    Rothleitner, Ch; Francis, O.

    2010-10-01

    Test masses of absolute gravimeters contain prism or hollow retroreflectors. A rotation of such a retroreflector during free-fall can cause a bias in the measured g-value. In particular, prism retroreflectors produce phase shifts, which cannot be eliminated. Such an error is small if the rotation occurs about the optical centre of the retroreflector; however, under certain initial conditions the error can reach the microgal level. The contribution from these rotation-induced accelerations is calculated.

  18. Robust time-of-arrival source localization employing error covariance of sample mean and sample median in line-of-sight/non-line-of-sight mixture environments

    NASA Astrophysics Data System (ADS)

    Park, Chee-Hyun; Chang, Joon-Hyuk

    2016-12-01

    We propose a line-of-sight (LOS)/non-line-of-sight (NLOS) mixture source localization algorithm that utilizes the weighted least squares (WLS) method in LOS/NLOS mixture environments, where the weight matrix is determined in algebraic form. Unless the contamination ratio exceeds 50%, the asymptotic variance of the sample median can be approximately related to that of the sample mean. Based on this observation, we use the error covariance matrices of the sample mean and sample median to minimize the weighted squared error (WSE) loss function. The WSE loss function based on the sample median is utilized when statistical testing supports a mixed LOS/NLOS state, while the WSE function using the sample mean is employed when statistical testing indicates that the sensor is in the LOS state. To verify the superiority of the proposed method, the mean square error (MSE) performances are compared via simulation.
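
    The WLS step at the core of such estimators has the closed form x̂ = (AᵀWA)⁻¹AᵀWb. A generic numpy sketch (the system, the corrupted measurement, and the weights are illustrative, not the paper's localization geometry):

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_x (Ax - b)^T W (Ax - b) for diagonal weights w."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Illustrative overdetermined system with known solution [2, -1]:
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
x_true = np.array([2.0, -1.0])
b = A @ x_true
b[2] += 0.5                          # one "NLOS-like" corrupted measurement
w = np.array([1.0, 1.0, 0.01, 1.0])  # downweight the suspect measurement
print(weighted_least_squares(A, b, w))   # close to [2, -1]
```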

  19. On the effect of distortion and dispersion in fringe signal of the FG5 absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Křen, Petr; Pálinkáš, Vojtech; Mašika, Pavel

    2016-02-01

    The knowledge of absolute gravity acceleration at the level of 1 × 10^-9 is needed in geosciences (e.g. for monitoring crustal deformations and mass transports) and in metrology for watt balance experiments related to the new SI definition of the kilogram. The gravity reference, which results from international comparisons held with the participation of numerous absolute gravimeters, is significantly affected by the qualities of the instruments prevailing in the comparisons (at present, FG5 gravimeters). Therefore, it is necessary to thoroughly investigate all instrumental (particularly systematic) errors. This paper deals with systematic errors of the FG5#215 coming from the distorted fringe signal and from electronic dispersion at several electronic components, including cables. In order to investigate these effects, we developed a new experimental system for acquiring and analysing the data in parallel to the FG5 built-in system. The new system, based on an analogue-to-digital converter with digital waveform processing using an FFT swept band-pass filter, was developed and tested on the FG5#215 gravimeter equipped with a new fast analogue output. The system is characterized by low timing jitter and digital handling of the distorted swept signal, with determination of zero-crossings for the fundamental frequency sweep and also for its harmonics, and can be used for any gravimeter based on laser interferometry. A comparison of the original FG5 system and the experimental system is provided for g-values, residuals and additional measurements/models. Moreover, an advanced approach to the solution of the free-fall motion is presented, which allows a non-linear gravity change with height to be taken into account.
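
    Zero-crossing timing of a swept (chirped) fringe signal, central to the acquisition scheme described, can be sketched with sign-change detection plus linear interpolation on a synthetic chirp (the chirp parameters are illustrative, not the gravimeter's):

```python
import numpy as np

def zero_crossings(t, s):
    """Times where s crosses zero, by linear interpolation between samples."""
    idx = np.where(np.sign(s[:-1]) != np.sign(s[1:]))[0]
    return t[idx] - s[idx] * (t[idx + 1] - t[idx]) / (s[idx + 1] - s[idx])

# Synthetic linear chirp: phase = 2*pi*(f0*t + 0.5*k*t^2)
f0, k = 5.1, 10.0
t = np.linspace(1e-6, 1.0, 200_000)
s = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

tc = zero_crossings(t, s)
print(len(tc))   # total phase is 20.2*pi, so 20 zero crossings
```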

  20. Density dependence and climate effects in Rocky Mountain elk: an application of regression with instrumental variables for population time series with sampling error.

    PubMed

    Creel, Scott; Creel, Michael

    2009-11-01

    1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results
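
    The instrumental-variables estimator mentioned in point 3 has, in the just-identified case, the closed form β̂_IV = (Zᵀx)⁻¹Zᵀy. A synthetic sketch showing the OLS bias that an instrument removes (illustrative zero-mean data without an intercept, not the elk series):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
z = rng.standard_normal(n)            # instrument: exogenous
u = rng.standard_normal(n)            # confounder
x = z + u                             # regressor, correlated with the error
e = u + 0.5 * rng.standard_normal(n)  # error term shares u with x
y = 2.0 * x + e                       # true slope is 2

beta_ols = (x @ y) / (x @ x)          # biased upward (~2.5 here)
beta_iv = (z @ y) / (z @ x)           # instrument removes the bias
print(beta_ols, beta_iv)
```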

  1. Automatic section thickness determination using an absolute gradient focus function.

    PubMed

    Elozory, D T; Kramer, K A; Chaudhuri, B; Bonam, O P; Goldgof, D B; Hall, L O; Mouton, P R

    2012-12-01

    Quantitative analysis of microstructures using computerized stereology systems is an essential tool in many disciplines of bioscience research. Section thickness determination in current nonautomated approaches requires manual location of the upper and lower surfaces of tissue sections. In contrast to conventional autofocus functions that locate the optimally focused optical plane using the global maximum on a focus curve, this study identified the tissue surfaces by two sharp 'knees' on the focus curve, marking the transition from unfocused to focused optical planes. Analysis of 14 grey-scale focus functions showed that the thresholded absolute gradient function was best for finding detectable bends that closely correspond to the bounding optical planes at the upper and lower tissue surfaces. Modifications to this function generated four novel functions that outperformed the original. The 'modified absolute gradient count' function outperformed all others, with an average error of 0.56 μm on a test set of images similar to the training set and an average error of 0.39 μm on a test set comprised of images captured from a different case, that is, different staining methods on a different brain region from a different subject rat. We describe a novel algorithm that allows for automatic section thickness determination based on just-out-of-focus planes, a prerequisite for fully automatic computerized stereology.
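
    A thresholded absolute gradient focus function sums the absolute intensity differences that exceed a threshold; the sketch below shows the score dropping under a crude defocus model (the threshold value and the blur model are our illustrative assumptions, not the paper's exact definitions):

```python
import numpy as np

def thresholded_abs_gradient(img, theta):
    """Sum of absolute horizontal gradients that exceed the threshold theta."""
    d = np.abs(np.diff(img.astype(float), axis=1))
    return d[d > theta].sum()

rng = np.random.default_rng(0)
sharp = rng.uniform(0, 255, (64, 64))     # high-contrast "in focus" image
blurred = (sharp + np.roll(sharp, 1, axis=1)
           + np.roll(sharp, -1, axis=1)) / 3   # crude 3-tap defocus model

f_sharp = thresholded_abs_gradient(sharp, theta=10)
f_blur = thresholded_abs_gradient(blurred, theta=10)
print(f_sharp > f_blur)   # the focus score drops as the image defocuses
```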

  2. Absolute quantitation of protein posttranslational modification isoform.

    PubMed

    Yang, Zhu; Li, Ning

    2015-01-01

    Mass spectrometry has been widely applied in the characterization and quantification of proteins from complex biological samples. Because absolute protein amounts are needed to construct mathematical models of the molecular systems underlying various biological phenotypes and phenomena, a number of quantitative proteomic methods have been adopted to measure absolute quantities of proteins using mass spectrometry. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) coupled with internal peptide standards, i.e., stable isotope-coded peptide dilution series, which originated from the field of analytical chemistry, has become a widely applied method in absolute quantitative proteomics research. This approach provides more and more absolute protein quantitation results of high confidence. As quantitative study of posttranslational modification (PTM), which modulates the biological activity of proteins, is crucial for biological science, and each isoform may contribute a unique biological function, degradation, and/or subcellular location, the absolute quantitation of protein PTM isoforms has become more relevant to its biological significance. In order to obtain the absolute cellular amount of a PTM isoform of a protein accurately, the impacts of protein fractionation, protein enrichment, and proteolytic digestion yield should be taken into consideration, and those effects before differentially stable isotope-coded PTM peptide standards are spiked into sample peptides have to be corrected. Assisted by stable isotope-labeled peptide standards, the absolute quantitation of isoforms of posttranslationally modified protein (AQUIP) method takes all these factors into account and determines the absolute amount of a protein PTM isoform from the absolute amount of the protein of interest and the PTM occupancy at the site of the protein. The absolute amount of the protein of interest is inferred by quantifying both the absolute amounts of a few PTM
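
    The final AQUIP combination step described above is a product of the two measured quantities, protein amount and site occupancy; a sketch with hypothetical numbers (the function name and values are ours):

```python
def isoform_amount(protein_fmol, occupancy):
    """Absolute amount of a PTM isoform = protein amount x site occupancy."""
    return protein_fmol * occupancy

# Hypothetical: 500 fmol of the protein, 20% modified at the site.
print(isoform_amount(500.0, 0.20))  # 100.0 fmol of the modified isoform
```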

  3. Absolute realization of low BRDF value

    NASA Astrophysics Data System (ADS)

    Liu, Zilong; Liao, Ningfang; Li, Ping; Wang, Yu

    2010-10-01

    Low BRDF values are widely used in many critical domains such as space and military applications. These values lie below 0.1 sr^-1, so their absolute realization is the most critical issue in the absolute measurement of BRDF. To develop an absolute-value realization theory of BRDF, an arithmetic operator for BRDF is defined and an absolute measurement equation of BRDF based on radiance is derived. This is a new theoretical method for solving the realization problem of low BRDF values. The method is realized on a self-designed common double-orientation structure in space. By designing an additional structure to extend the range of the measurement system, together with control and processing software, absolute realization of low BRDF values is achieved. A material of low BRDF value was measured in this system, and the spectral BRDF values are shown for different angles over the whole space; all are below 0.4 sr^-1. This process is a representative procedure for the measurement of low BRDF values. A corresponding uncertainty analysis of the measurement data is given, based on the new theory of absolute realization and the performance of the measurement system. The relative expanded uncertainty of the measurement data is 0.078. This uncertainty analysis is suitable for all measurements using the new theory of absolute realization and the corresponding measurement system.

  4. Absolute flatness testing of skip-flat interferometry by matrix analysis in polar coordinates.

    PubMed

    Han, Zhi-Gang; Yin, Lu; Chen, Lei; Zhu, Ri-Hong

    2016-03-20

    A new method utilizing matrix analysis in polar coordinates has been presented for absolute testing of skip-flat interferometry. The retrieval of the absolute profile mainly includes three steps: (1) transform the wavefront maps of the two cavity measurements into data in polar coordinates; (2) retrieve the profile of the reflective flat in polar coordinates by matrix analysis; and (3) transform the profile of the reflective flat back into data in Cartesian coordinates and retrieve the profile of the sample. Simulation of synthetic surface data has been provided, showing the capability of the approach to achieve an accuracy of the order of 0.01 nm RMS. The absolute profile can be retrieved by a set of closed mathematical formulas without polynomial fitting of wavefront maps or the iterative evaluation of an error function, making the new method more efficient for absolute testing.

  5. A New Gimmick for Assigning Absolute Configuration.

    ERIC Educational Resources Information Center

    Ayorinde, F. O.

    1983-01-01

    A five-step procedure is provided to help students make the assignment of absolute configuration less bothersome. Examples for both single (2-butanol) and multiple chiral carbon (3-chloro-2-butanol) molecules are included. (JN)

  6. Magnifying absolute instruments for optically homogeneous regions

    SciTech Connect

    Tyc, Tomas

    2011-09-15

    We propose a class of magnifying absolute optical instruments with a positive isotropic refractive index. They create magnified stigmatic images, either virtual or real, of optically homogeneous three-dimensional spatial regions within geometrical optics.

  7. The Simplicity Argument and Absolute Morality

    ERIC Educational Resources Information Center

    Mijuskovic, Ben

    1975-01-01

    In this paper the author has maintained that there is a similarity of thought to be found in the writings of Cudworth, Emerson, and Husserl in his investigation of an absolute system of morality. (Author/RK)

  8. Absolute cross sections of compound nucleus reactions

    NASA Astrophysics Data System (ADS)

    Capurro, O. A.

    1993-11-01

    The program SEEF is a Fortran IV computer code for the extraction of absolute cross sections of compound nucleus reactions. When the evaporation residue is fed by its parents, only cumulative cross sections will be obtained from off-line gamma ray measurements. But, if one has the parent excitation function (experimental or calculated), this code will make it possible to determine absolute cross sections of any exit channel.
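
    The correction SEEF performs can be illustrated with a toy decay chain: the off-line gamma-ray yield of an evaporation residue is cumulative because it includes decay feeding from its parents, so the independent cross section is recovered by subtracting each parent's cross section weighted by its branching to the residue. A minimal sketch (the single-parent chain and all numbers are illustrative, not from the paper):

```python
def independent_cross_section(sigma_cumulative, parent_feeds):
    """Recover the independent (direct) cross section of a nucleus whose
    measured off-line gamma-ray yield is cumulative, i.e. includes decay
    feeding from its parents.

    parent_feeds: list of (parent_cross_section, branching_ratio) pairs.
    """
    fed = sum(sigma_p * br for sigma_p, br in parent_feeds)
    return sigma_cumulative - fed

# Toy example: a residue with a 120 mb cumulative cross section, fed by a
# single parent (80 mb, 60% branching to this residue).
sigma_ind = independent_cross_section(120.0, [(80.0, 0.6)])
print(sigma_ind)  # 72.0
```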

  9. Kelvin and the absolute temperature scale

    NASA Astrophysics Data System (ADS)

    Erlichson, Herman

    2001-07-01

    This paper describes the absolute temperature scale of Kelvin (William Thomson). Kelvin found that Carnot's axiom that heat is a conserved quantity had to be abandoned. Nevertheless, he found that Carnot's fundamental work on heat engines was correct. Using the concept of a Carnot engine, Kelvin found that Q1/Q2 = T1/T2. Absolute temperatures are not read directly from thermometers; they are calculated temperatures, defined through this ratio of heats.

  10. Correcting numerical integration errors caused by small aliasing errors

    SciTech Connect

    Smallwood, D.O.

    1997-11-01

    Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
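
    The repair strategy described above — fit an error model to the acceleration so that the integrated waveforms honor the assumed initial and final values, then subtract it before integrating — can be sketched with a simple surrogate. Here the error model is a constant-plus-linear baseline (the paper's model targets aliasing errors near the Nyquist frequency; this low-order polynomial is only an illustration), solved so that the final velocity and displacement both return to zero:

```python
import numpy as np

def cumtrapz(y, dt):
    """Cumulative trapezoidal integral, starting at zero."""
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * 0.5 * dt)))

def correct_and_integrate(accel, dt):
    """Fit a (c0 + c1*t) acceleration-error baseline so that the integrated
    velocity and displacement both end at zero, subtract it, then integrate."""
    n = len(accel)
    T = (n - 1) * dt
    v_end = cumtrapz(accel, dt)[-1]
    d_end = cumtrapz(cumtrapz(accel, dt), dt)[-1]
    # The baseline's integrals must absorb the spurious end values:
    #   c0*T      + c1*T**2/2 = v_end
    #   c0*T**2/2 + c1*T**3/6 = d_end
    c0, c1 = np.linalg.solve(
        np.array([[T, T**2 / 2], [T**2 / 2, T**3 / 6]]), [v_end, d_end])
    a_corr = accel - (c0 + c1 * np.arange(n) * dt)
    vel = cumtrapz(a_corr, dt)
    disp = cumtrapz(vel, dt)
    return a_corr, vel, disp

# True motion: disp = sin^2(pi*t), which starts and ends at rest; the added
# 0.02 is an artificial acceleration error that makes the raw integrals drift.
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
accel = 2 * np.pi**2 * np.cos(2 * np.pi * t) + 0.02
_, vel, disp = correct_and_integrate(accel, dt)
```

Without the correction, the double integral of the 0.02 bias leaves a spurious ~0.01 final displacement; after subtraction both end values return to zero.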

  11. What is Needed for Absolute Paleointensity?

    NASA Astrophysics Data System (ADS)

    Valet, J. P.

    2015-12-01

    Many alternative approaches to the Thellier and Thellier technique for absolute paleointensity have been proposed during the past twenty years. One reason is the time-consuming nature of the experiments; another is to avoid uncertainties in determinations of the paleofield, which are mostly linked to the presence of multidomain grains. Despite the great care taken by these new techniques, there is no indication that they always provide the right answer, and in fact they sometimes fail. We are convinced that the most valid approach remains the original double-heating Thellier protocol, provided that natural remanence is controlled by pure magnetite with a narrow distribution of small grain sizes, mostly single domains. The presence of titanium, even in small amounts, generates biases that yield incorrect field values. Single domain grains frequently dominate the magnetization of glass samples, which explains the success of this selective approach. They are also present in volcanic lava flows, but much less frequently, which contributes to the low success rate of most experiments. However, the loss of at least 70% of the magnetization at very high temperatures prior to the Curie point appears to be an essential prerequisite that increases the success rate to almost 100%, and this has been validated on historical flows and in recent studies. This requirement can easily be tested by thermal demagnetization, while low-temperature experiments can document the detection of single domain magnetite using the δFC/δZFC parameter, as suggested (Moskowitz et al., 1993) for biogenic magnetite.

  12. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. The term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.

  13. Landsat-7 ETM+ radiometric stability and absolute calibration

    USGS Publications Warehouse

    Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, P.

    2002-01-01

    Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% per year (95% C.I.) in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. ETM+ stability is also indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/(m^2 sr um) soon after launch. This offset was corrected within the U.S. ground processing system at EROS Data Center on 21-Dec-00, and since then the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error, with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibration of other land remote sensing satellite systems.

  14. Absolute determination of local tropospheric OH concentrations

    NASA Technical Reports Server (NTRS)

    Armerding, Wolfgang; Comes, Franz-Josef

    1994-01-01

    Long path absorption (LPA) according to the Lambert-Beer law is a method to determine absolute concentrations of trace gases such as tropospheric OH. We have developed an LPA instrument based on rapid tuning of the light source, a frequency-doubled dye laser. The laser is tuned across two or three OH absorption features around 308 nm with a scanning speed of 0.07 cm^-1/microsecond and a repetition rate of 1.3 kHz. This high scanning speed greatly reduces the fluctuation of the light intensity caused by the atmosphere. To obtain the required high sensitivity, the laser output power is additionally held constant and stabilized by an electro-optical modulator. The present sensitivity is of the order of a few times 10^5 OH per cm^3 for an acquisition time of a minute and an absorption path length of only 1200 meters, so that a folding of the optical path in a multireflection cell was possible, leading to a lateral dimension of the cell of a few meters. This allows local measurements to be made. Tropospheric measurements were carried out in 1991, resulting in the determination of the OH diurnal variation on specific days in late summer. Comparisons with model calculations have been made. Interferences are mainly due to SO2 absorption. The problem of OH self-generation in the multireflection cell is of minor extent, as could be shown by using different experimental methods. The minimum-maximum signal-to-noise ratio is about 8 x 10^-4 for a single scan. Due to the small size of the absorption cell, the realization of an open-air laboratory is possible, in which, by use of an additional UV light source or additional fluxes of trace gases, the chemistry can be changed under controlled conditions, allowing kinetic studies of tropospheric photochemistry to be made in open air.
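
    The underlying retrieval is the Lambert-Beer inversion: the OH number density follows from the fractional absorption over the path, N = ln(I0/I)/(σL). A minimal sketch with illustrative numbers (the cross section and intensity values below are assumptions, not from the paper) shows why a ~1200 m path is needed to reach the 10^5-10^6 cm^-3 range:

```python
import numpy as np

def oh_number_density(I0, I, sigma_cm2, path_cm):
    """Lambert-Beer inversion: number density from measured transmission."""
    return np.log(I0 / I) / (sigma_cm2 * path_cm)

sigma = 1.5e-16  # cm^2, assumed effective OH absorption cross section near 308 nm
L = 1.2e5        # cm, the 1200 m folded absorption path
I0, I = 1.0, 0.999982   # a transmission change of only ~1.8e-5

N = oh_number_density(I0, I, sigma, L)
print(f"{N:.3g} OH cm^-3")
```

An absorbance of ~2 x 10^-5 sits well below the stated single-scan noise of ~8 x 10^-4, which is why averaging many of the 1.3 kHz scans over a minute is needed to bring such concentrations within reach.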

  15. Impaired rapid error monitoring but intact error signaling following rostral anterior cingulate cortex lesions in humans

    PubMed Central

    Maier, Martin E.; Di Gregorio, Francesco; Muricchio, Teresa; Di Pellegrino, Giuseppe

    2015-01-01

    Detecting one's own errors and appropriately correcting behavior are crucial for efficient goal-directed performance. A correlate of rapid evaluation of behavioral outcomes is the error-related negativity (Ne/ERN), which emerges at the time of the erroneous response over frontal brain areas. However, whether the error monitoring system's ability to distinguish between errors and correct responses at this early time point is a necessary precondition for the subsequent emergence of error awareness remains unclear. The present study investigated this question using error-related brain activity and vocal error signaling responses in seven human patients with lesions in the rostral anterior cingulate cortex (rACC) and adjoining ventromedial prefrontal cortex, while they performed a flanker task. The difference between errors and correct responses was severely attenuated in these patients, indicating impaired rapid error monitoring, but they showed no impairment in error signaling. However, impaired rapid error monitoring coincided with a failure to increase response accuracy on trials following errors. These results demonstrate that the error monitoring system's ability to distinguish between errors and correct responses at the time of the response is crucial for adaptive post-error adjustments, but not a necessary precondition for error awareness. PMID:26136674

  16. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

    The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation of the upper bound of absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas, it increased with an increase in environmental integrated radiant flux density. Change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals: 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  17. Absolute Position of Targets Measured Through a Chamber Window Using Lidar Metrology Systems

    NASA Technical Reports Server (NTRS)

    Kubalak, David; Hadjimichael, Theodore; Ohl, Raymond; Slotwinski, Anthony; Telfer, Randal; Hayden, Joseph

    2012-01-01

    Lidar is a useful tool for taking metrology measurements without the need for physical contact with the parts under test. Lidar instruments are aimed at a target using azimuth and elevation stages, then focus a beam of coherent, frequency-modulated laser energy onto the target, such as the surface of a mechanical structure. Energy from the reflected beam is mixed with an optical reference signal that travels in a fiber path internal to the instrument, and the range to the target is calculated from the difference in frequency between the returned and reference signals. When the parts are in extreme environments, additional steps need to be taken to separate the operator and lidar from that environment. A model has been developed that accurately reduces the lidar data to an absolute position and accounts for the three media in the testbed (air, fused silica, and vacuum), but the approach can be adapted for any environment or material. The accuracy of laser metrology measurements depends upon knowing the parameters of the media through which the measurement beam travels. Under normal conditions, this means knowledge of the temperature, pressure, and humidity of the air in the measurement volume. In the past, chamber windows have been used to separate the measuring device from the extreme environment within the chamber and still permit optical measurement, but, so far, only relative changes have been diagnosed. The ability to make accurate measurements through a window presents a challenge, as there are a number of factors to consider. In the case of the lidar, the window will increase the time-of-flight of the laser beam, causing a ranging error, and refract the direction of the beam, causing angular positioning errors. In addition, differences in pressure, temperature, and humidity on each side of the window will cause slight atmospheric index changes and induce deformation and a refractive index gradient within the window. Also, since the window is a
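
    The time-of-flight error introduced by the window can be removed analytically at normal incidence when the media and thicknesses are known: the lidar's raw range is an optical path length, n_air*d_air + n_glass*t + n_vac*d_vac, while the quantity of interest is the geometric distance d_air + t + d_vac. A minimal sketch (indices and dimensions are illustrative assumptions, not the testbed's values, and refraction/angular effects are ignored):

```python
def geometric_range(opl, d_air, t_window, n_air, n_glass, n_vac=1.0):
    """Convert a lidar optical path length measured through a chamber window
    into a geometric target distance.  The air gap d_air and window thickness
    t_window (with their refractive indices) are assumed known from the setup."""
    d_vac = (opl - n_air * d_air - n_glass * t_window) / n_vac
    return d_air + t_window + d_vac

# Illustrative numbers: 1.5 m of air, a 30 mm fused silica window, and a
# target 3 m beyond the window in vacuum.
n_air, n_glass = 1.00027, 1.4607
d_air, t_window, d_vac = 1.5, 0.030, 3.0
opl = n_air * d_air + n_glass * t_window + d_vac      # what the lidar reports
print(geometric_range(opl, d_air, t_window, n_air, n_glass))  # 4.53
```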

  18. Jasminum flexile flower absolute from India--a detailed comparison with three other jasmine absolutes.

    PubMed

    Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef

    2009-09-01

    Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed.

  19. Establishing ion ratio thresholds based on absolute peak area for absolute protein quantification using protein cleavage isotope dilution mass spectrometry.

    PubMed

    Loziuk, Philip L; Sederoff, Ronald R; Chiang, Vincent L; Muddiman, David C

    2014-11-07

    Quantitative mass spectrometry has become central to the field of proteomics and metabolomics. Selected reaction monitoring is a widely used method for the absolute quantification of proteins and metabolites. This method renders high specificity using several product ions measured simultaneously. With growing interest in quantification of molecular species in complex biological samples, confident identification and quantitation has been of particular concern. A method to confirm purity or contamination of product ion spectra has become necessary for achieving accurate and precise quantification. Ion abundance ratio assessments were introduced to alleviate some of these issues. Ion abundance ratios are based on the consistent relative abundance (RA) of specific product ions with respect to the total abundance of all product ions. To date, no standardized method of implementing ion abundance ratios has been established. Thresholds by which product ion contamination is confirmed vary widely and are often arbitrary. This study sought to establish criteria by which the relative abundance of product ions can be evaluated in an absolute quantification experiment. These findings suggest that evaluation of the absolute ion abundance for any given transition is necessary in order to effectively implement RA thresholds. Overall, the variation of the RA value was observed to be relatively constant beyond an absolute threshold ion abundance. Finally, these RA values were observed to fluctuate significantly over a 3 year period, suggesting that these values should be assessed as close as possible to the time at which data is collected for quantification.
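
    The relative-abundance check described above can be sketched as follows: each transition's RA is its share of the summed product-ion abundance, and a transition is flagged as contaminated when its RA deviates from a reference value by more than a tolerance; consistent with the study's finding, RA values are only judged above an absolute abundance floor. (The threshold numbers below are placeholders, not values from the study.)

```python
def flag_contaminated(intensities, reference_ra, ra_tol=0.05, min_abundance=1e4):
    """Return per-transition flags: True means the product ion's relative
    abundance (RA) deviates from its reference, suggesting contamination.
    Transitions below min_abundance are skipped (RA too noisy to judge)."""
    total = sum(intensities)
    flags = []
    for inten, ref in zip(intensities, reference_ra):
        if inten < min_abundance:
            flags.append(None)          # too weak to evaluate
        else:
            ra = inten / total
            flags.append(abs(ra - ref) > ra_tol)
    return flags

# Three transitions; the second carries an interfering signal.
obs = [5.0e5, 4.0e5, 1.0e5]
ref = [0.52, 0.25, 0.12]               # RAs established from a pure standard
print(flag_contaminated(obs, ref))     # [False, True, False]
```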

  20. Absolute pitch and pupillary response: effects of timbre and key color.

    PubMed

    Schlemmer, Kathrin B; Kulke, Franziska; Kuchinke, Lars; Van Der Meer, Elke

    2005-07-01

    The pitch identification performance of absolute pitch possessors has previously been shown to depend on pitch range, key color, and timbre of presented tones. In the present study, the dependence of pitch identification performance on key color and timbre of musical tones was examined by analyzing hit rates, reaction times, and pupillary responses of absolute pitch possessors (n = 9) and nonpossessors (n = 12) during a pitch identification task. Results revealed a significant dependence of pitch identification hit rate but not reaction time on timbre and key color in both groups. Among absolute pitch possessors, peak dilation of the pupil was significantly dependent on key color whereas the effect of timbre was marginally significant. Peak dilation of the pupil differed significantly between absolute pitch possessors and nonpossessors. The observed effects point to the importance of learning factors in the acquisition of absolute pitch.

  1. Universal Cosmic Absolute and Modern Science

    NASA Astrophysics Data System (ADS)

    Kostro, Ludwik

    The official sciences, especially the natural sciences, respect in their research the principle of methodic naturalism, i.e., they consider all phenomena as entirely natural and therefore never adduce supernatural entities or forces in their scientific explanations. The purpose of this paper is to show that modern science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e., the entire Nature as a Whole, that justifies this scientific methodic naturalism. Since this Natural All-in Being is one and only, It should be considered the scientifically justified Natural Absolute of science and should be called, in my opinion, the Universal Cosmic Absolute of Modern Science. It will also be shown that the Universal Cosmic Absolute is ontologically enormously stratified and is, in its ultimate, i.e., most fundamental, stratum, trans-reistic and trans-personal. This means that in its basic stratum It is neither a Thing nor a Person, although It contains in Itself all things and persons, along with all other sentient and conscious individuals as well. At the turn of the 20th century, science began to look for a theory of everything, a final theory, a master theory. In my opinion, the natural Universal Cosmic Absolute will constitute, in such a theory, the radical all-penetrating Ultimate Basic Reality and will substitute, step by step, for the traditional supernatural personal Absolute.

  2. Absolute Gravity Datum in the Age of Cold Atom Gravimeters

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Eckl, M. C.

    2014-12-01

    The international gravity datum is defined today by the International Gravity Standardization Net of 1971 (IGSN-71). The data supporting this network were measured in the 1950s and 60s using pendulum and spring-based gravimeter ties (plus some new ballistic absolute meters) to replace the prior protocol of referencing all gravity values to the earlier Potsdam value. Since that time, gravimeter technology has advanced significantly with the development and refinement of the FG-5 (the current standard of the industry) and again with the soon-to-be-available cold atom interferometric absolute gravimeters. This latest development is anticipated to provide improvement in the range of two orders of magnitude compared to the measurement accuracy of the technology used to develop IGSN-71. In this presentation, we will explore how the IGSN-71 might best be "modernized" given today's requirements and available instruments and resources. The National Geodetic Survey (NGS), along with other relevant US Government agencies, is concerned with establishing gravity control to establish and maintain high-order geodetic networks as part of the nation's essential infrastructure. The need to modernize the nation's geodetic infrastructure was highlighted in "Precise Geodetic Infrastructure: National Requirements for a Shared Resource," National Academy of Sciences, 2010. The NGS mission, as dictated by Congress, is to establish and maintain the National Spatial Reference System, which includes gravity measurements. Absolute gravimeters measure the total gravity field directly and do not involve ties to other measurements. Periodic "intercomparisons" of multiple absolute gravimeters at reference gravity sites are used to constrain the behavior of the instruments to ensure that each would yield reasonably similar measurements of the same location (i.e. yield a sufficiently consistent datum when measured in disparate locales). New atomic interferometric gravimeters promise a significant

  3. TYPE Ia SUPERNOVA DISTANCE MODULUS BIAS AND DISPERSION FROM K-CORRECTION ERRORS: A DIRECT MEASUREMENT USING LIGHT CURVE FITS TO OBSERVED SPECTRAL TIME SERIES

    SciTech Connect

    Saunders, C.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Kim, A. G.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J.; Baltay, C.; Buton, C.; Chotard, N.; Copin, Y.; Gangler, E.; and others

    2015-02-10

    We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a given supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.
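
    The core operation in this analysis — synthetic photometry, i.e. integrating a spectrophotometric time series through a filter transmission curve to produce light-curve points — can be sketched as below. This is a toy photon-weighted band flux with no zero point or AB calibration, showing only the shape of the computation (the spectrum and boxcar filter are made up):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integral (avoids NumPy-version differences around np.trapz)."""
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x)))

def band_flux(wave, flux, filt_wave, filt_trans):
    """Photon-weighted mean flux density of a spectrum through a filter.
    The extra factor of wave converts energy flux to photon counts."""
    T = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    return trapezoid(flux * T * wave, wave) / trapezoid(T * wave, wave)

def synth_mag(wave, flux, filt_wave, filt_trans):
    """Instrumental magnitude (arbitrary zero point)."""
    return -2.5 * np.log10(band_flux(wave, flux, filt_wave, filt_trans))

# Sanity check: a flat f_lambda spectrum must yield a band flux equal to its
# constant level, whatever the filter shape.
wave = np.linspace(3000.0, 9000.0, 2000)
flat = np.full_like(wave, 2.0)
fw = np.array([4000.0, 4001.0, 5499.0, 5500.0])
ft = np.array([0.0, 1.0, 1.0, 0.0])
print(band_flux(wave, flat, fw, ft))   # 2.0
```

A K-correction study compares such magnitudes for rest-frame versus artificially redshifted versions of the same observed spectrum through a fixed filter set.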

  4. Least absolute value state estimation with equality and inequality constraints

    SciTech Connect

    Abur, A.; Celik, M.K.

    1993-05-01

    A least absolute value (LAV) state estimator, which can handle both equality and inequality constraints on measurements, is developed. It is shown that the use of equality constraints actually reduces the number of Simplex iterations and thus the overall CPU time. The constraints can be used to enhance the reliability of the state estimator without affecting its computational efficiency. The developed estimation program is tested on power systems ranging from 14 to 1,000 buses.
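
    The LAV criterion minimizes the sum of absolute residuals and is naturally cast as a linear program, which is why Simplex iteration counts matter; equality constraints on measurements simply become extra LP rows. A small sketch using `scipy.optimize.linprog` (a generic LP solver standing in for the paper's Simplex implementation; the toy 2-state system is invented), with each residual split into nonnegative parts u+ and u-:

```python
import numpy as np
from scipy.optimize import linprog

def lav_estimate(H, z, C=None, d=None):
    """Least-absolute-value estimate: min sum|z - Hx|, optionally subject to
    exact (equality-constrained) measurements C x = d.  Residuals are split
    as z - Hx = u_plus - u_minus with u_plus, u_minus >= 0, giving an LP."""
    m, n = H.shape
    cost = np.concatenate([np.zeros(n), np.ones(2 * m)])
    A_eq = np.hstack([H, np.eye(m), -np.eye(m)])
    b_eq = np.asarray(z, dtype=float)
    if C is not None:
        A_eq = np.vstack([A_eq, np.hstack([C, np.zeros((C.shape[0], 2 * m))])])
        b_eq = np.concatenate([b_eq, d])
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:n]

# Toy 2-state system: five measurements, one carrying a gross error of +10.
H = np.array([[1.0, 0], [0, 1], [1, 1], [1, 2], [2, 1]])
x_true = np.array([1.0, 2.0])
z = H @ x_true
z[4] += 10.0                     # bad datum
print(lav_estimate(H, z))        # ~[1. 2.]  (LAV rejects the outlier)
```

Unlike least squares, the LAV optimum here interpolates the four consistent measurements exactly, so the gross error is rejected rather than smeared over the states.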

  5. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections

    NASA Astrophysics Data System (ADS)

    Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi

    2017-02-01

    Various kinds of fringe order errors may occur in absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel by pixel with repeated searches, which is inefficient for applications. To improve the efficiency of correcting multiple successive fringe order errors, in this paper we propose a method that simplifies error detection and correction by exploiting the stepwise increasing property of fringe order. In the proposed method, the number of pixels in each step is estimated to find the possible true fringe order values, so the repeated searches needed to detect multiple successive errors can be avoided. The effectiveness of the proposed method is validated by experimental results.
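
    The stepwise increasing property the method exploits can be illustrated in one dimension: along a line of the absolute phase map, the fringe order should either stay constant or increase by exactly one, so any pixel that breaks this pattern can be snapped to the nearest admissible value without re-searching earlier pixels. A simplified 1-D sketch of the idea (not the authors' full procedure):

```python
def correct_fringe_orders(orders):
    """Enforce the stepwise property: each fringe order must equal the
    previous one or exceed it by exactly 1.  Violating pixels are snapped
    to whichever admissible value {k_prev, k_prev + 1} is closer."""
    out = list(orders)
    for i in range(1, len(out)):
        prev = out[i - 1]
        if out[i] not in (prev, prev + 1):
            out[i] = prev if abs(out[i] - prev) <= abs(out[i] - prev - 1) else prev + 1
    return out

corrupted = [0, 0, 1, 1, 5, 2, 2, 3]     # the 5 is a fringe order error
print(correct_fringe_orders(corrupted))  # [0, 0, 1, 1, 2, 2, 2, 3]
```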

  6. Quantitative standards for absolute linguistic universals.

    PubMed

    Piantadosi, Steven T; Gibson, Edward

    2014-01-01

    Absolute linguistic universals are often justified by cross-linguistic analysis: If all observed languages exhibit a property, the property is taken to be a likely universal, perhaps specified in the cognitive or linguistic systems of language learners and users. In many cases, these patterns are then taken to motivate linguistic theory. Here, we show that cross-linguistic analysis will very rarely be able to statistically justify absolute, inviolable patterns in language. We formalize two statistical methods--frequentist and Bayesian--and show that in both it is possible to find strict linguistic universals, but that the numbers of independent languages necessary to do so is generally unachievable. This suggests that methods other than typological statistics are necessary to establish absolute properties of human language, and thus that many of the purported universals in linguistics have not received sufficient empirical justification.

  7. Full field imaging based instantaneous hyperspectral absolute refractive index measurement

    SciTech Connect

    Baba, Justin S; Boudreaux, Philip R

    2012-01-01

    Multispectral refractometers typically measure refractive index (RI) at discrete monochromatic wavelengths via a serial process. We report on the demonstration of a white-light, full-field-imaging-based refractometer capable of instantaneous multispectral measurement of the absolute RI of clear liquid/gel samples across the entire visible light spectrum. The broad optical bandwidth refractometer is capable of hyperspectral measurement of RI in the range 1.30-1.70 between 400 nm and 700 nm, with a maximum error of 0.0036 units (0.24% of actual) at 414 nm for an n = 1.50 sample. We present system design and calibration method details as well as results from a system validation sample.

  8. Absolute positioning using DORIS tracking of the SPOT-2 satellite

    NASA Technical Reports Server (NTRS)

    Watkins, M. M.; Ries, J. C.; Davis, G. W.

    1992-01-01

    The ability of the French DORIS system operating on the SPOT-2 satellite to provide absolute site positioning at the 20-30-centimeter level using 80 d of data is demonstrated. The accuracy of the vertical component is comparable to that of the horizontal components, indicating that residual troposphere error is not a limiting factor. The translation parameters indicate that the DORIS network realizes a geocentric frame to about 50 cm in each component. The considerable amount of data provided by the nearly global, all-weather DORIS network allowed the complex parameterization required to reduce the unmodeled forces acting on the low-earth satellite. Site velocities with accuracies better than 10 mm/yr should certainly be possible using the multiyear span of the SPOT series and Topex/Poseidon missions.

  9. Measurement of absolute hadronic branching fractions of D mesons

    NASA Astrophysics Data System (ADS)

    Shi, Xin

    Using 818 pb-1 of e+e- collisions recorded at the psi(3770) resonance with the CLEO-c detector at CESR, we determine absolute hadronic branching fractions of charged and neutral D mesons using a double tag technique. Among measurements for three D0 and six D+ modes, we obtain reference branching fractions B(D0 → K-pi+) = (3.906 ± 0.021 ± 0.062)% and B(D+ → K-pi+pi+) = (9.157 ± 0.059 ± 0.125)%, where the first uncertainty is statistical and the second systematic. Using an independent determination of the integrated luminosity, we also extract the cross sections sigma(e+e- → D0D¯0) = (3.650 ± 0.017 ± 0.083) nb and sigma(e+e- → D+D-) = (2.920 ± 0.018 ± 0.062) nb at a center-of-mass energy Ecm = 3774 ± 1 MeV.
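
    The double tag technique can be illustrated for a single mode: with N produced D0-D0bar pairs, branching fraction B, and single- and double-tag efficiencies e_st and e_dt, the expected yields are N_ST = 2*N*B*e_st and N_DT = N*B^2*e_dt, so both B and N can be solved for from the yields alone, without a luminosity input. A toy sketch (same-mode tags only, no backgrounds; all numbers invented):

```python
def double_tag_solve(n_st, n_dt, eff_st, eff_dt):
    """Invert the single/double tag yield equations for one mode:
        n_st = 2 * N * B * eff_st
        n_dt =     N * B**2 * eff_dt
    returning (B, N) with no luminosity input needed."""
    B = 2.0 * eff_st * n_dt / (eff_dt * n_st)
    N = n_st**2 * eff_dt / (4.0 * n_dt * eff_st**2)
    return B, N

# Forward-simulate toy yields, then recover B and N.
N_true, B_true, eff_st, eff_dt = 1.0e6, 0.039, 0.60, 0.40
n_st = 2 * N_true * B_true * eff_st
n_dt = N_true * B_true**2 * eff_dt
print(double_tag_solve(n_st, n_dt, eff_st, eff_dt))  # ~(0.039, 1e6)
```

Because the tag efficiencies largely cancel in the ratio, the branching fraction is insensitive to the absolute detection efficiency, which is the key advantage of the method.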

  10. Absolute blood velocity measured with a modified fundus camera

    NASA Astrophysics Data System (ADS)

    Duncan, Donald D.; Lemaillet, Paul; Ibrahim, Mohamed; Nguyen, Quan Dong; Hiller, Matthias; Ramella-Roman, Jessica

    2010-09-01

    We present a new method for the quantitative estimation of blood flow velocity, based on the use of the Radon transform. The specific application is the measurement of blood flow velocity in the retina. Our modified fundus camera uses illumination from a green LED and captures imagery with a high-speed CCD camera. The basic theory is presented, and typical results are shown for an in vitro flow model using blood in a capillary tube. Representative results are then shown for in vivo fundus imagery. This approach provides absolute velocity and flow direction along the vessel centerline or any lateral displacement therefrom. We also provide an error analysis allowing estimation of confidence intervals for the estimated velocity.
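The streak-angle idea behind the Radon-transform approach can be sketched on a synthetic space-time image: a moving scatterer traces a tilted streak, and the tilt that best aligns the streak gives the velocity. The shear search below is a discrete stand-in for the Radon projection, and all sizes and velocities are invented for illustration:

```python
import numpy as np

def velocity_from_xt(image, candidate_v):
    """Estimate streak slope (pixels/frame) in a space-time image.

    For each candidate velocity, shear each time row back by v*t and
    sum along time; the correct velocity aligns the streak, maximizing
    the variance of the summed profile (a discrete Radon projection).
    """
    T, X = image.shape
    best_v, best_var = None, -np.inf
    for v in candidate_v:
        sheared = np.empty_like(image)
        for t in range(T):
            sheared[t] = np.roll(image[t], -int(round(v * t)))
        profile = sheared.sum(axis=0)
        var = profile.var()
        if var > best_var:
            best_var, best_v = var, v
    return best_v

# Synthetic test: a bright streak moving 3 pixels per frame.
T, X, v_true = 20, 128, 3
img = np.zeros((T, X))
for t in range(T):
    img[t, (10 + v_true * t) % X] = 1.0

print(velocity_from_xt(img, range(0, 8)))  # -> 3
```

Converting the recovered slope (pixels/frame) to an absolute velocity then only requires the known pixel pitch and frame rate of the camera.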

  11. Orion Absolute Navigation System Progress and Challenge

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; D'Souza, Christopher

    2012-01-01

    The absolute navigation design of NASA's Orion vehicle is described. It has undergone several iterations and modifications since its inception, and continues as a work in progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a timely and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future exploration missions the use of star-tracker and optical navigation sources needs to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases, in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed, as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using a multi-rate architecture. The details of the rate groups and the data flow between the elements are discussed and evaluated.
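As a minimal illustration of the UDU idea (a textbook-style sketch, not the flight filter), the covariance is held as P = U D U^T with U unit upper-triangular and D diagonal, which is numerically better conditioned than propagating P directly:

```python
import numpy as np

def udu_factor(P):
    """Factor a symmetric positive-definite P as U @ np.diag(d) @ U.T,
    with U unit upper-triangular (Bierman-style UDU factorization)."""
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    # Process columns from last to first; each d[j] subtracts the
    # contributions of the already-factored trailing columns.
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
        for i in range(j):
            U[i, j] = (P[i, j] - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
    return U, d

P = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])
U, d = udu_factor(P)
print(np.allclose(U @ np.diag(d) @ U.T, P))  # -> True
```

The Agee-Turner rank-one update mentioned in the abstract then modifies U and d in place for measurement updates, which is where the quoted computational savings come from.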

  12. Evaluation of the Absolute Regional Temperature Potential

    NASA Technical Reports Server (NTRS)

    Shindell, D. T.

    2012-01-01

    The Absolute Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90-28degS, 28degS-28degN, 28-60degN and 60-90degN) as a function of emissions based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within +/-20%of the actual responses, though there are some exceptions for 90-28degS and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the +/-20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39-45% and 9-39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.

  13. The Mathematical Structure of Error Correction Models.

    DTIC Science & Technology

    1985-05-01

    The error correction model for a vector valued time series has been proposed and applied in the economic literature with the papers by Sargan (1964...the notion of cointegratedness of a vector process and showed the relation between cointegration and error correction models. This paper defines a...general error correction model, that encompasses the usual error correction model as well as the integral correction model by allowing a finite number of

  14. The study of absolute distance measurement based on the self-mixing interference in laser diode

    NASA Astrophysics Data System (ADS)

    Wang, Ting-ting; Zhang, Chuang

    2009-07-01

    In this work, an absolute distance measurement method based on self-mixing interference is presented. The principles of the method, based on the three-mirror equivalent cavity model, are studied in this paper, and the mathematical model is given. Wavelength modulation of the laser beam is obtained by saw-tooth modulation of the injection current of the laser diode. The absolute distance of the external target is determined by a Fourier analysis method. The frequency of the signal from the photodiode is linearly dependent on the absolute distance, but it is also affected by temperature and by fluctuations of the current source. A dual-path method that uses a reference path for absolute distance measurement has been proposed. The theoretical analysis shows that the method can eliminate errors resulting from distance-independent variations in the setup, so accuracy and stability can be improved. Simulated results show that a resolution of +/-0.2 mm can be achieved for absolute distances ranging from 250 mm to 500 mm. In the same measurement range, the resolution obtained is better than that of other absolute distance measurement systems based on self-mixing interference.
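A hedged sketch of the Fourier step: with a sawtooth sweep of optical-frequency depth dnu over period T_mod, the beat frequency scales linearly with the external-cavity length, L = f_beat * c * T_mod / (2 * dnu). All parameter values below are illustrative, not those of the paper:

```python
import numpy as np

# Hypothetical modulation parameters (illustrative, not from the paper).
c = 3e8            # speed of light, m/s
dnu = 30e9         # optical frequency sweep depth, Hz
T_mod = 1e-3       # sawtooth period, s
fs = 10e6          # sampling rate, Hz

def distance_from_beat(signal, fs):
    """Absolute distance from the dominant beat frequency of one
    modulation ramp: L = f_beat * c * T_mod / (2 * dnu)."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    spec[0] = 0.0                      # ignore the DC bin
    f_beat = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]
    return f_beat * c * T_mod / (2 * dnu)

# Simulate the interference signal for a 0.35 m external cavity.
L_true = 0.35
f_beat = 2 * L_true * dnu / (c * T_mod)
t = np.arange(int(fs * T_mod)) / fs
sig = np.cos(2 * np.pi * f_beat * t)
print(round(distance_from_beat(sig, fs), 3))  # -> 0.35
```

The FFT bin spacing fs/n sets the raw distance resolution here, which is why interpolation or reference-path normalization is needed for sub-millimeter results.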

  15. Absolute Distance Measurement with the MSTAR Sensor

    NASA Technical Reports Server (NTRS)

    Lay, Oliver P.; Dubovitsky, Serge; Peters, Robert; Burger, Johan; Ahn, Seh-Won; Steier, William H.; Fetterman, Harrold R.; Chang, Yian

    2003-01-01

    The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with sub-nanometer accuracy. The sensor uses a single laser in conjunction with fast phase modulators and low frequency detectors. We describe the design of the system - the principle of operation, the metrology source, beamlaunching optics, and signal processing - and show results for target distances up to 1 meter. We then demonstrate how the system can be scaled to kilometer-scale distances.
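The integer-cycle resolution can be illustrated generically: a coarse range estimate (in MSTAR, derived from the modulation sidebands) selects the integer cycle count, and the fine interferometric phase supplies the sub-wavelength part. The wavelength and numbers below are assumptions for illustration only:

```python
import numpy as np

def resolve_cycles(coarse_L, fine_phase, lam):
    """Resolve the integer-cycle ambiguity of a fine interferometric
    phase (radians) using a coarser range estimate; the coarse estimate
    must be accurate to better than lam/2 for the rounding to be safe."""
    frac = fine_phase / (2 * np.pi)
    N = round(coarse_L / lam - frac)
    return (N + frac) * lam

lam = 1.55e-6                  # assumed metrology wavelength, m
L_true = 0.123456789
phase = 2 * np.pi * ((L_true / lam) % 1.0)
coarse = L_true + 3e-7         # coarse estimate with error < lam/2
print(abs(resolve_cycles(coarse, phase, lam) - L_true) < 1e-12)  # -> True
```

In a real multi-stage system the same scheme is chained: each coarser synthetic wavelength only needs to be accurate enough to pick the integer for the next finer one.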

  16. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    PubMed

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences, and to canonical neural processing via the accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of pairs of brightness stimuli while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task-irrelevant absolute values, indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, both of which combine absolute and relative processing. One account involves accumulation of differences with activation-dependent processing noise, and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.

  17. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    NASA Astrophysics Data System (ADS)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (a triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput relative to previous LC-MS quantification methods by allowing construction of a four-point standard curve in a single run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into a mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).

  18. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  19. Bio-Inspired Stretchable Absolute Pressure Sensor Network

    PubMed Central

    Guo, Yue; Li, Yu-Hung; Guo, Zhiqiang; Kim, Kyunglok; Chang, Fu-Kuo; Wang, Shan X.

    2016-01-01

    A bio-inspired absolute pressure sensor network has been developed. Absolute pressure sensors, distributed on multiple silicon islands, are connected as a network by stretchable polyimide wires. This sensor network, made on a 4’’ wafer, has 77 nodes and can be mounted on various curved surfaces to cover an area up to 0.64 m × 0.64 m, which is 100 times larger than its original size. Due to Micro Electro-Mechanical system (MEMS) surface micromachining technology, ultrathin sensing nodes can be realized with thicknesses of less than 100 µm. Additionally, good linearity and high sensitivity (~14 mV/V/bar) have been achieved. Since the MEMS sensor process has also been well integrated with a flexible polymer substrate process, the entire sensor network can be fabricated in a time-efficient and cost-effective manner. Moreover, an accurate pressure contour can be obtained from the sensor network. Therefore, this absolute pressure sensor network holds significant promise for smart vehicle applications, especially for unmanned aerial vehicles. PMID:26729134
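Reading a pressure back from the quoted ratiometric sensitivity (~14 mV/V/bar) is a one-line inversion; the excitation voltage and output value below are illustrative:

```python
def pressure_from_output(v_out_mv, v_ex_v, sensitivity_mv_per_v_bar=14.0):
    """Invert the ratiometric bridge sensitivity (~14 mV/V/bar):
    P [bar] = Vout [mV] / (S [mV/V/bar] * Vexc [V])."""
    return v_out_mv / (sensitivity_mv_per_v_bar * v_ex_v)

# 35 mV of bridge output at 5 V excitation -> 0.5 bar absolute pressure.
print(pressure_from_output(35.0, 5.0))  # -> 0.5
```

In a network like the one described, applying this conversion per node and interpolating between the 77 nodes yields the pressure contour.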

  20. Bio-Inspired Stretchable Absolute Pressure Sensor Network.

    PubMed

    Guo, Yue; Li, Yu-Hung; Guo, Zhiqiang; Kim, Kyunglok; Chang, Fu-Kuo; Wang, Shan X

    2016-01-02

    A bio-inspired absolute pressure sensor network has been developed. Absolute pressure sensors, distributed on multiple silicon islands, are connected as a network by stretchable polyimide wires. This sensor network, made on a 4'' wafer, has 77 nodes and can be mounted on various curved surfaces to cover an area up to 0.64 m × 0.64 m, which is 100 times larger than its original size. Due to Micro Electro-Mechanical system (MEMS) surface micromachining technology, ultrathin sensing nodes can be realized with thicknesses of less than 100 µm. Additionally, good linearity and high sensitivity (~14 mV/V/bar) have been achieved. Since the MEMS sensor process has also been well integrated with a flexible polymer substrate process, the entire sensor network can be fabricated in a time-efficient and cost-effective manner. Moreover, an accurate pressure contour can be obtained from the sensor network. Therefore, this absolute pressure sensor network holds significant promise for smart vehicle applications, especially for unmanned aerial vehicles.

  1. Absolute calibration of vacuum ultraviolet spectrograph system for plasma diagnostics

    SciTech Connect

    Yoshikawa, M.; Kubota, Y.; Kobayashi, T.; Saito, M.; Numada, N.; Nakashima, Y.; Cho, T.; Koguchi, H.; Yagi, Y.; Yamaguchi, N.

    2004-10-01

    A space- and time-resolving vacuum ultraviolet (VUV) spectrograph system has been applied to diagnose impurity ions behavior in plasmas produced in the tandem mirror GAMMA 10 and the reversed field pinch TPE-RX. We have carried out ray tracing calculations for obtaining the characteristics of the VUV spectrograph and calibration experiments to measure the absolute sensitivities of the VUV spectrograph system for the wavelength range from 100 to 1100 A. By changing the incident angle, 50.6 deg. -51.4 deg., to the spectrograph whose nominal incident angle is 51 deg., we can change the observing spectral range of the VUV spectrograph. In this article, we show the ray tracing calculation results and absolute sensitivities when the angle of incidence into the VUV spectrograph is changed, and the results of VUV spectroscopic measurement in both GAMMA 10 and TPE-RX plasmas.

  2. Consistent set of nuclear parameters values for absolute INAA

    SciTech Connect

    Heft, R.E.

    1980-01-01

    Gamma spectral analysis of irradiated material can be used to determine absolute disintegration rates for specific radionuclides. These data, together with measured values for the thermal and epithermal neutron fluxes, and irradiation, cooling and counting time values, are all the experimental information required to do absolute Instrumental Neutron Activation Analysis. The calculations required to go from product photon emission rate to target nuclide amount depend upon values used for the thermal neutron capture cross-section, the resonance absorption integral, the half-life and photon branching ratios. Values for these parameters were determined by irradiating and analyzing a series of elemental standards. The results of these measurements were combined with values reported by other workers to arrive at a set of recommended values for the constants. Values for 114 nuclides are listed.

  3. Remote ultrasound palpation for robotic interventions using absolute elastography.

    PubMed

    Schneider, Caitlin; Baghani, Ali; Rohling, Robert; Salcudean, Septimiu

    2012-01-01

    Although robotic surgery has addressed many of the challenges presented by minimally invasive surgery, haptic feedback and the lack of knowledge of tissue stiffness is an unsolved problem. This paper presents a system for finding the absolute elastic properties of tissue using a freehand ultrasound scanning technique, which utilizes the da Vinci Surgical robot and a custom 2D ultrasound transducer for intraoperative use. An external exciter creates shear waves in the tissue, and a local frequency estimation method computes the shear modulus. Results are reported for both phantom and in vivo models. This system can be extended to any 6 degree-of-freedom tracking method and any 2D transducer to provide real-time absolute elastic properties of tissue.
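The local-frequency-estimation step ultimately converts the exciter frequency and a locally estimated shear wavelength into an absolute shear modulus via mu = rho * (f * lambda)^2; a minimal sketch with assumed values:

```python
def shear_modulus_pa(freq_hz, wavelength_m, density_kg_m3=1000.0):
    """Absolute elasticity from local frequency estimation:
    shear-wave speed c = f * lambda, shear modulus mu = rho * c**2.
    Tissue density is assumed close to that of water."""
    c = freq_hz * wavelength_m
    return density_kg_m3 * c * c

# A 200 Hz external exciter and a locally estimated 1 cm wavelength:
print(shear_modulus_pa(200.0, 0.01))  # -> 4000.0 (Pa)
```

This is why the method is "absolute": the exciter frequency is known exactly, so only the wavelength needs to be estimated from the ultrasound data.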

  4. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95 % confidence limits, and that the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, which indicates a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture, or industrial use.
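The validation metrics named above are straightforward to compute from paired observed and predicted series; a sketch with made-up values (the pH numbers are illustrative, not the Yamuna data):

```python
import numpy as np

def validation_metrics(y_true, y_pred):
    """Fit metrics named in the study: RMSE, MAE, maximum absolute
    error, MAPE, maximum absolute percentage error, and R-squared."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    pct = 100 * np.abs(err) / np.abs(y_true)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.mean(np.abs(err)),
        "MaxAE": np.max(np.abs(err)),
        "MAPE": np.mean(pct),
        "MaxAPE": np.max(pct),
        "R2": 1 - ss_res / ss_tot,
    }

obs = [7.1, 7.3, 7.0, 7.4, 7.2]    # e.g. monthly pH observations
pred = [7.0, 7.35, 7.1, 7.3, 7.2]  # model predictions
m = validation_metrics(obs, pred)
print(round(m["MAE"], 3))  # -> 0.07
```

Note that MAPE is only meaningful for strictly positive series, which holds for the parameters listed in the abstract.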

  5. Usage tests of oak moss absolutes containing high and low levels of atranol and chloroatranol.

    PubMed

    Mowitz, Martin; Svedman, Cecilia; Zimerson, Erik; Bruze, Magnus

    2014-07-01

    Atranol and chloroatranol are strong contact allergens in oak moss absolute, a lichen extract used in perfumery. Fifteen subjects with contact allergy to oak moss absolute underwent a repeated open application test (ROAT) using solutions of an untreated oak moss absolute (sample A) and an oak moss absolute with reduced content of atranol and chloroatranol (sample B). All subjects were in addition patch-tested with serial dilutions of samples A and B. Statistically significantly more subjects reacted to sample A than to sample B in the patch tests. No corresponding difference was observed in the ROAT, though there was a significant difference in the time required to elicit a positive reaction. Still, the ROAT indicates that the use of a cosmetic product containing oak moss absolute with reduced levels of atranol and chloroatranol is capable of eliciting an allergic reaction in previously sensitised individuals.

  6. Comparative vs. Absolute Judgments of Trait Desirability

    ERIC Educational Resources Information Center

    Hofstee, Willem K. B.

    1970-01-01

    Reversals of trait desirability are studied. Terms indicating conservative behavior appeared to be judged relatively desirable in comparative judgment, while traits indicating dynamic and expansive behavior benefited from absolute judgment. The reversal effect was shown to be a general one, i.e. reversals were not dependent upon the specific…

  7. New Techniques for Absolute Gravity Measurements.

    DTIC Science & Technology

    1983-01-07

    Hammond, J.A. (1978) Bollettino Di Geofisica Teorica ed Applicata Vol. XX. 8. Hammond, J. A., and Iliff, R. L. (1979) The AFGL absolute gravity system...International Gravimetric Bureau, No. L:I-43.

  8. An Absolute Electrometer for the Physics Laboratory

    ERIC Educational Resources Information Center

    Straulino, S.; Cartacci, A.

    2009-01-01

    A low-cost, easy-to-use absolute electrometer is presented: two thin metallic plates and an electronic balance, usually available in a laboratory, are used. We report on the very good performance of the device that allows precise measurements of the force acting between two charged plates. (Contains 5 footnotes, 2 tables, and 6 figures.)

  9. Absolute Positioning Using the Global Positioning System

    DTIC Science & Technology

    1994-04-01

    The Global Positioning System (GPS) has become a useful tool in providing relative survey...Includes the development of a low-cost navigator for wheeled vehicles. The technique of absolute or point positioning involves the use of a single Global Positioning System (GPS) receiver to determine the three-dimensional

  10. Error sources in the real-time NLDAS incident surface solar radiation and an evaluation against field observations and the NARR

    NASA Astrophysics Data System (ADS)

    Park, G.; Gao, X.; Sorooshian, S.

    2005-12-01

    The atmospheric model is sensitive to land surface interactions, and its coupling with Land Surface Models (LSMs) leads to a better ability to forecast weather under extreme climate conditions, such as droughts and floods (Atlas et al. 1993; Beljaars et al. 1996). However, it is still questionable how accurately the surface exchanges can be simulated using LSMs, since terrestrial properties and processes have high variability and heterogeneity. Examinations with long-term and multi-site surface observations, including both remotely sensed and ground observations, are needed to make an objective evaluation of the effectiveness and uncertainty of LSMs under different circumstances. Among the atmospheric forcings required for offline simulation of LSMs, incident surface solar radiation is one of the most significant components, since it plays a major role in the total incoming energy at the land surface. The North American Land Data Assimilation System (NLDAS) and the North American Regional Reanalysis (NARR) are two important data sources providing high-resolution surface solar radiation data for the research community. In this study, these data are evaluated against field observations (AmeriFlux) to identify their advantages, deficiencies, and sources of errors. The NLDAS incident solar radiation shows good agreement in monthly mean prior to the summer of 2001, while it overestimates after the summer of 2001, with a bias close to that of the EDAS. Two main error sources are identified: 1) GOES solar radiation was not used in the NLDAS for several months in 2001 and 2003, and 2) GOES incident solar radiation, when available, was positively biased in 2002. The known snow detection problem is sometimes identified in the NLDAS, since it is inherited from the GOES incident solar radiation. The NARR consistently overestimates incident surface solar radiation, which might produce erroneous outputs if used in LSMs. Further attention is given to

  11. Simple absolute quantification method correcting for quantitative PCR efficiency variations for microbial community samples.

    PubMed

    Brankatschk, Robert; Bodenhausen, Natacha; Zeyer, Josef; Bürgmann, Helmut

    2012-06-01

    Real-time quantitative PCR (qPCR) is a widely used technique in microbial community analysis, allowing the quantification of the number of target genes in a community sample. Currently, the standard-curve (SC) method of absolute quantification is widely employed for these kinds of analyses. However, the SC method assumes that the amplification efficiency (E) is the same for both the standard and the sample target template. We analyzed 19 bacterial strains and nine environmental samples in qPCR assays targeting the nifH and 16S rRNA genes. The E values of the qPCRs differed significantly, depending on the template. This has major implications for the quantification: if the sample and standard differ in their E values, quantification errors of up to several orders of magnitude are possible. To address this problem, we propose and test the one-point calibration (OPC) method for absolute quantification. The OPC method corrects for differences in E and was derived from the ΔΔCt method with correction for E, which is commonly used for relative quantification in gene expression studies. The SC and OPC methods were compared by quantifying artificial template mixtures from Geobacter sulfurreducens (DSM 12127) and Nostoc commune (Culture Collection of Algae and Protozoa [CCAP] 1453/33), which differ in their E values. While the SC method deviated from the expected nifH gene copy number by 3- to 5-fold, the OPC method quantified the template mixtures with high accuracy. Moreover, analyzing environmental samples, we show that even small differences in E between the standard and the sample can cause significant differences between the copy numbers calculated by the SC and the OPC methods.
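The OPC correction follows from both reactions reaching the same threshold template amount, so n0 * (1 + E)**Cq is equal for standard and sample. A sketch with invented efficiencies and Cq values, not the study's measurements:

```python
def opc_quantify(cq_sample, e_sample, n0_std, cq_std, e_std):
    """One-point-calibration estimate of initial copy number.

    Both reactions reach the same threshold amount, so
    n0 * (1 + E)**Cq is equal for standard and sample:
      n0_sample = n0_std * (1 + e_std)**cq_std / (1 + e_sample)**cq_sample
    """
    return n0_std * (1 + e_std) ** cq_std / (1 + e_sample) ** cq_sample

# Illustrative numbers: a standard with 1e5 copies, Cq 20, E = 0.95,
# and a sample template amplifying less efficiently (E = 0.85).
n0 = opc_quantify(cq_sample=22.0, e_sample=0.85,
                  n0_std=1e5, cq_std=20.0, e_std=0.95)
naive = 1e5 * (1 + 0.95) ** (20.0 - 22.0)   # SC-style, assumes equal E
print(n0 / naive > 1)  # -> True; the efficiency correction matters
```

With these numbers the SC-style estimate is several-fold low, which mirrors the 3- to 5-fold deviations the study reports for templates with mismatched efficiencies.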

  12. Picoliter Well Array Chip-Based Digital Recombinase Polymerase Amplification for Absolute Quantification of Nucleic Acids.

    PubMed

    Li, Zhao; Liu, Yong; Wei, Qingquan; Liu, Yuanjie; Liu, Wenwen; Zhang, Xuelian; Yu, Yude

    2016-01-01

    Absolute, precise quantification methods expand the scope of nucleic acids research and have many practical applications. Digital polymerase chain reaction (dPCR) is a powerful method for nucleic acid detection and absolute quantification. However, it requires thermal cycling and accurate temperature control, which are difficult in resource-limited conditions. Accordingly, isothermal methods, such as recombinase polymerase amplification (RPA), are more attractive. We developed a picoliter well array (PWA) chip with 27,000 consistently sized picoliter reactions (314 pL) for isothermal DNA quantification using digital RPA (dRPA) at 39°C. Sample loading using a scraping liquid blade was simple, fast, and required small reagent volumes (i.e., <20 μL). Passivating the chip surface using a methoxy-PEG-silane agent effectively eliminated cross-contamination during dRPA. Our creative optical design enabled wide-field fluorescence imaging in situ and both end-point and real-time analyses of picoliter wells in a 6-cm^2 area. It was not necessary to use scan shooting and stitch serial small images together. Using this method, we quantified serial dilutions of a Listeria monocytogenes gDNA stock solution from 9 × 10^-1 to 4 × 10^-3 copies per well with an average error of less than 11% (N = 15). Overall dRPA-on-chip processing required less than 30 min, which was a 4-fold decrease compared to dPCR, requiring approximately 2 h. dRPA on the PWA chip provides a simple and highly sensitive method to quantify nucleic acids without thermal cycling or precise micropump/microvalve control. It has applications in fast field analysis and critical clinical diagnostics under resource-limited settings.
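Digital quantification itself is simple Poisson statistics: the mean copies per well follows from the fraction of negative wells alone. A sketch with illustrative counts, not the paper's data:

```python
import math

def copies_per_well(n_positive, n_total):
    """Poisson-corrected mean copies per partition in a digital assay:
    lambda = -ln(fraction of negative wells)."""
    if n_positive >= n_total:
        raise ValueError("all wells positive: concentration too high")
    return -math.log(1 - n_positive / n_total)

# E.g. 13,000 of 27,000 picoliter wells fluoresce positive.
lam = copies_per_well(13000, 27000)
print(round(lam, 3))  # -> 0.657
```

Dividing lambda by the 314 pL well volume converts the per-well mean to an absolute concentration, with no calibration standard required.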

  13. Picoliter Well Array Chip-Based Digital Recombinase Polymerase Amplification for Absolute Quantification of Nucleic Acids

    PubMed Central

    Li, Zhao; Liu, Yong; Wei, Qingquan; Liu, Yuanjie; Liu, Wenwen; Zhang, Xuelian; Yu, Yude

    2016-01-01

    Absolute, precise quantification methods expand the scope of nucleic acids research and have many practical applications. Digital polymerase chain reaction (dPCR) is a powerful method for nucleic acid detection and absolute quantification. However, it requires thermal cycling and accurate temperature control, which are difficult in resource-limited conditions. Accordingly, isothermal methods, such as recombinase polymerase amplification (RPA), are more attractive. We developed a picoliter well array (PWA) chip with 27,000 consistently sized picoliter reactions (314 pL) for isothermal DNA quantification using digital RPA (dRPA) at 39°C. Sample loading using a scraping liquid blade was simple, fast, and required small reagent volumes (i.e., <20 μL). Passivating the chip surface using a methoxy-PEG-silane agent effectively eliminated cross-contamination during dRPA. Our creative optical design enabled wide-field fluorescence imaging in situ and both end-point and real-time analyses of picoliter wells in a 6-cm^2 area. It was not necessary to use scan shooting and stitch serial small images together. Using this method, we quantified serial dilutions of a Listeria monocytogenes gDNA stock solution from 9 × 10^-1 to 4 × 10^-3 copies per well with an average error of less than 11% (N = 15). Overall dRPA-on-chip processing required less than 30 min, which was a 4-fold decrease compared to dPCR, requiring approximately 2 h. dRPA on the PWA chip provides a simple and highly sensitive method to quantify nucleic acids without thermal cycling or precise micropump/microvalve control. It has applications in fast field analysis and critical clinical diagnostics under resource-limited settings. PMID:27074005

  14. Absolute Radiation Thermometry in the NIR

    NASA Astrophysics Data System (ADS)

    Bünger, L.; Taubert, R. D.; Gutschwager, B.; Anhalt, K.; Briaudeau, S.; Sadli, M.

    2017-04-01

    A near infrared (NIR) radiation thermometer (RT) for temperature measurements in the range from 773 K up to 1235 K was characterized and calibrated in terms of the "Mise en Pratique for the definition of the Kelvin" (MeP-K) by measuring its absolute spectral radiance responsivity. Using Planck's law of thermal radiation allows the direct measurement of the thermodynamic temperature independently of any ITS-90 fixed-point. To determine the absolute spectral radiance responsivity of the radiation thermometer in the NIR spectral region, an existing PTB monochromator-based calibration setup was upgraded with a supercontinuum laser system (0.45 μm to 2.4 μm) resulting in a significantly improved signal-to-noise ratio. The RT was characterized with respect to its nonlinearity, size-of-source effect, distance effect, and the consistency of its individual temperature measuring ranges. To further improve the calibration setup, a new tool for the aperture alignment and distance measurement was developed. Furthermore, the diffraction correction as well as the impedance correction of the current-to-voltage converter is considered. The calibration scheme and the corresponding uncertainty budget of the absolute spectral responsivity are presented. A relative standard uncertainty of 0.1 % (k=1) for the absolute spectral radiance responsivity was achieved. The absolute radiometric calibration was validated at four temperature values with respect to the ITS-90 via a variable temperature heatpipe blackbody (773 K to 1235 K) and at a gold fixed-point blackbody radiator (1337.33 K).
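In the monochromatic limit, the MeP-K idea is a direct inversion of Planck's law: absolutely calibrated radiance at a known wavelength yields thermodynamic temperature without any fixed point. The wavelength below is an assumed NIR value; the real instrument integrates Planck's law over its measured spectral responsivity band:

```python
import math

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck_radiance(lam, T):
    """Spectral radiance L(lam, T) in W m^-3 sr^-1 (Planck's law)."""
    return 2 * h * c**2 / lam**5 / math.expm1(h * c / (lam * k * T))

def temperature_from_radiance(lam, L):
    """Invert Planck's law at a single wavelength:
    T = (h c / lam k) / ln(1 + 2 h c^2 / (lam^5 L))."""
    return h * c / (lam * k) / math.log1p(2 * h * c**2 / (lam**5 * L))

lam = 1.55e-6          # assumed NIR operating wavelength, m
T_true = 1000.0
L = planck_radiance(lam, T_true)
print(round(temperature_from_radiance(lam, L), 6))  # -> 1000.0
```

Using expm1/log1p keeps the round trip numerically exact even when the exponential term is large, as it is at NIR wavelengths and ~1000 K.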

  15. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  16. Absolute Binding Energies of Core Levels in Solids from First Principles

    NASA Astrophysics Data System (ADS)

    Ozaki, Taisuke; Lee, Chi-Cheng

    2017-01-01

    A general method is presented to calculate absolute binding energies of core levels in metals and insulators, based on a penalty functional and an exact Coulomb cutoff method in the framework of density functional theory. The spurious interaction of core holes between supercells is avoided by the exact Coulomb cutoff method, while the variational penalty functional enables us to treat multiple splittings due to chemical shift, spin-orbit coupling, and exchange interaction on an equal footing; neither capability is accessible with previous methods. It is demonstrated that the absolute binding energies of core levels for both metals and insulators are calculated by the proposed method with a mean absolute (relative) error of 0.4 eV (0.16%) for eight cases compared to experimental values measured with x-ray photoemission spectroscopy, within a generalized gradient approximation to the exchange-correlation functional.

  17. Errors of measurement by laser goniometer

    NASA Astrophysics Data System (ADS)

    Agapov, Mikhail Y.; Bournashev, Milhail N.

    2000-11-01

    The report is dedicated to research on systematic errors of angle measurement by a dynamic laser goniometer (DLG) based on a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and compensating them algorithmically. The OE was an absolute photoelectric angle encoder with an informational capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by applying a cross-calibration method with mutual turns of the OE relative to the DLG base and of the CU relative to the OE rotor; a Fourier analysis of the observed data was then performed. The dynamic errors of angle measurement were investigated using the dependence on angular rotation rate of the measured angle between a reference direction, assigned by an interference null-indicator (NI) with an 8-faced optical polygon (OP), and the direction defined by means of the OE. The obtained results allow algorithmic compensation of the systematic error and thereby a considerable reduction of the total measurement error.

  18. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  19. The high-precision videometrics methods to determining absolute vertical benchmark

    NASA Astrophysics Data System (ADS)

    Liu, Jinbo; Zhu, Zhaokun

    2013-01-01

    Mobile measurement equipment plays an important role in engineering measurement tasks, and its measuring device is fixed to the vehicle platform. Therefore, a basic problem is how to correct, in real time, the measurement errors caused by platform sway. Videometrics has inherent advantages in solving this problem. First, videometrics is a non-contact measurement technique that has no effect on the target's structural or motion characteristics. Second, it offers high precision, especially for surface targets and linear targets in the field of view. Third, it is automatic, real-time, and dynamic. This paper addresses mobile theodolites and similar instruments that work with respect to an absolute vertical benchmark, and it proposes two high-precision methods to determine that benchmark: Direct-Extracting, based on the intersection of planes with the help of two cameras, and Benchmark-Transformation, which obtains the vertical benchmark by reconstructing the level plane. Digital simulation and physical experiments show that both methods achieve a precision better than 10 arcseconds. The proposed methods are significant both in theory and in application.

  20. Absolute Measurements of Optical Oscillator Strengths of Xe

    NASA Astrophysics Data System (ADS)

    Gibson, N. D.

    1998-05-01

    The dramatically increased interest in Xe as a discharge medium for the efficient generation of UV radiation, together with its use in high-technology applications such as flat-panel displays for laptop computer screens and home TV and theater systems, has created the need for significantly more accurate oscillator strength data. Modeling of plasma processing systems and lighting discharges critically depends on accurate, precise atomic data. We are measuring the optical oscillator strengths of several Xe resonance lines. These measurements use a 900 eV collimated electron beam to excite the Xe atoms. In the self-absorption method used here, the transmission of the emitted radiation is measured as a function of the gas density. The measured oscillator strengths are proportional to the distance between the electron beam and the fixed aperture of the spectrometer-detector system. Since the theoretical form of the transmission function is well understood, there are few systematic errors. Absolute errors as low as 3-4% can be obtained.

  1. Comparison of different approaches to evaluation of statistical error of the DSMC method

    NASA Astrophysics Data System (ADS)

    Plotnikov, M. Yu.; Shkarupa, E. V.

    2012-11-01

    Although the direct simulation Monte Carlo (DSMC) method is widely used for solving steady problems of rarefied gas dynamics, the question of how to evaluate its statistical error is far from settled. Typically, the statistical error in the Monte Carlo method is estimated by the standard deviation determined by the variance of the estimate and the number of its realizations, under the assumption that the sampled realizations are independent. In contrast to the classical Monte Carlo method, the DSMC method uses time-averaged estimates whose sampled realizations are dependent. Additional difficulties in evaluating the statistical error are caused by the complexity of the estimates used in the DSMC method. In the present work we compare two approaches to evaluating the statistical error. One is based on results from equilibrium statistical mechanics and the "persistent random walk"; the other is based on the central limit theorem for Markov processes. Each approach has its own benefits and drawbacks. The first does not require additional computations to construct estimates of the statistical error, but it applies only when all velocity components and the temperature are equivalent. The second is applicable to DSMC simulation of flows with any degree of nonequilibrium, and it allows evaluating the statistical errors of the estimates of individual velocity and temperature components. The two approaches were compared on a number of classic problems with different degrees of nonequilibrium.
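    Neither of the paper's two approaches is reproduced here, but the underlying difficulty (time averages built from correlated samples) can be illustrated with the classic batch-means estimator, one standard way to recover a usable standard error from a dependent series:

```python
import statistics

def batch_means_stderr(samples, n_batches=20):
    """Standard error of the mean for a correlated series via batch means.

    The series is cut into contiguous batches; if each batch is longer
    than the correlation time, the batch averages are approximately
    independent and the usual i.i.d. formula applies to them.
    """
    m = len(samples) // n_batches
    if m < 2:
        raise ValueError("too few samples per batch")
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(n_batches)]
    return statistics.stdev(means) / n_batches ** 0.5
```

    For a positively correlated series this exceeds the naive sigma/sqrt(N), which is exactly the underestimate that a DSMC error bar computed under the independence assumption would suffer from.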

  2. [Errors Analysis and Correction in Atmospheric Methane Retrieval Based on Greenhouse Gases Observing Satellite Data].

    PubMed

    Bu, Ting-ting; Wang, Xian-hua; Ye, Han-han; Jiang, Xin-hua

    2016-01-01

    High-precision retrieval of atmospheric CH4 is influenced by a variety of factors, among which the uncertainties of ground properties and atmospheric conditions (surface reflectance and the temperature, humidity, and pressure profiles) are important. Surface reflectance is affected by many factors, so its precise value is difficult to obtain, and its uncertainty introduces a large error into the retrieval result. The uncertainties of the temperature, humidity, and pressure profiles are also important error sources, causing a systematic error that is hard to eliminate using the CH4 band alone. In this paper, a ratio spectrometry method and a CO2 band correction method are proposed to reduce the error caused by these factors. The ratio spectrometry method decreases the effect of surface reflectance in CH4 retrieval by converting absolute radiance spectra into ratio spectra. The CO2 band correction method converts the CH4 column amount into a column-averaged mixing ratio by using the CO2 1.61 μm band, correcting the systematic error caused by the temperature, humidity, and pressure profiles. Combining the two corrections reduces the effects of surface reflectance and of the atmospheric profiles at the same time, lowering the retrieval error. GOSAT data were used to retrieve atmospheric CH4 to test and validate the two correction methods. The results showed that the CH4 column-averaged mixing ratio retrieved after correction was close to the GOSAT Level 2 product, with a retrieval precision of up to -0.24%. The studies suggest that the CH4 retrieval error caused by the uncertainties of ground properties and atmospheric conditions can be significantly reduced, and the retrieval precision greatly improved, by using the ratio spectrometry and CO2 band correction methods.
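    The CO2-band correction is, in essence, a light-path "proxy" normalization: systematic errors common to the CH4 band and the nearby 1.61 μm CO2 band (surface pressure, temperature and humidity profiles) largely cancel in the column ratio, and a well-mixed reference XCO2 restores the mixing ratio. A minimal sketch with illustrative numbers (the variable names and values are not from the paper):

```python
def xch4_proxy(ch4_column: float, co2_column: float,
               xco2_reference: float) -> float:
    """Column-averaged CH4 mixing ratio via the CO2-band (proxy) correction.

    ch4_column, co2_column: retrieved vertical columns (molecules/cm^2)
    xco2_reference: assumed column-averaged CO2 mixing ratio (mol/mol),
    taken as well known because CO2 is nearly well mixed.
    Systematic errors that scale both columns alike drop out of the ratio.
    """
    return ch4_column / co2_column * xco2_reference
```

    For example, a CH4 column of 3.4e19 and a CO2 column of 7.5e21 molecules/cm^2 with a 400 ppm reference give an XCH4 near 1.81 ppm; a common 2% light-path error in both columns leaves the result unchanged.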

  3. Partially supervised P300 speller adaptation for eventual stimulus timing optimization: target confidence is superior to error-related potential score as an uncertain label

    NASA Astrophysics Data System (ADS)

    Zeyl, Timothy; Yin, Erwei; Keightley, Michelle; Chau, Tom

    2016-04-01

    Objective. Error-related potentials (ErrPs) have the potential to guide classifier adaptation in BCI spellers, for addressing non-stationary performance as well as for online optimization of system parameters, by providing imperfect or partial labels. However, the usefulness of ErrP-based labels for BCI adaptation has not been established in comparison to other partially supervised methods. Our objective is to make this comparison by retraining a two-step P300 speller on a subset of confident online trials using naïve labels taken from speller output, where confidence is determined either by (i) ErrP scores, (ii) posterior target scores derived from the P300 potential, or (iii) a hybrid of these scores. We further wish to evaluate the ability of partially supervised adaptation and retraining methods to adjust to a new stimulus-onset asynchrony (SOA), a necessary step towards online SOA optimization. Approach. Eleven consenting able-bodied adults attended three online spelling sessions on separate days with feedback in which SOAs were set at 160 ms (sessions 1 and 2) and 80 ms (session 3). A post hoc offline analysis and a simulated online analysis were performed on sessions two and three to compare multiple adaptation methods. Area under the curve (AUC) and symbols spelled per minute (SPM) were the primary outcome measures. Main results. Retraining using supervised labels confirmed improvements of 0.9 percentage points (session 2, p < 0.01) and 1.9 percentage points (session 3, p < 0.05) in AUC using same-day training data over using data from a previous day, which supports classifier adaptation in general. Significance. Using posterior target score alone as a confidence measure resulted in the highest SPM of the partially supervised methods, indicating that ErrPs are not necessary to boost the performance of partially supervised adaptive classification. Partial supervision significantly improved SPM at a novel SOA, showing promise for eventual online SOA
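    The partially supervised retraining compared above can be sketched generically: keep only online trials whose confidence score clears a threshold, take the speller's own output as a naive label, and refit the classifier on that subset. A toy version using per-class mean templates (the paper's actual two-step P300 classifier and its ErrP/posterior scoring are more involved):

```python
def retrain_on_confident(features, confidences, predicted_labels,
                         threshold=0.9):
    """Partially supervised update for a template-matching classifier.

    features: list of feature vectors from online trials.
    confidences: one score per trial (e.g. a posterior target score).
    predicted_labels: the speller's own outputs, used as naive labels.
    Trials below `threshold` are discarded; per-class feature means are
    recomputed from the confident remainder.
    """
    sums, counts = {}, {}
    for x, conf, y in zip(features, confidences, predicted_labels):
        if conf < threshold:
            continue
        acc = sums.setdefault(y, [0.0] * len(x))
        for j, v in enumerate(x):
            acc[j] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}
```

    The paper's comparison is over what supplies `confidences`: ErrP scores, P300 posterior target scores, or a hybrid of the two.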

  4. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising quality standard has posed a new challenge now that the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or entirely missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error-restoration algorithm that can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by the quality-check system of the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, a familiar tool for quality-control agents.
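    The core idea (restore only flagged pixels, using temporally adjacent images, while leaving intact pixels untouched) can be sketched as simple temporal interpolation; the production algorithm described in the paper is more sophisticated:

```python
def restore_frame(frame, prev_frame, next_frame, error_mask):
    """Replace pixels flagged in error_mask (1 = damaged) with the
    average of the same pixel in the neighboring frames; every
    undamaged pixel of the error image is preserved as-is.
    Frames and mask are 2-D lists of equal shape (gray levels)."""
    out = [row[:] for row in frame]
    for i, mask_row in enumerate(error_mask):
        for j, bad in enumerate(mask_row):
            if bad:
                out[i][j] = (prev_frame[i][j] + next_frame[i][j]) // 2
    return out
```

    Temporal interpolation works well for static content; for moving content a practical restorer would first motion-compensate the adjacent frames before borrowing pixels from them.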

  5. A novel absolute measurement for the low-frequency figure correction of aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Lin, Wei-Cheng; Chang, Shenq-Tsong; Ho, Cheng-Fang; Kuo, Ching-Hsiang; Chung, Chien-Kai; Hsu, Wei-Yao; Tseng, Shih-Feng; Sung, Cheng-Kuo

    2015-07-01

    This study proposes an absolute measurement method with a computer-generated hologram (CGH) to assist in identifying the manufacturing form error and the distortions caused by gravity and mounting for a 300 mm aspherical mirror. The method uses the frequency of peaks and valleys of each Zernike coefficient, obtained from measurements at various orientations of the mirror in the horizontal-optical-axis configuration. In addition, the rotationally symmetric aberration (spherical aberration) is calibrated with the random ball test method. Based on the measured absolute surface figure, a high-accuracy aspherical surface with a peak-to-valley (P-V) value of 1/8 wave @ 632.8 nm was fabricated after surface figure correction with the reconstructed error map.

  6. Constraint checking during error recovery

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Wong, Johnny S. K.

    1993-01-01

    The system-level software onboard a spacecraft is responsible for recovery from communication, power, thermal, and computer-health anomalies that may occur. The recovery must occur without disrupting any critical scientific or engineering activity that is executing at the time of the error. Thus, the error-recovery software may have to execute concurrently with the ongoing acquisition of scientific data or with spacecraft maneuvers. This work provides a technique by which the rules that constrain the concurrent execution of these processes can be modeled in a graph. An algorithm is described that uses this model to validate that the constraints hold for all concurrent executions of the error-recovery software with the software that controls the science and engineering activities of the spacecraft. The results are applicable to a variety of control systems with critical constraints on the timing and ordering of the events they control.
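    One way to picture the technique: model each "must not execute concurrently" rule as an edge in a conflict graph, then check every reachable set of concurrently active processes against those edges. A hedged sketch of that check (the paper's model and validation algorithm are richer, also covering timing and ordering constraints):

```python
def violations(conflict_edges, active_sets):
    """Check concurrency constraints modeled as a graph.

    conflict_edges: pairs of activities that must never run
    concurrently (e.g. an error-recovery step vs. a science maneuver).
    active_sets: iterable of sets, each holding the activities
    concurrently active at one point of a candidate execution.
    Returns every (time_index, pair) that violates a constraint;
    an empty list means the execution satisfies all constraints.
    """
    bad = []
    for t, active in enumerate(active_sets):
        for a, b in conflict_edges:
            if a in active and b in active:
                bad.append((t, (a, b)))
    return bad
```

    Validating all concurrent executions then amounts to enumerating (or symbolically covering) the reachable `active_sets` of the combined error-recovery and activity-control software.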

  7. Mean and Random Errors of Visual Roll Rate Perception from Central and Peripheral Visual Displays

    NASA Technical Reports Server (NTRS)

    Vandervaart, J. C.; Hosman, R. J. A. W.

    1984-01-01

    A large number of roll rate stimuli, covering rates from zero to plus or minus 25 deg/sec, were presented to subjects in random order at 2 sec intervals. Subjects were to estimate the magnitude of perceived roll rate stimuli presented on either a central display, on displays in the peripheral field of vision, or on all displays simultaneously. Responses were given via a digital keyboard device, and stimulus exposure times were varied. The present experiment differs from earlier perception tasks by the same authors in that the mean rate perception error (and its standard deviation) was obtained as a function of rate stimulus magnitude, whereas the earlier experiments only yielded mean absolute error magnitude. Moreover, in the present experiment all stimulus rates had an equal probability of occurrence, whereas the earlier tests featured a Gaussian stimulus probability density function. The results give a good illustration of the nonlinear functions relating the rate presented to the rate perceived by human observers or operators.

  8. From Hubble's NGSL to Absolute Fluxes

    NASA Technical Reports Server (NTRS)

    Heap, Sara R.; Lindler, Don

    2012-01-01

    Hubble's Next Generation Spectral Library (NGSL) consists of R ~ 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. Each spectrum covers the wavelength range 0.18-1.00 microns. The library can be viewed and/or downloaded from the website, http://archive.stsci.edu/prepds/stisngsll. Stars in the NGSL are now being used as absolute flux standards at ground-based observatories. However, the uncertainty in the absolute flux is about 2%, which does not meet the requirements of dark-energy surveys. We are therefore developing an observing procedure that should yield fluxes with uncertainties less than 1% and will take part in an HST proposal to observe up to 15 stars using this new procedure.

  9. Consistent thermostatistics forbids negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Dunkel, Jörn; Hilbert, Stefan

    2014-01-01

    Over the past 60 years, a considerable number of theories and experiments have claimed the existence of negative absolute temperature in spin systems and ultracold quantum gases. This has led to speculation that ultracold gases may be dark-energy analogues and also suggests the feasibility of heat engines with efficiencies larger than one. Here, we prove that all previous negative temperature claims and their implications are invalid as they arise from the use of an entropy definition that is inconsistent both mathematically and thermodynamically. We show that the underlying conceptual deficiencies can be overcome if one adopts a microcanonical entropy functional originally derived by Gibbs. The resulting thermodynamic framework is self-consistent and implies that absolute temperature remains positive even for systems with a bounded spectrum. In addition, we propose a minimal quantum thermometer that can be implemented with available experimental techniques.

  10. Asteroid absolute magnitudes and slope parameters

    NASA Technical Reports Server (NTRS)

    Tedesco, Edward F.

    1991-01-01

    A new listing of absolute magnitudes (H) and slope parameters (G) has been created and published in the Minor Planet Circulars; this same listing will appear in the 1992 Ephemerides of Minor Planets. Unlike previous listings, the values of the current list were derived from fits of data at the V band. All observations were reduced in the same fashion using, where appropriate, a single default value of 0.15 for the slope parameter. Distances and phase angles were computed for each observation. The data for 113 asteroids were of sufficiently high quality to permit derivation of their H and G values. These improved absolute magnitudes and slope parameters will be used to deduce the most reliable bias-corrected asteroid size-frequency distribution yet made.
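    The H and G values in such a listing feed the standard IAU two-parameter phase function: the apparent V magnitude follows from H, G, the heliocentric and geocentric distances, and the solar phase angle. A sketch using the common Φ1/Φ2 approximation (coefficients from the Bowell et al. formulation of the (H, G) system; not taken from this record):

```python
import math

def hg_apparent_magnitude(h, g, r_au, delta_au, alpha_deg):
    """Apparent V magnitude of an asteroid in the IAU (H, G) system.

    h: absolute magnitude, g: slope parameter, r_au/delta_au:
    heliocentric and geocentric distances in AU, alpha_deg: phase angle.
    Phi1/Phi2 are the usual single-exponential approximations.
    """
    a = math.radians(alpha_deg)
    phi1 = math.exp(-3.33 * math.tan(a / 2.0) ** 0.63)
    phi2 = math.exp(-1.87 * math.tan(a / 2.0) ** 1.22)
    return (h + 5.0 * math.log10(r_au * delta_au)
            - 2.5 * math.log10((1.0 - g) * phi1 + g * phi2))
```

    At zero phase angle both Φ functions equal 1, so the expression reduces to H plus the 5 log10(rΔ) distance term; fitting observed V magnitudes against this model is what yields H and G per asteroid.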

  11. Computer processing of spectrograms for absolute intensities.

    PubMed

    Guttman, A; Golden, J; Galbraith, H J

    1967-09-01

    A computer program was developed to process photographically recorded spectra for absolute intensity. Test and calibration films are subjected to densitometric scans that provide digitally recorded densities on magnetic tapes. The nonlinear calibration data are fitted by least-squares cubic polynomials to yield a good approximation to the monochromatic H&D curves for commonly used emulsions (2475 recording film, Royal-X, Tri-X, 4-X). Several test cases were run; their results show that the machine-processed absolute intensities are accurate to within 15%. Arbitrarily raising the sensitivity threshold by 0.1 density units above gross fog yields cubic polynomial fits to the H&D curves that are radiometrically accurate within 10%. In addition, curves of gamma vs. wavelength for the 2475, Tri-X, and 4-X emulsions were made. These data show slight evidence of the photographic Purkinje effect in the 2475 emulsion.
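    The least-squares cubic fit of an H&D calibration curve can be reproduced with nothing but the normal equations; a stdlib-only sketch (a real pipeline would use a library least-squares routine, and the variable names here are illustrative):

```python
def fit_cubic(xs, ys):
    """Least-squares cubic y = c0 + c1*x + c2*x^2 + c3*x^3 via the
    normal equations, solved by Gaussian elimination with pivoting."""
    n = 4
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n                         # back substitution
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / A[r][r]
    return coeffs

def density(coeffs, log_exposure):
    """Evaluate the fitted H&D curve: density as a cubic in log exposure."""
    return sum(c * log_exposure ** i for i, c in enumerate(coeffs))
```

    Inverting the fitted curve (density back to log exposure) is then what converts a densitometric scan into relative intensity before the absolute calibration is applied.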

  12. Probing absolute spin polarization at the nanoscale.

    PubMed

    Eltschka, Matthias; Jäck, Berthold; Assig, Maximilian; Kondrashov, Oleg V; Skvortsov, Mikhail A; Etzkorn, Markus; Ast, Christian R; Kern, Klaus

    2014-12-10

    Probing absolute values of spin polarization at the nanoscale offers insight into the fundamental mechanisms of spin-dependent transport. Employing the Zeeman splitting in superconducting tips (Meservey-Tedrow-Fulde effect), we introduce a novel spin-polarized scanning tunneling microscopy that combines the probing capability of the absolute values of spin polarization with precise control at the atomic scale. We utilize our novel approach to measure the locally resolved spin polarization of magnetic Co nanoislands on Cu(111). We find that the spin polarization is enhanced by 65% when increasing the width of the tunnel barrier by only 2.3 Å due to the different decay of the electron orbitals into vacuum.

  13. Absolute and relative dosimetry for ELIMED

    NASA Astrophysics Data System (ADS)

    Cirrone, G. A. P.; Cuttone, G.; Candiano, G.; Carpinelli, M.; Leonora, E.; Lo Presti, D.; Musumarra, A.; Pisciotta, P.; Raffaele, L.; Randazzo, N.; Romano, F.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Cirio, R.; Marchetto, F.; Sacchi, R.; Giordanengo, S.; Monaco, V.

    2013-07-01

    The definition of detectors, methods, and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures for obtaining an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to that required for clinical applications (i.e., of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, a Faraday cup, a Secondary Emission Monitor (SEM), and a transmission ionization chamber will be considered, designed, and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.

  14. Measurement of absolute displacement by a double-modulation technique based on a Michelson interferometer.

    PubMed

    Chang, L W; Chien, P Y; Lee, C T

    1999-05-01

    A novel method is presented for measuring absolute displacement with a synthesized-wavelength interferometer. The optical phase of the interferometer is simultaneously modulated by a frequency-modulated laser diode and by the optical path-length difference. The error signal originating from the intensity modulation of the source is eliminated by a signal-processing circuit. In addition, a lock-in technique is used to demodulate the envelope of the interferometric signal. The displacement signal is derived by the self-mixing technique.

  15. Silicon Absolute X-Ray Detectors

    SciTech Connect

    Seely, John F.; Korde, Raj; Sprunck, Jacob; Medjoubi, Kadda; Hustache, Stephanie

    2010-06-23

    The responsivity of silicon photodiodes having no loss in the entrance window, measured using synchrotron radiation in the 1.75 to 60 keV range, was compared to the responsivity calculated using the silicon thickness measured using near-infrared light. The measured and calculated responsivities agree with an average difference of 1.3%. This enables their use as absolute x-ray detectors.

  16. Negative absolute temperature for mobile particles

    NASA Astrophysics Data System (ADS)

    Braun, Simon; Ronzheimer, Philipp; Schreiber, Michael; Hodgman, Sean; Bloch, Immanuel; Schneider, Ulrich

    2013-05-01

    Absolute temperature is usually bound to be strictly positive. However, negative absolute temperature states, where the occupation probability of states increases with their energy, are possible in systems with an upper energy bound. So far, such states have only been demonstrated in localized spin systems with finite, discrete spectra. We realized a negative absolute temperature state for motional degrees of freedom with ultracold bosonic 39K atoms in an optical lattice, by implementing the attractive Bose-Hubbard Hamiltonian. This new state strikingly revealed itself by a quasimomentum distribution that is peaked at maximum kinetic energy. The measured kinetic energy distribution and the extracted negative temperature indicate that the ensemble is close to degeneracy, with coherence over several lattice sites. The state is as stable as a corresponding positive temperature state: The negative temperature stabilizes the system against mean-field collapse driven by negative pressure. Negative temperatures open up new parameter regimes for cold atoms, enabling fundamentally new many-body states. Additionally, they give rise to several counterintuitive effects such as heat engines with above unity efficiency.
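    The defining signature (occupation probability increasing with energy) drops straight out of the Boltzmann factor once T is negative, which is only normalizable when the spectrum is bounded above. A two-line illustration in units with kB = 1 (generic statistical mechanics, not the paper's Bose-Hubbard calculation):

```python
import math

def occupations(energies, temp):
    """Normalized Boltzmann occupations p_n ~ exp(-E_n / T), kB = 1.

    For a finite (bounded) spectrum the distribution is normalizable
    for T < 0 as well, and then occupation grows with energy.
    """
    weights = [math.exp(-e / temp) for e in energies]
    z = sum(weights)  # partition function
    return [w / z for w in weights]
```

    For the same three-level spectrum, T > 0 gives the familiar decreasing populations, while T < 0 inverts them, exactly the inverted quasimomentum distribution observed in the experiment.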

  17. Measurement of absolute gravity acceleration in Firenze

    NASA Astrophysics Data System (ADS)

    de Angelis, M.; Greco, F.; Pistorio, A.; Poli, N.; Prevedelli, M.; Saccorotti, G.; Sorrentino, F.; Tino, G. M.

    2011-01-01

    This paper reports the results of accurate measurements of the acceleration of gravity g taken at two separate premises in the Polo Scientifico of the University of Firenze (Italy). In these laboratories, two separate experiments aiming at measuring the Newtonian constant and testing the Newtonian law at short distances are in progress. Both experiments require independent knowledge of the local value of g. The only available datum, pertaining to the Italian zero-order gravity network, was taken more than 20 years ago at a distance of more than 60 km from the study site. Gravity measurements were conducted using an FG5 absolute gravimeter and accompanied by seismic recordings for evaluating the noise conditions at the site. The absolute accelerations of gravity at the two laboratories are (980 492 160.6 ± 4.0) μGal and (980 492 048.3 ± 3.0) μGal for the European Laboratory for Non-Linear Spectroscopy (LENS) and the Dipartimento di Fisica e Astronomia, respectively. Beyond the two referenced experiments, the data presented here will serve as a benchmark for any future study requiring accurate knowledge of the absolute value of the acceleration of gravity in the study region.

  18. System for absolute measurements by interferometric sensors

    NASA Astrophysics Data System (ADS)

    Norton, Douglas A.

    1993-03-01

    The most common problem of interferometric sensors is their inability to measure absolute path imbalance. Presented in this paper is a signal processing system that gives absolute, unambiguous reading of optical path difference for almost any style of interferometric sensor. Key components are a wide band (incoherent) optical source, a polychromator, and FFT electronics. Advantages include no moving parts in the signal processor, no active components at the sensor location, and the use of standard single mode fiber for sensor illumination and signal transmission. Actual absolute path imbalance of the interferometer is determined without using fringe counting or other inferential techniques. The polychromator extracts the interference information that occurs at each discrete wavelength within the spectral band of the optical source. The signal processing consists of analog and digital filtering, Fast Fourier analysis, and a peak detection and interpolation algorithm. This system was originally designed for use in a remote pressure sensing application that employed a totally passive fiber optic interferometer. A performance qualification was made using a Fabry-Perot interferometer and a commercially available laser interferometer to measure the reference displacement.
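    The polychromator-plus-FFT idea can be sketched numerically: the channelled spectrum of an imbalanced interferometer oscillates in wavenumber with a frequency equal to the path difference, so a Fourier transform over k locates the absolute OPD directly, with no fringe counting. A stdlib-only sketch using a plain DFT in place of the FFT electronics (illustrative, not the paper's processor):

```python
import cmath
import math

def opd_from_spectrum(intensities, k_min, k_max):
    """Absolute optical path difference from a channelled spectrum.

    intensities: interferogram I(k) ~ 1 + cos(k * OPD), sampled
    uniformly in wavenumber k over [k_min, k_max] (rad per unit length).
    The DFT over k peaks at a bin proportional to the OPD.
    Returns the OPD in the reciprocal units of k.
    """
    n = len(intensities)
    dk = (k_max - k_min) / (n - 1)
    mean = sum(intensities) / n
    data = [v - mean for v in intensities]     # remove the DC term
    mags = []                                  # positive-frequency DFT bins
    for m in range(1, n // 2):
        s = sum(data[j] * cmath.exp(-2j * math.pi * m * j / n)
                for j in range(n))
        mags.append(abs(s))
    m_peak = mags.index(max(mags)) + 1
    return 2.0 * math.pi * m_peak / (n * dk)
```

    Resolution is set by the source bandwidth (the span of k), so picking the peak bin gives a coarse absolute OPD; the real system refines it by interpolating around the peak, as the abstract's peak-detection-and-interpolation step describes.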

  19. Chemical composition of French mimosa absolute oil.

    PubMed

    Perriot, Rodolphe; Breme, Katharina; Meierhenrich, Uwe J; Carenini, Elise; Ferrando, Georges; Baldovini, Nicolas

    2010-02-10

    For decades, mimosa (Acacia dealbata) absolute oil has been used in the flavor and perfume industry. Today, it finds an application in over 80 perfumes, and its worldwide industrial production is estimated at five tons per year. Here we report on the chemical composition of French mimosa absolute oil. Straight-chain analogues from C6 to C26 with different functional groups (hydrocarbons, esters, aldehydes, diethyl acetals, alcohols, and ketones) were identified in the volatile fraction. Most of them are long-chain molecules: (Z)-heptadec-8-ene, heptadecane, nonadecane, and palmitic acid are the most abundant, and constituents such as 2-phenethyl alcohol, methyl anisate, and ethyl palmitate are present in smaller amounts. The heavier constituents were mainly triterpenoids such as lupenone and lupeol, which were identified as two of the main components. (Z)-Heptadec-8-ene, lupenone, and lupeol were quantified by GC-MS in SIM mode using external standards and represent 6%, 20%, and 7.8% (w/w) of the absolute oil, respectively. Moreover, odorant compounds were extracted by SPME and analyzed by GC-sniffing, leading to the perception of 57 odorant zones, of which 37 compounds were identified by their odorant description, mass spectrum, retention index, and injection of the reference compound.

  20. Error latency measurements in symbolic architectures

    NASA Technical Reports Server (NTRS)

    Young, L. T.; Iyer, R. K.

    1991-01-01

    Error latency, the time that elapses between the occurrence of an error and its detection, has a significant effect on reliability. In computer systems, failure rates can be elevated during a burst of system activity due to increased detection of latent errors. A hybrid monitoring environment is developed to measure the error latency distribution of errors occurring in main memory. The objective of this study is to develop a methodology for gauging the dependability of individual data categories within a real-time application. The hybrid monitoring technique is novel in that it selects and categorizes a specific subset of the available blocks of memory to monitor. The precise times of reads and writes are collected, so no actual faults need be injected. Unlike previous monitoring studies that rely on a periodic sampling approach or on statistical approximation, this new approach permits continuous monitoring of referencing activity and precise measurement of error latency.
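    Given the collected read/write timestamps, the latency for a hypothetical fault follows directly: the first read after the fault detects the error, while an intervening write overwrites the word and the latent error vanishes. A sketch of that bookkeeping (names are illustrative; the study's hybrid monitor also handles block selection and categorization):

```python
def error_latency(fault_time, accesses):
    """Latency of a latent memory error derived from access times.

    fault_time: when the (hypothetical) error appears in a word.
    accesses: time-ordered (time, op) pairs for that word, op in
    {'R', 'W'}.  Detection happens at the first read after the fault;
    a write before any read overwrites the bad word, so no latency is
    observed (returns None).  No actual fault injection is needed.
    """
    for t, op in accesses:
        if t <= fault_time:
            continue
        if op == 'W':
            return None
        return t - fault_time
    return None
```

    Sweeping `fault_time` over the monitored interval and collecting the non-None results yields the error latency distribution for that memory word.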

  1. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  2. Absolute flux density calibrations: Receiver saturation effects

    NASA Technical Reports Server (NTRS)

    Freiley, A. J.; Ohlson, J. E.; Seidel, B. L.

    1978-01-01

    The effect of receiver saturation was examined for a total power radiometer which uses an ambient load for calibration. Extension to other calibration schemes is indicated. The analysis shows that a monotonic receiver saturation characteristic could cause either positive or negative measurement errors, with polarity depending upon operating conditions. A realistic model of the receiver was made by using a linear-cubic voltage transfer characteristic. The evaluation of measurement error for this model provided a means for correcting radio source measurements.
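A toy numerical illustration of the linear-cubic model: calibrate a compressive detector as if it were linear using two reference loads, then observe that the inferred power is biased, with a sign that flips across the operating range. All coefficients and power levels below are invented for illustration; they are not the paper's values.

```python
def detector(p, a=1.0, b=-1.0e-6):
    """Linear-cubic voltage transfer characteristic (illustrative coefficients)."""
    return a * p + b * p**3

# Two-point "linear" calibration against a cold reference and an ambient load.
p_cold, p_amb = 10.0, 300.0              # arbitrary power units
v_cold, v_amb = detector(p_cold), detector(p_amb)
gain = (v_amb - v_cold) / (p_amb - p_cold)
offset = v_cold - gain * p_cold

def inferred_power(p_true):
    """Power inferred from the measured voltage under the linear assumption."""
    return (detector(p_true) - offset) / gain

# The saturation-induced error changes sign with operating point:
err_inside = inferred_power(150.0) - 150.0    # between calibration points: positive
err_outside = inferred_power(350.0) - 350.0   # above the ambient point: negative
```

The two calibration points themselves are recovered exactly; it is sources measured away from them that acquire a correctable bias, which is the basis for correcting radio source measurements.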

  3. The theoretical accuracy of Runge-Kutta time discretizations for the initial boundary value problem: A careful study of the boundary error

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun

    1993-01-01

The conventional method of imposing time-dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant-coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle; (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third-order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.
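The order-of-accuracy discussion can be made concrete with a standard convergence test. The sketch below measures the observed order of classical RK4 on a plain ODE, where no boundary is involved and the full fourth order is retained; the boundary-induced order reduction analyzed in the paper would show up as a smaller observed order in the corresponding IBVP experiment. Names are illustrative.

```python
import math

def rk4_solve(f, y0, t_end, n):
    """Classical fourth-order Runge-Kutta with n uniform steps."""
    h = t_end / n
    y, t = y0, 0.0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: -y                            # y' = -y, exact solution e^(-t)
exact = math.exp(-1.0)
e1 = abs(rk4_solve(f, 1.0, 1.0, 10) - exact)   # h = 0.1
e2 = abs(rk4_solve(f, 1.0, 1.0, 20) - exact)   # h = 0.05
observed_order = math.log2(e1 / e2)            # ~4 for smooth problems
```

Halving the step size should reduce the error by roughly 2^4 = 16; a ratio near 2^2 = 4 in an IBVP run would be the global second-order degradation described above.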

  4. Sensitivity of disease management decision aids to temperature input errors associated with out-of-canopy and reduced time-resolution measurements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plant disease management decision aids typically require inputs of weather elements such as air temperature. Whereas many disease models are created based on weather elements at the crop canopy, and with relatively fine time resolution, the decision aids commonly are implemented with hourly weather...

  5. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

Radar error statistics of C-band and S-band that are recommended for use with the ground-tracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors range from slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.
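The bias/noise decomposition described above can be illustrated with synthetic tracking residuals: the slowly varying (here constant) part is estimated by the sample mean, the rapidly varying part by the sample standard deviation. The numbers below are invented, and the B/q subscript convention of the report is only loosely followed.

```python
import random
import statistics

random.seed(42)

# Synthetic range residuals: a constant bias plus uncorrelated high-frequency noise.
true_bias, true_sigma = 5.0, 1.0     # e.g., meters of range error
residuals = [true_bias + random.gauss(0.0, true_sigma) for _ in range(2000)]

bias_estimate = statistics.fmean(residuals)    # slowly varying / constant part (B)
noise_estimate = statistics.stdev(residuals)   # rapidly varying part (q)
```

Real radar residuals also carry sample-to-sample correlation in the noise term, which a plain standard deviation does not capture.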

  6. Self-control of feedback during motor learning: accounting for the absolute amount of feedback using a yoked group with self-control over feedback.

    PubMed

    Hansen, Steve; Pfeiffer, Jacob; Patterson, Jae Todd

    2011-01-01

A traditional control group yoked to a group that self-controls their reception of feedback receives feedback in the same relative and absolute manner. This traditional control group typically does not learn the task as well as the self-control group. Although the groups are matched for the amount of feedback they receive, the information is provided on trials in which the individual might not have requested feedback had he or she been given the opportunity. Similarly, individuals may not receive feedback on trials for which it would be a beneficial learning experience. Consequently, the mismatch between the provision of feedback and the potential learning opportunity leads to a decrement in retention. The present study was designed to examine motor learning for a yoked group with the same absolute amount of feedback, but who could self-control when they received feedback. Increased mental processing of error detection and correction was expected for the participants in the yoked self-control group because of their choice to employ a limited resource in the form of a decreasing amount of feedback opportunities. Participants in the yoked with self-control group committed fewer errors than the self-control group in retention and the traditional yoked group in both the retention and time transfer blocks. The results suggest that the yoked with self-control group was able to produce efficient learning effects and can be a viable control group for further motor learning studies.

  7. The NASTRAN Error Correction Information System (ECIS)

    NASA Technical Reports Server (NTRS)

    Rosser, D. C., Jr.; Rogers, J. L., Jr.

    1975-01-01

    A data management procedure, called Error Correction Information System (ECIS), is described. The purpose of this system is to implement the rapid transmittal of error information between the NASTRAN Systems Management Office (NSMO) and the NASTRAN user community. The features of ECIS and its operational status are summarized. The mode of operation for ECIS is compared to the previous error correction procedures. It is shown how the user community can have access to error information much more rapidly when using ECIS. Flow charts and time tables characterize the convenience and time saving features of ECIS.

  8. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
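The compare-outputs idea can be sketched in a few lines. The workload below is an arbitrary deterministic stand-in, not the patented algorithm: on healthy hardware the two runs agree bit-for-bit, so any mismatch observed at the end of a run flags a hardware error that occurred somewhere during it.

```python
import hashlib

def stress_workload(seed: bytes, iterations: int = 100_000) -> str:
    """CPU-intensive, fully deterministic workload whose output summarizes the run."""
    digest = seed
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

reference = stress_workload(b"probe")   # e.g., recorded on known-good hardware
observed = stress_workload(b"probe")    # run again on the (heated) unit under test
hardware_error_detected = observed != reference
```

Because each iteration feeds the next, a single flipped bit anywhere in the run propagates into the final digest rather than being lost to cooling or timing.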

  9. On measuring the absolute scale of baryon acoustic oscillations

    NASA Astrophysics Data System (ADS)

    Sutherland, Will

    2012-10-01

The baryon acoustic oscillation (BAO) feature in the distribution of galaxies provides a fundamental standard ruler which is widely used to constrain cosmological parameters. In most analyses, the comoving length of the ruler is inferred from a combination of cosmic microwave background (CMB) observations and theory. However, this inferred length may be biased by various non-standard effects in early universe physics; this can lead to biased inferences of cosmological parameters such as H0, Ωm and w, so it would be valuable to measure the absolute BAO length by combining a galaxy redshift survey and a suitable direct low-z distance measurement. One obstacle is that low-redshift BAO surveys mainly constrain the ratio r_s/D_V(z), where D_V is a dilation scale which is not directly observable by standard candles. Here, we find a new approximation D_V(z) ≈ (3/4) D_L(4z/3) (1 + 4z/3)^(-1) (1 - 0.02455 z^3 + 0.0105 z^4), which connects D_V to the standard luminosity distance D_L at a somewhat higher redshift; this is shown to be very accurate (relative error < 0.2 per cent) for all Wilkinson Microwave Anisotropy Probe compatible Friedmann models at z < 0.4, with very weak dependence on cosmological parameters H0, Ωm, Ωk, w. This provides a route to measure the absolute BAO length using only observations at z ≲ 0.3, including Type Ia supernovae, and potentially future H0-free physical distance indicators such as gravitational lenses or gravitational wave standard sirens. This would provide a zero-parameter check of the standard cosmology at 10^3 ≲ z ≲ 10^5, and can constrain the number of relativistic species N_eff with fewer degeneracies than the CMB.
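The approximation is easy to check numerically for a flat ΛCDM model, using the fact that for flat models D_L(x)/(1+x) is just the comoving distance D_C(x). The cosmological parameters below are illustrative WMAP-like values, and a simple Simpson integration stands in for a proper cosmology library.

```python
import math

H0, OM = 70.0, 0.27          # km/s/Mpc; flat model, Omega_Lambda = 1 - OM
C = 299792.458               # speed of light, km/s

def E(z):
    return math.sqrt(OM * (1 + z) ** 3 + (1 - OM))

def comoving_distance(z, n=2000):
    """Simpson's rule for D_C = (c/H0) * integral_0^z dz'/E(z'). n must be even."""
    h = z / n
    s = 1.0 + 1.0 / E(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) / E(i * h)
    return (C / H0) * h * s / 3

def dv_exact(z):
    dc = comoving_distance(z)
    return (dc ** 2 * C * z / (H0 * E(z))) ** (1 / 3)

def dv_approx(z):
    # D_V(z) ~ (3/4) D_L(4z/3) (1 + 4z/3)^(-1) (1 - 0.02455 z^3 + 0.0105 z^4),
    # with D_L(x)/(1+x) replaced by D_C(x) for a flat model.
    x = 4 * z / 3
    return 0.75 * comoving_distance(x) * (1 - 0.02455 * z**3 + 0.0105 * z**4)

rel_err = dv_approx(0.2) / dv_exact(0.2) - 1   # well inside the quoted 0.2 per cent
```

Repeating the comparison over a grid of z and Ωm values reproduces the quoted sub-0.2 per cent accuracy for low redshifts.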

  10. Absolute and relative reliability of lumbar interspinous process ultrasound imaging measurements

    PubMed Central

    Tozawa, Ryosuke; Katoh, Munenori; Aramaki, Hidefumi; Kawasaki, Tsubasa; Nishikawa, Yuichi; Kumamoto, Tsuneo; Fujinawa, Osamu

    2016-01-01

    [Purpose] The intra- and inter-examiner reliabilities of lumbar interspinous process distances measured by ultrasound imaging were examined. [Subjects and Methods] The subjects were 10 males who had no history of orthopedic diseases or dysfunctions. Ten lumbar interspinous images from 360 images captured from 10 subjects were selected. The 10 images were measured by nine examiners. The lumbar interspinous process distance measurements were performed five times by each examiner. In addition, four of the nine examiners measured the distances again after 4 days for test-retest analysis. In statistical analysis, the intraclass correlation coefficient was used to investigate relative reliability, and Bland-Altman analysis was used to investigate absolute reliability. [Results] The intraclass correlation coefficients (1, 1) for intra-examiner reliability ranged from 0.985 to 0.998. For inter-rater reliability, the intraclass correlation coefficient (2, 1) was 0.969. The intraclass correlation coefficients (1, 2) for test-retest reliability ranged from 0.991 to 0.999. The Bland-Altman analysis results indicated no systematic error. [Conclusion] The results indicate that ultrasound measurements of interspinous process distance are highly reliable even when measured only once by a single person. PMID:27630399
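For reference, ICC(1,1) (one-way random effects, single measurement) can be computed directly from its ANOVA definition. This is a generic sketch, not the study's analysis code; the sample data are invented.

```python
def icc_1_1(ratings):
    """ICC(1,1): one-way random-effects ANOVA, single rater/measurement.
    `ratings` is a list of rows, one per subject, with k ratings each."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    subject_means = [sum(row) / k for row in ratings]
    ms_between = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(ratings, subject_means) for x in row
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfectly reproducible measurements (hypothetical distances, in mm) give ICC = 1.
perfect = [[10.1, 10.1], [12.3, 12.3], [15.0, 15.0]]
```

Values near the study's 0.969-0.999 range indicate that almost all variance comes from true between-subject differences rather than measurement error.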

  11. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
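Error budgets are typically rolled up by root-sum-square when contributors are assumed independent; a minimal generic sketch (the allocation names and numbers are invented, not from this paper):

```python
import math

# Hypothetical pointing-error budget, all terms in arcseconds (1-sigma).
allocations = {
    "sensor noise": 0.8,
    "structural thermal distortion": 1.2,
    "control loop residual": 0.5,
    "alignment knowledge": 0.6,
}

requirement = 2.0   # top-level allowable error, arcsec

# Independent error sources combine in quadrature (root-sum-square).
rss_total = math.sqrt(sum(v ** 2 for v in allocations.values()))
margin = requirement - rss_total    # positive margin means the budget closes
```

A model error budget applies the same flow-down, but the "components" are modeling uncertainties (mesh resolution, material properties, boundary conditions) rather than hardware terms.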

  12. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
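The 16-bit CRC for error detection can be sketched as a bitwise computation. The variant shown here is the widely used CRC-16/CCITT-FALSE (polynomial 0x1021, initial value 0xFFFF), chosen for illustration; it is an assumption, not a statement of the exact CCSDS parameterization.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial 0x1021 (x^16 + x^12 + x^5 + 1), init 0xFFFF."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# A flipped bit anywhere in the frame changes the CRC, which is how the
# receiver detects residual errors that survive the FEC decoding stage.
frame_crc = crc16_ccitt(b"123456789")
```

Appending the CRC to each frame lets the receiver re-compute it after Viterbi/RS decoding and flag any frame whose check fails.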

  13. Sensation seeking and error processing.

    PubMed

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific.

  14. Standardization of the cumulative absolute velocity. Final report

    SciTech Connect

O'Hara, T.F.; Jacobson, J.P.

    1991-12-01

EPRI NP-5930, "A Criterion for Determining Exceedance of the Operating Basis Earthquake," was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. In the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) accelerations. It is therefore necessary to standardize the method of calculating CAV to account for record length. The standardized methodology allows consistent comparison between future CAV calculations and the adjusted CAV threshold value obtained by applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method is to window the CAV calculation on a second-by-second basis for a given time history: a one-second interval contributes to CAV only if the absolute acceleration exceeds 0.025g at some time during that interval. The earthquake records used in EPRI NP-5930 have been reanalyzed on this basis, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
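The second-by-second windowing described in the report can be sketched directly (the acceleration record below is synthetic, and a rectangle-rule integral stands in for whatever quadrature a production implementation would use):

```python
def standardized_cav(accel, dt, threshold=0.025):
    """CAV summed only over one-second windows whose peak |a| exceeds `threshold`.
    `accel` is a list of accelerations in g sampled every `dt` seconds."""
    per_window = int(round(1.0 / dt))
    cav = 0.0
    for start in range(0, len(accel), per_window):
        window = accel[start:start + per_window]
        if max(abs(a) for a in window) > threshold:
            cav += sum(abs(a) for a in window) * dt   # integral of |a| over the second
    return cav

# Three-second record: quiet, strong, quiet. Only the middle second counts.
dt = 0.01
record = [0.01] * 100 + [0.10] * 100 + [0.01] * 100
cav = standardized_cav(record, dt)   # 0.10 g-sec; the quiet seconds are gated out
```

Without the gating, the two quiet seconds would add nondamaging contributions that grow with record length, which is exactly the confounding the standardization removes.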

  15. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
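The binarization and comparison steps can be sketched as follows. The onshore test below is a deliberate simplification (treating the direction as the direction the wind blows toward, with an assumed inland normal), and the agreement score is a stand-in; CEM's actual boundary-identification and image-erosion subalgorithms are more involved.

```python
import math

def binarize_onshore(direction_deg, coast_normal_deg=90.0):
    """1 if the wind has an onshore component, else 0 (offshore).
    `coast_normal_deg` is the compass direction pointing inland (assumed here),
    and `direction_deg` is taken as the direction the wind blows toward."""
    return 1 if math.cos(math.radians(direction_deg - coast_normal_deg)) > 0 else 0

def grid_agreement(forecast, observed):
    """Fraction of grid cells where binarized forecast and observation agree."""
    pairs = [(f, o) for frow, orow in zip(forecast, observed)
             for f, o in zip(frow, orow)]
    return sum(f == o for f, o in pairs) / len(pairs)

D = [[1, 1], [0, 1]]   # binarized forecast winds D(i,j;n) on a 2x2 grid
d = [[1, 0], [0, 1]]   # binarized observed winds d(i,j;n)
score = grid_agreement(D, d)
```

In the real algorithm the grids are 1.25-km cells at 5-minute intervals, and the interesting quantity is not raw agreement but the error in the sea-breeze boundary contour extracted from D and d.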

  16. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système International (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  17. Brownian motion: Absolute negative particle mobility

    NASA Astrophysics Data System (ADS)

    Ros, Alexandra; Eichhorn, Ralf; Regtmeier, Jan; Duong, Thanh Tu; Reimann, Peter; Anselmetti, Dario

    2005-08-01

Noise effects in technological applications, far from being a nuisance, can be exploited with advantage - for example, unavoidable thermal fluctuations have found application in the transport and sorting of colloidal particles and biomolecules. Here we use a microfluidic system to demonstrate a paradoxical migration mechanism in which particles always move in a direction opposite to the net acting force ('absolute negative mobility') as a result of an interplay between thermal noise, a periodic and symmetric microstructure, and a biased alternating-current electric field. This counterintuitive phenomenon could be used for bioanalytical purposes, for example in the separation and fractionation of colloids, biological molecules and cells.

  18. Arbitrary segments of absolute negative mobility

    NASA Astrophysics Data System (ADS)

    Chen, Ruyin; Nie, Linru; Chen, Chongyang; Wang, Chaojie

    2017-01-01

In previous research work, investigators have reported only one or two segments of absolute negative mobility (ANM) in a periodic potential. In fact, many segments of ANM also occur in the system considered here. We investigate transport of an inertial particle in a gating ratchet periodic potential subjected to a constant bias force. Our numerical results show that its mean velocity can decrease as the bias force increases, i.e. the ANM phenomenon. Furthermore, ANM can occur in arbitrarily many segments, even more than thirty. The intrinsic physical mechanism and the conditions for arbitrarily many ANM segments to occur are discussed in detail.

  19. Absolute quantification of myocardial blood flow.

    PubMed

    Yoshinaga, Keiichiro; Manabe, Osamu; Tamaki, Nagara

    2016-07-21

With the increasing availability of positron emission tomography (PET) myocardial perfusion imaging, the absolute quantification of myocardial blood flow (MBF) has become popular in clinical settings. Quantitative MBF provides important additional diagnostic or prognostic information over conventional visual assessment. The success of MBF quantification using PET/computed tomography (CT) has increased the demand for this quantitative diagnostic approach to be more accessible. In this regard, MBF quantification approaches have been developed using several other diagnostic imaging modalities including single-photon emission computed tomography, CT, and cardiac magnetic resonance. This review will address the clinical aspects of PET MBF quantification and the new approaches to MBF quantification.

  20. Absolute Rate Theories of Epigenetic Stability

    NASA Astrophysics Data System (ADS)

    Walczak, Aleksandra M.; Onuchic, Jose N.; Wolynes, Peter G.

    2006-03-01

    Spontaneous switching events in most characterized genetic switches are rare, resulting in extremely stable epigenetic properties. We show how simple arguments lead to theories of the rate of such events much like the absolute rate theory of chemical reactions corrected by a transmission factor. Both the probability of the rare cellular states that allow epigenetic escape, and the transmission factor, depend on the rates of DNA binding and unbinding events and on the rates of protein synthesis and degradation. Different mechanisms of escape from the stable attractors occur in the nonadiabatic, weakly adiabatic and strictly adiabatic regimes, characterized by the relative values of those input rates.