Science.gov

Sample records for absolute average error

  1. Absolute surface metrology by rotational averaging in oblique incidence interferometry.

    PubMed

    Lin, Weihao; He, Yumei; Song, Li; Luo, Hongxin; Wang, Jie

    2014-06-01

    A modified method for measuring the absolute figure of a large optical flat surface in synchrotron radiation by a small aperture interferometer is presented. The method consists of two procedures: the first step is oblique incidence measurement; the second is multiple rotating measurements. This simple method is described in terms of functions that are symmetric or antisymmetric with respect to reflections at the vertical axis. Absolute deviations of a large flat surface can be obtained when mirror-antisymmetric errors are removed by N-position rotational averaging. Formulas are derived for measuring the absolute surface errors of a rectangular flat, and experiments on high-accuracy rectangular flats are performed to verify the method. Finally, uncertainty analysis is carried out in detail. PMID:24922410

  2. Relative errors can cue absolute visuomotor mappings.

    PubMed

    van Dam, Loes C J; Ernst, Marc O

    2015-12-01

    When repeatedly switching between two visuomotor mappings, e.g. in a reaching or pointing task, adaptation tends to speed up over time. That is, when the error in the feedback corresponds to a mapping switch, fast adaptation occurs. Yet, what is learned, the relative error or the absolute mappings? When switching between mappings, errors with a size corresponding to the relative difference between the mappings will occur more often than other large errors. Thus, we could learn to correct more for errors with this familiar size (Error Learning). On the other hand, it has been shown that the human visuomotor system can store several absolute visuomotor mappings (Mapping Learning) and can use associated contextual cues to retrieve them. Thus, when contextual information is present, no error feedback is needed to switch between mappings. Using a rapid pointing task, we investigated how these two types of learning may each contribute when repeatedly switching between mappings in the absence of task-irrelevant contextual cues. After training, we examined how participants changed their behaviour when a single error probe indicated either the often-experienced error (Error Learning) or one of the previously experienced absolute mappings (Mapping Learning). Results were consistent with Mapping Learning despite the relative nature of the error information in the feedback. This shows that errors in the feedback can have a double role in visuomotor behaviour: they drive the general adaptation process by making corrections possible on subsequent movements, as well as serve as contextual cues that can signal a learned absolute mapping. PMID:26280315

  3. Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.

    ERIC Educational Resources Information Center

    Guth, David

    1990-01-01

    Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their application…
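
    As a concrete illustration (a minimal Python sketch with hypothetical sample values, not data from the article), the three statistics can be computed from a set of signed errors as follows:

    ```python
    import numpy as np

    # Signed errors (e.g., stopping distance minus target distance, in cm)
    # for repeated walks toward a target; the sample values are hypothetical.
    errors = np.array([12.0, -3.0, 8.0, 15.0, -6.0, 4.0])

    constant_error = errors.mean()            # CE: average signed bias
    variable_error = errors.std(ddof=1)       # VE: spread around one's own mean
    absolute_error = np.abs(errors).mean()    # AE: average magnitude, sign ignored

    print(f"CE = {constant_error:+.2f} cm (systematic over/undershoot)")
    print(f"VE = {variable_error:.2f} cm (inconsistency)")
    print(f"AE = {absolute_error:.2f} cm (overall inaccuracy)")
    ```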

  4. Sub-nanometer periodic nonlinearity error in absolute distance interferometers.

    PubMed

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can introduce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. To eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which removes frequency and/or polarization mixing and greatly relaxes the requirement on the laser source polarization. By combining a retro-reflector and an angle prism, the reference and measuring beams are spatially separated so that their optical paths do not overlap. The main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are thereby eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°. PMID:26026510
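
    For scale, a short simulation (my own simplified first-order model of periodic nonlinearity, not the authors' instrument model) shows what a residual periodic phase error of 0.0018° implies in displacement terms:

    ```python
    import numpy as np

    lam = 633e-9        # HeNe wavelength (m); assumed for illustration
    eps_deg = 0.0018    # residual periodic phase error (deg), from the abstract
    eps = np.deg2rad(eps_deg)

    x_true = np.linspace(0, 2e-6, 2000)           # true displacement (m)
    phi_true = 4 * np.pi * x_true / lam           # ideal interferometric phase
    phi_meas = phi_true + eps * np.sin(phi_true)  # first-order periodic nonlinearity
    x_meas = phi_meas * lam / (4 * np.pi)

    err = x_meas - x_true
    print(f"peak displacement error: {np.abs(err).max() * 1e12:.2f} pm")
    # A 0.0018 deg phase error corresponds to roughly 1.6 pm peak displacement
    # error at 633 nm, i.e. far below the sub-nanometer level discussed.
    ```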

  5. On the Error Sources in Absolute Individual Antenna Calibrations

    NASA Astrophysics Data System (ADS)

    Aerts, Wim; Baire, Quentin; Bilich, Andria; Bruyninx, Carine; Legrand, Juliette

    2013-04-01

    … field) multipath errors, both during calibration and later on at the station, absolute sub-millimeter positioning with GPS is not (yet) possible. References [1] G. Wübbena, M. Schmitz, G. Boettcher, C. Schumann, "Absolute GNSS Antenna Calibration with a Robot: Repeatability of Phase Variations, Calibration of GLONASS and Determination of Carrier-to-Noise Pattern", International GNSS Service: Analysis Center workshop, 8-12 May 2006, Darmstadt, Germany. [2] P. Zeimetz, H. Kuhlmann, "On the Accuracy of Absolute GNSS Antenna Calibration and the Conception of a New Anechoic Chamber", FIG Working Week 2008, 14-19 June 2008, Stockholm, Sweden. [3] P. Zeimetz, H. Kuhlmann, L. Wanninger, V. Frevert, S. Schön and K. Strauch, "Ringversuch 2009", 7th GNSS-Antennen-Workshop, 19-20 March 2009, Dresden, Germany.

  6. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
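
    A toy Monte Carlo experiment (my own construction, not the authors' method) illustrates how intermittent overpasses produce random sampling error in a monthly grid-box average:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_hours, n_trials, revisit = 720, 2000, 12   # one month, hourly; 12 h revisit
    rho = 0.9                                    # hour-to-hour rain correlation

    # AR(1) latent process per trial; rain rate is its positive part (mm/h)
    z = np.empty((n_trials, n_hours))
    z[:, 0] = rng.standard_normal(n_trials)
    for t in range(1, n_hours):
        z[:, t] = rho * z[:, t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(n_trials)
    rain = np.maximum(z, 0.0)

    truth = rain.mean(axis=1)                  # "true" monthly mean rain rate
    sampled = rain[:, ::revisit].mean(axis=1)  # what intermittent sampling sees
    rms = np.sqrt(np.mean((sampled - truth) ** 2))
    print(f"RMS sampling error: {rms:.3f} mm/h (mean rain {truth.mean():.3f} mm/h)")
    ```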

  7. On various definitions of shadowing with average error in tracing

    NASA Astrophysics Data System (ADS)

    Wu, Xinxing; Oprocha, Piotr; Chen, Guanrong

    2016-07-01

    When computing a trajectory of a dynamical system, the influence of noise can lead to large perturbations, which appear, however, only with small probability. When calculating approximate trajectories, it therefore makes sense to consider errors that are small on average, since controlling them in every iteration may be impossible. The demand to relate approximate trajectories to genuine orbits leads to various notions of shadowing (on average), which we consider in this paper. As the main tools in our studies we provide a few equivalent characterizations of the average shadowing property, which also partly apply to other notions of shadowing. We prove that almost specification on the whole space induces this property on the measure center, which in turn implies the average shadowing property. Finally, we study connections among sensitivity, transitivity, equicontinuity and (average) shadowing.

  8. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval R.; Wallman, Joel J.; Sanders, Barry C.

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates.
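
    The gap between average fidelity and worst-case behavior can be illustrated numerically. The sketch below (my own construction, illustrating the concept rather than the paper's bound) estimates the average gate fidelity of a coherently miscalibrated single-qubit gate by Monte Carlo over Haar-random states and compares it with the worst fidelity encountered:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Ideal gate: identity. Noisy gate: small coherent over-rotation about X.
    theta = 0.06
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    U_err = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

    def haar_state(rng):
        """Draw a Haar-random single-qubit pure state."""
        v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
        return v / np.linalg.norm(v)

    fids = np.array([abs(np.vdot(psi := haar_state(rng), U_err @ psi)) ** 2
                     for _ in range(20000)])

    print(f"average fidelity : {fids.mean():.6f}")  # the impressive headline number
    print(f"worst sampled    : {fids.min():.6f}")   # a crude proxy for worst case
    # Coherent errors are exactly the case where the error-rate metric can
    # deviate most strongly from what the average fidelity suggests.
    ```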

  9. Bounding quantum gate error rate based on reported average fidelity

    NASA Astrophysics Data System (ADS)

    Sanders, Yuval; Wallman, Joel; Sanders, Barry

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli-distance as a measure of this deviation, and we show that knowledge of the Pauli-distance enables tighter estimates of the error rate of quantum gates.

  10. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.

  11. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

    A method is proposed to modify astigmatism errors in absolute shape reconstruction of an optical plane using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by making use of the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel to each other after the translations, a tilt error exists in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to modify the astigmatism errors, a rotation measurement is added. Based on the rotation invariance of the form of the Zernike polynomial in a circular domain, the astigmatism terms are calculated by solving polynomial coefficient equations related to the rotation differential data, and subsequently the astigmatism terms including error are modified. Computer simulation proves the validity of the proposed method.

  12. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern in empirical geophysical modeling. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a…
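
    The practical difference between the two metrics is easy to demonstrate (a minimal sketch with synthetic residuals of my own choosing): squared error weights outliers far more heavily, so RMSE can look much worse than MAE on identical residuals:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic model residuals: mostly small errors plus a few large outliers
    resid = rng.normal(0, 1.0, 1000)
    resid[:10] += 15.0                     # contaminate 1% with gross errors

    mae  = np.mean(np.abs(resid))          # ABS error metric
    rmse = np.sqrt(np.mean(resid ** 2))    # SQ error metric (root for same units)

    print(f"MAE  = {mae:.2f}")
    print(f"RMSE = {rmse:.2f}")            # inflated by the outliers
    # Ranking models by SQ error can therefore disagree with ranking by ABS
    # error, which is one reason the substitution is not innocuous.
    ```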

  13. Absolute plate velocities from seismic anisotropy: Importance of correlated errors

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Gordon, Richard G.; Kreemer, Corné

    2014-09-01

    The errors in plate motion azimuths inferred from shear wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma⁻¹ (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ = 19.2°) differs insignificantly from that for continental lithosphere (σ = 21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ = 7.4°) than for continental lithosphere (σ = 14.7°). Two of the slowest-moving plates, Antarctica (vRMS = 4 mm a⁻¹, σ = 29°) and Eurasia (vRMS = 3 mm a⁻¹, σ = 33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈ 5 mm a⁻¹ to result in seismic anisotropy useful for estimating plate motion. The tendency of observed azimuths on the Arabia plate to be counterclockwise of plate motion may provide information about the direction and amplitude of superposed asthenospheric flow or about anisotropy in the lithospheric mantle.

  14. Assessing suturing skills in a self-guided learning setting: absolute symmetry error.

    PubMed

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-12-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be assessed by the trainee, is a feasible assessment tool for self-guided learning of suturing skill. Forty-eight undergraduate medical trainees independently practiced suturing and knot tying skills using a benchtop model. Performance on a pretest, posttest, retention test and a transfer test was assessed using (1) the validated final product analysis (FPA), (2) the surgical efficiency score (SES), a combination of the FPA and hand motion analysis and (3) absolute symmetry error, a new measure that assesses the symmetry of the final product. Absolute symmetry error, along with the other objective assessment tools, detected improvements in performance from pretest to posttest (P < 0.05). A battery of correlation analyses indicated that absolute symmetry error correlates moderately with the FPA and SES. The development of valid, reliable and feasible technical skill assessments is needed to ensure all training centers evaluate trainee performance in a standardized fashion. Measures that do not require the use of experts or computers have potential for widespread use. We suggest that absolute symmetry error is a useful approximation of novices' suturing and knot tying performance. Future research should evaluate whether absolute symmetry error can enhance learning when used as a source of feedback during self-guided practice. PMID:19132540

  15. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  16. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  17. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
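
    The idea of estimating discretization error from grids of different resolution can be sketched in a few lines (a 1D analogue of my own, not the article's algorithm): for a second-order scheme, solutions on grids of spacing h and h/2 yield a Richardson-type error estimate:

    ```python
    import numpy as np

    def solve_poisson(n):
        """Solve -u'' = f on (0,1), u(0)=u(1)=0, with n interior points."""
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1 - h, n)
        f = np.pi**2 * np.sin(np.pi * x)          # exact solution: sin(pi x)
        A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        return x, np.linalg.solve(A, f)

    x_c, u_c = solve_poisson(49)    # coarse grid, h = 1/50
    x_f, u_f = solve_poisson(99)    # fine grid,   h = 1/100

    u_f_at_c = u_f[1::2]            # fine-grid values at the coarse nodes
    # Second-order scheme: error(coarse) ~ (u_c - u_f) / (1 - 1/4)
    est_err = np.abs(u_c - u_f_at_c) / (1 - 0.25)
    true_err = np.abs(u_c - np.sin(np.pi * x_c))
    print(f"max estimated error: {est_err.max():.2e}")
    print(f"max true error     : {true_err.max():.2e}")
    ```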

  18. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.

  19. Iterative Method and Dithering with Averaging used for Correction of ADC Error

    NASA Astrophysics Data System (ADS)

    Kamenský, M.; Kováč, K.

    2009-01-01

    An additive iterative method in combination with averaging of dithered samples is designed for self-correction of ADC linearity error in this paper. The iterative method is one of the automated error correction techniques. Dithering is a special tool for enhancing quantizer performance. Dither theory for Gaussian noise and averaging is used to demonstrate the method's ability to improve the ADC characteristic.
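
    The dither-plus-averaging part of the idea can be reproduced in a few lines (my own sketch of subtractive Gaussian dither with averaging, not the paper's iterative correction scheme):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    delta = 0.1                           # quantization step (1 LSB)

    def adc(v):
        """Quantizer with a deliberate code-scale nonlinearity (0.3 LSB bump)."""
        return delta * np.round(v / delta) + 0.3 * delta * np.sin(2 * np.pi * v / delta)

    x = 0.123                             # constant input to be measured
    plain = adc(x)                        # single undithered conversion

    M = 10_000                            # dithered conversions to average
    d = rng.normal(0.0, delta, M)         # Gaussian dither, sigma = 1 LSB
    corrected = np.mean(adc(x + d) - d)   # subtract known dither, then average

    print(f"error without dither    : {abs(plain - x):.5f}")
    print(f"error with dither + avg : {abs(corrected - x):.5f}")
    ```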

  20. Preliminary error budget for the reflected solar instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Astrophysics Data System (ADS)

    Thome, K.; Gubbels, T.; Barnes, R.

    2011-10-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI-traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables. The instrument suite includes emitted infrared spectrometers, global navigation receivers for radio occultation, and reflected solar spectrometers. The measurements will be acquired for a period of five years and will enable follow-on missions to extend the climate record over the decades needed to understand climate change. This work describes a preliminary error budget for the RS sensor. The RS sensor will retrieve at-sensor reflectance over the spectral range from 320 to 2300 nm with 500-m GIFOV and a 100-km swath width. The current design is based on an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 nm and 600-2300 nm. Reflectance is obtained from the ratio of measurements of radiance while viewing the earth's surface to measurements of irradiance while viewing the sun. The requirement for the RS instrument is that the reflectance must be traceable to SI standards at an absolute uncertainty <0.3%. The calibration approach to achieve the ambitious 0.3% absolute calibration uncertainty is predicated on a reliance on heritage hardware, reduction of sensor complexity, and adherence to detector-based calibration standards. The design above has been used to develop a preliminary error budget that meets the 0.3% absolute requirement. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior.
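
    Error budgets of this kind are typically combined in root-sum-square fashion; the sketch below shows the bookkeeping (the component names and values are hypothetical placeholders, not CLARREO's actual budget):

    ```python
    import numpy as np

    # Hypothetical reflectance-uncertainty components, in percent (k=1)
    budget = {
        "solar/earth view geometry": 0.10,
        "attenuator knowledge":      0.15,
        "detector linearity":        0.10,
        "noise / repeatability":     0.08,
        "polarization sensitivity":  0.12,
    }

    total = np.sqrt(sum(u**2 for u in budget.values()))
    for name, u in budget.items():
        print(f"{name:28s} {u:5.2f} %")
    print(f"{'RSS total':28s} {total:5.2f} %  (requirement: < 0.3 %)")
    ```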

  1. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...

  2. Minimum mean absolute error estimation over the class of generalized stack filters

    NASA Astrophysics Data System (ADS)

    Lin, Jean-Hsang; Coyle, Edward J.

    1990-04-01

    A class of sliding window operators called generalized stack filters is developed. This class of filters, which includes all rank order filters, stack filters, and digital morphological filters, is the set of all filters possessing the threshold decomposition architecture and a consistency property called the stacking property. Conditions under which these filters possess the weak superposition property known as threshold decomposition are determined. An algorithm is provided for determining a generalized stack filter which minimizes the mean absolute error (MAE) between the output of the filter and a desired input signal, given noisy observations of that signal. The algorithm is a linear program whose complexity depends on the window width of the filter and the number of threshold levels observed by each of the filters in the superposition architecture. The results show that choosing the generalized stack filter which minimizes the MAE is equivalent to massively parallel threshold-crossing decision making when the decisions are consistent with each other.
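
    The threshold decomposition architecture these filters share is easy to verify numerically. The sketch below (my own illustration using the median filter, the best-known stack filter) decomposes an integer signal into binary threshold signals, filters each level with a binary majority vote, and stacks the results:

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    rng = np.random.default_rng(3)
    x = rng.integers(0, 8, size=50)          # integer signal, levels 0..7
    w = 5                                    # window width (odd)

    # Direct median filtering (valid region only, no padding)
    direct = np.median(sliding_window_view(x, w), axis=1).astype(int)

    # Threshold decomposition: binary signal per level, majority vote per window
    stacked = np.zeros_like(direct)
    for k in range(1, 8):
        b = (x >= k).astype(int)                         # threshold at level k
        votes = sliding_window_view(b, w).sum(axis=1)    # ones in each window
        stacked += (votes >= (w + 1) // 2).astype(int)   # binary median = majority

    print(np.array_equal(direct, stacked))   # True: the binary outputs "stack"
    ```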

  3. Effective connectivity associated with auditory error detection in musicians with absolute pitch

    PubMed Central

    Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.

    2014-01-01

    It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left to right STG connections is important in the identification of self-voice error and sensory motor integration in AP musicians. We also identify reduced connectivity of left hemisphere PM to STG connections in AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings here also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere. PMID:24634644

  4. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek…
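
    The winner-takes-all behavior described here is a generic property of information-criterion weights; a small sketch (with hypothetical criterion values of my own choosing) shows how quickly the exponential weighting concentrates on the best model:

    ```python
    import numpy as np

    # Hypothetical information-criterion values for four alternative models
    aic = np.array([100.0, 104.0, 110.0, 120.0])

    delta = aic - aic.min()          # differences from the best model
    w = np.exp(-0.5 * delta)
    w /= w.sum()                     # model-averaging weights

    for i, wi in enumerate(w):
        print(f"model {i + 1}: weight = {wi:.4f}")
    # Even modest criterion differences concentrate almost all weight on the
    # best model; enlarging the error covariance (total error rather than
    # measurement error alone) shrinks the differences and spreads the weights.
    ```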

  5. Spatial averaging errors in creating hemispherical reflectance (albedo) maps from directional reflectance data

    SciTech Connect

    Kimes, D.S.; Kerber, A.G.; Sellers, P.J.

    1993-06-01

    The problems in moving from a radiance measurement made for a particular sun-target-sensor geometry to an accurate estimate of the hemispherical reflectance are considerable. A knowledge-based system called VEG was used in this study to infer hemispherical reflectance. Given directional reflectance(s) and the sun angle, VEG selects the most suitable inference technique(s) and estimates the surface hemispherical reflectance with an estimate of the error. Ideally, VEG is applied to homogeneous vegetation. However, what is typically done in GCM (global circulation model) models and related studies is to obtain an average hemispherical reflectance on a square grid cell on the order of 200 km x 200 km. All available directional data for a given cell are averaged (for each view direction), and then a particular technique for inferring hemispherical reflectance is applied to this averaged data. Any given grid cell can contain several surface types that directionally scatter radiation very differently. When averaging over a set of view angles, the resulting mean values may be atypical of the actual surface types that occur on the ground, and the resulting inferred hemispherical reflectance can be in error. These errors were explored by creating a simulated scene and applying VEG to estimate the area-averaged hemispherical reflectance using various sampling procedures. The reduction in the hemispherical reflectance errors provided by using VEG ranged from a factor of 2-4, depending on conditions. This improvement represents a shift from the calculation of a hemispherical reflectance product of relative value (errors of 20% or more), to a product that could be used quantitatively in global modeling applications, where the requirement is for errors to be limited to around 5-10 %.

  6. Preliminary Error Budget for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; Gubbels, Timothy; Barnes, Robert

    2011-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to those made while viewing the sun, relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration, including a link to absolute solar irradiance measurements. The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those in the IPCC Report. A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI-traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables, including: 1) Surface temperature and atmospheric temperature profile 2) Atmospheric water vapor profile 3) Far infrared water vapor greenhouse 4) Aerosol properties and anthropogenic aerosol direct radiative forcing 5) Total and spectral solar irradiance…

  7. A diagnostic study of time variations of regionally averaged background error covariances

    NASA Astrophysics Data System (ADS)

    Monteiro, Maria; Berre, Loïk

    2010-12-01

    In variational data assimilation systems, background error covariances are often estimated from a temporal and spatial average. For a limited area model such as the Aire Limitée Adaptation dynamique Développement InterNational (ALADIN)/France model, the spatial average is calculated over the regional computation domain, which covers western Europe. The purpose of this study is to revise the temporal stationarity assumption by diagnosing time variations of such regionally averaged covariances. This is done through examination of covariance changes as a function of season (winter versus summer), day (in connection with the synoptic situation), and hour (related to the diurnal cycle), with the ALADIN/France regional ensemble Three-Dimensional Variational analysis (3D-Var) system. In summer, compared to winter, average error variances are larger, and spatial correlation functions are sharper horizontally but broader vertically. Daily changes in covariances are particularly strong during the winter period, with larger variances and smaller-scale error structures when an unstable low-pressure system is present in the regional domain. Diurnal variations are also significant in the boundary layer in particular, and, as expected, they tend to be more pronounced in summer. Moreover, the comparison between estimates provided by two independent ensembles indicates that these covariance time variations are estimated in a robust way from a six-member ensemble. All these results support the idea of representing these time variations by using a real-time ensemble assimilation system.

  8. Error budget for a calibration demonstration system for the reflected solar instrument for the climate absolute radiance and refractivity observatory

    NASA Astrophysics Data System (ADS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-09-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy to allow climate change observations to survive data gaps exist at NIST in the laboratory, but still need demonstration that the advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  9. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy to allow climate change observations to survive data gaps exist at NIST in the laboratory, but still need demonstration that the advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  10. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  11. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    SciTech Connect

    Gustafson, William I.; Yu, Shaocai

    2012-10-23

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations of these metrics are only valid for datasets with positive means. This paper presents a methodology to use and interpret the metrics with datasets that have negative means. The updated formulations give identical results compared to the original formulations for the case of positive means, so researchers are encouraged to use the updated formulations going forward without introducing ambiguity.
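
    For reference, a sketch of the original positive-mean formulations as I understand them from the earlier literature (the generalized forms for negative means are given in the paper itself):

    ```python
    import numpy as np

    def nmbf(model, obs):
        """Normalized mean bias factor (original form; assumes positive means)."""
        mbar, obar = np.mean(model), np.mean(obs)
        return mbar / obar - 1.0 if mbar >= obar else 1.0 - obar / mbar

    def nmaef(model, obs):
        """Normalized mean absolute error factor (original positive-mean form)."""
        mbar, obar = np.mean(model), np.mean(obs)
        denom = np.sum(obs) if mbar >= obar else np.sum(model)
        return np.sum(np.abs(np.asarray(model) - np.asarray(obs))) / denom

    obs   = np.array([2.0, 3.0, 5.0, 4.0])
    model = np.array([2.5, 2.0, 6.5, 5.0])
    print(f"NMBF  = {nmbf(model, obs):+.3f}")   # symmetric for over/underestimation
    print(f"NMAEF = {nmaef(model, obs):.3f}")
    ```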

  12. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1979-01-01

    The behavior of the power measurement error due to the frequency responses of the first-order transfer functions between the input sinusoidal voltage, the input sinusoidal current, and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the two first-order transfer functions are identical.

  13. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  14. Average BER performance of FSO SIM-QAM systems in the presence of atmospheric turbulence and pointing errors

    NASA Astrophysics Data System (ADS)

    Djordjevic, Goran T.; Petkovic, Milica I.

    2016-04-01

    This paper presents the exact average bit error rate (BER) analysis of the free-space optical system employing subcarrier intensity modulation (SIM) with Gray-coded quadrature amplitude modulation (QAM). The intensity fluctuations of the received optical signal are caused by the path loss, atmospheric turbulence and pointing errors. The exact closed-form analytical expressions for the average BER are derived assuming the SIM-QAM with arbitrary constellation size in the presence of the Gamma-Gamma scintillation. The simple approximate average BER expressions are also provided, considering only the dominant term in the finite summations of obtained expressions. Derived expressions are reduced to the special case when optical signal transmission is affected only by the atmospheric turbulence. Numerical results are presented in order to illustrate usefulness of the derived expressions and also to give insights into the effects of different modulation, channel and receiver parameters on the average BER performance. The results show that the misalignment between the transmitter laser and receiver detector has the strong effect on the average BER value, especially in the range of the high values of the average electrical signal-to-noise ratio.
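
    The averaging at the heart of such an analysis can be reproduced numerically. The sketch below is my own simplification: Gamma-Gamma turbulence only (no path-loss or pointing-error terms) and the standard approximate BER expression for Gray-coded square M-QAM:

    ```python
    import numpy as np
    from scipy.special import kv, gamma, erfc
    from scipy.integrate import trapezoid

    def gg_pdf(I, a, b):
        """Gamma-Gamma irradiance pdf (unit mean irradiance)."""
        return (2 * (a * b) ** ((a + b) / 2) / (gamma(a) * gamma(b))
                * I ** ((a + b) / 2 - 1) * kv(a - b, 2 * np.sqrt(a * b * I)))

    def qam_ber(snr, M):
        """Approximate BER of Gray-coded square M-QAM in AWGN."""
        return (4 / np.log2(M)) * (1 - 1 / np.sqrt(M)) * 0.5 \
            * erfc(np.sqrt(1.5 * snr / (M - 1)))

    a, b, M = 4.0, 1.9, 16              # turbulence parameters, constellation size
    snr0 = 10 ** (25 / 10)              # 25 dB average electrical SNR

    I = np.linspace(1e-4, 10.0, 20000)  # normalized irradiance grid
    pdf = gg_pdf(I, a, b)
    # Electrical SNR scales as I^2 for intensity modulation / direct detection
    avg_ber = trapezoid(qam_ber(snr0 * I ** 2, M) * pdf, I)
    print(f"average BER ~ {avg_ber:.2e}")
    ```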

  15. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    PubMed Central

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  16. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
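
    Of the correction schemes, the polynomial ones amount to fitting a smooth surface to the phase in static tissue and subtracting it. A minimal 2D sketch of that idea (toy data of my own, first-order polynomial; the study's LPC/WBPC details differ):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    ny, nx = 64, 64
    y, x = np.mgrid[0:ny, 0:nx]

    # Synthetic eddy-current phase offset: a slowly varying plane, plus noise
    true_bias = 0.002 * x - 0.001 * y + 0.05
    phase = true_bias + rng.normal(0, 0.01, (ny, nx))

    # Mask of static tissue (here simply all pixels, for illustration)
    static = np.ones((ny, nx), dtype=bool)

    # Least-squares fit of a first-order polynomial a + b*x + c*y on the mask
    A = np.column_stack([np.ones(static.sum()), x[static], y[static]])
    coef, *_ = np.linalg.lstsq(A, phase[static], rcond=None)
    fit = coef[0] + coef[1] * x + coef[2] * y

    corrected = phase - fit
    print(f"mean offset before: {abs(phase.mean()):.4f}  after: {abs(corrected.mean()):.4f}")
    ```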

  17. Outage Performance and Average Symbol Error Rate of M-QAM for Maximum Ratio Combining with Multiple Interferers

    NASA Astrophysics Data System (ADS)

    Ahn, Kyung Seung

    In this paper, we investigate the performance of maximum ratio combining (MRC) in the presence of multiple cochannel interferers over a flat Rayleigh fading channel. Closed-form expressions for the signal-to-interference-plus-noise ratio (SINR), outage probability, and average symbol error rate (SER) of quadrature amplitude modulation (QAM) with M-ary signaling are obtained for unequal-power interference-to-noise ratios (INRs). We also provide an upper bound for the average SER using the moment generating function (MGF) of the SINR. Moreover, we quantify the array gain loss between pure MRC (an MRC system in the absence of cochannel interference, CCI) and an MRC system in the presence of CCI. Finally, we verify our analytical results by numerical simulations.
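
    The SINR statistics underlying such closed-form results are straightforward to check by simulation. A Monte Carlo sketch (my own, with MRC weights matched to the desired channel and interference treated as noise):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    L, K = 4, 2                          # receive branches, cochannel interferers
    n = 200_000                          # Monte Carlo realizations
    snr = 10 ** (10 / 10)                # desired-signal SNR per branch (10 dB)
    inr = np.array([10 ** (3 / 10), 10 ** (6 / 10)])   # unequal-power INRs
    gamma_th = 10 ** (5 / 10)            # SINR threshold for outage (5 dB)

    # Flat Rayleigh fading: desired channel h and interferer channels g
    h = (rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))) / np.sqrt(2)
    g = (rng.standard_normal((n, K, L)) + 1j * rng.standard_normal((n, K, L))) / np.sqrt(2)

    # MRC weights are matched to the desired channel: w = h
    sig = snr * np.sum(np.abs(h) ** 2, axis=1) ** 2             # SNR * |h^H h|^2
    intf = np.sum(inr * np.abs(np.einsum('nl,nkl->nk', h.conj(), g)) ** 2, axis=1)
    noise = np.sum(np.abs(h) ** 2, axis=1)                      # ||h||^2 (N0 = 1)
    sinr = sig / (intf + noise)

    print(f"outage probability ~ {np.mean(sinr < gamma_th):.4f}")
    ```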

  18. Relationships Between Selected Oral Reading Errors and Levels of Reading Comprehension for Good, Average, and Poor Readers.

    ERIC Educational Resources Information Center

    Packman, Linda Arlene

    Some oral reading errors were found to be more significant than others in evaluating a pupil's performance in reading at six comprehension levels. The percentage of seven kinds of errors (pronunciation, mispronunciation, omission, substitution, addition, repetition, and punctuation) was computed and related to the levels of reading comprehension for good,…

  19. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  20. Average capacity of ground-to-train wireless optical communication links in the non-Kolmogorov and gamma-gamma distribution turbulence with pointing errors

    NASA Astrophysics Data System (ADS)

    Gao, Jie; Zhang, Yixin; Cheng, Mingjian; Zhu, Yun; Hu, Zhengda

    2016-01-01

    A model of the average capacity of the ground-to-train wireless optical communication (WOC) link is established using the gamma-gamma distribution for moderate-to-strong scintillation regimes. Our numerical results indicate that the average channel capacity increases with increasing refractive-index structure parameter and turbulence spectral index. For link operating distances larger than 100 m, the influence of changes in the normalized beamwidth on the average channel capacity can be ignored. The higher the average SNR, the higher the equivalent average channel capacity. Pointing errors between the transmitter laser and receiver detector are the dominant factor decreasing the average capacity of the links.

  1. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required…

  2. Average bit error rate performance analysis of subcarrier intensity modulated MRC and EGC FSO systems with dual branches over M distribution turbulence channels

    NASA Astrophysics Data System (ADS)

    Wang, Ran-ran; Wang, Ping; Cao, Tian; Guo, Li-xin; Yang, Yintang

    2015-07-01

    Based on space diversity reception, the binary phase-shift keying (BPSK) modulated free space optical (FSO) system over Málaga (M) fading channels is investigated in detail. For independently and identically distributed and independently and non-identically distributed dual branches, the analytical average bit error rate (ABER) expressions in terms of the Fox H-function for maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques are derived, respectively, by transforming the modified Bessel function of the second kind into the integral form of the Meijer G-function. Monte Carlo (MC) simulation is also provided to verify the accuracy of the presented models.

  3. Combined Use of Absolute and Differential Seismic Arrival Time Data to Improve Absolute Event Location

    NASA Astrophysics Data System (ADS)

    Myers, S.; Johannesson, G.

    2012-12-01

    Arrival time measurements based on waveform cross correlation are becoming more common as advanced signal processing methods are applied to seismic data archives and real-time data streams. Waveform correlation can precisely measure the time difference between the arrivals of two phases, and differential time data can be used to constrain the relative locations of events. Absolute locations are needed for many applications, which generally requires the use of absolute time data. Current methods for measuring absolute time data are approximately two orders of magnitude less precise than differential time measurements. To exploit the strengths of both absolute and differential time data, we extend our multiple-event location method Bayesloc, which previously used absolute time data only, to include differential time measurements that are based on waveform cross correlation. Fundamentally, Bayesloc is a formulation of the joint probability over all parameters comprising the multiple-event location system. The Markov chain Monte Carlo method is used to sample from the joint probability distribution given the arrival data sets. The differential time component of Bayesloc includes scaling a stochastic estimate of differential time measurement precision based on the waveform correlation coefficient for each datum. For a regional-distance synthetic data set with absolute and differential time measurement errors of 0.25 seconds and 0.01 second, respectively, epicenter location accuracy is improved from an average of 1.05 km when solely absolute time data are used to 0.28 km when absolute and differential time data are used jointly (a 73% improvement). The improvement in absolute location accuracy is the result of conditionally limiting absolute location probability regions based on the precise relative position with respect to neighboring events. Bayesloc estimates of data precision are found to be accurate for the synthetic test, with absolute and differential time measurement
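
    The intuition behind combining the two data types can be shown with a toy generalized least squares problem (a 1-D illustration only, not Bayesloc's MCMC formulation; all numbers are invented): the precise differential datum pins the separation between events, while the noisier absolute data anchor the whole group.

        import numpy as np

        rng = np.random.default_rng(0)
        x_true = np.array([3.0, 4.0])             # true 1-D event positions
        sigma_a, sigma_d = 0.25, 0.01             # absolute vs differential precision

        y = np.array([x_true[0] + rng.normal(0, sigma_a),
                      x_true[1] + rng.normal(0, sigma_a),
                      x_true[1] - x_true[0] + rng.normal(0, sigma_d)])
        A = np.array([[1.0, 0.0],                 # absolute datum, event 1
                      [0.0, 1.0],                 # absolute datum, event 2
                      [-1.0, 1.0]])               # differential datum, event 2 - event 1
        W = np.diag(1.0 / np.array([sigma_a, sigma_a, sigma_d]) ** 2)  # inverse variances

        x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        print(x_hat)   # the separation is pinned by the precise differential datum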

  4. Absolute Zero

    NASA Astrophysics Data System (ADS)

    Donnelly, Russell J.; Sheibley, D.; Belloni, M.; Stamper-Kurn, D.; Vinen, W. F.

    2006-12-01

    Absolute Zero is a two-hour PBS special attempting to bring to the general public some of the advances made in 400 years of thermodynamics. It is based on the book “Absolute Zero and the Conquest of Cold” by Tom Shachtman. Absolute Zero will call long-overdue attention to the remarkable strides that have been made in low-temperature physics, a field that has produced 27 Nobel Prizes. It will explore the ongoing interplay between science and technology through historical examples including refrigerators, ice machines, frozen foods, liquid oxygen and nitrogen as well as much colder fluids such as liquid hydrogen and liquid helium. A website has been established to promote the series: www.absolutezerocampaign.org. It contains information on the series, aimed primarily at students at the middle school level. There is a wealth of material here and we hope interested teachers will draw their students’ attention to this website and its substantial contents, which have been carefully vetted for accuracy.

  5. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two-postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  6. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
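
    The two-measurement subtraction at the heart of the patent can be mimicked numerically. The sketch below is an idealized toy (synthetic phase maps, no alignment or retrace errors) showing how the auxiliary-optic error cancels:

        import numpy as np

        # Two phase maps from the PSDI procedure (synthetic stand-ins):
        # measurement 1 = flat under test + auxiliary optic; measurement 2 = auxiliary only.
        ny, nx = 128, 128
        yy, xx = np.mgrid[0:ny, 0:nx] / 64.0 - 1.0
        aux_error = 0.05 * (xx**2 + yy**2)          # auxiliary-optic aberration (waves)
        flat_error = 0.01 * np.sin(3 * np.pi * xx)  # true figure error of the flat (waves)

        phase_with_flat = flat_error + aux_error    # first measurement, flat in place
        phase_aux_only = aux_error                  # second measurement, flat removed

        recovered = phase_with_flat - phase_aux_only
        print(np.allclose(recovered, flat_error))   # True: auxiliary error cancels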

  7. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    ERIC Educational Resources Information Center

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.

    2013-01-01

    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  8. FOREGROUND MODEL AND ANTENNA CALIBRATION ERRORS IN THE MEASUREMENT OF THE SKY-AVERAGED λ21 cm SIGNAL AT z∼ 20

    SciTech Connect

    Bernardi, G.; McQuinn, M.; Greenhill, L. J.

    2015-01-20

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.

  9. Foreground Model and Antenna Calibration Errors in the Measurement of the Sky-averaged λ21 cm Signal at z~ 20

    NASA Astrophysics Data System (ADS)

    Bernardi, G.; McQuinn, M.; Greenhill, L. J.

    2015-01-01

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ~ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ~fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.
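
    A minimal sketch of the kind of foreground subtraction described (illustrative only; the paper's foreground models are more physical): a pure power law is exactly absorbed by a low-order polynomial fit in log-frequency space, so the residual is limited by departures from smoothness rather than by the foreground amplitude.

        import numpy as np

        nu = np.linspace(40e6, 120e6, 200)               # Hz, illustrative band
        log_nu = np.log(nu / 80e6)
        t_fg = 300.0 * (nu / 80e6) ** -2.5               # power-law foreground (K)

        coeffs = np.polyfit(log_nu, np.log(t_fg), 5)     # 5th-order fit in log(nu)
        residual = t_fg - np.exp(np.polyval(coeffs, log_nu))
        print(np.max(np.abs(residual)))                  # ~0: smooth foreground absorbed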

  10. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  11. Eosinophil count - absolute

    MedlinePlus

    Eosinophils; Absolute eosinophil count ... the white blood cell count to give the absolute eosinophil count. ... than 500 cells per microliter (cells/mcL). Normal value ranges may vary slightly among different laboratories. Talk ...

  12. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, other work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
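
    For the scalar-weighted case, the optimal average is the dominant eigenvector of the weighted sum of quaternion outer products, which also resolves the q/-q sign ambiguity. A minimal sketch of that construction (quaternions assumed unit-norm, scalar weights):

        import numpy as np

        def average_quaternion(quats, weights=None):
            """Weighted quaternion average: the eigenvector of
            M = sum_i w_i q_i q_i^T with the largest eigenvalue."""
            q = np.asarray(quats, dtype=float)           # shape (n, 4), unit quaternions
            w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
            m = (w[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)
            vals, vecs = np.linalg.eigh(m)               # m is symmetric: eigh applies
            return vecs[:, np.argmax(vals)]              # unit eigenvector, max eigenvalue

        qs = [[1, 0, 0, 0], [0.9998, 0.02, 0, 0], [-1, 0, 0, 0]]  # note the flipped sign
        print(average_quaternion(qs))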

  13. Improving HST Pointing & Absolute Astrometry

    NASA Astrophysics Data System (ADS)

    Lallo, Matthew; Nelan, E.; Kimmer, E.; Cox, C.; Casertano, S.

    2007-05-01

    Accurate absolute astrometry is becoming increasingly important in an era of multi-mission archives and virtual observatories. Hubble Space Telescope's (HST's) Guidestar Catalog II (GSC2) has reduced coordinate error to around 0.25 arcsecond, an improvement of a factor of 2 or more over GSC1. With this reduced catalog error, special attention must be given to calibrating and maintaining the alignments of the Fine Guidance Sensors (FGSs) and Science Instruments (SIs) in HST to a level well below this in order to ensure that the accuracy of science products' astrometry keywords and target positioning is limited only by the catalog errors. After HST Servicing Mission 4, such calibrations' improvement in "blind" pointing accuracy will allow for more efficient COS acquisitions. Multiple SIs and FGSs each have their own footprints in the spatially shared HST focal plane. It is the small changes over time in primarily the whole-body positions and orientations of these instruments and guiders relative to one another that is addressed by this work. We describe the HST Cycle 15 program CAL/OTA 11021 which, along with future variants of it, determines and maintains positions and orientations of the SIs and FGSs to better than 50 milliarcseconds and 0.04 to 0.004 degrees of roll, putting errors associated with the alignment sufficiently below GSC2 errors. We present recent alignment results and assess their errors, illustrate trends, and describe where and how the observer sees benefit from these calibrations when using HST.

  14. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  15. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  16. Covariant approximation averaging

    NASA Astrophysics Data System (ADS)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2 + 1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
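
    The structure of the estimator can be seen in a toy statistical model (this is not lattice QCD; the noise model and numbers are invented). The cheap approximation is evaluated at many symmetry-translated source points, and its bias is removed by a single exact solve, so the result stays unbiased while the statistical error shrinks:

        import numpy as np

        rng = np.random.default_rng(0)
        n_cfg, n_src, truth, bias = 200, 32, 1.0, 0.07
        cfg_noise = 0.2 * rng.normal(size=(n_cfg, 1))        # per-configuration noise
        src_noise = 0.5 * rng.normal(size=(n_cfg, n_src))    # per-source-point noise
        signal = truth + cfg_noise + src_noise               # observable, all sources
        exact = signal[:, 0]                                 # exact solve, one source only
        approx = signal + bias                               # cheap biased solver, all sources

        ama = approx.mean(axis=1) + (exact - approx[:, 0])   # bias-corrected AMA estimator
        print("exact only: %.4f +/- %.4f" % (exact.mean(), exact.std() / np.sqrt(n_cfg)))
        print("AMA:        %.4f +/- %.4f" % (ama.mean(), ama.std() / np.sqrt(n_cfg)))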

  17. Absolute biological needs.

    PubMed

    McLeod, Stephen

    2014-07-01

    Absolute needs (as against instrumental needs) are independent of the ends, goals and purposes of personal agents. Against the view that the only needs are instrumental needs, David Wiggins and Garrett Thomson have defended absolute needs on the grounds that the verb 'need' has instrumental and absolute senses. While remaining neutral about it, this article does not adopt that approach. Instead, it suggests that there are absolute biological needs. The absolute nature of these needs is defended by appeal to: their objectivity (as against mind-dependence); the universality of the phenomenon of needing across the plant and animal kingdoms; the impossibility that biological needs depend wholly upon the exercise of the abilities characteristic of personal agency; the contention that the possession of biological needs is prior to the possession of the abilities characteristic of personal agency. Finally, three philosophical usages of 'normative' are distinguished. On two of these, to describe a phenomenon or claim as 'normative' is to describe it as value-dependent. A description of a phenomenon or claim as 'normative' in the third sense does not entail such value-dependency, though it leaves open the possibility that value depends upon the phenomenon or upon the truth of the claim. It is argued that while survival needs (or claims about them) may well be normative in this third sense, they are normative in neither of the first two. Thus, the idea of absolute need is not inherently normative in either of the first two senses. PMID:23586876

  18. The absolute path command

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  19. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  20. A binary spelling interface with random errors.

    PubMed

    Perelmouter, J; Birbaumer, N

    2000-06-01

    An algorithm for the design of a spelling interface based on a modified Huffman algorithm is presented. This algorithm builds a full binary tree that maximizes the average probability of reaching the leaf where a required character is located when the choice at each node may be made with errors. A means to correct errors (a delete function) and an optimization method for building this delete function into the binary tree are also discussed. Such a spelling interface could be successfully applied to any menu-oriented alternative communication system when a user (typically, a patient with a devastating neuromuscular handicap) is not able to express an intended single binary response, either through motor responses or by using brain-computer interfaces, with absolute reliability. PMID:10896195
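
    The classical Huffman construction that the paper modifies is sketched below (the error-tolerant modification and the delete function are not reproduced here; the character probabilities are illustrative):

        import heapq, itertools

        def huffman_tree(freqs):
            """Classical Huffman tree: repeatedly merge the two least probable
            nodes. freqs maps character -> probability; returns nested tuples."""
            counter = itertools.count()          # tie-breaker so heap tuples compare
            heap = [(p, next(counter), ch) for ch, p in freqs.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                p1, _, left = heapq.heappop(heap)
                p2, _, right = heapq.heappop(heap)
                heapq.heappush(heap, (p1 + p2, next(counter), (left, right)))
            return heap[0][2]

        print(huffman_tree({"e": 0.5, "t": 0.3, "a": 0.2}))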

  1. Electronic Absolute Cartesian Autocollimator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2006-01-01

    An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. The direction of the displacement is perpendicular to the axis about which the

  2. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

    We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.

  3. ABSOLUTE POLARIMETRY AT RHIC.

    SciTech Connect

    OKADA; BRAVAR, A.; BUNCE, G.; GILL, R.; HUANG, H.; MAKDISI, Y.; NASS, A.; WOOD, J.; ZELENSKI, Z.; ET AL.

    2007-09-10

    Precise and absolute beam polarization measurements are critical for the RHIC spin physics program. Because all experimental spin-dependent results are normalized by beam polarization, the normalization uncertainty contributes directly to final physics uncertainties. We aimed to perform the beam polarization measurement to an accuracy of ΔP_beam/P_beam < 5%. The absolute polarimeter consists of a polarized atomic hydrogen gas-jet target and left-right pairs of silicon strip detectors and was installed in the RHIC ring in 2004. This system features proton-proton elastic scattering in the Coulomb-nuclear interference (CNI) region. Precise measurements of the analyzing power A_N of this process have allowed us to achieve ΔP_beam/P_beam = 4.2% in 2005 for the first long spin-physics run. In this report, we describe the entire setup and performance of the system. The procedure of beam polarization measurement and analysis results from 2004-2005 are described. Physics topics of A_N in the CNI region (four-momentum transfer squared 0.001 < -t < 0.032 (GeV/c)²) are also discussed. We point out the current issues and the expected optimum accuracy in 2006 and the future.

  4. Developing control charts to review and monitor medication errors.

    PubMed

    Ciminera, J L; Lease, M P

    1992-03-01

    There is a need to monitor reported medication errors in a hospital setting. Because the number of reported errors varies with external reporting practices, quantifying the data is extremely difficult. Typically, these errors are reviewed using classification systems that often show wide variation in the numbers per class per month. The authors recommend the use of control charts to review historical data and to monitor future data. The procedure they adopted is a modification of schemes that use absolute (i.e., positive) values of successive differences to estimate the standard deviation when only single incidence values, rather than sample averages, are available in time, and when many successive differences may be zero. PMID:10116719
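
    The underlying scheme being modified is the standard individuals chart, where the process sigma is estimated from the average absolute successive difference (the moving range). A minimal sketch with made-up monthly error counts:

        import numpy as np

        def individuals_chart_limits(counts):
            """Control limits for an individuals (X) chart: sigma is estimated
            from the mean absolute successive difference divided by d2 = 1.128
            (the moving-range constant for subgroups of size 2). Zero differences,
            common with sparse error counts, simply pull the estimate down."""
            x = np.asarray(counts, dtype=float)
            mr = np.abs(np.diff(x))              # absolute successive differences
            sigma_hat = mr.mean() / 1.128        # moving-range sigma estimate
            center = x.mean()
            return center - 3 * sigma_hat, center, center + 3 * sigma_hat

        monthly_errors = [4, 2, 5, 3, 0, 4, 6, 2, 3, 1, 4, 12]   # illustrative counts
        print(individuals_chart_limits(monthly_errors))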

  5. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  6. Absolute magnitudes and slope parameters of Pan-STARRS PS1 asteroids --- preliminary results

    NASA Astrophysics Data System (ADS)

    Vereš, P.; Jedicke, R.; Fitzsimmons, A.; Denneau, L.; Bolin, B.; Wainscoat, R.; Tonry, J.

    2014-07-01

    We present a study of the absolute magnitude (H) and slope parameter (G) of 170,000 asteroids observed by the Pan-STARRS1 telescope during a period of 15 months within its 3-year all-sky survey mission. The exquisite photometry, with photometric errors below 0.04 mag, and the well-defined filter and photometric system allowed us to derive H and G with statistical and systematic error estimates. Our new approach lies in a Monte Carlo technique simulating rotation periods, amplitudes, and colors, and deriving the most likely H and G and their systematic errors. Comparison of H_M from Muinonen's phase function (Muinonen et al., 2010) with the Minor Planet Center database revealed a negative offset of 0.22±0.29, meaning that Pan-STARRS1 asteroids are fainter. We showed that the absolute magnitude derived with Muinonen's function is systematically larger than Bowell's absolute magnitude (Bowell et al., 1989), on average by 0.14±0.29, and by 0.30±0.16 when assuming a fixed slope parameter (G=0.15, G_{12}=0.53). We also derived slope parameters for asteroids of known spectral types and showed good agreement with previous studies within the derived uncertainties. However, our systematic errors on G and G_{12} are significantly larger than in previous work, which is caused by the poor temporal and phase coverage of the vast majority of the detected asteroids. This disadvantage will vanish when the full survey data become available and the ongoing extended and enhanced mission provides new data.

  7. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  8. Implants as absolute anchorage.

    PubMed

    Rungcharassaeng, Kitichai; Kan, Joseph Y K; Caruso, Joseph M

    2005-11-01

    Anchorage control is essential for successful orthodontic treatment. Each tooth has its own anchorage potential as well as a propensity to move when force is applied. When teeth are used as anchorage, untoward movements of the anchoring units may result in prolonged treatment time and an unpredictable or less-than-ideal outcome. To maximize tooth-related anchorage, techniques such as differential torque, placing roots into the cortex of the bone, and the use of various intraoral devices and/or extraoral appliances have been implemented. Implants, as they are in direct contact with bone, do not possess a periodontal ligament. As a result, they do not move when orthodontic/orthopedic force is applied, and therefore can be used as "absolute anchorage." This article describes different types of implants that have been used as orthodontic anchorage. Their clinical applications and limitations are also discussed. PMID:16463910

  9. Absolute Equilibrium Entropy

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    1997-01-01

    The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.

  10. Silicon Absolute X-Ray Detectors

    SciTech Connect

    Seely, John F.; Korde, Raj; Sprunck, Jacob; Medjoubi, Kadda; Hustache, Stephanie

    2010-06-23

    The responsivity of silicon photodiodes having no loss in the entrance window, measured using synchrotron radiation in the 1.75 to 60 keV range, was compared to the responsivity calculated using the silicon thickness measured using near-infrared light. The measured and calculated responsivities agree with an average difference of 1.3%. This enables their use as absolute x-ray detectors.

  11. Spatially resolved absolute spectrophotometry of Saturn - 3390 to 8080 A

    NASA Technical Reports Server (NTRS)

    Bergstralh, J. T.; Diner, D. J.; Baines, K. H.; Neff, J. S.; Allen, M. A.; Orton, G. S.

    1981-01-01

    A series of spatially resolved absolute spectrophotometric measurements of Saturn was conducted for the expressed purpose of calibrating the data obtained with the Imaging Photopolarimeter (IPP) on Pioneer 11 during its recent encounter with Saturn. All observations reported were made at the Mt. Wilson 1.5-m telescope, using a 1-m Ebert-Fastie scanning spectrometer. Spatial resolution was 1.92 arcsec. Photometric errors are considered, taking into account the fixed error, the variable error, and the composite error. The results are compared with earlier observations, as well as with synthetic spectra derived from preliminary physical models, giving attention to the equatorial region and the South Temperate Zone.

  12. Absolute neutrino mass measurements

    NASA Astrophysics Data System (ADS)

    Wolf, Joachim

    2011-10-01

    The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass of 2.2 eV have been set by two experiments in Mainz and Troitsk, using tritium as beta emitter. The next generation tritium β-experiment KATRIN is currently under construction in Karlsruhe/Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2 eV. The investigation of a second isotope (187Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.

  13. Absolute neutrino mass measurements

    SciTech Connect

    Wolf, Joachim

    2011-10-06

    The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass of 2.2 eV have been set by two experiments in Mainz and Troitsk, using tritium as beta emitter. The next generation tritium β-experiment KATRIN is currently under construction in Karlsruhe/Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2 eV. The investigation of a second isotope (187Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.

  14. Absolute oral bioavailability of ciprofloxacin.

    PubMed

    Drusano, G L; Standiford, H C; Plaisance, K; Forrest, A; Leslie, J; Caldwell, J

    1986-09-01

    We evaluated the absolute bioavailability of ciprofloxacin, a new quinoline carboxylic acid, in 12 healthy male volunteers. Doses of 200 mg were given to each of the volunteers in a randomized, crossover manner 1 week apart orally and as a 10-min intravenous infusion. Half-lives (mean +/- standard deviation) for the intravenous and oral administration arms were 4.2 +/- 0.77 and 4.11 +/- 0.74 h, respectively. The serum clearance rate averaged 28.5 +/- 4.7 liters/h per 1.73 m2 for the intravenous administration arm. The renal clearance rate accounted for approximately 60% of the corresponding serum clearance rate and was 16.9 +/- 3.0 liters/h per 1.73 m2 for the intravenous arm and 17.0 +/- 2.86 liters/h per 1.73 m2 for the oral administration arm. Absorption was rapid, with peak concentrations in serum occurring at 0.71 +/- 0.15 h. Bioavailability, defined as the ratio of the area under the curve from 0 h to infinity for the oral to the intravenous dose, was 69 +/- 7%. We conclude that ciprofloxacin is rapidly absorbed and reliably bioavailable in these healthy volunteers. Further studies with ciprofloxacin should be undertaken in target patient populations under actual clinical circumstances. PMID:3777908

  15. An absolute radius scale for Saturn's rings

    NASA Technical Reports Server (NTRS)

    Nicholson, Philip D.; Cooke, Maren L.; Pelton, Emily

    1990-01-01

    Radio and stellar occultation observations of Saturn's rings made by the Voyager spacecraft are discussed. The data reveal systematic discrepancies of almost 10 km in some parts of the rings, limiting some of the investigations. A revised solution for Saturn's rotation pole has been proposed which removes the discrepancies between the stellar and radio occultation profiles. Corrections to previously published radii vary from -2 to -10 km for the radio occultation, and +5 to -6 km for the stellar occultation. An examination of spiral density waves in the outer A Ring supports the conclusion that the revised absolute radii are in error by no more than 2 km.

  16. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. Because the susceptibility is a function of the area under the curve of sample displacement versus magnet-sample distance, the method offers a simple way of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
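
    The area relation lends itself to a one-line numerical integration. A hedged sketch (the displacement curve and the apparatus constant k are invented placeholders; in practice k comes from the field and geometry calibration):

        import numpy as np

        distance = np.linspace(0.0, 10.0, 200)               # magnet-sample distance
        displacement = np.exp(-0.5 * (distance - 5.0) ** 2)  # illustrative curve

        # Trapezoidal area under displacement vs. distance; chi is proportional to it.
        area = np.sum(0.5 * (displacement[1:] + displacement[:-1]) * np.diff(distance))
        k = 1.0                                              # assumed apparatus constant
        print("susceptibility ~", k * area)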

  17. Absolute Identification by Relative Judgment

    ERIC Educational Resources Information Center

    Stewart, Neil; Brown, Gordon D. A.; Chater, Nick

    2005-01-01

    In unidimensional absolute identification tasks, participants identify stimuli that vary along a single dimension. Performance is surprisingly poor compared with discrimination of the same stimuli. Existing models assume that identification is achieved using long-term representations of absolute magnitudes. The authors propose an alternative…

  18. Be Resolute about Absolute Value

    ERIC Educational Resources Information Center

    Kidd, Margaret L.

    2007-01-01

    This article explores how conceptualization of absolute value can start long before it is introduced. The manner in which absolute value is introduced to students in middle school has far-reaching consequences for their future mathematical understanding. It begins to lay the foundation for students' understanding of algebra, which can change…

  19. Measuring Time-Averaged Blood Pressure

    NASA Technical Reports Server (NTRS)

    Rothman, Neil S.

    1988-01-01

    Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.

  20. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  1. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  2. Improving the Glucose Meter Error Grid With the Taguchi Loss Function.

    PubMed

    Krouwer, Jan S

    2016-07-01

    Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics, such as mean absolute relative deviation (MARD), are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data, which provides an indication of the risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. PMID:26719136
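
    A minimal sketch of the idea, assuming a quadratic (Taguchi) loss that reaches 1 at the A-zone boundary; the ±15% zone limit and the paired readings are illustrative assumptions, not the paper's data:

        import numpy as np

        def mean_taguchi_loss(meter, reference, a_zone_pct=0.15):
            """Quadratic loss: 0 at zero error, 1 when the relative deviation
            reaches the A-zone limit; clipped at 1 beyond the A zone."""
            rel_dev = (np.asarray(meter, float) - reference) / np.asarray(reference, float)
            loss = np.minimum((rel_dev / a_zone_pct) ** 2, 1.0)
            return loss.mean()

        ref = np.array([100, 150, 200, 250], dtype=float)    # mg/dL reference values
        meter = np.array([104, 141, 210, 249], dtype=float)  # paired meter readings
        print(mean_taguchi_loss(meter, ref))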

  3. Absolute position calculation for a desktop mobile rehabilitation robot based on three optical mouse sensors.

    PubMed

    Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry

    2011-01-01

    ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial-landmark navigation system. The navigation system uses three optical mouse sensors, which enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition algorithm that estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data-fusion strategy is described that detects misclassified landmarks so that only reliable information is fused. The orientation given by the optical symbol recognition (OSR) algorithm significantly improves the odometry, and the recognized landmarks reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm. PMID:22254744
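
    A standard way to turn two planar sensor displacements into robot motion is shown below. This is a generic two-sensor odometry construction under a small-motion assumption, not the paper's exact algorithm; the mounting positions are invented.

        import numpy as np

        def body_motion_from_mice(d1, d2, r1, r2):
            """Recover planar body motion (dx, dy, dtheta) from the per-frame
            displacements d1, d2 of two mouse sensors at body-frame positions
            r1, r2, using the small-motion model
                d_i = [dx, dy] + dtheta * [-r_i_y, r_i_x].
            Four scalar equations, three unknowns -> least squares."""
            a = np.array([[1, 0, -r1[1]],
                          [0, 1,  r1[0]],
                          [1, 0, -r2[1]],
                          [0, 1,  r2[0]]], dtype=float)
            b = np.concatenate([d1, d2]).astype(float)
            (dx, dy, dth), *_ = np.linalg.lstsq(a, b, rcond=None)
            return dx, dy, dth

        # Sensors 10 cm left/right of center; a pure-rotation example (dtheta = 0.1 rad).
        print(body_motion_from_mice([0.0, -0.01], [0.0, 0.01], [-0.1, 0.0], [0.1, 0.0]))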

  4. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  5. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses a transmission function (Tf,ij), developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were then applied in the model for predicting average hourly global solar radiation at four other locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R²). The sensitivity of the predictions to the parameters was estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R²) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (i.e., MABE and RMSE) were less than 20%. The approach proposed here can be potentially useful for predicting average hourly global solar radiation on horizontal surfaces at different locations, using readily available data (i.e., latitude and longitude of the location) as inputs.
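
    The goodness-of-fit statistics named above are straightforward to reproduce. A small sketch with invented observed/predicted pairs (the paper reports MABE and RMSE as percentages; here they are returned in the data's units):

        import numpy as np

        def fit_metrics(observed, predicted):
            """Correlation coefficient r, mean absolute bias error (MABE),
            root mean square error (RMSE), and coefficient of determination R^2."""
            o, p = np.asarray(observed, float), np.asarray(predicted, float)
            r = np.corrcoef(o, p)[0, 1]
            mabe = np.mean(np.abs(p - o))
            rmse = np.sqrt(np.mean((p - o) ** 2))
            r2 = 1.0 - np.sum((o - p) ** 2) / np.sum((o - o.mean()) ** 2)
            return r, mabe, rmse, r2

        obs = np.array([310.0, 520.0, 640.0, 480.0, 150.0])   # W/m^2, illustrative
        pred = np.array([295.0, 540.0, 615.0, 470.0, 170.0])
        print(fit_metrics(obs, pred))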

  6. Absolute Radiometer for Reproducing the Solar Irradiance Unit

    NASA Astrophysics Data System (ADS)

    Sapritskii, V. I.; Pavlovich, M. N.

    1989-01-01

    A high-precision absolute radiometer with a thermally stabilized cavity as receiving element has been designed for use in solar irradiance measurements. The State Special Standard of the Solar Irradiance Unit has been built on the basis of the developed absolute radiometer. The Standard also includes the sun tracking system and the system for automatic thermal stabilization and information processing, comprising a built-in microcalculator which calculates the irradiance according to the input program. During metrological certification of the Standard, main error sources have been analysed and the non-excluded systematic and accidental errors of the irradiance-unit realization have been determined. The total error of the Standard does not exceed 0.3%. Beginning in 1984 the Standard has been taking part in a comparison with the Å 212 pyrheliometer and other Soviet and foreign standards. In 1986 it took part in the international comparison of absolute radiometers and standard pyrheliometers of socialist countries. The results of the comparisons proved the high metrological quality of this Standard based on an absolute radiometer.

  7. Neutron resonance averaging

    SciTech Connect

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.

  8. Clock time is absolute and universal

    NASA Astrophysics Data System (ADS)

    Shen, Xinhang

    2015-09-01

    A critical error is found in the Special Theory of Relativity (STR): mixing up the concepts of the STR abstract time of a reference frame and the displayed time of a physical clock, which leads to using the properties of the abstract time to predict time dilation of physical clocks and all other physical processes. Actually, a clock can never directly measure the abstract time, but can only record the result of a physical process during a period of the abstract time, such as the number of cycles of oscillation, which is the product of the abstract time and the frequency of oscillation. After a Lorentz transformation, the abstract time of a reference frame expands by a factor gamma, but the frequency of a clock decreases by the same factor gamma, and the resulting product, i.e. the displayed time of a moving clock, remains unchanged. That is, the displayed time of any physical clock is an invariant of the Lorentz transformation. The Lorentz invariance of the displayed times of clocks can further prove, within the framework of STR, that our earth-based standard physical time is absolute, universal and independent of inertial reference frames, as confirmed by both the physical fact of the universal synchronization of clocks on the GPS satellites and clocks on the earth, and the theoretical existence of the absolute and universal Galilean time in STR, which has proved that time dilation and space contraction are pure illusions of STR. The existence of the absolute and universal time in STR has directly denied that the reference-frame-dependent abstract time of STR is the physical time, and therefore STR is wrong and all its predictions can never happen in the physical world.

  9. Absolute geostrophic currents in global tropical oceans

    NASA Astrophysics Data System (ADS)

    Yang, Lina; Yuan, Dongliang

    2016-03-01

    A set of absolute geostrophic current (AGC) data for the period January 2004 to December 2012 are calculated using the P-vector method based on monthly gridded Argo profiles in the world tropical oceans. The AGCs agree well with altimeter geostrophic currents, Ocean Surface Current Analysis-Real time currents, and moored current-meter measurements at 10-m depth, based on which the classical Sverdrup circulation theory is evaluated. Calculations have shown that errors of wind stress calculation, AGC transport, and depth ranges of vertical integration cannot explain non-Sverdrup transport, which is mainly in the subtropical western ocean basins and equatorial currents near the Equator in each ocean basin (except the North Indian Ocean, where the circulation is dominated by monsoons). The identified non-Sverdrup transport is thereby robust and attributed to the joint effect of baroclinicity and relief of the bottom (JEBAR) and mesoscale eddy nonlinearity.

  10. Stitching interferometry: recent results and absolute calibration

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2004-02-01

    Stitching interferometry is a method of analysing large optical components using a standard "small" interferometer. This result is obtained by taking multiple overlapping images of the large component and numerically "stitching" these sub-apertures together. We have already reported on the industrial use of our stitching interferometry systems (previous SPIE symposia), but experimental results had been lacking because this technique is still new, and users needed to get accustomed to it before producing reliable measurements. We now have more results. We will report user comments and show new, unpublished results. We will discuss sources of error, and show how some of these can be reduced to arbitrarily small values. These will be discussed in some detail. We conclude with a few graphical examples of absolute measurements performed by us.
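
    The core numerical step can be shown in one dimension: each sub-aperture carries an unknown piston offset (in 2-D, tip and tilt as well), which is solved for by matching the overlap region. A toy sketch with an invented surface:

        import numpy as np

        # Two overlapping 1-D profiles of one surface, each with its own piston offset.
        x = np.linspace(0.0, 1.0, 101)
        surface = 0.1 * np.sin(2 * np.pi * x)          # "true" surface
        left = surface[:60] + 0.03                     # sub-aperture 1, piston +0.03
        right = surface[40:] - 0.02                    # sub-aperture 2, piston -0.02

        piston = np.mean(left[40:60] - right[0:20])    # match the 20-sample overlap
        stitched = np.concatenate([left, right[20:] + piston])
        print(np.ptp(stitched - surface))              # ~0: only a constant offset remains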

  11. Assessment of absolute added correlative coding in optical intensity modulation and direct detection channels

    NASA Astrophysics Data System (ADS)

    Dong-Nhat, Nguyen; Elsherif, Mohamed A.; Malekmohammadi, Amin

    2016-06-01

    The performance of the absolute added correlative coding (AACC) modulation format with direct detection is reported numerically and analytically, targeting metro data-center interconnects. The focus lies on bit error rate performance, noise contributions, spectral efficiency, and chromatic dispersion tolerance. The signal space model of AACC, in which the average electrical and optical power expressions are derived for the first time, is also delineated. The proposed modulation format is compared with other well-known signaling formats, such as on-off keying (OOK) and four-level pulse-amplitude modulation, at the same bit rate in a directly modulated vertical-cavity surface-emitting laser-based transmission system. The comparison results show a clear advantage of AACC in achieving longer fiber delivery distance due to its higher dispersion tolerance.

  12. Differences between absolute and predicted values of forced expiratory volumes to classify ventilatory impairment in chronic obstructive pulmonary disease.

    PubMed

    Checkley, William; Foreman, Marilyn G; Bhatt, Surya P; Dransfield, Mark T; Han, MeiLan; Hanania, Nicola A; Hansel, Nadia N; Regan, Elizabeth A; Wise, Robert A

    2016-02-01

    The Global Initiative for Chronic Obstructive Lung Disease (GOLD) severity criterion for COPD is used widely in clinical and research settings; however, it requires the use of ethnic- or population-specific reference equations. We propose two alternative severity criteria based on absolute post-bronchodilator FEV1 values (FEV1 and FEV1/height²) that do not depend on reference equations. We compared the accuracy of these classification schemas to those based on % predicted values (GOLD criterion) and Z-scores of post-bronchodilator FEV1 to predict COPD-related functional outcomes or percent emphysema by computerized tomography of the lung. We tested the predictive accuracy of all severity criteria for the 6-minute walk distance (6MWD), St. George's Respiratory Questionnaire (SGRQ), 36-item Short-Form Health Survey physical health component score (SF-36) and the MMRC Dyspnea Score. We used 10-fold cross-validation to estimate average prediction errors and Bonferroni-adjusted t-tests to compare average prediction errors across classification criteria. We analyzed data from 3772 participants with COPD (average age 63 years, 54% male). Severity criteria based on absolute post-bronchodilator FEV1 or FEV1/height² yielded similar prediction errors for 6MWD, SGRQ, SF-36 physical health component score, and the MMRC Dyspnea Score when compared to the GOLD criterion (all p > 0.34); and they had similar predictive accuracy when compared with the Z-scores criterion, with the exception of 6MWD, where post-bronchodilator FEV1 appeared to perform slightly better than Z-scores (p = 0.01). Subgroup analyses did not identify differences across severity criteria by race, sex, or age between absolute values and the GOLD criterion or one based on Z-scores. Severity criteria for COPD based on absolute values of post-bronchodilator FEV1 performed equally as well as criteria based on predicted values when benchmarked against COPD-related functional and structural outcomes, and are simple to use

  13. Absolute transition probabilities of phosphorus.

    NASA Technical Reports Server (NTRS)

    Miller, M. H.; Roig, R. A.; Bengtson, R. D.

    1971-01-01

    Use of a gas-driven shock tube to measure the absolute strengths of 21 P I lines and 126 P II lines (from 3300 to 6900 A). Accuracy for prominent, isolated neutral and ionic lines is estimated to be 28 to 40% and 18 to 30%, respectively. The data and the corresponding theoretical predictions are examined for conformity with the sum rules.

  14. Absolute measurement of the extreme UV solar flux

    NASA Technical Reports Server (NTRS)

    Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.

    1984-01-01

    A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575 A region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 A), normalized to unit solar distance, was found to be (5.71 ± 0.42) × 10^10 photons cm^-2 s^-1.

  15. Attractors and Time Averages for Random Maps

    NASA Astrophysics Data System (ADS)

    Araujo, Vitor

    2006-07-01

    Considering random noise in finite dimensional parameterized families of diffeomorphisms of a compact finite dimensional boundaryless manifold M, we show the existence of time averages for almost every orbit of each point of M, imposing mild conditions on the families. Moreover these averages are given by a finite number of physical absolutely continuous stationary probability measures. We use this result to deduce that situations with infinitely many sinks and Henon-like attractors are not stable under random perturbations, e.g., Newhouse's and Colli's phenomena in the generic unfolding of a quadratic homoclinic tangency by a one-parameter family of diffeomorphisms.

  16. Mathematical Model for Absolute Magnetic Measuring Systems in Industrial Applications

    NASA Astrophysics Data System (ADS)

    Fügenschuh, Armin; Fügenschuh, Marzena; Ludszuweit, Marina; Mojsic, Aleksandar; Sokół, Joanna

    2015-09-01

    Scales for measuring systems are based on either incremental or absolute measuring methods. Incremental scales need to initialize a measurement cycle at a reference point. From there, the position is computed by counting increments of a periodic graduation. Absolute methods do not need reference points, since the position can be read directly from the scale. The positions on the complete scale are usually encoded using two incremental tracks with different graduations. We present a new method for absolute measuring using only one track for position encoding down to the micrometre range. Instead of the common perpendicular magnetic areas, we use a pattern of trapezoidal magnetic areas to store more complex information. For positioning, we use the magnetic field, in which every position is characterized by a set of values measured by a Hall sensor array. We implement a method for reconstructing absolute positions from the set of unique measured values. We compare two patterns with respect to uniqueness, accuracy, stability, and robustness of positioning. We discuss how stability and robustness are influenced by different errors during measurement in real applications and how those errors can be compensated.
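
    As a toy illustration of the reconstruction idea described above, the sketch below (Python; every number and the random signature table are hypothetical stand-ins for a calibrated field model) recovers an absolute position by nearest-neighbour matching of a Hall-sensor-array reading against stored position signatures; the paper's actual trapezoidal pattern design is not reproduced here.

```python
import numpy as np

# Absolute decoding sketch: each scale position is assumed to produce a
# unique vector of Hall-sensor readings; reconstruction is then a
# nearest-neighbour lookup against a calibrated signature table.
rng = np.random.default_rng(0)

n_positions = 1000   # calibrated positions along the single track
n_sensors = 8        # size of the Hall sensor array

# Hypothetical signature table (in practice: measured/modelled field values).
signatures = rng.normal(size=(n_positions, n_sensors))

def decode(measurement):
    """Index of the calibrated position whose signature is closest
    (Euclidean distance) to the measured sensor vector."""
    d2 = np.sum((signatures - measurement) ** 2, axis=1)
    return int(np.argmin(d2))

true_pos = 123
reading = signatures[true_pos] + rng.normal(scale=0.05, size=n_sensors)
print(decode(reading))  # -> 123, as long as the signatures remain unique
```

    Uniqueness of the signatures is exactly the property on which the abstract says the two candidate patterns are compared; noise tolerance in this lookup corresponds to their stability and robustness criteria.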

  17. Application Bayesian Model Averaging method for ensemble system for Poland

    NASA Astrophysics Data System (ADS)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

    The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model runs, each with 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main point is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test case we chose a period with a heat wave and convective weather conditions over Poland, from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in hospitalized patients with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, injuries, and direct threats to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations in Poland. We prepare a set of model-observation pairs; the data from the single ensemble members and the median from the WRF BMA system are then evaluated using the deterministic statistical errors root mean square error (RMSE) and mean absolute error (MAE).
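
    As a rough illustration of the calibration step described above, the following sketch (Python; all numbers are hypothetical, the weights are assumed to have been fitted beforehand, e.g. by EM on a training period, and Gaussian kernels are assumed) builds the BMA predictive PDF as a weighted average of member PDFs and computes the deterministic scores named in the abstract.

```python
import numpy as np

def bma_pdf(t, forecasts, weights, sigma):
    """Weighted average of Gaussian PDFs centred on the member forecasts."""
    t = np.asarray(t)[..., None]                  # broadcast over members
    kernels = np.exp(-0.5 * ((t - forecasts) / sigma) ** 2) / (
        sigma * np.sqrt(2 * np.pi))
    return kernels @ weights

# Nine hypothetical WRF members for one station/time, in deg C.
forecasts = np.array([29.1, 30.4, 31.0, 29.8, 30.2, 30.9, 28.7, 30.0, 29.5])
weights = np.full(9, 1 / 9)                       # placeholder skill weights
sigma = 1.2                                       # kernel std dev (fitted)

grid = np.linspace(25, 35, 201)
pdf = bma_pdf(grid, forecasts, weights, sigma)
bma_mean = np.trapz(grid * pdf, grid)
print("BMA mean forecast:", bma_mean)

# Deterministic scores from the abstract, for one forecast-observation pair:
obs, pred = np.array([30.3]), np.array([bma_mean])
rmse = np.sqrt(np.mean((pred - obs) ** 2))
mae = np.mean(np.abs(pred - obs))
print(f"RMSE={rmse:.2f}  MAE={mae:.2f}")
```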

  18. Absolute optical surface measurement with deflectometry

    NASA Astrophysics Data System (ADS)

    Li, Wansong; Sandner, Marc; Gesierich, Achim; Burke, Jan

    Deflectometry utilises the deformation and displacement of a sample pattern after reflection from a test surface to infer the surface slopes. Differentiation of the measurement data leads to a curvature map, which is very useful for surface quality checks with sensitivity down to the nanometre range. Integration of the data allows reconstruction of the absolute surface shape, but the procedure is very error-prone because systematic errors may add up to large shape deviations. In addition, there are infinitely many combinations of slope and object distance that satisfy a given observation. One solution to this ambiguity is to include information on the object's distance, which must be known very accurately. Two laser pointers can be used for positioning the object, and we also show how a confocal chromatic distance sensor can be used to define a reference point on a smooth surface from which the integration can be started. The integration algorithm used works without symmetry constraints and is therefore suitable for free-form surfaces as well. Unlike null testing, deflectometry also determines radius of curvature (ROC) or focal lengths as a direct result of the 3D surface reconstruction. This is shown by the example of a 200 mm diameter telescope mirror, whose ROC measurements by coordinate measurement machine and deflectometry coincide to within 0.27 mm (or a sag error of 1.3 µm). By the example of a diamond-turned off-axis parabolic mirror, we demonstrate that the figure measurement uncertainty comes close to that of a well-calibrated Fizeau interferometer.

  19. Optomechanics for absolute rotation detection

    NASA Astrophysics Data System (ADS)

    Davuluri, Sankar

    2016-07-01

    In this article, we present an application of an optomechanical cavity to absolute rotation detection. The optomechanical cavity is arranged in a Michelson interferometer in such a way that the classical centrifugal force due to rotation changes the length of the optomechanical cavity. The change in cavity length induces a shift in the frequency of the cavity mode. The phase shift corresponding to the frequency shift in the cavity mode is measured at the interferometer output to estimate the angular velocity of absolute rotation. We derive an analytic expression for the minimum detectable rotation rate in our scheme for a given optomechanical cavity. The temperature dependence of the rotation detection sensitivity is also studied.

  20. The Absolute Spectrum Polarimeter (ASP)

    NASA Technical Reports Server (NTRS)

    Kogut, A. J.

    2010-01-01

    The Absolute Spectrum Polarimeter (ASP) is an Explorer-class mission to map the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10^-3 and Compton distortion y < 10^-6. We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  1. Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques

    NASA Astrophysics Data System (ADS)

    Amoush, Ahmad

    The American Association of Physicists in Medicine Task Group Report 43 (AAPM TG-43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques for calculating the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 microchamber and OneDose MOSFET detectors were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainties associated with each experimental technique were analyzed quantitatively using MCNPX 2.6 to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and the Tandem effect. Absolute dose calculations for clinical use are based on the Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated by the low signal-to-noise ratio, cable effect, and stem effect for the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane. The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm

  2. Phase Errors and the Capture Effect

    SciTech Connect

    Blair, J., and Machorro, E.

    2011-11-01

    This slide-show presents analysis of spectrograms and the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero: this is called the capture effect because the largest signal captures the phase and frequency determination.

  3. Flexible time domain averaging technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, such as those caused by faults like gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
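
    For contrast with the FTDA, the sketch below implements conventional TDA in its simplest form (Python; the signal and sampling parameters are assumed for illustration). When the true period is not an integer number of samples, the truncation in the reshape is precisely the period cutting error the abstract describes; the FTDA's frequency-domain sampling and chirp Z-transform machinery are not reproduced here.

```python
import numpy as np

def time_domain_average(signal, period_samples):
    """Split the record into consecutive periods and average them.
    If the true period is not an integer number of samples, the rounding
    here is exactly the period-cutting error (PCE) discussed above."""
    n = len(signal) // period_samples
    segments = signal[: n * period_samples].reshape(n, period_samples)
    return segments.mean(axis=0)

fs, f0 = 10_000, 97.0                  # sample rate (Hz), shaft frequency (Hz)
t = np.arange(0, 2.0, 1 / fs)
periodic = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)
noisy = periodic + np.random.default_rng(1).normal(scale=1.0, size=t.size)

avg = time_domain_average(noisy, round(fs / f0))  # 103 samples vs 103.09 true
print(avg.shape)  # one de-noised "average cycle"; the residual error is PCE
```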

  4. On the Berdichevsky average

    NASA Astrophysics Data System (ADS)

    Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi

    2016-04-01

    Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of determinant invariants of observed impedances. This method, proposed by Berdichevsky and coworkers, is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimate of a regional mean 1-D model is useful, especially in recent years, as an a priori (or starting) model in 3-D inversion. However, the original theory was derived before the establishment of present knowledge on galvanic distortion. This paper, therefore, reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of a distorted impedance, and hence its Berdichevsky average, is always biased downward by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from distortion parameters; thus, we conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
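
    A minimal sketch of the two invariants and their geometric averages, under the standard definitions (determinant invariant as the square root of |det Z|, ssq invariant as the square root of half the sum of squared moduli); the random complex impedances below are placeholders for observed data.

```python
import numpy as np

# Rotational invariants of a 2x2 complex MT impedance Z and their
# geometric averages over sites.
def det_invariant(Z):
    return np.sqrt(np.abs(Z[0, 0] * Z[1, 1] - Z[0, 1] * Z[1, 0]))

def ssq_invariant(Z):
    return np.sqrt(np.sum(np.abs(Z) ** 2) / 2.0)

def geometric_average(values):
    return np.exp(np.mean(np.log(values)))

rng = np.random.default_rng(2)
sites = [rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
         for _ in range(30)]

berdichevsky_avg = geometric_average([det_invariant(Z) for Z in sites])
ssq_avg = geometric_average([ssq_invariant(Z) for Z in sites])
print(berdichevsky_avg, ssq_avg)  # the paper argues ssq is less distortion-biased
```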

  5. Averaging the inhomogeneous universe

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem

    2012-03-01

    A basic assumption of modern cosmology is that the universe is homogeneous and isotropic on the largest observable scales. This greatly simplifies Einstein's general relativistic field equations applied at these large scales, and allows a straightforward comparison between theoretical models and observed data. However, Einstein's equations should ideally be imposed at length scales comparable to, say, the solar system, since this is where these equations have been tested. We know that at these scales the universe is highly inhomogeneous. It is therefore essential to perform an explicit averaging of the field equations in order to apply them at large scales. It has long been known that due to the nonlinear nature of Einstein's equations, any explicit averaging scheme will necessarily lead to corrections in the equations applied at large scales. Estimating the magnitude and behavior of these corrections is a challenging task, due to difficulties associated with defining averages in the context of general relativity (GR). It has recently become possible to estimate these effects in a rigorous manner, and we will review some of the averaging schemes that have been proposed in the literature. A tantalizing possibility explored by several authors is that the corrections due to averaging may in fact account for the apparent acceleration of the expansion of the universe. We will explore this idea, reviewing some of the work done in the literature to date. We will argue, however, that this rather attractive idea is in fact not viable as a solution of the dark energy problem when confronted with observational constraints.

  6. Identifying Lattice, Orbit, And BPM Errors in PEP-II

    SciTech Connect

    Decker, F.-J.; /SLAC

    2005-05-09

    The PEP-II B-Factory is delivering peak luminosities of up to 9.2 × 10^33 cm^-2 s^-1. This is very impressive, especially considering our poor understanding of the lattice, absolute orbit, and beam position monitor (BPM) system. A few simple MATLAB programs were written to get lattice information, like the betatron functions in a coupled machine (four altogether) and the two dispersions, from the current machine and compare it to the design. Big orbit deviations in the Low Energy Ring (LER) could be explained not by bad BPMs (only 3), but by many strong correctors (one corrector to fix four BPMs on average). Additionally, these programs helped to uncover a sign error in the third order correction of the BPM system. Further analysis of the current information of the BPMs (sum of all buttons) indicates that there might still be more problematic BPMs.

  7. The AFGL absolute gravity program

    NASA Technical Reports Server (NTRS)

    Hammond, J. A.; Iliff, R. L.

    1978-01-01

    A brief discussion of the AFGL's (Air Force Geophysics Laboratory) program in absolute gravity is presented. Support of outside work and in-house studies relating to gravity instrumentation are discussed. A description of the current transportable system is included and the latest results are presented. These results show good agreement with measurements at the AFGL site by an Italian system. The accuracy obtained by the transportable apparatus is better than 0.1 µm/s^2 (10 microgal), and agreement with previous measurements is within the combined uncertainties of the measurements.

  8. [Errors Analysis and Correction in Atmospheric Methane Retrieval Based on Greenhouse Gases Observing Satellite Data].

    PubMed

    Bu, Ting-ting; Wang, Xian-hua; Ye, Han-han; Jiang, Xin-hua

    2016-01-01

    High precision retrieval of atmospheric CH4 is influenced by a variety of factors, among which the uncertainties of ground properties and atmospheric conditions are important, such as surface reflectance, temperature profile, humidity profile, and pressure profile. Surface reflectance is affected by many factors, so it is difficult to obtain a precise value, and its uncertainty causes large errors in the retrieval result. The uncertainties of the temperature, humidity, and pressure profiles are also important sources of retrieval error and cause unavoidable systematic error, which is hard to eliminate using the CH4 band alone. In this paper, a ratio spectrometry method and a CO2 band correction method are proposed to reduce the error caused by these factors. The ratio spectrometry method decreases the effect of surface reflectance in CH4 retrieval by converting absolute radiance spectra into ratio spectra. The CO2 band correction method converts column amounts of CH4 into a column averaged mixing ratio by using the CO2 1.61 μm band, correcting the systematic error caused by the temperature, humidity, and pressure profiles. The combination of these two correction methods decreases the effects of surface reflectance, temperature profile, humidity profile, and pressure profile at the same time and reduces the retrieval error. GOSAT data were used to retrieve atmospheric CH4 to test and validate the two correction methods. The results showed that the CH4 column averaged mixing ratio retrieved after correction was close to the GOSAT Level 2 product, with a retrieval precision up to -0.24%. The studies suggest that the error in CH4 retrieval caused by the uncertainties of ground properties and atmospheric conditions can be significantly reduced and the retrieval precision highly improved by using the ratio spectrometry method and the CO2 band correction method. PMID:27228765
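
    A minimal sketch of the CO2 band correction step as it is commonly formulated in proxy-style retrievals (all column values below are hypothetical; the paper's radiative transfer and ratio spectrometry details are not reproduced):

```python
# Because the retrieved CH4 and CO2 columns share much of the systematic
# error from the temperature, humidity and pressure profiles, their ratio
# cancels it to first order.
n_ch4 = 3.60e19          # retrieved CH4 column, molecules/cm^2 (hypothetical)
n_co2 = 7.85e21          # retrieved CO2 column from the 1.61 um band (hypothetical)
xco2_model = 395.0e-6    # model CO2 column-averaged mixing ratio (assumed known)

# Column-averaged CH4 mixing ratio via the ratio to CO2:
xch4 = (n_ch4 / n_co2) * xco2_model
print(f"XCH4 = {xch4 * 1e9:.0f} ppb")   # -> ~1812 ppb
```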

  9. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
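
    The screen-and-spread procedure described above can be sketched in a few lines (Python; synthetic arrays, and the actual analysis screens ocean and land separately and works per calendar month):

```python
import numpy as np

# Bias-error sketch: keep products within +/-50% of the GPCP zonal mean,
# then take the standard deviation s of the retained products per grid box.
def bias_error(gpcp, products, tol=0.5):
    """gpcp: (nlat, nlon); products: (nprod, nlat, nlon) mean precipitation."""
    zonal = gpcp.mean(axis=-1, keepdims=True)           # GPCP zonal means
    prod_zonal = products.mean(axis=-1, keepdims=True)  # product zonal means
    keep = np.abs(prod_zonal - zonal) <= tol * zonal    # screen per product/latitude
    masked = np.where(keep, products, np.nan)
    s = np.nanstd(masked, axis=0)                       # estimated bias error
    return s, s / gpcp                                  # absolute map and s/m

rng = np.random.default_rng(3)
gpcp = rng.gamma(2.0, 1.5, size=(18, 36))               # synthetic 10-degree grid
products = gpcp * rng.normal(1.0, 0.15, size=(4, 18, 36))
s, rel = bias_error(gpcp, products)
print(np.nanmean(rel))                                  # mean relative bias error
```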

  10. Statistical errors in Monte Carlo estimates of systematic errors

    NASA Astrophysics Data System (ADS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
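
    A toy numerical comparison of the two schemes, under assumptions made only for this sketch (two systematic parameters acting linearly on the observable and a Gaussian statistical error), illustrates how the multisim spread estimates the total systematic error directly while unisim combines one-at-a-time shifts in quadrature:

```python
import numpy as np

rng = np.random.default_rng(4)
n_events, slope_a, slope_b, stat_sigma = 10_000, 0.5, 0.3, 2.0

def mc_run(a, b):
    """One MC run: the observable is the sample mean of n_events events
    whose true mean shifts linearly with systematic parameters a and b
    (units of one standard deviation of each parameter)."""
    x = rng.normal(loc=slope_a * a + slope_b * b, scale=stat_sigma,
                   size=n_events)
    return x.mean()

nominal = mc_run(0, 0)

# Unisim: one run per parameter, each varied by +1 sigma; combine in quadrature.
unisim = np.hypot(mc_run(1, 0) - nominal, mc_run(0, 1) - nominal)

# Multisim: many runs with all parameters drawn from their priors at once;
# the spread of the results estimates the total systematic error directly.
multisim = np.std([mc_run(rng.normal(), rng.normal()) for _ in range(100)])

print(f"true={np.hypot(slope_a, slope_b):.3f}  "
      f"unisim={unisim:.3f}  multisim={multisim:.3f}")
```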

  11. Absolute calibration of forces in optical tweezers

    NASA Astrophysics Data System (ADS)

    Dutra, R. S.; Viana, N. B.; Maia Neto, P. A.; Nussenzveig, H. M.

    2014-07-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past 15 years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spot, adapting frequently employed video microscopy techniques. Combined with interface spherical aberration, it reveals a previously unknown window of instability for trapping. Comparison with experimental data leads to an overall agreement within error bars, with no fitting, for a broad range of microsphere radii, from the Rayleigh regime to the ray optics one, for different polarizations and trapping heights, including all commonly employed parameter domains. Besides signaling full first-principles theoretical understanding of optical tweezers operation, the results may lead to improved instrument design and control over experiments, as well as to an extended domain of applicability, allowing reliable force measurements, in principle, from femtonewtons to nanonewtons.

  12. A Brief Overview of the Absolute Proper motions Outside the Plane catalog (APOP)

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoxiang; Yu, Yong; Smart, Richard L.; Lattanzi, Mario G.; Bucciarelli, Beatrice; Spagna, Alessandro; McLean, Brian J.; Tang, Zhenghong; Jones, Hugh R. A.; Morbidelli, Roberto; Nicastro, Luciano; Vecchiato, Alberto; Teixeira, Ramachrisna

    2015-10-01

    APOP is the first version of an absolute proper motion catalog achieved using the Digitized Sky Survey Schmidt plate material outside the galactic plane (|b| ≥ 27°). The resulting global zero point error is less than 0.6 mas/yr, and the precision better than 4.0 mas/yr for objects brighter than R_F = 18.5, rising to 9.0 mas/yr for objects with magnitudes in the range 18.5-20.8. The average position accuracy is about 150 mas (per coordinate), with a systematic deviation from the ICRS around 0.2 mas. The catalog covers 22,525 square degrees and lists 100,777,385 objects to the limiting magnitude of R_F ~ 20.8. Although the Gaia mission is poised to set the new standard in catalog astronomy, the methods and procedures used for APOP will be useful in other reductions to dispel astrometric magnitude- and color-dependent systematic errors from the next generation of ground-based surveys.

  13. Absolute Timing Calibration of the USA Experiment Using Pulsar Observations

    NASA Astrophysics Data System (ADS)

    Ray, P. S.; Wood, K. S.; Wolff, M. T.; Lovellette, M. N.; Sheikh, S.; Moon, D.-S.; Eikenberry, S. S.; Roberts, M.; Lyne, A.; Jordon, C.; Bloom, E. D.; Tournear, D.; Saz Parkinson, P.; Reilly, K.

    2003-03-01

    We update the status of the absolute time calibration of the USA Experiment as determined by observations of X-ray emitting rotation-powered pulsars. The brightest such source is the Crab Pulsar, and we have obtained observations of the Crab at radio, IR, optical, and X-ray wavelengths. We directly compare arrival time determinations for 2-10 keV X-ray observations made contemporaneously with the PCA on the Rossi X-ray Timing Explorer and the USA Experiment on ARGOS. These two X-ray measurements employ very different means of measuring time and satellite position and thus have different systematic error budgets. The comparison with other wavelengths requires additional steps such as dispersion measure corrections and a precise definition of the "peak" of the light curve, since the light curve shape varies with observing wavelength. We will describe each of these effects and quantify the magnitude of the systematic error that each may contribute. We will also include time comparison results for other pulsars, such as PSR B1509-58 and PSR B1821-24. Once the absolute time calibrations are well understood, comparing absolute arrival times at multiple energies can provide clues to the magnetospheric structure and emission region geometry. Basic research on X-ray Astronomy at NRL is funded by NRL/ONR.

  14. Average density in cosmology

    SciTech Connect

    Bonnor, W.B.

    1987-05-01

    The Einstein-Straus (1945) vacuole is here used to represent a bound cluster of galaxies embedded in a standard pressure-free cosmological model, and the average density of the cluster is compared with the density of the surrounding cosmic fluid. The two are nearly but not quite equal, and the more condensed the cluster, the greater the difference. A theoretical consequence of the discrepancy between the two densities is discussed. 25 references.

  15. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  16. Cosmology with negative absolute temperatures

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Byrnes, Christian T.; Lewis, Antony

    2016-08-01

    Negative absolute temperatures (NAT) are an exotic thermodynamical consequence of quantum physics which has been known since the 1950s (having been achieved in the lab on a number of occasions). Recently, the work of Braun et al. [1] has rekindled interest in negative temperatures and hinted at a possibility of using NAT systems in the lab as dark energy analogues. This paper goes one step further, looking into the cosmological consequences of the existence of a NAT component in the Universe. NAT-dominated expanding Universes experience a borderline phantom expansion (w < -1) with no Big Rip, and their contracting counterparts are forced to bounce after the energy density becomes sufficiently large. Both scenarios might be used to solve horizon and flatness problems analogously to standard inflation and bouncing cosmologies. We discuss the difficulties in obtaining and ending a NAT-dominated epoch, and possible ways of obtaining density perturbations with an acceptable spectrum.

  17. Development of an absolute method for efficiency calibration of a coaxial HPGe detector for large volume sources

    NASA Astrophysics Data System (ADS)

    Ortiz-Ramírez, Pablo C.

    2015-09-01

    In this work an absolute method for determining the full energy peak efficiency of a gamma spectroscopy system for voluminous sources is presented. The method was tested for a high-resolution coaxial HPGe detector and a cylindrical homogeneous volume source. The volume source is represented by a set of point sources filling its volume. We found that the absolute efficiency of a volume source can be determined as the average over its volume of the absolute efficiencies of the point sources. Experimentally, we measure the intrinsic efficiency as a function of source-detector position. Then, considering the solid angle and the attenuation of the gamma rays emitted toward the detector by each point source, considered as embedded in the source matrix, the absolute efficiency for each point source inside the volume was determined. The factor associated with the solid angle and the self-attenuation of photons in the sample was deduced from first principles without any mathematical approximation. The method was tested by determining the specific activity of 137Cs in cylindrical homogeneous sources, using IAEA reference materials with specific activities between 14.2 Bq/kg and 9640 Bq/kg at the time of the experiments. The results obtained show good agreement with the expected values; the relative difference was less than 7% in most cases. The main advantage of this method is that it does not require the use of expensive and hard-to-produce standard materials. In addition, it does not require matrix effect corrections, which are the main cause of error in this type of measurement, and it is easy to implement in any nuclear physics laboratory.
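
    The volume-averaging step lends itself to a short numerical sketch (Python; the geometry, the constant intrinsic efficiency, and the crude solid-angle and attenuation approximations below are all simplifying assumptions of this sketch, whereas the paper derives the solid-angle and self-attenuation factor exactly):

```python
import numpy as np

# Volume-source efficiency as the average over point sources filling the
# cylinder (detector assumed below the source container).
rng = np.random.default_rng(5)
R, H, z0 = 3.5, 5.0, 1.0     # source radius, source height, gap to detector (cm)
det_r = 3.0                  # detector crystal radius (cm)
mu = 0.086                   # linear attenuation of the matrix at 662 keV (1/cm)
eps_intrinsic = 0.2          # assumed constant intrinsic efficiency

def point_efficiency(r, z):
    """Absolute efficiency of a point source at radial offset r and height z."""
    d = np.hypot(z0 + z, r)                        # distance to detector centre
    omega = 0.5 * (1.0 - d / np.hypot(d, det_r))   # disc solid-angle fraction,
                                                   # applied as if on-axis
    attenuation = np.exp(-mu * z)                  # self-attenuation, downward path
    return omega * attenuation * eps_intrinsic

# Monte Carlo average over points uniformly filling the cylinder:
n = 100_000
r = R * np.sqrt(rng.random(n))   # uniform over the disc
z = H * rng.random(n)
print("volume-source efficiency:", point_efficiency(r, z).mean())
```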

  18. The absolute radiometric calibration of the advanced very high resolution radiometer

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Teillet, P. M.; Ding, Y.

    1988-01-01

    The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.

  19. Sounding rocket measurement of the absolute solar EUV flux utilizing a silicon photodiode

    NASA Technical Reports Server (NTRS)

    Ogawa, H. S.; Mcmullin, D.; Judge, D. L.; Canfield, L. R.

    1990-01-01

    A newly developed stable and high quantum efficiency silicon photodiode was used to obtain an accurate measurement of the integrated absolute magnitude of the solar extreme UV photon flux in the spectral region between 50 and 800 A. The adjusted daily 10.7-cm solar radio flux and sunspot number were 168.4 and 121, respectively. The unattenuated absolute value of the solar EUV flux at 1 AU in the specified wavelength region was 6.81 × 10^10 photons cm^-2 s^-1. Based on a nominal probable error of 7 percent for National Institute of Standards and Technology detector efficiency measurements in the 50- to 500-A region (5 percent on longer wavelength measurements between 500 and 1216 A), and based on experimental errors associated with the present rocket instrumentation and analysis, a conservative total error estimate of about 14 percent is assigned to the absolute integral solar flux obtained.

  20. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
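
    The flavor of such a calibration can be conveyed with a toy maximum-likelihood model (a sketch, not the authors' algorithm): true absolute magnitudes are drawn from N(M0, sigma_M^2), and each star's likelihood is marginalized over M on a grid, so low-accuracy and even negative parallaxes contribute information instead of being censored.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate stars: apparent magnitude m and a noisy parallax, some negative.
rng = np.random.default_rng(6)
M0_true, sigM_true, n = 4.0, 0.4, 300
M = rng.normal(M0_true, sigM_true, n)
m = rng.uniform(8.0, 12.0, n)
plx_true = 10 ** (0.2 * (M - m) - 1)               # parallax in arcsec (d in pc)
sig_plx = 0.004
plx_obs = plx_true + rng.normal(0.0, sig_plx, n)   # some entries go negative

Mgrid = np.linspace(2.0, 6.0, 401)

def nll(theta):
    """Negative log-likelihood, marginalizing each star over M on a grid."""
    M0, sigM = theta
    if sigM <= 0.0:
        return np.inf
    prior = np.exp(-0.5 * ((Mgrid - M0) / sigM) ** 2) / sigM   # up to a constant
    pred = 10 ** (0.2 * (Mgrid[None, :] - m[:, None]) - 1)     # (n, ngrid)
    like = np.exp(-0.5 * ((plx_obs[:, None] - pred) / sig_plx) ** 2)
    per_star = np.trapz(like * prior, Mgrid, axis=1)
    return -np.sum(np.log(per_star + 1e-300))

fit = minimize(nll, x0=[3.5, 0.6], method="Nelder-Mead")
print(fit.x)   # recovers roughly (4.0, 0.4) without discarding negative parallaxes
```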

  1. Flow rate calibration for absolute cell counting rationale and design.

    PubMed

    Walker, Clare; Barnett, David

    2006-05-01

    There is a need for absolute leukocyte enumeration in the clinical setting, and accurate, reliable (and affordable) technology to determine absolute leukocyte counts has been developed. Such technology includes single platform and dual platform approaches. Derivations of these counts commonly incorporate the addition of a known number of latex microsphere beads to a blood sample, although it has been suggested that the addition of beads to a sample may only be required to act as an internal quality control procedure for assessing the pipetting error. This unit provides the technical details for undertaking flow rate calibration that obviates the need to add reference beads to each sample. It is envisaged that this report will provide the basis for subsequent clinical evaluations of this novel approach. PMID:18770842
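
    The two counting approaches at issue can be contrasted in a few lines (all values hypothetical); the unit's point is that a calibrated flow rate makes the bead spike unnecessary for the count itself, leaving beads as an internal quality control:

```python
# Two common routes to an absolute count (all values hypothetical).
cd4_events = 4200               # gated CD4+ events acquired

# 1) Single-platform bead method: a known bead concentration is spiked in.
beads_per_ul = 1000             # beads added per microlitre of sample
bead_events = 8400              # bead events acquired in the same run
cd4_per_ul_beads = cd4_events * beads_per_ul / bead_events

# 2) Flow rate calibration: divide events by the volume actually analysed.
flow_rate_ul_min = 60.0         # calibrated instrument flow rate (uL/min)
acquisition_min = 0.14          # acquisition time (min)
cd4_per_ul_flow = cd4_events / (flow_rate_ul_min * acquisition_min)

print(cd4_per_ul_beads, cd4_per_ul_flow)   # -> 500.0 and 500.0
```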

  2. Surveying implicit solvent models for estimating small molecule absolute hydration free energies

    PubMed Central

    Knight, Jennifer L.

    2011-01-01

    Implicit solvent models are powerful tools in accounting for the aqueous environment at a fraction of the computational expense of explicit solvent representations. Here, we compare the ability of common implicit solvent models (TC, OBC, OBC2, GBMV, GBMV2, GBSW, GBSW/MS, GBSW/MS2 and FACTS) to reproduce experimental absolute hydration free energies for a series of 499 small neutral molecules that are modeled using AMBER/GAFF parameters and AM1-BCC charges. Given optimized surface tension coefficients for scaling the surface area term in the nonpolar contribution, most implicit solvent models demonstrate reasonable agreement with extensive explicit solvent simulations (average difference 1.0-1.7 kcal/mol and R^2 = 0.81-0.91) and with experimental hydration free energies (average unsigned errors = 1.1-1.4 kcal/mol and R^2 = 0.66-0.81). Chemical classes of compounds are identified that need further optimization of their ligand force field parameters and others that require improvement in the physical parameters of the implicit solvent models themselves. More sophisticated nonpolar models are also likely necessary to more effectively represent the underlying physics of solvation and take the quality of hydration free energies estimated from implicit solvent models to the next level. PMID:21735452

  3. Photometer calibration error using extended standard sources

    NASA Technical Reports Server (NTRS)

    Torr, M. R.; Hays, P. B.; Kennedy, B. C.; Torr, D. G.

    1976-01-01

    As part of a project to compare measurements of the night airglow made by the visible airglow experiment on the Atmospheric Explorer-C satellite, the standard light sources of several airglow observatories were compared with the standard source used in the absolute calibration of the satellite photometer. In the course of the comparison, it has been found that serious calibration errors (up to a factor of two) can arise when a calibration source with a reflecting surface is placed close to an interference filter. For reliable absolute calibration, the source should be located at a distance of at least five filter radii from the interference filter.

  4. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  5. Americans' Average Radiation Exposure

    SciTech Connect

    NA

    2000-08-11

    We live with radiation every day. We receive radiation exposure from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  6. Total-pressure averaging in pulsating flows.

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered with the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.

  7. Geologic analysis of averaged magnetic satellite anomalies

    NASA Technical Reports Server (NTRS)

    Goyal, H. K.; Vonfrese, R. R. B.; Ridgway, J. R.; Hinze, W. J.

    1985-01-01

    To investigate the relative advantages and limitations for quantitative geologic analysis of magnetic satellite scalar anomalies derived from arithmetic averaging of orbital profiles within equal-angle or equal-area parallelograms, the anomaly averaging process was simulated by orbital profiles computed from spherical-earth crustal magnetic anomaly modeling experiments using Gauss-Legendre quadrature integration. The results indicate that averaging can provide reasonable values at satellite elevations, where contributing error factors within a given parallelogram include the elevation distribution of the data, and orbital noise and geomagnetic field attributes. Various inversion schemes including the use of equivalent point dipoles are also investigated as an alternative to arithmetic averaging. Although inversion can provide improved spherical grid anomaly estimates, these procedures are problematic in practice, where computer scaling difficulties frequently arise due to a combination of factors including large source-to-observation distances (~400 km), high geographic latitudes, and low geomagnetic field inclinations.

  8. Total pressure averaging in pulsating flows

    NASA Technical Reports Server (NTRS)

    Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.

    1972-01-01

    A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of period that pressure is a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance which further increased the indicated pressure was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach number up to near 1, and frequencies up to 3 kHz.

  9. Absolute Instability in Coupled-Cavity TWTs

    NASA Astrophysics Data System (ADS)

    Hung, D. M. H.; Rittersdorf, I. M.; Zhang, Peng; Lau, Y. Y.; Simon, D. H.; Gilgenbach, R. M.; Chernin, D.; Antonsen, T. M., Jr.

    2014-10-01

    This paper will present results of our analysis of absolute instability in a coupled-cavity traveling wave tube (TWT). The structure modes at the lower and upper band edges are each approximated by a hyperbola in the (omega, k) plane. When the Briggs-Bers criterion is applied, a threshold current for the onset of absolute instability is observed at the upper band edge, but not at the lower band edge. The nonexistence of absolute instability at the lower band edge is mathematically similar to the nonexistence of absolute instability that we recently demonstrated for a dielectric TWT. The existence of absolute instability at the upper band edge is mathematically similar to the existence of absolute instability in a gyrotron traveling wave amplifier. These interesting observations will be discussed, and the practical implications will be explored. This work was supported by AFOSR, ONR, and L-3 Communications Electron Devices.

  10. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    NASA Technical Reports Server (NTRS)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are greater than ±0.6 hPa in the free troposphere, with nearly a third greater than ±1.0 hPa at 26 km, where the 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within ±5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (about 30 km) can approach greater than ±10 percent (more than 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when

  11. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI/Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.6 hPa in the free troposphere, with nearly a third > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~ 5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (~ 30 km), can approach greater than ±10% (> 25% of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is

  12. Testing and evaluation of thermal cameras for absolute temperature measurement

    NASA Astrophysics Data System (ADS)

    Chrzanowski, Krzysztof; Fischer, Joachim; Matyszkiel, Robert

    2000-09-01

    The accuracy of temperature measurement is the most important criterion for the evaluation of thermal cameras used in applications requiring absolute temperature measurement. All the main international metrological organizations currently propose a parameter called uncertainty as a measure of measurement accuracy. We propose a set of parameters for the characterization of thermal measurement cameras. It is shown that if these parameters are known, then it is possible to determine the uncertainty of temperature measurement due only to the internal errors of these cameras. Values of this uncertainty can be used as an objective criterion for comparing different thermal measurement cameras.

  13. Interobserver error involved in independent attempts to measure cusp base areas of Pan M1s.

    PubMed

    Bailey, Shara E; Pilbrow, Varsha C; Wood, Bernard A

    2004-10-01

    Cusp base areas measured from digitized images increase the amount of detailed quantitative information one can collect from post-canine crown morphology. Although this method is gaining wide usage for taxonomic analyses of extant and extinct hominoids, the techniques for digitizing images and taking measurements differ between researchers. The aim of this study was to investigate interobserver error in order to help assess the reliability of cusp base area measurement within extant and extinct hominoid taxa. Two of the authors measured individual cusp base areas and total cusp base area of 23 maxillary first molars (M1) of Pan. From these, relative cusp base areas were calculated. No statistically significant interobserver differences were found for either absolute or relative cusp base areas. On average the hypocone and paracone showed the least interobserver error (< 1%) whereas the protocone and metacone showed the most (2.6-4.5%). We suggest that the larger measurement error in the metacone/protocone is due primarily to weakly defined fissure patterns and/or the presence of accessory occlusal features. Overall, levels of interobserver error are similar to those found for intraobserver error. The results of our study suggest that if certain prescribed standards are employed, then cusp and crown base areas measured by different individuals can be pooled into a single database. PMID:15447691

  14. Absolute negative mobility of interacting Brownian particles

    NASA Astrophysics Data System (ADS)

    Ou, Ya-li; Hu, Cai-tian; Wu, Jian-chun; Ai, Bao-quan

    2015-12-01

    Transport of interacting Brownian particles in a periodic potential is investigated in the presence of an ac force and a dc force. From Brownian dynamics simulations, we find that both the interaction between particles and the thermal fluctuations play key roles in absolute negative mobility (the particle noisily moves backwards against a small constant bias). In the absence of the interaction, there is only one region where absolute negative mobility occurs. In the presence of the interaction, absolute negative mobility may appear in multiple regions. A weak interaction can be helpful for absolute negative mobility, while a strong interaction has a destructive impact on it.

  15. Direct comparisons between absolute and relative geomagnetic paleointensities: Absolute calibration of a relative paleointensity stack

    NASA Astrophysics Data System (ADS)

    Mochizuki, N.; Yamamoto, Y.; Hatakeyama, T.; Shibuya, H.

    2013-12-01

    Absolute geomagnetic paleointensities (APIs) have been estimated from igneous rocks, while relative paleomagnetic intensities (RPIs) have been reported from sediment cores. These two datasets have been treated separately, as correlations between APIs and RPIs are difficult on account of age uncertainties. High-resolution RPI stacks have been constructed from globally distributed sediment cores with high sedimentation rates. Previous studies often assumed that the RPI stacks have a linear relationship with geomagnetic axial dipole moments, and calibrated the RPI values to API values. However, the assumption of a linear relationship between APIs and RPIs has not been evaluated. Also, a quantitative calibration method for the RPI is lacking. We present a procedure for directly comparing API and RPI stacks, thus allowing reliable calibrations of RPIs. Direct comparisons between APIs and RPIs were conducted with virtually no associated age errors using both tephrochronologic correlations and RPI minima. Using the stratigraphic positions of tephra layers in oxygen isotope stratigraphic records, we directly compared the RPIs and APIs reported from welded tuffs contemporaneously extruded with the tephra layers. In addition, RPI minima during geomagnetic reversals and excursions were compared with APIs corresponding to the reversals and excursions. The comparison of APIs and RPIs at these exact points allowed a reliable calibration of the RPI values. We applied this direct comparison procedure to the global RPI stack PISO-1500. For six independent calibration points, virtual axial dipole moments (VADMs) from the corresponding APIs and RPIs of the PISO-1500 stack showed a near-linear relationship. On the basis of the linear relationship, RPIs of the stack were successfully calibrated to the VADMs. The direct comparison procedure provides an absolute calibration method that will contribute to the recovery of temporal variations and distributions of geomagnetic axial dipole
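
    Given tie points at which both an absolute VADM and a stack RPI value are available, the final calibration reduces to a simple fit; the sketch below uses illustrative numbers, not the PISO-1500 values:

```python
import numpy as np

# Tie points: tephra layers and RPI minima where both quantities are known.
rpi_tie = np.array([0.25, 0.55, 0.80, 1.00, 1.30, 1.55])   # stack RPI (relative)
vadm_tie = np.array([2.1, 4.3, 6.0, 7.6, 9.9, 11.8])       # VADM, 10^22 A m^2

slope, intercept = np.polyfit(rpi_tie, vadm_tie, 1)        # near-linear relation
print(f"VADM ~ {slope:.2f} * RPI + {intercept:.2f} (x10^22 A m^2)")

# Apply the calibration to the whole stack:
rpi_stack = np.linspace(0.2, 1.6, 8)                       # any stack values
vadm_stack = slope * rpi_stack + intercept                 # calibrated to VADM
print(vadm_stack)
```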

  16. Impact of Measurement Error on Testing Genetic Association with Quantitative Traits

    PubMed Central

    Liao, Jiemin; Li, Xiang; Wong, Tien-Yin; Wang, Jie Jin; Khor, Chiea Chuen; Tai, E. Shyong; Aung, Tin; Teo, Yik-Ying; Cheng, Ching-Yu

    2014-01-01

Measurement error of a phenotypic trait reduces the power to detect genetic associations. We examined the impact of sample size, allele frequency and effect size in the presence of measurement error for quantitative traits. The statistical power to detect genetic association with phenotype mean and variability was investigated analytically. The non-centrality parameter for a non-central F distribution was derived and verified using computer simulations. We obtained equivalent formulas for the cost of phenotype measurement error. Effects of differences in measurements were examined in a genome-wide association study (GWAS) of two grading scales for cataract and a replication study of genetic variants influencing blood pressure. The mean absolute difference between the analytic power and simulation power for comparison of phenotypic means and variances was less than 0.005, and the absolute difference did not exceed 0.02. To maintain the same power, a one standard deviation (SD) in measurement error of a standard normal distributed trait required a one-fold increase in sample size for comparison of means, and a three-fold increase in sample size for comparison of variances. GWAS results revealed almost no overlap in the significant SNPs (p < 10⁻⁵) for the two cataract grading scales while replication results in genetic variants of blood pressure displayed no significant differences between averaged blood pressure measurements and single blood pressure measurements. We have developed a framework for researchers to quantify power in the presence of measurement error, which will be applicable to studies of phenotypes in which the measurement is highly variable. PMID:24475218
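
    A sketch of the sample-size arithmetic implied by the abstract (our illustration under a classical additive error model, not code from the study; the function name and numbers are hypothetical):

    ```python
    def inflated_n_for_means(n0, sigma_p=1.0, sigma_e=1.0):
        """Sample size needed to preserve power for a comparison of means when a
        trait with SD sigma_p is observed with measurement error SD sigma_e
        (classical additive model: observed variance = sigma_p**2 + sigma_e**2)."""
        return n0 * (sigma_p**2 + sigma_e**2) / sigma_p**2

    # One SD of measurement error on a standard normal trait doubles the
    # required sample size, consistent with the abstract's statement.
    print(inflated_n_for_means(1000))  # -> 2000.0
    ```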

  17. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  18. Updated Absolute Flux Calibration of the COS FUV Modes

    NASA Astrophysics Data System (ADS)

    Massa, D.; Ely, J.; Osten, R.; Penton, S.; Aloisi, A.; Bostroem, A.; Roman-Duval, J.; Proffitt, C.

    2014-03-01

We present newly derived point source absolute flux calibrations for the COS FUV modes at both the original and second lifetime positions. The analysis includes observations through the Primary Science Aperture (PSA) of the standard stars WD0308-565, GD71, WD1057+729 and WD0947+857 obtained as part of two calibration programs. Data were obtained for all of the gratings at all of the original CENWAVE settings at both the original and second lifetime positions, and for the G130M CENWAVE = 1222 at the second lifetime position. Data were also obtained with the FUVB segment for the G130M CENWAVE = 1055 and 1096 settings at the second lifetime position. We also present the derivation of L-flats that were used in processing the data and show that the internal consistency of the primary standards is 1%. The accuracy of the absolute flux calibrations over the UV is estimated to be 1-2% for the medium resolution gratings, and 2-3% over most of the wavelength range of the G140L grating, although the uncertainty can be as large as 5% or more at some G140L wavelengths. We note that these errors are all relative to the optical flux near the V band, and small additional errors may be present due to inaccuracies in the V band calibration. In addition, these error estimates are for the time at which the flux calibration data were obtained; the accuracy of the flux calibration at other times can be affected by errors in the time dependent sensitivity (TDS) correction.

  19. [Error factors in spirometry].

    PubMed

    Quadrelli, S A; Montiel, G C; Roncoroni, A J

    1994-01-01

Spirometry is the method most frequently used to estimate pulmonary function in the clinical laboratory. Complying with technical requisites is important both to approximate the real values sought and to interpret the results adequately. Recommendations are made to: (1) establish quality control; (2) define abnormality; (3) classify the change from normal and its degree; (4) define reversibility. Regarding quality control, several criteria are pointed out, such as end of test, back-extrapolation and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised, and inspection of the graphical records of the test is mandatory. The limitations of the common use of 80% of predicted values to establish abnormality are stressed, and the reasons for employing 95% confidence limits are detailed. It is important to select the reference-value equation carefully (in view of the differences in predicted values), and it is advisable to validate the selection against normal values from the local population. Regarding the definition of a defect as restrictive or obstructive, the limitations of vital capacity (VC) for establishing restriction when obstruction is also present are defined, as are the limitations of the maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of the forced expiratory volume in 1 s (FEV1) and the difficulties with other indicators (FVC, FMF 25-75, FEV1/FVC) in estimating reversibility after bronchodilators are evaluated, and the value of the different methods used to define reversibility (% of change from the initial value, absolute change, or % of predicted) is commented on. To be valuable, clinical spirometric studies should be performed with the same technical rigour as other, more complex studies. PMID:7990690

  20. Time-average-based Methods for Multi-angular Scale Analysis of Cosmic-Ray Data

    NASA Astrophysics Data System (ADS)

    Iuppa, R.; Di Sciascio, G.

    2013-04-01

Over the past decade, a number of experiments have dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This has induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering contributions from wider structures. A solution commonly envisaged is based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods recently led to important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that an excess (or deficit) is observed as less intense than it is in reality, and that fake deficit zones are rendered around true excesses, because of the absolute lack of a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.

  1. Inequalities, Absolute Value, and Logical Connectives.

    ERIC Educational Resources Information Center

    Parish, Charles R.

    1992-01-01

    Presents an approach to the concept of absolute value that alleviates students' problems with the traditional definition and the use of logical connectives in solving related problems. Uses a model that maps numbers from a horizontal number line to a vertical ray originating from the origin. Provides examples solving absolute value equations and…

  2. Absolute optical metrology : nanometers to kilometers

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.

    2005-01-01

We provide an overview of the developments in the field of high-accuracy absolute optical metrology with emphasis on space-based applications. Specific work on the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor is described along with novel applications of the sensor.

  3. Monolithically integrated absolute frequency comb laser system

    DOEpatents

    Wanke, Michael C.

    2016-07-12

    Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.

  4. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
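
    As a concrete sketch of the statistic being advocated (the denominator convention here is our assumption; the paper should be consulted for the definitive formula), a mean-absolute-deviation effect size can be computed as the difference in group means scaled by the MAD:

    ```python
    import numpy as np

    def mad(x):
        """Mean absolute deviation about the mean."""
        x = np.asarray(x, dtype=float)
        return np.mean(np.abs(x - x.mean()))

    def mad_effect_size(treatment, control):
        """Difference in means divided by the control group's MAD
        (denominator choice is an assumption of this sketch)."""
        return (np.mean(treatment) - np.mean(control)) / mad(control)

    rng = np.random.default_rng(1)
    a = rng.normal(0.5, 1.0, 500)  # treatment scores (synthetic)
    b = rng.normal(0.0, 1.0, 500)  # control scores (synthetic)
    print(round(mad_effect_size(a, b), 2))
    ```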

  5. Investigating Absolute Value: A Real World Application

    ERIC Educational Resources Information Center

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…

  6. Absolute Income, Relative Income, and Happiness

    ERIC Educational Resources Information Center

    Ball, Richard; Chernova, Kateryna

    2008-01-01

    This paper uses data from the World Values Survey to investigate how an individual's self-reported happiness is related to (i) the level of her income in absolute terms, and (ii) the level of her income relative to other people in her country. The main findings are that (i) both absolute and relative income are positively and significantly…

  7. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    ERIC Educational Resources Information Center

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  8. Set standard deviation, repeatability and offset of absolute gravimeter A10-008

    USGS Publications Warehouse

    Schmerge, D.; Francis, O.

    2006-01-01

The set standard deviation, repeatability and offset of absolute gravimeter A10-008 were assessed at the Walferdange Underground Laboratory for Geodynamics (WULG) in Luxembourg. Analysis of the data indicates that the instrument performed within the specifications of the manufacturer. For A10-008, the average set standard deviation was (1.6 ± 0.6) µGal (1 Gal ≡ 1 cm s⁻²), the average repeatability was (2.9 ± 1.5) µGal, and the average offset compared to absolute gravimeter FG5-216 was (3.2 ± 3.5) µGal. © 2006 BIPM and IOP Publishing Ltd.

  9. Absolute length measurement using manually decided stereo correspondence for endoscopy

    NASA Astrophysics Data System (ADS)

    Sasaki, M.; Koishi, T.; Nakaguchi, T.; Tsumura, N.; Miyake, Y.

    2009-02-01

In recent years, various kinds of endoscopes have been developed and widely used for endoscopic biopsy, endoscopic surgery and endoscopy. The size of an inflammatory part is important for determining the method of medical treatment. However, it is not easy to measure the absolute size of an inflammatory part, such as an ulcer, cancer or polyp, from the endoscopic image; a way of measuring the size of such parts during endoscopy is therefore required. In this paper, we propose a new method to measure the absolute length in a straight line between two arbitrary points, based on photogrammetry, using an endoscope with a magnetic tracking sensor that gives the camera position and angle. In this method, the stereo-corresponding points between two endoscopic images are determined by the endoscopist without any apparatus of projection and calculation to find the stereo correspondences; the absolute length can then be calculated on the basis of photogrammetry. An evaluation experiment using a checkerboard showed that the errors of the measurements are less than 2% of the target length when the baseline is sufficiently long.
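
    A minimal sketch of the geometry involved (our illustration; the paper's photogrammetric formulation may differ in detail): each manually matched point is triangulated from the two tracked camera poses, and the absolute length is the Euclidean distance between the two triangulated points. The helper below is hypothetical:

    ```python
    import numpy as np

    def triangulate_midpoint(c1, d1, c2, d2):
        """Triangulate a point as the midpoint of the common perpendicular of
        two rays c1 + t1*d1 and c2 + t2*d2 (camera centers c, unit ray
        directions d), solving for t1, t2 by least squares."""
        A = np.stack([d1, -d2], axis=1)
        t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
        return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

    # Absolute length between two tissue points, each triangulated from its
    # own pair of rays (rays_P and rays_Q are hypothetical 4-tuples of arrays):
    # length = np.linalg.norm(triangulate_midpoint(*rays_P) - triangulate_midpoint(*rays_Q))
    ```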

  10. Absolute instability of the Gaussian wake profile

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Aggarwal, Arun K.

    1987-01-01

    Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.

  11. Absolute Hydration Free Energies of Blocked Amino Acids: Implications for Protein Solvation and Stability

    PubMed Central

    König, Gerhard; Bruckner, Stefan; Boresch, Stefan

    2013-01-01

    Most proteins perform their function in aqueous solution. The interactions with water determine the stability of proteins and the desolvation costs of ligand binding or membrane insertion. However, because of experimental restrictions, absolute solvation free energies of proteins or amino acids are not available. Instead, solvation free energies are estimated based on side chain analog data. This approach implies that the contributions to free energy differences are additive, and it has often been employed for estimating folding or binding free energies. However, it is not clear how much the additivity assumption affects the reliability of the resulting data. Here, we use molecular dynamics–based free energy simulations to calculate absolute hydration free energies for 15 N-acetyl-methylamide amino acids with neutral side chains. By comparing our results with solvation free energies for side chain analogs, we demonstrate that estimates of solvation free energies of full amino acids based on group-additive methods are systematically too negative and completely overestimate the hydrophobicity of glycine. The largest deviation of additive protocols using side chain analog data was 6.7 kcal/mol; on average, the deviation was 4 kcal/mol. We briefly discuss a simple way to alleviate the errors incurred by using side chain analog data and point out the implications of our findings for the field of biophysics and implicit solvent models. To support our results and conclusions, we calculate relative protein stabilities for selected point mutations, yielding a root-mean-square deviation from experimental results of 0.8 kcal/mol. PMID:23442867

  12. Picoliter Well Array Chip-Based Digital Recombinase Polymerase Amplification for Absolute Quantification of Nucleic Acids

    PubMed Central

    Li, Zhao; Liu, Yong; Wei, Qingquan; Liu, Yuanjie; Liu, Wenwen; Zhang, Xuelian; Yu, Yude

    2016-01-01

Absolute, precise quantification methods expand the scope of nucleic acids research and have many practical applications. Digital polymerase chain reaction (dPCR) is a powerful method for nucleic acid detection and absolute quantification. However, it requires thermal cycling and accurate temperature control, which are difficult in resource-limited conditions. Accordingly, isothermal methods, such as recombinase polymerase amplification (RPA), are more attractive. We developed a picoliter well array (PWA) chip with 27,000 consistently sized picoliter reactions (314 pL) for isothermal DNA quantification using digital RPA (dRPA) at 39°C. Sample loading using a scraping liquid blade was simple, fast, and required small reagent volumes (i.e., <20 μL). Passivating the chip surface using a methoxy-PEG-silane agent effectively eliminated cross-contamination during dRPA. Our creative optical design enabled wide-field fluorescence imaging in situ and both end-point and real-time analyses of picoliter wells in a 6-cm² area. It was not necessary to use scan shooting and stitch serial small images together. Using this method, we quantified serial dilutions of a Listeria monocytogenes gDNA stock solution from 9 × 10⁻¹ to 4 × 10⁻³ copies per well with an average error of less than 11% (N = 15). Overall dRPA-on-chip processing required less than 30 min, which was a 4-fold decrease compared to dPCR, requiring approximately 2 h. dRPA on the PWA chip provides a simple and highly sensitive method to quantify nucleic acids without thermal cycling or precise micropump/microvalve control. It has applications in fast field analysis and critical clinical diagnostics under resource-limited settings. PMID:27074005

  13. Picoliter Well Array Chip-Based Digital Recombinase Polymerase Amplification for Absolute Quantification of Nucleic Acids.

    PubMed

    Li, Zhao; Liu, Yong; Wei, Qingquan; Liu, Yuanjie; Liu, Wenwen; Zhang, Xuelian; Yu, Yude

    2016-01-01

Absolute, precise quantification methods expand the scope of nucleic acids research and have many practical applications. Digital polymerase chain reaction (dPCR) is a powerful method for nucleic acid detection and absolute quantification. However, it requires thermal cycling and accurate temperature control, which are difficult in resource-limited conditions. Accordingly, isothermal methods, such as recombinase polymerase amplification (RPA), are more attractive. We developed a picoliter well array (PWA) chip with 27,000 consistently sized picoliter reactions (314 pL) for isothermal DNA quantification using digital RPA (dRPA) at 39°C. Sample loading using a scraping liquid blade was simple, fast, and required small reagent volumes (i.e., <20 μL). Passivating the chip surface using a methoxy-PEG-silane agent effectively eliminated cross-contamination during dRPA. Our creative optical design enabled wide-field fluorescence imaging in situ and both end-point and real-time analyses of picoliter wells in a 6-cm² area. It was not necessary to use scan shooting and stitch serial small images together. Using this method, we quantified serial dilutions of a Listeria monocytogenes gDNA stock solution from 9 × 10⁻¹ to 4 × 10⁻³ copies per well with an average error of less than 11% (N = 15). Overall dRPA-on-chip processing required less than 30 min, which was a 4-fold decrease compared to dPCR, requiring approximately 2 h. dRPA on the PWA chip provides a simple and highly sensitive method to quantify nucleic acids without thermal cycling or precise micropump/microvalve control. It has applications in fast field analysis and critical clinical diagnostics under resource-limited settings. PMID:27074005

  14. Measurement error analysis of taxi meter

    NASA Astrophysics Data System (ADS)

    He, Hong; Li, Dan; Li, Hang; Zhang, Da-Jian; Hou, Ming-Feng; Zhang, Shi-pu

    2011-12-01

The error testing of a taximeter covers two aspects: (1) testing the timing error of the taximeter and (2) testing the distance (usage) error of the machine. The paper first gives the working principle of the meter and the principle of the error-verification device. Based on JJG 517-2009, "Taximeter Verification Regulation", the paper focuses on analyzing the machine error and test error of the taxi meter, and the detection methods for time error and distance error are discussed as well. Under identical conditions, standard uncertainty components (Class A) are evaluated, while under different conditions, standard uncertainty components (Class B) are also evaluated and measured repeatedly. Comparison and analysis of the results show that the meter complies with JJG 517-2009, thereby improving accuracy and efficiency considerably. In practice, the meter not only compensates for the lack of accuracy, but also ensures that the transaction between drivers and passengers is fair, enriching the value of the taxi as a means of transportation.

  15. Absolute magnitudes and phase coefficients of trans-Neptunian objects

    NASA Astrophysics Data System (ADS)

    Alvarez-Candal, A.; Pinilla-Alonso, N.; Ortiz, J. L.; Duffard, R.; Morales, N.; Santos-Sanz, P.; Thirouin, A.; Silva, J. S.

    2016-02-01

Context. Accurate measurements of diameters of trans-Neptunian objects (TNOs) are extremely difficult to obtain. Thermal modeling can provide good results, but accurate absolute magnitudes are needed to constrain the thermal models and derive diameters and geometric albedos. The absolute magnitude, HV, is defined as the magnitude of the object reduced to unit helio- and geocentric distances and a zero solar phase angle and is determined using phase curves. Phase coefficients can also be obtained from phase curves. These are related to surface properties, but only a few are known. Aims: Our objective is to measure accurate V-band absolute magnitudes and phase coefficients for a sample of TNOs, many of which have been observed and modeled within the program "TNOs are cool", which is one of the Herschel Space Observatory key projects. Methods: We observed 56 objects using the V and R filters. These data, along with those available in the literature, were used to obtain phase curves and measure V-band absolute magnitudes and phase coefficients by assuming a linear trend of the phase curves and considering a magnitude variability that is due to the rotational light-curve. Results: We obtained 237 new magnitudes for the 56 objects, six of which were without previously reported measurements. Including the data from the literature, we report a total of 110 absolute magnitudes with their respective phase coefficients. The average value of HV is 6.39, bracketed by a minimum of 14.60 and a maximum of -1.12. For the phase coefficients we report a median value of 0.10 mag per degree and a very large dispersion, ranging from -0.88 up to 1.35 mag per degree.
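
    The reduction and fit described in the Methods follow from the definition of HV; a short sketch under the stated linear phase model, with invented observation values:

    ```python
    import numpy as np

    # Reduce apparent V magnitudes to unit helio-/geocentric distances, then
    # fit V(1,1,alpha) = H_V + beta*alpha (linear phase curve, as assumed in
    # the abstract). All observation values below are invented.
    alpha = np.array([0.3, 0.6, 1.0, 1.4])          # solar phase angle, degrees
    V     = np.array([20.10, 20.15, 20.22, 20.30])  # apparent V magnitudes
    r     = np.array([42.1, 42.1, 42.2, 42.2])      # heliocentric distances, au
    delta = np.array([41.2, 41.3, 41.5, 41.7])      # geocentric distances, au

    V_red = V - 5.0 * np.log10(r * delta)    # reduced magnitude V(1,1,alpha)
    beta, H_V = np.polyfit(alpha, V_red, 1)  # slope = phase coefficient
    print(f"H_V = {H_V:.2f} mag, beta = {beta:.2f} mag/deg")
    ```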

  16. On the combination procedure of correlated errors

    NASA Astrophysics Data System (ADS)

    Erler, Jens

    2015-09-01

When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematic or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors.
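
    A minimal sketch of the kind of decomposition discussed, for the simplest case only (independent Gaussian errors, no correlations; the paper's general formula goes further): an inverse-variance weighted average whose statistical and systematic components are propagated through the same weights.

    ```python
    import numpy as np

    def combine(x, stat, syst):
        """Weighted average of measurements x with statistical errors `stat` and
        independent systematic errors `syst`; returns the combined mean and its
        statistical and systematic components (uncorrelated Gaussian case)."""
        x, s, u = (np.asarray(a, dtype=float) for a in (x, stat, syst))
        w = 1.0 / (s**2 + u**2)   # inverse-variance weights
        w /= w.sum()
        mean = np.sum(w * x)
        stat_c = np.sqrt(np.sum((w * s)**2))
        syst_c = np.sqrt(np.sum((w * u)**2))
        return mean, stat_c, syst_c

    print(combine([10.1, 9.7], stat=[0.2, 0.3], syst=[0.1, 0.2]))
    ```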

  17. Measurement of absolute optical thickness of mask glass by wavelength-tuning Fourier analysis.

    PubMed

Kim, Yangjin; Hibino, Kenichi; Sugita, Naohiko; Mitsuishi, Mamoru

    2015-07-01

    Optical thickness is a fundamental characteristic of an optical component. A measurement method combining discrete Fourier-transform (DFT) analysis and a phase-shifting technique gives an appropriate value for the absolute optical thickness of a transparent plate. However, there is a systematic error caused by the nonlinearity of the phase-shifting technique. In this research the absolute optical-thickness distribution of mask blank glass was measured using DFT and wavelength-tuning Fizeau interferometry without using sensitive phase-shifting techniques. The error occurring during the DFT analysis was compensated for by using the unwrapping correlation. The experimental results indicated that the absolute optical thickness of mask glass was measured with an accuracy of 5 nm. PMID:26125394

  18. Absolute Radiometric Calibration of KOMPSAT-3A

    NASA Astrophysics Data System (ADS)

    Ahn, H. Y.; Shin, D. Y.; Kim, J. S.; Seo, D. C.; Choi, C. U.

    2016-06-01

This paper presents a vicarious radiometric calibration of the Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) performed by the Korea Aerospace Research Institute (KARI) and the Pukyong National University Remote Sensing Group (PKNU RSG) in 2015. The primary stages of this study are summarized as follows: (1) A field campaign to determine radiometric calibration target fields was undertaken in Mongolia and South Korea. Surface reflectance data obtained in the campaign were input to a radiative transfer code that predicted at-sensor radiance. Through this process, equations and parameters were derived for the KOMPSAT-3A sensor to enable the conversion of calibrated DN to physical units, such as at-sensor radiance or TOA reflectance. (2) To validate the absolute calibration coefficients for the KOMPSAT-3A sensor, we performed a radiometric validation with a comparison of KOMPSAT-3A and Landsat-8 TOA reflectance using one of the six PICS (Libya 4). Correlations between top-of-atmosphere (TOA) radiances and the spectral band responses of the KOMPSAT-3A sensors at the Zuunmod, Mongolia and Goheung, South Korea sites were significant for multispectral bands. The average difference in TOA reflectance between the KOMPSAT-3A and Landsat-8 images over the Libya 4 site in the red-green-blue (RGB) region was under 3%, whereas in the NIR band the TOA reflectance of KOMPSAT-3A was lower than that of Landsat-8 due to the difference in the band passes of the two sensors. The KOMPSAT-3A sensor includes a band pass near 940 nm that can be strongly absorbed by water vapor and therefore displayed low reflectance. To overcome this, we need to undertake a detailed analysis using rescale methods, such as the spectral bandwidth adjustment factor.

  19. Frequency-Tracking-Error Detector

    NASA Technical Reports Server (NTRS)

    Randall, Richard L.

    1990-01-01

Frequency-tracking-error detector compares average period of output signal from band-pass tracking filter with average period of signal of frequency 100·f₀ that controls center frequency f₀ of tracking filter. Measures difference between f₀ and frequency of one of periodic components in output of bearing sensor. Bearing sensor is accelerometer, strain gauge, or deflectometer mounted on bearing housing. Detector is part of system of electronic equipment used to measure vibrations in bearings in rotating machinery.

  20. On the absolute calibration of SO2 cameras

    NASA Astrophysics Data System (ADS)

    Lübcke, P.; Bobrowski, N.; Illing, S.; Kern, C.; Alvarez Nieves, J. M.; Vogel, L.; Zielcke, J.; Delgado Granados, H.; Platt, U.

    2012-09-01

results are compared with measurements from an IDOAS to verify the calibration curve over the spatial extent of the image. Our results show that calibration cells can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these calibration errors, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. These effects can lead to an even more significant overestimation or, depending on the measurement conditions, an underestimation of the true CD. Previous investigations found that the possible errors can be more than an order of magnitude. However, the spectral information from the DOAS measurements makes it possible to correct for these radiative transfer effects. The measurements presented in this work were taken at Popocatépetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 kg s⁻¹ and 14.34 kg s⁻¹ were observed.

  1. Absolute flatness testing of skip-flat interferometry by matrix analysis in polar coordinates.

    PubMed

    Han, Zhi-Gang; Yin, Lu; Chen, Lei; Zhu, Ri-Hong

    2016-03-20

    A new method utilizing matrix analysis in polar coordinates has been presented for absolute testing of skip-flat interferometry. The retrieval of the absolute profile mainly includes three steps: (1) transform the wavefront maps of the two cavity measurements into data in polar coordinates; (2) retrieve the profile of the reflective flat in polar coordinates by matrix analysis; and (3) transform the profile of the reflective flat back into data in Cartesian coordinates and retrieve the profile of the sample. Simulation of synthetic surface data has been provided, showing the capability of the approach to achieve an accuracy of the order of 0.01 nm RMS. The absolute profile can be retrieved by a set of closed mathematical formulas without polynomial fitting of wavefront maps or the iterative evaluation of an error function, making the new method more efficient for absolute testing. PMID:27140578

  2. Absolute optical instruments without spherical symmetry

    NASA Astrophysics Data System (ADS)

    Tyc, Tomáš; Dao, H. L.; Danner, Aaron J.

    2015-11-01

    Until now, the known set of absolute optical instruments has been limited to those containing high levels of symmetry. Here, we demonstrate a method of mathematically constructing refractive index profiles that result in asymmetric absolute optical instruments. The method is based on the analogy between geometrical optics and classical mechanics and employs Lagrangians that separate in Cartesian coordinates. In addition, our method can be used to construct the index profiles of most previously known absolute optical instruments, as well as infinitely many different ones.

  3. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation for the upper bound of the absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas it increased with an increase in environmental integrated radiant flux density. A change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  4. Propagation of radiosonde pressure sensor errors to ozonesonde measurements

    NASA Astrophysics Data System (ADS)

    Stauffer, R. M.; Morris, G. A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.

    2013-08-01

Several previous studies highlight pressure (or, equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this, a total of 501 radiosonde/ozonesonde launches from the Southern Hemisphere subtropics to northern mid-latitudes are considered, with launches between 2006 and 2013 from both historical and campaign-based intensive stations. Three types of electrochemical concentration cell (ECC) ozonesondes from two manufacturers (Science Pump Corporation, SPC; ENSCI/Droplet Measurement Technologies, DMT) and five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S; and Vaisala: RS80 and RS92) are analyzed to determine the magnitude of the pressure offset and the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are > ±0.7 hPa in the free troposphere, with nearly a quarter > ±1.0 hPa at 26 km, where the 1.0 hPa error represents ~5% of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (98% of launches lie within ±5% O3MR error at 20 km). Ozone mixing ratio errors in the 7-15 hPa layer (29-32 km), a region critical for detection of long-term O3 trends, can approach greater than ±10% (>25% of launches that reach 30 km exceed this threshold). Comparisons of total column O3 yield average differences of +1.6 DU (-1.1 to +4.9 DU, 10th to 90th percentiles) when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of +0.1 DU (-1.1 to +2.2 DU) when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are clearly distinguishable
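
    The sensitivity quoted for the 7-15 hPa layer follows from the mixing ratio being the measured O3 partial pressure divided by the radiosonde pressure; a toy calculation (our sketch, with an invented 1 hPa offset) makes the scaling explicit:

    ```python
    def o3mr_error_percent(p_hpa, dp_hpa):
        """Fractional O3 mixing-ratio error from a radiosonde pressure offset dp:
        since O3MR = pO3 / P, the relative error is P/(P + dP) - 1."""
        return 100.0 * (p_hpa / (p_hpa + dp_hpa) - 1.0)

    # A 1 hPa offset is negligible at 100 hPa but approaches -10% near 10 hPa.
    for p in (100.0, 26.0, 10.0):
        print(f"{p:5.0f} hPa: {o3mr_error_percent(p, 1.0):+5.1f}%")
    ```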

  5. On-orbit absolute radiance standard for the next generation of IR remote sensing instruments

    NASA Astrophysics Data System (ADS)

    Best, Fred A.; Adler, Douglas P.; Pettersen, Claire; Revercomb, Henry E.; Gero, P. Jonathan; Taylor, Joseph K.; Knuteson, Robert O.; Perepezko, John H.

    2012-11-01

The next generation of infrared remote sensing satellite instrumentation, including climate benchmark missions, will require better absolute measurement accuracy than is now available, and will most certainly rely on the emerging capability to fly SI-traceable standards that provide irrefutable absolute measurement accuracy. As an example, instrumentation designed to measure spectrally resolved infrared radiances with an absolute brightness temperature error of better than 0.1 K will require high-emissivity (>0.999) calibration blackbodies with emissivity uncertainty of better than 0.06%, and absolute temperature uncertainties of better than 0.045 K (k=3). Key elements of an On-Orbit Absolute Radiance Standard (OARS) meeting these stringent requirements have been demonstrated in the laboratory at the University of Wisconsin (UW) and refined under the NASA Instrument Incubator Program (IIP). This work recently culminated with an integrated subsystem that was used in the laboratory to demonstrate end-to-end radiometric accuracy verification for the UW Absolute Radiance Interferometer. Along with an overview of the design, we present details of a key underlying technology of the OARS that provides on-orbit absolute temperature calibration using the transient melt signatures of small quantities (<1 g) of reference materials (gallium, water, and mercury) embedded in the blackbody cavity. In addition we present performance data from the laboratory testing of the OARS.

  6. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  7. Clover: Compiler directed lightweight soft error resilience

    DOE PAGESBeta

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  8. Absolute magnitudes of trans-neptunian objects

    NASA Astrophysics Data System (ADS)

    Duffard, R.; Alvarez-candal, A.; Pinilla-Alonso, N.; Ortiz, J. L.; Morales, N.; Santos-Sanz, P.; Thirouin, A.

    2015-10-01

Accurate measurements of diameters of trans-Neptunian objects are extremely complicated to obtain. Radiometric techniques applied to thermal measurements can provide good results, but precise absolute magnitudes are needed to constrain diameters and albedos. Our objective is to measure accurate absolute magnitudes for a sample of trans-Neptunian objects, many of which have been observed, and modelled, by the "TNOs are cool" team, one of the Herschel Space Observatory key projects, granted ~400 hours of observing time. We observed 56 objects in the V and R filters, where possible. These data, along with data available in the literature, were used to obtain phase curves and to measure absolute magnitudes by assuming a linear trend of the phase curves and considering magnitude variability due to the rotational light-curve. In total we obtained 234 new magnitudes for the 56 objects, six of them with no previously reported measurements. Including the data from the literature, we report a total of 109 absolute magnitudes.

  9. A New Gimmick for Assigning Absolute Configuration.

    ERIC Educational Resources Information Center

    Ayorinde, F. O.

    1983-01-01

A five-step procedure is provided to help students make the assignment of absolute configuration less bothersome. Examples for both single (2-butanol) and multi-chiral carbon (3-chloro-2-butanol) molecules are included. (JN)

  10. Precision Absolute Beam Current Measurement of Low Power Electron Beam

    SciTech Connect

    Ali, M. M.; Bevins, M. E.; Degtiarenko, P.; Freyberger, A.; Krafft, G. A.

    2012-11-01

    Precise measurements of low power CW electron beam current for the Jefferson Lab Nuclear Physics program have been performed using a Tungsten calorimeter. This paper describes the rationale for the choice of the calorimeter technique, as well as the design and calibration of the device. The calorimeter is in use presently to provide a 1% absolute current measurement of CW electron beam with 50 to 500 nA of average beam current and 1-3 GeV beam energy. Results from these recent measurements will also be presented.
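
    The calorimetric principle can be sketched in a few lines (illustrative numbers, not the Jefferson Lab analysis): the beam deposits power E·I in the absorber, which appears as a temperature rise.

    ```python
    # Deposited beam power: P = m * c * dT/dt; with beam energy E in eV
    # (numerically the accelerating voltage), the average current is I = P / E.
    m_w  = 5.0     # tungsten absorber mass, kg (illustrative)
    c_w  = 134.0   # specific heat of tungsten, J/(kg*K)
    dTdt = 0.67    # measured temperature slope, K/s (illustrative)
    E_eV = 3.0e9   # beam energy, eV

    power = m_w * c_w * dTdt   # W
    i_avg = power / E_eV       # A
    print(f"I = {i_avg * 1e9:.0f} nA")  # ~150 nA, inside the 50-500 nA range
    ```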

  11. Robust Morphological Averages in Three Dimensions for Anatomical Atlas Construction

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Bloch, Isabelle; Schmitt, Francis

    2004-09-01

We present original methods for obtaining robust, anatomical shape-based averages of features of the human head anatomy from a normal population. Our goals are computerized atlas construction with representative anatomical features and morphometry for specific populations. A method for true-morphological averaging is proposed, consisting of a suitable blend of shape-related information for N objects to obtain a progressive average. It is made robust by penalizing, in a morphological sense, the contributions of features less similar to the current average. Morphological error and similarity, as well as penalization, are based on the same paradigm as the morphological averaging.

  12. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B. ); Quimby, D.C. )

    1990-01-01

The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  13. Inborn errors of metabolism

    MedlinePlus

Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  14. Landsat-7 ETM+ radiometric stability and absolute calibration

    USGS Publications Warehouse

    Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, P.

    2002-01-01

Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% (95% C.I.) change per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. Also, ETM+ stability is indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/(m² sr µm) soon after launch. This offset was corrected within the U.S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error, with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibration to other land remote sensing satellite systems.

  15. Correction due to the finite speed of light in absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Nagornyi, V. D.; Zanimonskiy, Y. M.; Zanimonskiy, Y. Y.

    2011-06-01

Equations (45) and (47) in our paper [1] in this issue have an incorrect sign and should read
$$\tilde T_i = T_i + \frac{b \mp S_i}{c},$$
$$\tilde T_i = T_i \mp \frac{S_i}{c}.$$
The error traces back to our formula (3), inherited from the paper [2]. According to the technical documentation [3, 4], formula (3) is implemented by several commercially available instruments. An incorrect sign would cause a bias of about 20 µGal not known for these instruments, which probably indicates that the documentation incorrectly reflects the implemented measurement equation. Our attention was drawn to the error by the paper [5], also in this issue, where the sign is given correctly. References: [1] Nagornyi V D, Zanimonskiy Y M and Zanimonskiy Y Y 2011 Correction due to the finite speed of light in absolute gravimeters Metrologia 48 101-13; [2] Niebauer T M, Sasagawa G S, Faller J E, Hilt R and Klopping F 1995 A new generation of absolute gravimeters Metrologia 32 159-80; [3] Micro-g LaCoste, Inc. 2006 FG5 Absolute Gravimeter Users Manual; [4] Micro-g LaCoste, Inc. 2007 g7 Users Manual; [5] Niebauer T M, Billson R, Ellis B, Mason B, van Westrum D and Klopping F 2011 Simultaneous gravity and gradient measurements from a recoil-compensated absolute gravimeter Metrologia 48 154-63

  16. [Paradigm errors in the old biomedical science].

    PubMed

    Skurvydas, Albertas

    2008-01-01

The aim of this article was to review the basic drawbacks of deterministic and reductionistic thinking in biomedical science and to suggest ways of dealing with them. The present research paradigm in biomedical science has not yet rid itself of the errors of the old science, i.e. the errors of absolute determinism and reductionism. These errors restrict the view and thinking of scholars engaged in the study of complex and dynamic phenomena and mechanisms. Recently, discussions on the science paradigm, aimed at spreading the new paradigm of complex dynamic systems and chaos theory, have been in progress all over the world. The near future will show which of the two, the old or the new science, will be the winner. Our main conclusion is that deterministic and reductionistic thinking, applied in an improper way, can cause substantial damage rather than benefit biomedical science. PMID:18541951

  17. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  18. Jasminum flexile flower absolute from India--a detailed comparison with three other jasmine absolutes.

    PubMed

    Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef

    2009-09-01

    Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed. PMID:19831037

  19. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
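
    A toy numerical illustration of the sampling point (all values invented; the study uses measured optical properties and radiative transfer calculations): with a ~20% diurnal swing in aerosol optical depth and a forcing assumed linear in it, the daily-averaged property reproduces the 24-h mean forcing, while a single-time sample can miss badly.

    ```python
    import numpy as np

    t = np.linspace(0.0, 24.0, 97)                             # hours
    aod = 0.15 * (1.0 + 0.2 * np.sin(2.0 * np.pi * t / 24.0))  # ~20% diurnal swing
    k = -25.0  # assumed linear forcing efficiency, W m^-2 per unit AOD

    f_true   = (k * aod).mean()      # 24-h mean forcing from the full series
    f_daily  = k * aod.mean()        # from the daily-averaged property (equal, by linearity)
    f_single = k * aod[t == 6.0][0]  # from a single 06:00 sample only
    print(f_true, f_daily, f_single)
    ```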

  20. Do diurnal aerosol changes affect daily average radiative forcing?

    NASA Astrophysics Data System (ADS)

    Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary

    2013-06-01

Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  1. Universal Cosmic Absolute and Modern Science

    NASA Astrophysics Data System (ADS)

    Kostro, Ludwik

The official sciences, especially all natural sciences, respect in their research the principle of methodic naturalism, i.e. they consider all phenomena as entirely natural, and therefore in their scientific explanations they never adduce or cite supernatural entities and forces. The purpose of this paper is to show that modern science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e. the entire Nature as a Whole, that justifies the scientific methodic naturalism. Since this Natural All-in Being is one and only, it should be considered the own scientifically justified Natural Absolute of science and should be called, in my opinion, the Universal Cosmic Absolute of Modern Science. It will also be shown that the Universal Cosmic Absolute is ontologically enormously stratified and is, in its ultimate, i.e. most fundamental, stratum, trans-reistic and trans-personal. This means that in its basic stratum it is neither a Thing nor a Person, although it contains in itself all things and persons, with all other sentient and conscious individuals as well. At the turn of the 20th century, science began to look for a theory of everything, for a final theory, for a master theory. In my opinion, the natural Universal Cosmic Absolute will constitute in such a theory the radical all-penetrating Ultimate Basic Reality and will substitute, step by step, for the traditional supernatural personal Absolute.

  2. Estimating Ocean Middle-Depth Velocities from ARGO Floats: Error Estimation and Application to Pacific

    NASA Astrophysics Data System (ADS)

    Xie, J.; Zhu, J.; Yan, C.

    2006-07-01

The Array for Real-time Geostrophic Oceanography (ARGO) project creates a unique opportunity to estimate the absolute velocity at mid-depths of the global oceans. However, the estimation can only be made from float surface trajectories. The diving and resurfacing positions of the float are not available in its trajectory file, and this surface drifting effect makes it difficult to estimate the mid-depth current. Moreover, the vertical shear during descent or ascent between the parking depth and the surface is another major error source. In this presentation, we first quantify the contributions of the two major error sources using current estimates from Estimating the Climate and Circulation of the Ocean (ECCO) and find that surface drifting is the primary error source. Then, a sequential surface trajectory prediction/estimation scheme based on the Kalman filter is introduced and implemented to reduce the surface drifting error in the Pacific from November 2001 to October 2004. On average, the error of the estimated velocities is greatly reduced from 2.7 to 0.2 cm s⁻¹ if the vertical shear is neglected. The velocities with relative error less than 25% are analyzed and compared with previous studies of mid-depth currents. The current system derived from ARGO floats in the Pacific at 1000 and 2000 dbar is comparable to others measured by ADCP (Reid, 1997; Firing et al., 1998). This presentation is based on two submitted manuscripts by the same authors (Xie and Zhu, 2006; Zhu et al., 2006). More detailed results can be found in the two manuscripts.
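
    The mechanics of such a scheme can be conveyed by a minimal one-dimensional constant-velocity Kalman filter over surface fixes (our assumptions throughout, including the noise values; not the authors' exact formulation):

    ```python
    import numpy as np

    def kalman_track(times, fixes, q=1e-4, r=1e-2):
        """1-D constant-velocity Kalman filter over surface position fixes.
        q: process-noise variance, r: fix measurement variance (both assumed)."""
        x = np.array([fixes[0], 0.0])   # state: [position, velocity]
        P = np.eye(2)
        H = np.array([[1.0, 0.0]])      # only position is observed
        for k in range(1, len(times)):
            dt = times[k] - times[k - 1]
            F = np.array([[1.0, dt], [0.0, 1.0]])
            x = F @ x                            # predict
            P = F @ P @ F.T + q * np.eye(2)
            y = fixes[k] - (H @ x)[0]            # innovation
            S = (H @ P @ H.T)[0, 0] + r
            K = (P @ H.T / S).ravel()            # Kalman gain
            x = x + K * y                        # update
            P = (np.eye(2) - np.outer(K, H[0])) @ P
        return x  # filtered position and drift velocity at the last fix

    print(kalman_track([0.0, 0.2, 0.4, 0.6], [0.0, 0.011, 0.019, 0.031]))
    ```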

  3. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    PubMed Central

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

Purpose: To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods: Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to-noise ratios (SNR), contrast-to-noise ratios (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results: All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became non-significant after processing. Conclusion: The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance: Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in either qualitative or quantitative aspects. PMID:26835180

  4. Reformulation of Ensemble Averages via Coordinate Mapping.

    PubMed

    Schultz, Andrew J; Moustafa, Sabry G; Lin, Weisong; Weinstein, Steven J; Kofke, David A

    2016-04-12

    A general framework is established for reformulation of the ensemble averages commonly encountered in statistical mechanics. This "mapped-averaging" scheme allows approximate theoretical results that have been derived from statistical mechanics to be reintroduced into the underlying formalism, yielding new ensemble averages that represent exactly the error in the theory. The result represents a distinct alternative to perturbation theory for methodically employing tractable systems as a starting point for describing complex systems. Molecular simulation is shown to provide one appealing route to exploit this advance. Calculation of the reformulated averages by molecular simulation can proceed without contamination by noise produced by behavior that has already been captured by the approximate theory. Consequently, accurate and precise values of properties can be obtained while using less computational effort, in favorable cases, many orders of magnitude less. The treatment is demonstrated using three examples: (1) calculation of the heat capacity of an embedded-atom model of iron, (2) calculation of the dielectric constant of the Stockmayer model of dipolar molecules, and (3) calculation of the pressure of a Lennard-Jones fluid. It is observed that improvement in computational efficiency is related to the appropriateness of the underlying theory for the condition being simulated; the accuracy of the result is however not impacted by this. The framework opens many avenues for further development, both as a means to improve simulation methodology and as a new basis to develop theories for thermophysical properties. PMID:26950263

  5. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  6. Refractive errors in children.

    PubMed

    Tongue, A C

    1987-12-01

    Optical correction of refractive errors in infants and young children is indicated when the refractive errors are sufficiently large to cause unilateral or bilateral amblyopia, if they are impairing the child's ability to function normally, or if the child has accommodative strabismus. Screening for refractive errors is important and should be performed as part of the annual physical examination in all verbal children. Screening for significant refractive errors in preverbal children is more difficult; however, the red reflex test of Bruckner is useful for the detection of anisometropic refractive errors. The photorefraction test, which is an adaptation of Bruckner's red reflex test, may prove to be a useful screening device for detecting bilateral as well as unilateral refractive errors. Objective testing as well as subjective testing enables ophthalmologists to prescribe proper optical correction for refractive errors for infants and children of any age. PMID:3317238

  7. Error-prone signalling.

    PubMed

    Johnstone, R A; Grafen, A

    1992-06-22

    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  8. Morphology and Absolute Magnitudes of the SDSS DR7 QSOs

    NASA Astrophysics Data System (ADS)

    Coelho, B.; Andrei, A. H.; Antón, S.

    2014-10-01

    The ESA mission Gaia will furnish a complete census of the Milky Way, delivering astrometric, dynamic, and astrophysical information for 1 billion stars. Operating in all-sky repeated survey mode, Gaia will also provide measurements of extra-galactic objects. Among the latter there will be at least 500,000 QSOs, which will be used to build the reference frame upon which the several independent observations will be combined and interpreted. Not all QSOs are equally suited to fulfill this role of fundamental, fiducial grid-points: brightness, morphology, and variability define the astrometric error budget for each object. We made use of 3 morphological parameters based on PSF sharpness, circularity, and Gaussianity, which enable us to distinguish the "real point-like" QSOs. These parameters are being explored on the spectroscopically certified QSOs of the SDSS DR7, to compare their performance against other morphology classification schemes, as well as to derive properties of the host galaxy. We present a new method, based on the Gaia quasar database, to derive absolute magnitudes in the SDSS filters domain. The method can be extrapolated over the entire optical window, including the Gaia filters. We discuss colors derived from SDSS apparent magnitudes and colors based on absolute magnitudes obtained taking into account corrections for dust extinction, either intergalactic or from the QSO host, and for the Lyman α forest. In the future we intend to further discuss properties of the host galaxies, comparing, e.g., the obtained morphological classification with the color, the apparent and absolute magnitudes, and the redshift distributions.

  9. Absolute blood velocity measured with a modified fundus camera

    NASA Astrophysics Data System (ADS)

    Duncan, Donald D.; Lemaillet, Paul; Ibrahim, Mohamed; Nguyen, Quan Dong; Hiller, Matthias; Ramella-Roman, Jessica

    2010-09-01

    We present a new method for the quantitative estimation of blood flow velocity, based on the use of the Radon transform; the specific application is measurement of blood flow velocity in the retina. Our modified fundus camera uses illumination from a green LED and captures imagery with a high-speed CCD camera. The basic theory is presented, and typical results are shown for an in vitro flow model using blood in a capillary tube. Representative results are then shown for fundus imagery. This approach provides absolute velocity and flow direction along the vessel centerline or at any lateral displacement therefrom. We also provide an error analysis allowing estimation of confidence intervals for the estimated velocity.
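
    The abstract does not spell out the estimator, but the standard way a Radon transform yields centerline velocity is to form a space-time (x-t) image along the vessel and find the streak orientation at which the projections are sharpest; the velocity then follows from that angle, the pixel pitch, and the frame interval. A minimal sketch under those assumptions (the angle-to-slope mapping depends on axis conventions and should be calibrated against a known flow):

      import numpy as np
      from skimage.transform import radon

      def streak_velocity(xt, dx_mm, dt_s):
          """Estimate blood speed from a space-time (x-t) image of a vessel.

          xt : 2-D array, rows = frames (time), cols = position along the
          centerline. Moving scatterers trace parallel streaks; the Radon
          projection taken along the streak direction has maximal variance.
          """
          angles = np.linspace(1.0, 179.0, 357)             # degrees
          sino = radon(xt - xt.mean(), theta=angles, circle=False)
          best = angles[np.argmax(sino.var(axis=0))]        # streak orientation
          px_per_frame = np.tan(np.deg2rad(best - 90.0))    # column shift/row
          return px_per_frame * dx_mm / dt_s                # speed in mm/s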

  10. Measured and modelled absolute gravity changes in Greenland

    NASA Astrophysics Data System (ADS)

    Nielsen, J. Emil; Forsberg, Rene; Strykowski, Gabriel

    2014-01-01

    In glaciated areas, the Earth is responding to the ongoing changes of the ice sheets, a response known as glacial isostatic adjustment (GIA). GIA can be investigated through observations of gravity change. For the ongoing assessment of the ice sheets' mass balance from satellite data, the study of GIA is important since it acts as an error source. GIA consists of three signals as seen by a gravimeter on the surface of the Earth; these signals are investigated in this study. The ICE-5G ice history and recently developed ice models of present-day changes are used to model the gravity change in Greenland. The result is compared with the initial measurements of absolute gravity (AG) change at selected Greenland Network (GNET) sites.

  11. Full field imaging based instantaneous hyperspectral absolute refractive index measurement

    SciTech Connect

    Baba, Justin S; Boudreaux, Philip R

    2012-01-01

    Multispectral refractometers typically measure refractive index (RI) at discrete monochromatic wavelengths via a serial process. We report on the demonstration of a white-light, full-field-imaging refractometer capable of instantaneous multispectral measurement of the absolute RI of clear liquid/gel samples across the entire visible light spectrum. The broad-optical-bandwidth refractometer is capable of hyperspectral measurement of RI in the range 1.30-1.70 between 400 nm and 700 nm, with a maximum error of 0.0036 RI units (0.24% of actual) at 414 nm for an n = 1.50 sample. We present system design and calibration method details as well as results from a system validation sample.

  12. Absolute isotopic abundances of Ti in meteorites

    NASA Astrophysics Data System (ADS)

    Niederer, F. R.; Papanastassiou, D. A.; Wasserburg, G. J.

    1985-03-01

    The absolute isotope abundance of Ti has been determined in Ca-Al-rich inclusions from the Allende and Leoville meteorites and in samples of whole meteorites. The absolute Ti isotope abundances differ from the previously reported abundances, which were normalized for fractionation using ⁴⁶Ti/⁴⁸Ti, by a significant mass-dependent isotope fractionation. Therefore, the absolute compositions either define nucleosynthetic components distinct from those previously identified or reflect the existence of significant mass-dependent isotope fractionation in nature. The authors provide a general formalism for determining the possible isotope compositions of the exotic Ti from the measured composition, for different values of isotope fractionation in nature and for different mixing ratios of the exotic and normal components.

  13. Molecular iodine absolute frequencies. Final report

    SciTech Connect

    Sansonetti, C.J.

    1990-06-25

    Fifty specified lines of ¹²⁷I₂ were studied by Doppler-free frequency modulation spectroscopy. For each line the classification of the molecular transition was determined, hyperfine components were identified, and one well-resolved component was selected for precise determination of its absolute frequency. In 3 cases, a nearby alternate line was selected for measurement because no well-resolved component was found for the specified line. Absolute frequency determinations were made with an estimated uncertainty of 1.1 MHz by locking a dye laser to the selected hyperfine component and measuring its wave number with a high-precision Fabry-Perot wavemeter. For each line results of the absolute measurement, the line classification, and a Doppler-free spectrum are given.

  14. Stimulus probability effects in absolute identification.

    PubMed

    Kent, Christopher; Lamberts, Koen

    2016-05-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response times. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response time data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in absolute identification. Implications for other theories of absolute identification are discussed. PMID:26478959

  15. Absolute calibration in vivo measurement systems

    SciTech Connect

    Kruchten, D.A.; Hickman, D.P.

    1991-02-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs.

  16. Closed-loop step motor control using absolute encoders

    SciTech Connect

    Hicks, J.S.; Wright, M.C.

    1997-08-01

    A multi-axis, step motor control system was developed to accurately position and control the operation of a triple axis spectrometer at the High Flux Isotope Reactor (HFIR) located at Oak Ridge National Laboratory. Triple axis spectrometers are used in neutron scattering and diffraction experiments and require highly accurate positioning. This motion control system can handle up to 16 axes of motion. Four of these axes are outfitted with 17-bit absolute encoders. These four axes are controlled with a software feedback loop that terminates the move based on real-time position information from the absolute encoders. Because the final position of the actuator is used to stop the motion of the step motors, the moves can be made accurately in spite of the large amount of mechanical backlash from a chain drive between the motors and the spectrometer arms. A modified trapezoidal profile, custom C software, and an industrial PC, were used to achieve a positioning accuracy of 0.00275 degrees of rotation. A form of active position maintenance ensures that the angles are maintained with zero error or drift.
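
    The abstract describes the control idea, stepping open-loop but terminating the move on absolute-encoder feedback so that chain backlash drops out, without code. A toy sketch of such a loop, with the hardware reduced to injected callables and the tolerance taken from the quoted 0.00275-degree accuracy (everything else is invented for illustration):

      def move_to(target_deg, read_encoder, do_step,
                  tol_deg=0.00275, max_steps=200000):
          """Closed-loop move: step until the *encoder* says we arrived.

          read_encoder() -> absolute angle in degrees (e.g. a 17-bit
                            encoder resolves 360 / 2**17 ~ 0.0027 deg).
          do_step(direction) -> issues one motor step (+1 or -1).
          Terminating on encoder position rather than step count makes
          the move insensitive to backlash between motor and load.
          """
          for _ in range(max_steps):
              err = target_deg - read_encoder()
              if abs(err) <= tol_deg:
                  return True                  # arrived within tolerance
              do_step(+1 if err > 0 else -1)
          return False                         # failed to converge

      def hold(target_deg, read_encoder, do_step, tol_deg=0.00275):
          """Active position maintenance: re-correct when drift exceeds tol."""
          if abs(target_deg - read_encoder()) > tol_deg:
              move_to(target_deg, read_encoder, do_step, tol_deg)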

  17. Stitching interferometry and absolute surface shape metrology: similarities

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-12-01

    Stitching interferometry is a method of analysing large optical components using a standard small interferometer. This result is obtained by taking multiple overlapping images of the large component and numerically stitching these sub-apertures together by computing a correcting tip-tilt-piston term for each sub-aperture. All real-life measurement techniques require a calibration phase; by definition, a perfect surface does not exist. Methods abound for the accurate measurement of diameters (viz., the Three Flat Test). However, we need total surface knowledge of the reference surface, because the stitched overlap areas will suffer from the slightest deformation. One must not be led into thinking that stitching is the cause of this error: it simply highlights the lack of absolute knowledge of the reference surface, or the lack of adequate thermal control, issues which are often sidetracked... The goal of this paper is to highlight the above-mentioned calibration problems in interferometry in general, and in stitching interferometry in particular, and to show how stitching hardware and software can be conveniently used to provide the required absolute surface shape metrology. Some measurement figures illustrate this article.
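
    For readers unfamiliar with the tip-tilt-piston correction mentioned above: each new sub-aperture is corrected by the plane that best cancels its disagreement with the already-stitched map over their overlap region. A minimal least-squares sketch (the names and the single-overlap simplification are mine, not the paper's software):

      import numpy as np

      def fit_ttp(x, y, diff):
          """Least-squares plane a*x + b*y + c through an overlap difference.

          x, y, diff : 1-D arrays of overlap-pixel coordinates and the
          height difference (new sub-aperture minus stitched map) there.
          Returns (a, b, c): tip, tilt, piston to subtract from the new
          sub-aperture before it is merged into the stitched map.
          """
          A = np.column_stack([x, y, np.ones_like(x)])
          coef, *_ = np.linalg.lstsq(A, diff, rcond=None)
          return coef

      # Usage: corrected = sub_aperture - (a*X + b*Y + c) over the full grid.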

  18. A method to evaluate dose errors introduced by dose mapping processes for mass conserving deformations

    PubMed Central

    Yan, C.; Hugo, G.; Salguero, F. J.; Saleh-Sayah, N.; Weiss, E.; Sleeman, W. C.; Siebers, J. V.

    2012-01-01

    Purpose: To present a method to evaluate the dose mapping error introduced by the dose mapping process, and to apply the method to evaluate the dose mapping error introduced by the 4D dose calculation process implemented in a research version of a commercial treatment planning system for a patient case. Methods: The average dose accumulated in a finite volume should be unchanged when the dose delivered to one anatomic instance of that volume is mapped to a different anatomic instance, provided that the tissue deformation between the anatomic instances is mass conserving. The average dose to a finite volume on image S is defined as d̄_S = e_S/m_S, where e_S is the energy deposited in the mass m_S contained in the volume. Since mass and energy should be conserved when d̄_S is mapped to an image R (d̄_{S→R} = d̄_R), the mean dose mapping error is defined as Δd̄_m = |d̄_R − d̄_S| = |e_R/m_R − e_S/m_S|, where e_R and e_S are the integral doses (energy deposited) and m_R and m_S are the masses within the region of interest (ROI) on image R and the corresponding ROI on image S; here R and S are two anatomic instances of the same patient. Alternatively, simple differential propagation yields the differential dose mapping error, Δd̄_d = |(∂d̄/∂e)Δe + (∂d̄/∂m)Δm| = |(e_S − e_R)/m_R − (m_S − m_R)e_R/m_R²| = α|d̄_R − d̄_S|, with α = m_S/m_R. A 4D treatment plan on a ten-phase 4D-CT lung patient is used to demonstrate the dose mapping error evaluations for a patient case, in which the accumulated dose D̄_R = Σ_{S=0..9} d̄_{S→R} and the associated error values (ΔD̄_m and ΔD̄_d) are calculated for a uniformly spaced set of ROIs. Results: For the single sample patient dose distribution, the average accumulated differential dose mapping error is 4.3%, the average absolute differential dose mapping error is 10.8%, and the average accumulated mean dose mapping error is 5.0%. Accumulated differential dose mapping errors within the gross tumor volume (GTV) and planning target volume (PTV) are lower, 0
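
    The two error definitions reduce to a few lines of arithmetic per ROI; a direct transcription of the formulas above (array names assumed):

      import numpy as np

      def dose_mapping_errors(e_S, m_S, e_R, m_R):
          """Mean and differential dose mapping errors per ROI.

          e_S, m_S : energy deposited and mass in each ROI on image S.
          e_R, m_R : the same quantities on the corresponding ROIs of R.
          """
          d_S = e_S / m_S                    # mean dose on S
          d_R = e_R / m_R                    # mean dose on R
          mean_err = np.abs(d_R - d_S)       # mean dose mapping error
          alpha = m_S / m_R
          diff_err = alpha * mean_err        # differential dose mapping error
          return mean_err, diff_err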

  19. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  20. Precise Measurement of the Absolute Fluorescence Yield

    NASA Astrophysics Data System (ADS)

    Ave, M.; Bohacova, M.; Daumiller, K.; Di Carlo, P.; di Giulio, C.; San Luis, P. Facal; Gonzales, D.; Hojvat, C.; Hörandel, J. R.; Hrabovsky, M.; Iarlori, M.; Keilhauer, B.; Klages, H.; Kleifges, M.; Kuehn, F.; Monasor, M.; Nozka, L.; Palatka, M.; Petrera, S.; Privitera, P.; Ridky, J.; Rizi, V.; D'Orfeuil, B. Rouille; Salamida, F.; Schovanek, P.; Smida, R.; Spinka, H.; Ulrich, A.; Verzi, V.; Williams, C.

    2011-09-01

    We present preliminary results of the absolute yield of fluorescence emission in atmospheric gases. Measurements were performed at the Fermilab Test Beam Facility with a variety of beam particles and gases. Absolute calibration of the fluorescence yield to 5% level was achieved by comparison with two known light sources--the Cherenkov light emitted by the beam particles, and a calibrated nitrogen laser. The uncertainty of the energy scale of current Ultra-High Energy Cosmic Rays experiments will be significantly improved by the AIRFLY measurement.

  1. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    PubMed

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed. PMID:26022836

  2. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…

  3. [Comparison on the methods for spatial interpolation of the annual average precipitation in the Loess Plateau region].

    PubMed

    Yu, Yang; Wei, Wei; Chen, Li-ding; Yang, Lei; Zhang, Han-dan

    2015-04-01

    Based on 57 years (1957-2013) of daily precipitation datasets from the 85 meteorological stations in the Loess Plateau region, different spatial interpolation methods, including ordinary kriging (OK), inverse distance weighting (IDW) and radial basis function (RBF), were used to analyze the spatial variation of annual average precipitation regionally. Meanwhile, the mean absolute error (MAE), the root mean square error (RMSE), the accuracy (AC) and the Pearson correlation coefficient (R) were compared among the interpolation results in order to quantify the effects of different interpolation methods on the spatial variation of the annual average precipitation. The results showed that the Moran's I index was 0.67 for the 57-year annual average precipitation in the Loess Plateau region; the meteorological stations exhibited strong spatial correlation. The validation results for the 63 training stations and 22 test stations indicated significant correlations between the training and test values among the different interpolation methods. However, the RMSE (IDW = 51.49, RBF = 43.79) and MAE (IDW = 38.98, RBF = 34.61) of the IDW and the RBF were higher than those of the OK. In addition, the comparison of the four semivariogram models (circular, spherical, exponential and Gaussian) for the OK indicated that the circular model had the lowest MAE (32.34) and the highest accuracy (0.976), while the MAE of the exponential model was the highest (33.24). In conclusion, comparing the validation between the training data and the test results of the different spatial interpolation methods, the circular model of the OK method was the best for obtaining accurate spatial interpolation of annual average precipitation in the Loess Plateau region. PMID:26259439
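
    For readers who want to reproduce the comparison, the scoring is standard hold-out validation; a sketch of IDW prediction plus the MAE and RMSE used to rank methods (station arrays and the power parameter are assumptions; OK and RBF would come from a geostatistics library):

      import numpy as np

      def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
          """Inverse-distance-weighted prediction at query points."""
          d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :],
                             axis=2)
          w = 1.0 / (d + eps) ** power          # closer stations weigh more
          return (w * z_known).sum(axis=1) / w.sum(axis=1)

      def mae(obs, pred):
          return np.mean(np.abs(obs - pred))

      def rmse(obs, pred):
          return np.sqrt(np.mean((obs - pred) ** 2))

      # In the spirit of the study: predict the 22 test stations from the
      # 63 training stations, then compare MAE/RMSE across methods.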

  4. Two-stage model of African absolute motion during the last 30 million years

    NASA Astrophysics Data System (ADS)

    Pollitz, Fred F.

    1991-07-01

    The absolute motion of Africa (relative to the hotspots) for the past 30 My is modeled with two Euler vectors, with a change occurring at 6 Ma. Because of the high sensitivity of African absolute motions to errors in the absolute motions of the North America and Pacific plates, both the pre-6 Ma and post-6 Ma African absolute motions are determined simultaneously with North America and Pacific absolute motions for various epochs. Geologic data from the northern Atlantic and hotspot tracks from the African plate are used to augment previous data sets for the North America and Pacific plates. The difference between the pre-6 Ma and post-6 Ma absolute plate motions may be represented as a counterclockwise rotation about a pole at 48 °S, 84 °E, with angular velocity 0.085 °/My. This change is supported by geologic evidence along a large portion of the African plate boundary, including the Red Sea and Gulf of Aden spreading systems, the Alpine deformation zone, and the central and southern mid-Atlantic Ridge. Although the change is modeled as one abrupt transition at 6 Ma, it was most likely a gradual change spanning the period 8-4 Ma. As a likely mechanism for the change, we favor strong asthenospheric return flow from the Afar hotspot towards the southwest; this could produce the uniform southwesterly shift in absolute motion which we have inferred as well as provide a mechanism for the opening of the East African Rift. Comparing the absolute motions of the North America and Pacific plates with earlier estimates, the pole positions are revised by up to 5° and the angular velocities are decreased by 10-20%.

  5. The Average of Rates and the Average Rate.

    ERIC Educational Resources Information Center

    Lindstrom, Peter

    1988-01-01

    Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)

  6. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood that hardware errors will manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).

  7. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
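
    To make the TER/LER contrast concrete: under TER (as in the Rescorla-Wagner model) every cue's weight is updated from the error of the whole compound, whereas under LER each cue is updated from its own prediction error. A toy sketch of the two update rules (the learning rate and trial format are assumptions, not the paper's simulations):

      import numpy as np

      def train(cues, outcomes, rule="TER", alpha=0.1):
          """One pass of associative learning over trials.

          cues     : (n_trials, n_cues) binary array of cues present.
          outcomes : (n_trials,) outcome magnitude (lambda) per trial.
          """
          w = np.zeros(cues.shape[1])
          for x, lam in zip(cues, outcomes):
              if rule == "TER":
                  err = lam - w @ x        # one error for the compound
                  w += alpha * err * x
              else:                        # LER: one error per cue
                  err = lam - w * x        # per-cue discrepancies
                  w += alpha * err * x     # absent cues (x=0) unchanged
          return w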

  8. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.

  9. Spectral Approach to Optimal Estimation of the Global Average Temperature.

    NASA Astrophysics Data System (ADS)

    Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
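
    The optimal-weighting step can be illustrated compactly. If C is the covariance matrix of the station anomaly series, one common closed form, minimum variance subject to the weights summing to one, is w = C⁻¹1 / (1ᵀC⁻¹1). This is a simplified stand-in for the EOF-based minimum mean-square-error criterion of the paper, not its exact formulation:

      import numpy as np

      def min_variance_weights(C):
          """Weights minimizing the variance of a weighted station average,
          subject to sum(w) = 1.  C is the (n_stations x n_stations)
          covariance of the station anomaly series."""
          ones = np.ones(C.shape[0])
          u = np.linalg.solve(C, ones)      # C^{-1} 1
          return u / (ones @ u)

      # Uniform weighting is w = ones/n; the gap between the two estimators
      # is what the 92.7% vs. 97.8% explained-variance comparison reflects.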

  10. Spectral approach to optimal estimation of the global average temperature

    SciTech Connect

    Shen, S.S.P.; North, G.R.; Kim, K.Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, the length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 5 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset. 27 refs., 5 figs., 4 tabs.

  11. Absolute partial photoionization cross sections of ozone.

    SciTech Connect

    Berkowitz, J.; Chemistry

    2008-04-01

    Despite the current concerns about ozone, absolute partial photoionization cross sections for this molecule in the vacuum ultraviolet (valence) region have been unavailable. By eclectic re-evaluation of old/new data and plausible assumptions, such cross sections have been assembled to fill this void.

  12. Solving Absolute Value Equations Algebraically and Geometrically

    ERIC Educational Resources Information Center

    Shiyuan, Wei

    2005-01-01

    The way in which students can improve their comprehension by understanding the geometrical meaning of algebraic equations, or by solving algebraic equations geometrically, is described. Students can experiment with the conditions of the absolute value equation presented, for an interesting way to form an overall understanding of the concept.

  13. Teaching Absolute Value Inequalities to Mature Students

    ERIC Educational Resources Information Center

    Sierpinska, Anna; Bobos, Georgeana; Pruncut, Andreea

    2011-01-01

    This paper gives an account of a teaching experiment on absolute value inequalities, whose aim was to identify characteristics of an approach that would realize the potential of the topic to develop theoretical thinking in students enrolled in prerequisite mathematics courses at a large, urban North American university. The potential is…

  14. Increasing Capacity: Practice Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Dodds, Pennie; Donkin, Christopher; Brown, Scott D.; Heathcote, Andrew

    2011-01-01

    In most of the long history of the study of absolute identification--since Miller's (1956) seminal article--a severe limit on performance has been observed, and this limit has resisted improvement even by extensive practice. In a startling result, Rouder, Morey, Cowan, and Pfaltz (2004) found substantially improved performance with practice in the…

  15. On Relative and Absolute Conviction in Mathematics

    ERIC Educational Resources Information Center

    Weber, Keith; Mejia-Ramos, Juan Pablo

    2015-01-01

    Conviction is a central construct in mathematics education research on justification and proof. In this paper, we claim that it is important to distinguish between absolute conviction and relative conviction. We argue that researchers in mathematics education frequently have not done so, and this has led to researchers making unwarranted claims…

  16. Absolute Points for Multiple Assignment Problems

    ERIC Educational Resources Information Center

    Adlakha, V.; Kowalski, K.

    2006-01-01

    An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…

  17. Nonequilibrium equalities in absolutely irreversible processes

    NASA Astrophysics Data System (ADS)

    Murashita, Yuto; Funo, Ken; Ueda, Masahito

    2015-03-01

    Nonequilibrium equalities have attracted considerable attention in the context of statistical mechanics and information thermodynamics. Integral nonequilibrium equalities reveal an ensemble property of the entropy production σ as ⟨e^{-σ}⟩ = 1. Although nonequilibrium equalities apply to rather general nonequilibrium situations, they break down in absolutely irreversible processes, where the forward-path probability vanishes and the entropy production diverges. We identify the mathematical origin of this inapplicability as the singularity of the probability measure. As a result, we generalize the conventional integral nonequilibrium equalities to absolutely irreversible processes as ⟨e^{-σ}⟩ = 1 − λS, where λS is the probability of the singular part defined based on Lebesgue's decomposition theorem. The acquired equality contains two physical quantities related to irreversibility: σ, characterizing ordinary irreversibility, and λS, describing absolute irreversibility. An inequality derived from the obtained equality demonstrates that absolute irreversibility leads to a fundamental lower bound on the entropy production. We demonstrate the validity of the obtained equality for a simple model.
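
    A hedged sketch of where the singular term comes from, assuming (as the abstract indicates) that e^{-σ} acts as the Radon-Nikodym derivative of the reverse-path measure with respect to the forward-path measure, and that Lebesgue's decomposition splits the reverse measure into absolutely continuous and singular parts; the notation is mine:

      % Lebesgue decomposition of the reverse-path measure P_R with
      % respect to the forward-path measure P_F (notation assumed):
      \begin{align*}
        P_R &= P_R^{\mathrm{AC}} + P_R^{\mathrm{S}},
            \qquad \lambda_S := \int \mathrm{d}P_R^{\mathrm{S}}, \\
        \langle e^{-\sigma} \rangle
            &= \int \frac{\mathrm{d}P_R^{\mathrm{AC}}}{\mathrm{d}P_F}\,
               \mathrm{d}P_F
             = \int \mathrm{d}P_R^{\mathrm{AC}} = 1 - \lambda_S .
      \end{align*}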

  18. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  19. Precision absolute positional measurement of laser beams.

    PubMed

    Fitzsimons, Ewan D; Bogenstahl, Johanna; Hough, James; Killow, Christian J; Perreur-Lloyd, Michael; Robertson, David I; Ward, Henry

    2013-04-20

    We describe an instrument which, coupled with a suitable coordinate measuring machine, facilitates the absolute measurement within the machine frame of the propagation direction of a millimeter-scale laser beam to an accuracy of around ±4 μm in position and ±20 μrad in angle. PMID:23669658

  20. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
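
    Of the codes listed, the 16-bit CRC is compact enough to show in full. A bitwise sketch of the CCITT-style CRC-16 commonly associated with the CCSDS recommendation (polynomial 0x1021, initial value 0xFFFF; consult the CCSDS Blue Book for the exact variant before relying on this):

      def crc16_ccitt(data: bytes, poly=0x1021, crc=0xFFFF) -> int:
          """Bitwise CRC-16 (CCITT-style) for error *detection* on a frame."""
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
                  crc &= 0xFFFF             # keep the register to 16 bits
          return crc

      # A receiver recomputes the CRC over the payload and compares it with
      # the transmitted value; any mismatch flags the frame as corrupted.
      assert crc16_ccitt(b"123456789") == 0x29B1   # standard check value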

  1. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable, high-performance space systems.

  2. Error coding simulations

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.

  3. TU-A-12A-09: Absolute Blood Flow Measurement in a Cardiac Phantom Using Low Dose CT

    SciTech Connect

    Ziemer, B; Hubbard, L; Lipinski, J; Molloi, S

    2014-06-15

    Purpose: To investigate a first pass analysis technique to measure absolute flow from low dose CT images in a cardiac phantom. This technique can be combined with a myocardial mass assignment to yield absolute perfusion using only two volume scans and reduce the radiation dose to the patient. Methods: A four-chamber cardiac phantom and perfusion chamber were constructed from poly-acrylic and connected with tubing to approximate anatomical features. The system was connected to a pulsatile pump, input/output reservoirs and power contrast injector. Flow was varied in the range of 1-2.67 mL/s with the pump operating at 60 beats/min. The system was imaged once a second for 14 seconds with a 320-row scanner (Toshiba Medical Systems) using a contrast-enhanced, prospective-gated cardiac perfusion protocol. Flow was calculated by the following steps: subsequent images of the perfusion volume were subtracted to find the contrast entering the volume; this was normalized by an upstream, known volume region to convert Hounsfield (HU) values to concentration; this was divided by the subtracted images time difference. The technique requires a relatively stable input contrast concentration and no contrast can leave the perfusion volume before the flow measurement is completed. Results: The flow calculated from the images showed an excellent correlation with the known rates. The data was fit to a linear function with slope 1.03, intercept 0.02 and an R² value of 0.99. The average root mean square (RMS) error was 0.15 mL/s and the average standard deviation was 0.14 mL/s. The flow rate was stable within 7.7% across the full scan and served to validate model assumptions. Conclusion: Accurate, absolute flow rates were measured from CT images using a conservation of mass model. Measurements can be made using two volume scans which can substantially reduce the radiation dose compared with current dynamic perfusion techniques.
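
    The three calculation steps quoted in the Methods translate almost line-for-line into code; a sketch with all array and parameter names assumed:

      import numpy as np

      def first_pass_flow(vol_t1, vol_t2, aorta_roi_hu, voxel_ml, dt_s):
          """Absolute flow into a perfusion volume from two CT volume scans.

          vol_t1, vol_t2 : HU arrays of the perfusion volume at two times.
          aorta_roi_hu   : mean enhancement (HU) of the upstream blood pool,
                           used to convert HU to contrast concentration.
          voxel_ml       : voxel volume in mL;  dt_s : time between scans.
          """
          delta_hu = (vol_t2 - vol_t1).sum()        # contrast entering volume
          blood_volume_ml = delta_hu * voxel_ml / aorta_roi_hu
          return blood_volume_ml / dt_s             # flow in mL/s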

  4. The Averaging Problem in Cosmology

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem

    2009-06-01

    This thesis deals with the averaging problem in cosmology, which has gained considerable interest in recent years, and is concerned with correction terms (after averaging inhomogeneities) that appear in the Einstein equations when working on the large scales appropriate for cosmology. It has been claimed in the literature that these terms may account for the phenomenon of dark energy which causes the late time universe to accelerate. We investigate the nature of these terms by using averaging schemes available in the literature and further developed to be applicable to the problem at hand. We show that the effect of these terms when calculated carefully, remains negligible and cannot explain the late time acceleration.

  5. Absolute phase retrieval for defocused fringe projection three-dimensional measurement

    NASA Astrophysics Data System (ADS)

    Zheng, Dongliang; Da, Feipeng

    2014-02-01

    Defocused fringe projection, a three-dimensional technique based on pulse-width modulation (PWM), can generate high-quality sinusoidal fringe patterns. It uses only slightly defocused binary structured patterns, which eliminates the gamma problem (i.e., the projector's nonlinear response), and the phase error can be significantly reduced. However, when the projector is defocused, it is difficult to retrieve the absolute phase from the wrapped phase. A recently proposed phase coding method is efficient for absolute phase retrieval, but the gamma problem makes this method less reliable. In this paper, we use the PWM technique to generate fringe patterns for the phase coding method. The gamma problem of the projector can be eliminated, and the correct absolute phase can be retrieved. The proposed method uses only two grayscale values (0 and 255), which makes it suitable for real-time 3D shape measurement. Both simulation and experiment demonstrate the performance of the proposed method.
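
    The core trick, a binary pattern that projector defocus turns into a near-sinusoid so that gamma never enters, is easy to sanity-check in simulation with defocus modeled as a Gaussian blur. In the sketch below a plain square wave stands in for the optimized PWM pattern, and the blur width is an assumption:

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def binary_fringe(width=1024, period=32, sigma=6.0):
          """1-D binary fringe + Gaussian 'defocus' -> quasi-sinusoid.

          The pattern uses only 0/255, so the projector's nonlinear (gamma)
          response to intermediate gray levels never enters; the sinusoid
          is created optically by defocus (here: Gaussian blur).
          """
          x = np.arange(width)
          binary = 255.0 * ((x % period) < period // 2)   # square wave
          return gaussian_filter1d(binary, sigma, mode="wrap")

      # Larger sigma suppresses the square wave's odd harmonics at the cost
      # of fringe contrast; PWM shapes pulse widths to push harmonics higher.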

  6. The correction of vibration in frequency scanning interferometry based absolute distance measurement system for dynamic measurements

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu

    2015-10-01

    Absolute distance measurement systems are of significant interest in the field of metrology; they could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an absolute distance measurement system: it has high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration in the frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10³ times larger than the change in optical path difference itself. In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency-stabilized laser running parallel to the frequency scanning interferometry. Experiments have verified the effectiveness of this method.

  7. Compensating user position for GPS ephemeris error

    NASA Technical Reports Server (NTRS)

    Wu, J. T.

    1990-01-01

    A method for canceling the effect of GPS ephemeris error on user position is proposed. In this method, the baseline vectors from the reference stations to the user are estimated without adjusting the GPS ephemeris. The user position is computed by adjustment using differenced data from the user and each station separately and averaging the results with weights inversely proportional to the lengths of the baselines. Alternatively, the differenced data can be averaged in a similar manner before the user position is estimated. The averaging procedure cancels most of the ephemeris error because the error is proportional to the length of the baseline. A numerical simulation is performed to demonstrate and evaluate the method. Two reference stations with perfectly known locations are assumed to be placed several hundred kilometers apart. A user receiver with a poorly known location is located between the stations. The user positions are first estimated separately using data from the user and each station and then averaged. The averaging reduces the error by about one order of magnitude.
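
    The averaging rule itself is one line; a sketch with per-station user-position solutions and baseline lengths (array names assumed):

      import numpy as np

      def combine_solutions(positions, baselines_km):
          """Average per-station user solutions, weights ~ 1/baseline length.

          positions    : (n_stations, 3) user positions estimated against
                         each reference station separately.
          baselines_km : (n_stations,) station-to-user baseline lengths.
          Ephemeris error grows with baseline length, so nearer stations
          get proportionally more weight and the error largely cancels.
          """
          w = 1.0 / np.asarray(baselines_km, float)
          w /= w.sum()
          return (w[:, None] * positions).sum(axis=0)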

  8. Application of a Combined Model with Autoregressive Integrated Moving Average (ARIMA) and Generalized Regression Neural Network (GRNN) in Forecasting Hepatitis Incidence in Heng County, China

    PubMed Central

    Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui

    2016-01-01

    Background: Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods: The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. Results: The morbidity of hepatitis from Jan 2005 to Dec 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)₁₂ model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factors of the basic GRNN model and the combined model were 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameter values of the GRNN model were the lowest in the fitting of the three models. Conclusions: The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555
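
    The four comparison metrics are standard; for completeness, a sketch that computes them from an observed and a predicted monthly series (names assumed):

      import numpy as np

      def forecast_metrics(obs, pred):
          """MAE, MSE, RMSE, and MAPE (%) between observed and predicted."""
          obs, pred = np.asarray(obs, float), np.asarray(pred, float)
          err = obs - pred
          return {
              "MAE":  np.mean(np.abs(err)),
              "MSE":  np.mean(err ** 2),
              "RMSE": np.sqrt(np.mean(err ** 2)),
              "MAPE": 100.0 * np.mean(np.abs(err / obs)),  # obs must be nonzero
          }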

  9. Absolute Cavity Pyrgeometer to Measure the Absolute Outdoor Longwave Irradiance with Traceability to International System of Units, SI

    SciTech Connect

    Reda, I.; Zeng, J.; Scheuch, J.; Hanssen, L.; Wilthan, B.; Myers, D.; Stoffel, T.

    2012-03-01

    This article describes a method of measuring the absolute outdoor longwave irradiance using an absolute cavity pyrgeometer (ACP), U.S. Patent application no. 13/049,275. The ACP consists of domeless thermopile pyrgeometer, gold-plated concentrator, temperature controller, and data acquisition. The dome was removed from the pyrgeometer to remove errors associated with dome transmittance and the dome correction factor. To avoid thermal convection and wind effect errors resulting from using a domeless thermopile, the gold-plated concentrator was placed above the thermopile. The concentrator is a dual compound parabolic concentrator (CPC) with 180° view angle to measure the outdoor incoming longwave irradiance from the atmosphere. The incoming irradiance is reflected from the specular gold surface of the CPC and concentrated on the 11 mm diameter of the pyrgeometer's blackened thermopile. The CPC's interior surface design and the resulting cavitation result in a throughput value that was characterized by the National Institute of Standards and Technology. The ACP was installed horizontally outdoor on an aluminum plate connected to the temperature controller to control the pyrgeometer's case temperature. The responsivity of the pyrgeometer's thermopile detector was determined by lowering the case temperature and calculating the rate of change of the thermopile output voltage versus the changing net irradiance. The responsivity is then used to calculate the absolute atmospheric longwave irradiance with an uncertainty estimate (U₉₅) of ±3.96 W m⁻² with traceability to the International System of Units, SI. The measured irradiance was compared with the irradiance measured by two pyrgeometers calibrated by the World Radiation Center with traceability to the Interim World Infrared Standard Group, WISG. A total of 408 readings were collected over three different nights. The calculated irradiance measured by the ACP was 1.5 W/m² lower than that

  10. An absolute cavity pyrgeometer to measure the absolute outdoor longwave irradiance with traceability to international system of units, SI

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Zeng, Jinan; Scheuch, Jonathan; Hanssen, Leonard; Wilthan, Boris; Myers, Daryl; Stoffel, Tom

    2012-03-01

    This article describes a method of measuring the absolute outdoor longwave irradiance using an absolute cavity pyrgeometer (ACP), U.S. Patent application no. 13/049,275. The ACP consists of domeless thermopile pyrgeometer, gold-plated concentrator, temperature controller, and data acquisition. The dome was removed from the pyrgeometer to remove errors associated with dome transmittance and the dome correction factor. To avoid thermal convection and wind effect errors resulting from using a domeless thermopile, the gold-plated concentrator was placed above the thermopile. The concentrator is a dual compound parabolic concentrator (CPC) with 180° view angle to measure the outdoor incoming longwave irradiance from the atmosphere. The incoming irradiance is reflected from the specular gold surface of the CPC and concentrated on the 11 mm diameter of the pyrgeometer's blackened thermopile. The CPC's interior surface design and the resulting cavitation result in a throughput value that was characterized by the National Institute of Standards and Technology. The ACP was installed horizontally outdoor on an aluminum plate connected to the temperature controller to control the pyrgeometer's case temperature. The responsivity of the pyrgeometer's thermopile detector was determined by lowering the case temperature and calculating the rate of change of the thermopile output voltage versus the changing net irradiance. The responsivity is then used to calculate the absolute atmospheric longwave irradiance with an uncertainty estimate (U₉₅) of ±3.96 W m⁻² with traceability to the International System of Units, SI. The measured irradiance was compared with the irradiance measured by two pyrgeometers calibrated by the World Radiation Center with traceability to the Interim World Infrared Standard Group, WISG. A total of 408 readings were collected over three different nights. The calculated irradiance measured by the ACP was 1.5 W/m² lower than that measured by the two

  11. High average power Pockels cell

    DOEpatents

    Daly, Thomas P.

    1991-01-01

    A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.

  12. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  13. Medical error and disclosure.

    PubMed

    White, Andrew A; Gallagher, Thomas H

    2013-01-01

    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370

  14. Application of an autoregressive integrated moving average model for predicting injury mortality in Xiamen, China

    PubMed Central

    Lin, Yilan; Chen, Min; Chen, Guowei; Wu, Xiaoqing; Lin, Tianquan

    2015-01-01

    Objective Injury is currently an increasing public health problem in China. Reducing the loss due to injuries has become a main priority of public health policies. Early warning of injury mortality based on surveillance information is essential for reducing or controlling the disease burden of injuries. We conducted this study to assess the feasibility of applying autoregressive integrated moving average (ARIMA) models to predict mortality from injuries in Xiamen. Method The monthly mortality data on injuries in Xiamen (1 January 2002 to 31 December 2013) were used to fit the ARIMA model with the conditional least-squares method. The values p, q and d in the ARIMA (p, d, q) model refer to the numbers of autoregressive lags, moving average lags and differences, respectively. The Ljung–Box test was used to check the residuals for white noise. The mean absolute percentage error (MAPE) between observed and fitted values was used to evaluate the predictive accuracy of the constructed models. Results A total of 8274 injury-related deaths in Xiamen were identified during the study period; the average annual mortality rate was 40.99/100 000 persons. Three models, ARIMA (0, 1, 1), ARIMA (4, 1, 0) and ARIMA (1, 1, (2)), passed the parameter (p<0.01) and residual (p>0.05) tests, with MAPE 11.91%, 11.96% and 11.90%, respectively. We chose ARIMA (0, 1, 1) as the optimum model, the MAPE value for which was similar to that of the other models but with the fewest parameters. According to the model, 54 persons would die from injuries each month in Xiamen in 2014. Conclusion The ARIMA (0, 1, 1) model could be applied to predict mortality from injuries in Xiamen. PMID:26656013
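
    As a concrete illustration of the workflow this abstract describes (fit an ARIMA(0, 1, 1) model, then score it with MAPE), here is a minimal sketch using the statsmodels library. The monthly series is synthetic, since the Xiamen mortality data are not reproduced here.

      # Sketch: fit ARIMA(0,1,1) to a monthly count series and compute the
      # mean absolute percentage error (MAPE) on a 12-month hold-out.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(0)
      months = 144                                   # 2002-2013, monthly
      deaths = 55 + np.cumsum(rng.normal(0, 1.5, months))  # toy series

      train, test = deaths[:-12], deaths[-12:]
      result = ARIMA(train, order=(0, 1, 1)).fit()   # one difference, MA(1)

      forecast = result.forecast(steps=12)
      mape = np.mean(np.abs((test - forecast) / test)) * 100
      print(f"12-month forecast MAPE: {mape:.2f}%")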

  15. TIME-AVERAGE-BASED METHODS FOR MULTI-ANGULAR SCALE ANALYSIS OF COSMIC-RAY DATA

    SciTech Connect

    Iuppa, R.; Di Sciascio, G. E-mail: giuseppe.disciascio@roma2.infn.it

    2013-04-01

    Over the past decade, a number of experiments dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering contributions from wider structures. A solution commonly envisaged was based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded in the calculation of the reference value, which induces systematic errors. The use of time-average methods recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present both in the northern and southern hemispheres. It is known that the excess (or deficit) is observed as less intense than in reality and that fake deficit zones are rendered around true excesses because of the absolute lack of a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.
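
    A time-average reference flux of the kind discussed above can be sketched in a few lines: the expected counts in each local sky bin are estimated from the bin's share of the all-sky rate averaged over a sliding time window, so any structure wider than the window is absorbed into the reference. This is only a toy illustration of the general idea, with Poisson noise standing in for real data.

      # Toy sketch of a time-average (direct-integration-style) reference:
      # expected counts per local sky bin come from the bin's share of the
      # all-sky rate averaged over a sliding time window, so structures
      # wider than the window are absorbed into the reference flux.
      import numpy as np

      n_time, n_bins = 240, 60               # time steps x local sky bins
      rng = np.random.default_rng(1)
      counts = rng.poisson(100.0, size=(n_time, n_bins)).astype(float)

      window = 24                            # averaging window, time steps
      rel_int = np.empty_like(counts)
      for t in range(n_time):
          lo, hi = max(0, t - window // 2), min(n_time, t + window // 2)
          frac = counts[lo:hi].sum(axis=0)   # per-bin counts in the window
          frac /= frac.sum()                 # relative acceptance per bin
          expected = frac * counts[t].sum()  # expected counts at time t
          rel_int[t] = counts[t] / expected  # relative intensity map
      print("mean relative intensity:", rel_int.mean())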

  16. Absolute testing of flats in sub-stitching interferometer by rotation-shift method

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Xu, Fuchao; Xie, Weimin; Li, Yun; Xing, Tingwen

    2015-09-01

    Most commercially available sub-aperture stitching interferometers measure the surface with a standard lens that produces a reference wavefront, and the precision of the interferometer is generally limited by the standard lens. Higher test accuracy can be achieved by removing the error of the reference surface with an absolute testing method. When the testing accuracy (repeatability and reproducibility) is close to 1 nm, factors other than the reference surface also affect the measuring accuracy, such as the environment, zoom magnification, stitching precision, tooling and fixturing, the characteristics of the optical materials, and so on. We established a stitching system in a class-1000 cleanroom. The stitching system includes a Zygo interferometer and a motion system with a Bilz active isolation system at level VC-F. We review the traditional absolute flat testing methods and emphasize the rotation-shift method. Using the rotation-shift method we obtain the profiles of the reference lens and the test lens. The main problem of the rotation-shift method is the tilt error. In the motion system, we control the tilt error to no more than 4 arcseconds to reduce this error. To obtain higher testing accuracy, we analyze the influence of the environment on the surface shape measurement accuracy by recording the environmental error with Fluke test equipment.

  17. Global Rotation Estimation Using Weighted Iterative Lie Algebraic Averaging

    NASA Astrophysics Data System (ADS)

    Reich, M.; Heipke, C.

    2015-08-01

    In this paper we present an approach for weighted rotation averaging to estimate absolute rotations from the relative rotations between image pairs in a set of multiple overlapping images. The solution does not depend on initial values for the unknown parameters and is robust against outliers. Our approach is one part of a solution for global image orientation. Relative rotations are often not free from outliers; we therefore use the redundancy in the available pairwise relative rotations and present a novel graph-based algorithm to detect and eliminate inconsistent rotations. The remaining relative rotations are input to a weighted least squares adjustment performed in the Lie algebra of the rotation manifold SO(3) to obtain absolute orientation parameters for each image. Weights are determined using prior information derived from the estimation of the relative rotations. Because the averaging is performed in the Lie algebra of SO(3), no subsequent adaptation of the results is required beyond the lossless projection back to the manifold. We evaluate our approach on synthetic and real data. Our approach is often able to detect and eliminate all outliers from the relative rotations even when very high outlier rates are present. We show that we improve the quality of the estimated absolute rotations by introducing individual weights for the relative rotations based on various indicators. In comparison with the state of the art in recent publications on global image orientation, we achieve the best results on the examined datasets.
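
    The core operation, averaging rotations in the Lie algebra of SO(3), can be sketched as follows with scipy; this illustrates the general log/exp-map iteration, not the authors' full adjustment with graph-based outlier removal.

      # Sketch of weighted rotation averaging in the Lie algebra of SO(3):
      # residuals are pulled back with the log map, averaged in the tangent
      # space, and pushed forward again with the exp map until convergence.
      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def lie_average(rotations, weights, iters=50):
          mean = rotations[0]                # initial estimate
          for _ in range(iters):
              # log map: residual rotation vectors in the tangent space
              res = np.array([(mean.inv() * r).as_rotvec() for r in rotations])
              step = np.average(res, axis=0, weights=weights)
              mean = mean * R.from_rotvec(step)   # exp map back to SO(3)
              if np.linalg.norm(step) < 1e-12:
                  break
          return mean

      rots = [R.from_euler("z", a, degrees=True) for a in (9.0, 10.0, 11.5)]
      avg = lie_average(rots, weights=[1.0, 2.0, 1.0])
      print(avg.as_euler("zyx", degrees=True))   # ~[10.125, 0, 0]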

  18. Absolute and relative dosimetry for ELIMED

    SciTech Connect

    Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Cuttone, G.; Candiano, G.; Musumarra, A.; Pisciotta, P.; Romano, F.; Carpinelli, M.; Presti, D. Lo; Raffaele, L.; Tramontana, A.; Cirio, R.; Sacchi, R.; Monaco, V.; Marchetto, F.; Giordanengo, S.

    2013-07-26

    The definition of detectors, methods and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures for obtaining an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to that required for clinical applications (i.e. of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, a Faraday cup, a Secondary Emission Monitor (SEM) and a transmission ionization chamber will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.

  19. Probing absolute spin polarization at the nanoscale.

    PubMed

    Eltschka, Matthias; Jäck, Berthold; Assig, Maximilian; Kondrashov, Oleg V; Skvortsov, Mikhail A; Etzkorn, Markus; Ast, Christian R; Kern, Klaus

    2014-12-10

    Probing absolute values of spin polarization at the nanoscale offers insight into the fundamental mechanisms of spin-dependent transport. Employing the Zeeman splitting in superconducting tips (Meservey-Tedrow-Fulde effect), we introduce a novel spin-polarized scanning tunneling microscopy that combines the probing capability of the absolute values of spin polarization with precise control at the atomic scale. We utilize our novel approach to measure the locally resolved spin polarization of magnetic Co nanoislands on Cu(111). We find that the spin polarization is enhanced by 65% when increasing the width of the tunnel barrier by only 2.3 Å due to the different decay of the electron orbitals into vacuum. PMID:25423049

  20. Absolute-magnitude distributions of supernovae

    SciTech Connect

    Richardson, Dean; Wright, John; Jenkins III, Robert L.; Maddox, Larry

    2014-05-01

    The absolute-magnitude distributions of seven supernova (SN) types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the superluminous events (MB < –21) make up only about 0.1% of all SNe in the bias-corrected sample. The subluminous events (MB > –15) make up about 3%. The normal Ia distribution was the brightest with a mean absolute blue magnitude of –19.25. The IIP distribution was the dimmest at –16.75.

  1. Absolute radiometry and the solar constant

    NASA Technical Reports Server (NTRS)

    Willson, R. C.

    1974-01-01

    A series of active cavity radiometers (ACRs) are described which have been developed as standard detectors for the accurate measurement of irradiance in absolute units. It is noted that the ACR is an electrical substitution calorimeter, is designed for automatic remote operation in any environment, and can make irradiance measurements in the range from low-level IR fluxes up to 30 solar constants with small absolute uncertainty. The instrument operates in a differential mode by chopping the radiant flux to be measured at a slow rate, and irradiance is determined from two electrical power measurements together with the instrumental constant. Results are reported for measurements of the solar constant with two types of ACRs. The more accurate measurement yielded a value of 136.6 plus or minus 0.7 mW/sq cm (1.958 plus or minus 0.010 cal/sq cm per min).
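
    The measurement principle stated above (irradiance from two electrical power measurements plus an instrumental constant) reduces to a one-line substitution. The numbers below are invented for illustration; the aperture area and lumped constant are assumed values, not the instrument's actual calibration.

      # One-line electrical substitution: with the shutter chopping the
      # beam, the radiant power equals the drop in heater power needed to
      # hold the cavity temperature, so E = (P_shut - P_open) / (A * C).
      # A (aperture area) and C (lumped instrumental constant) are assumed.
      A = 0.5e-4                     # precision aperture area, m^2
      C = 0.999                      # lumped instrumental constant
      P_shut, P_open = 70.00e-3, 1.70e-3   # heater power, W (shutter closed/open)

      E = (P_shut - P_open) / (A * C)      # irradiance, W/m^2
      print(f"irradiance: {E:.1f} W/m^2")  # ~1367, about one solar constant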

  2. Asteroid absolute magnitudes and slope parameters

    NASA Technical Reports Server (NTRS)

    Tedesco, Edward F.

    1991-01-01

    A new listing of absolute magnitudes (H) and slope parameters (G) has been created and published in the Minor Planet Circulars; this same listing will appear in the 1992 Ephemerides of Minor Planets. Unlike previous listings, the values of the current list were derived from fits of data at the V band. All observations were reduced in the same fashion using, where appropriate, a single default value of 0.15 for the slope parameter. Distances and phase angles were computed for each observation. The data for 113 asteroids were of sufficiently high quality to permit derivation of their H and G. These improved absolute magnitudes and slope parameters will be used to deduce the most reliable bias-corrected asteroid size-frequency distribution yet made.

  3. Absolute calibration of TFTR helium proportional counters

    SciTech Connect

    Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L.; Barnes, C.W. |; Loughlin, M. |

    1995-06-01

    The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments.

  4. Absolute enantioselective separation: optical activity ex machina.

    PubMed

    Bielski, Roman; Tencer, Michal

    2005-11-01

    The paper describes a methodology of using three independent macroscopic factors affecting molecular orientation to accomplish separation of a racemic mixture without the presence of any other chiral compounds, i.e., absolute enantioselective separation (AES), which is an extension of the concept of applying these factors to absolute asymmetric synthesis. The three factors may be applied simultaneously or, if their effects can be retained, consecutively. The resulting three mutually orthogonal or near-orthogonal directors constitute a true chiral influence, and their scalar triple product is the measure of the chirality of the system. AES can be executed in a chromatography-like microfluidic process in the presence of an electric field. It may be carried out on a chemically modified flat surface or on a monolithic polymer column made of a mesoporous material, each having imparted directional properties. Separation parameters were estimated for these media and possible implications for natural homochirality are discussed. PMID:16342798

  5. An absolute measure for a key currency

    NASA Astrophysics Data System (ADS)

    Oya, Shunsuke; Aihara, Kazuyuki; Hirata, Yoshito

    It is generally considered that the US dollar and the euro are the key currencies in the world and in Europe, respectively. However, there is no absolute general measure for a key currency. Here, we investigate the 24-hour periodicity of foreign exchange markets using a recurrence plot, and define an absolute measure for a key currency based on the strength of the periodicity. Moreover, we analyze the time evolution of this measure. The results show that the credibility of the US dollar has not decreased significantly since the Lehman shock, when Lehman Brothers went bankrupt and disrupted the economic markets, and has even increased relative to that of the euro and the Japanese yen.

  6. From Hubble's NGSL to Absolute Fluxes

    NASA Technical Reports Server (NTRS)

    Heap, Sara R.; Lindler, Don

    2012-01-01

    Hubble's Next Generation Spectral Library (NGSL) consists of R ~ 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. Each spectrum covers the wavelength range 0.18-1.00 microns. The library can be viewed and/or downloaded from the website, http://archive.stsci.edu/prepds/stisngsll. Stars in the NGSL are now being used as absolute flux standards at ground-based observatories. However, the uncertainty in the absolute flux is about 2%, which does not meet the requirements of dark-energy surveys. We are therefore developing an observing procedure that should yield fluxes with uncertainties less than 1% and will take part in an HST proposal to observe up to 15 stars using this new procedure.

  7. Vocal attractiveness increases by averaging.

    PubMed

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  8. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average, relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude-belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible; relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions; and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
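
    The latitude-belt dwell-time allocation mentioned above follows from simple orbit geometry: for a circular orbit of inclination i, sin(lat) = sin(i)·sin(u), where the argument of latitude u advances uniformly in time, so dwell fractions per belt can be obtained by histogramming u. The sketch below is a toy illustration of this geometry, not the NASA tool itself; note how the density piles up near the inclination limit (about 55°, the latitude of Tierra del Fuego).

      # Latitude-belt dwell times for a circular orbit: sin(lat) equals
      # sin(i)*sin(u), and the argument of latitude u advances uniformly
      # in time, so histogramming u gives the dwell fraction per belt.
      import numpy as np

      inc = np.radians(55.0)                    # GPS orbital inclination
      u = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
      lat = np.degrees(np.arcsin(np.sin(inc) * np.sin(u)))

      belts = np.arange(-60.0, 65.0, 5.0)       # 5-degree belt edges
      dwell, _ = np.histogram(lat, bins=belts)
      frac = dwell / dwell.sum()                # time fraction per belt
      peak = belts[np.argmax(frac)]
      print(f"densest belt starts at {peak:.0f} deg latitude")  # near +/-55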

  9. Metallic Magnetic Calorimeters for Absolute Activity Measurement

    NASA Astrophysics Data System (ADS)

    Loidl, M.; Leblanc, E.; Rodrigues, M.; Bouchard, J.; Censier, B.; Branger, T.; Lacour, D.

    2008-05-01

    We present a prototype of metallic magnetic calorimeters that we are developing for absolute activity measurements of low energy emitting radionuclides. We give a detailed description of the realization of the prototype, containing an 55Fe source inside the detector absorber. We present the analysis of first data taken with this detector and compare the result of activity measurement with liquid scintillation counting. We also propose some ways for reducing the uncertainty on the activity determination with this new technique.

  10. Absolute photoionization cross sections of atomic oxygen

    NASA Technical Reports Server (NTRS)

    Samson, J. A. R.; Pareek, P. N.

    1985-01-01

    The absolute values of photoionization cross sections of atomic oxygen were measured from the ionization threshold to 120 A. An auto-ionizing resonance belonging to the 2S2P4(4P)3P(3Do, 3So) transition was observed at 479.43 A and another line at 389.97 A. The experimental data is in excellent agreement with rigorous close-coupling calculations that include electron correlations in both the initial and final states.

  11. Absolute photoionization cross sections of atomic oxygen

    NASA Technical Reports Server (NTRS)

    Samson, J. A. R.; Pareek, P. N.

    1982-01-01

    The absolute values of photoionization cross sections of atomic oxygen were measured from the ionization threshold to 120 A. An auto-ionizing resonance belonging to the 2S2P4(4P)3P(3Do, 3So) transition was observed at 479.43 A and another line at 389.97 A. The experimental data is in excellent agreement with rigorous close-coupling calculations that include electron correlations in both the initial and final states.

  12. Blood pressure targets and absolute cardiovascular risk.

    PubMed

    Odutayo, Ayodele; Rahimi, Kazem; Hsiao, Allan J; Emdin, Connor A

    2015-08-01

    In the Eighth Joint National Committee guideline on hypertension, the threshold for the initiation of blood pressure-lowering treatment for elderly adults (≥60 years) without chronic kidney disease or diabetes mellitus was raised from 140/90 mm Hg to 150/90 mm Hg. However, the committee was not unanimous in this decision, particularly because a large proportion of adults ≥60 years may be at high cardiovascular risk. On the basis of Eighth Joint National Committee guideline, we sought to determine the absolute 10-year risk of cardiovascular disease among these adults through analyzing the National Health and Nutrition Examination Survey (2005-2012). The primary outcome measure was the proportion of adults who were at ≥20% predicted absolute cardiovascular risk and above goals for the Seventh Joint National Committee guideline but reclassified as at target under the Eighth Joint National Committee guideline (reclassified). The Framingham General Cardiovascular Disease Risk Score was used. From 2005 to 2012, the surveys included 12 963 adults aged 30 to 74 years with blood pressure measurements, of which 914 were reclassified based on the guideline. Among individuals reclassified as not in need of additional treatment, the proportion of adults 60 to 74 years without chronic kidney disease or diabetes mellitus at ≥20% absolute risk was 44.8%. This corresponds to 0.8 million adults. The proportion at high cardiovascular risk remained sizable among adults who were not receiving blood pressure-lowering treatment. Taken together, a sizable proportion of reclassified adults 60 to 74 years without chronic kidney disease or diabetes mellitus was at ≥20% absolute cardiovascular risk. PMID:26056340

  13. Absolute distance measurements by variable wavelength interferometry

    NASA Astrophysics Data System (ADS)

    Bien, F.; Camac, M.; Caulfield, H. J.; Ezekiel, S.

    1981-02-01

    This paper describes a laser interferometer which provides absolute distance measurements using tunable lasers. An active feedback loop system, in which the laser frequency is locked to the optical path length difference of the interferometer, is used to tune the laser wavelengths. If the two wavelengths are very close, electronic frequency counters can be used to measure the beat frequency between the two laser frequencies and thus to determine the optical path difference between the two legs of the interferometer.
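
    One common variant of the scheme described above can be made concrete: if each laser is locked so that the optical path difference L holds an integer number of wavelengths (nu_i = m_i·c/L), the measured beat frequency gives L directly once the integer mode difference is known. The numbers in this worked example are illustrative, not from the paper.

      # If each laser frequency is locked to the interferometer so that the
      # optical path difference L holds an integer number of wavelengths,
      # nu_i = m_i * c / L, then the beat between the two lasers gives
      # L = dm * c / dnu once the integer mode difference dm is known.
      # dm and dnu below are illustrative values, not from the paper.
      c = 299_792_458.0            # speed of light, m/s

      dm = 10                      # mode-number difference of the two locks
      dnu = 2.997_924_58e9         # measured beat frequency, Hz

      L = dm * c / dnu             # optical path difference, m
      print(f"optical path difference: {L:.9f} m")   # -> 1.000000000 m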

  14. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  15. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  16. Absolute dosimetry for extreme-ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Berger, Kurt W.; Campiotti, Richard H.

    2000-06-01

    The accurate measurement of an exposure dose reaching the wafer on an extreme ultraviolet (EUV) lithographic system has been a technical challenge directly applicable to the evaluation of candidate EUV resist materials and calculating lithography system throughputs. We have developed a dose monitoring sensor system that can directly measure EUV intensities at the wafer plane of a prototype EUV lithographic system. This sensor system, located on the wafer stage adjacent to the electrostatic chuck used to grip wafers, operates by translating the sensor into the aerial image, typically illuminating an 'open' (unpatterned) area on the reticle. The absolute signal strength can be related to energy density at the wafer, and thus used to determine resist sensitivity, and the signal as a function of position can be used to determine illumination uniformity at the wafer plane. Spectral filtering to enhance the detection of 13.4 nm radiation was incorporated into the sensor. Other critical design parameters include the packaging and amplification technologies required to place this device into the space and vacuum constraints of a EUV lithography environment. We describe two approaches used to determine the absolute calibration of this sensor. The first conventional approach requires separate characterization of each element of the sensor. A second novel approach uses x-ray emission from a mildly radioactive iron source to calibrate the absolute response of the entire sensor system (detector and electronics) in a single measurement.

  17. Evaluations of average level spacings

    SciTech Connect

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables.
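
    The role of the truncated Porter-Thomas distribution can be illustrated with a small sketch: reduced neutron widths follow a chi-square distribution with one degree of freedom, so a detection threshold leaves a predictable fraction of levels observed, from which the observed average spacing can be corrected. The threshold and spacing values below are invented for illustration; real evaluations fit the truncated distribution rather than assuming the threshold is known exactly.

      # Reduced neutron widths follow Porter-Thomas, i.e. chi-square with
      # one degree of freedom, so a detection threshold g_th (in units of
      # the average reduced width) leaves a fraction f = P(g > g_th) of the
      # levels observed; the observed spacing is corrected accordingly.
      # g_th and D_obs are invented numbers for illustration.
      from scipy.stats import chi2

      g_th = 0.05                  # threshold, units of the mean width
      f_obs = chi2.sf(g_th, df=1)  # observed fraction under Porter-Thomas
      D_obs = 12.0                 # observed average level spacing, eV
      D_true = D_obs * f_obs       # missed levels -> smaller true spacing
      print(f"observed fraction {f_obs:.3f}, corrected spacing {D_true:.2f} eV")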

  18. On generalized averaged Gaussian formulas

    NASA Astrophysics Data System (ADS)

    Spalevic, Miodrag M.

    2007-09-01

    We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions w(x) ≡ w^(α,β)(x) = (1-x)^α(1+x)^β (α, β > -1) we give a necessary and sufficient condition on the parameters α and β such that the optimal averaged Gaussian quadrature formulas are internal.

  19. Insulin use: preventable errors.

    PubMed

    2014-01-01

    Insulin is vital for patients with type 1 diabetes and useful for certain patients with type 2 diabetes. The serious consequences of insulin-related medication errors are overdose, resulting in severe hypoglycaemia, causing seizures, coma and even death; or underdose, resulting in hyperglycaemia and sometimes ketoacidosis. Errors associated with the preparation and administration of insulin are often reported, both outside and inside the hospital setting. These errors are preventable. By analysing reports from organisations devoted to medication error prevention and from poison control centres, as well as a few studies and detailed case reports of medication errors, various types of error associated with insulin use have been identified, especially in the hospital setting. Generally, patients know more about the practicalities of their insulin treatment than healthcare professionals with intermittent involvement. Medication errors involving insulin can occur at each step of the medication-use process: prescribing, data entry, preparation, dispensing and administration. When prescribing insulin, wrong-dose errors have been caused by the use of abbreviations, especially "U" instead of the word "units" (often resulting in a 10-fold overdose because the "U" is read as a zero), or by failing to write the drug's name correctly or in full. In electronic prescribing, the sheer number of insulin products is a source of confusion and, ultimately, wrong-dose errors, and often overdose. Prescribing, dispensing or administration software is rarely compatible with insulin prescriptions in which the dose is adjusted on the basis of the patient's subsequent capillary blood glucose readings, and can therefore generate errors. When preparing and dispensing insulin, a tuberculin syringe is sometimes used instead of an insulin syringe, leading to overdose. Other errors arise from confusion created by similar packaging, between different insulin products or between insulin and other

  20. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  1. Polyhedral Painting with Group Averaging

    ERIC Educational Resources Information Center

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  2. Averaged Electroencephalic Audiometry in Infants

    ERIC Educational Resources Information Center

    Lentz, William E.; McCandless, Geary A.

    1971-01-01

    Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)

  3. Averaging inhomogeneous cosmologies - a dialogue.

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  4. Averaging inhomogeneous cosmologies - a dialogue

    NASA Astrophysics Data System (ADS)

    Buchert, T.

    The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.

  5. Averaging facial expression over time

    PubMed Central

    Haberman, Jason; Harp, Tom; Whitney, David

    2010-01-01

    The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064

  6. Average Cost of Common Schools.

    ERIC Educational Resources Information Center

    White, Fred; Tweeten, Luther

    The paper shows costs of elementary and secondary schools applicable to Oklahoma rural areas, including the long-run average cost curve which indicates the minimum per student cost for educating various numbers of students and the application of the cost curves determining the optimum school district size. In a stratified sample, the school…

  7. Facts about Refractive Errors

    MedlinePlus

    ... the lens can cause refractive errors. What is refraction? Refraction is the bending of light as it passes ... rays entering the eye, causing a more precise refraction or focus. In many cases, contact lenses provide ...

  8. Errors in prenatal diagnosis.

    PubMed

    Anumba, Dilly O C

    2013-08-01

    Prenatal screening and diagnosis are integral to antenatal care worldwide. Prospective parents are offered screening for common fetal chromosomal and structural congenital malformations. In most developed countries, prenatal screening is routinely offered in a package that includes ultrasound scan of the fetus and the assay in maternal blood of biochemical markers of aneuploidy. Mistakes can arise at any point of the care pathway for fetal screening and diagnosis, and may involve individual or corporate systemic or latent errors. Special clinical circumstances, such as maternal size, fetal position, and multiple pregnancy, contribute to the complexities of prenatal diagnosis and to the chance of error. Clinical interventions may lead to adverse outcomes not caused by operator error. In this review I discuss the scope of the errors in prenatal diagnosis, and highlight strategies for their prevention and diagnosis, as well as identify areas for further research and study to enhance patient safety. PMID:23725900

  9. Error mode prediction.

    PubMed

    Hollnagel, E; Kaarstad, M; Lee, H C

    1999-11-01

    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  10. Pronominal Case-Errors

    ERIC Educational Resources Information Center

    Kaper, Willem

    1976-01-01

    Contradicts a previous assertion by C. Tanz that children commit substitution errors usually using objective pronoun forms for nominative ones. Examples from Dutch and German provide evidence that substitutions are made in both directions. (CHK)

  11. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  12. Exact averaging of laminar dispersion

    NASA Astrophysics Data System (ADS)

    Ratnakar, Ram R.; Balakotaiah, Vemuri

    2011-02-01

    We use the Liapunov-Schmidt (LS) technique of bifurcation theory to derive a low-dimensional model for laminar dispersion of a nonreactive solute in a tube. The LS formalism leads to an exact averaged model, consisting of the governing equation for the cross-section averaged concentration, along with the initial and inlet conditions, to all orders in the transverse diffusion time. We use the averaged model to analyze the temporal evolution of the spatial moments of the solute and show that they do not have the centroid displacement or variance deficit predicted by the coarse-grained models derived by other methods. We also present a detailed analysis of the first three spatial moments for short and long times as a function of the radial Peclet number and identify three clearly defined time intervals for the evolution of the solute concentration profile. By examining the skewness in some detail, we show that the skewness increases initially, attains a maximum for time scales of the order of transverse diffusion time, and the solute concentration profile never attains the Gaussian shape at any finite time. Finally, we reason that there is a fundamental physical inconsistency in representing laminar (Taylor) dispersion phenomena using truncated averaged models in terms of a single cross-section averaged concentration and its large scale gradient. Our approach evaluates the dispersion flux using a local gradient between the dominant diffusive and convective modes. We present and analyze a truncated regularized hyperbolic model in terms of the cup-mixing concentration for the classical Taylor-Aris dispersion that has a larger domain of validity compared to the traditional parabolic model. By analyzing the temporal moments, we show that the hyperbolic model has no physical inconsistencies that are associated with the parabolic model and can describe the dispersion process to first order accuracy in the transverse diffusion time.

  13. Error-Compensated Telescope

    NASA Technical Reports Server (NTRS)

    Meinel, Aden B.; Meinel, Marjorie P.; Stacy, John E.

    1989-01-01

    Proposed reflecting telescope includes large, low-precision primary mirror stage and small, precise correcting mirror. Correcting mirror machined under computer control to compensate for error in primary mirror. Correcting mirror machined by diamond cutting tool. Computer analyzes interferometric measurements of primary mirror to determine shape of surface of correcting mirror needed to compensate for errors in wave front reflected from primary mirror and commands position and movement of cutting tool accordingly.

  14. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  15. LANDSAT-4 horizon scanner full orbit data averages

    NASA Technical Reports Server (NTRS)

    Stanley, J. P.; Bilanow, S.

    1983-01-01

    Averages taken over full orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full orbit averages over representative data throughout the year is analyzed to demonstrate the long term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2 to 4 week intervals. Each segment is roughly 24 hours in length. The variation of the full orbit average is examined both as a function of orbit within a day and as a function of day of year. The dependence on day of year is based on associating the start date of each segment with the mean full orbit average for that segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed and their variation with day of year is also examined.

  16. SAR image quality effects of damped phase and amplitude errors

    NASA Astrophysics Data System (ADS)

    Zelenka, Jerry S.; Falk, Thomas

    The effects of damped multiplicative (amplitude or phase) errors on the image quality of synthetic-aperture radar systems are considered. These types of errors can result from aircraft maneuvers or the mechanical steering of an antenna. The proper treatment of damped multiplicative errors can lead to related design specifications and possibly an enhanced collection capability. Only small, high-frequency errors are considered. Expressions for the average intensity and energy associated with a damped multiplicative error are presented and used to derive graphical results. A typical example is used to show how to apply the results of this effort.

  17. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  18. Averaging Robertson-Walker cosmologies

    NASA Astrophysics Data System (ADS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-04-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ωeff,0 ≈ 4 × 10-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state weff < -1/3 can be found for strongly phantom models.

  19. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  20. The National Geodetic Survey absolute gravity program

    NASA Astrophysics Data System (ADS)

    Peter, George; Moose, Robert E.; Wessells, Claude W.

    1989-03-01

    The National Geodetic Survey absolute gravity program will utilize the high precision afforded by the JILAG-4 instrument to support geodetic and geophysical research, which involves studies of vertical motions, identification and modeling of other temporal variations, and establishment of reference values. The scientific rationale of these objectives is given, the procedures used to collect gravity and environmental data in the field are defined, and the steps necessary to correct and remove unwanted environmental effects are stated. In addition, site selection criteria, methods of concomitant environmental data collection and relative gravity observations, and schedule and logistics are discussed.

  1. Characterization of the DARA solar absolute radiometer

    NASA Astrophysics Data System (ADS)

    Finsterle, W.; Suter, M.; Fehlmann, A.; Kopp, G.

    2011-12-01

    The Davos Absolute Radiometer (DARA) prototype is an Electrical Substitution Radiometer (ESR) which has been developed as a successor of the PMO6 type for future space missions and ground-based TSI measurements. The DARA implements an improved thermal design of the cavity detector and heat sink assembly to minimize air-vacuum differences and to maximize the thermal symmetry of the measuring and compensating cavities. The DARA also employs an inverted viewing geometry to reduce internal stray light. We will report on the characterization and calibration experiments which were carried out at PMOD/WRC and LASP (TRF).

  2. Absolute calibration of the Auger fluorescence detectors

    SciTech Connect

    Bauleo, P.; Brack, J.; Garrard, L.; Harton, J.; Knapik, R.; Meyhandan, R.; Rovero, A.C.; Tamashiro, A.; Warner, D.

    2005-07-01

    Absolute calibration of the Pierre Auger Observatory fluorescence detectors uses a light source at the telescope aperture. The technique accounts for the combined effects of all detector components in a single measurement. The calibrated 2.5 m diameter light source fills the aperture, providing uniform illumination to each pixel. The known flux from the light source and the response of the acquisition system give the required calibration for each pixel. In the lab, light source uniformity is studied using CCD images and the intensity is measured relative to NIST-calibrated photodiodes. Overall uncertainties are presently 12%, and are dominated by systematics.

  3. Absolute angular positioning in ultrahigh vacuum

    SciTech Connect

    Schief, H.; Marsico, V.; Kern, K.

    1996-05-01

    Commercially available angular resolvers, which are routinely used in machine tools and robotics, are modified and adapted to be used under ultrahigh-vacuum (UHV) conditions. They provide straightforward and reliable measurements of angular positions for any kind of UHV sample manipulator. The corresponding absolute reproducibility is on the order of 0.005°, whereas the relative resolution is better than 0.001°, as demonstrated by high-resolution helium-reflectivity measurements. The mechanical setup and possible applications are discussed. © 1996 American Institute of Physics.

  4. Absolute Priority for a Vehicle in VANET

    NASA Astrophysics Data System (ADS)

    Shirani, Rostam; Hendessi, Faramarz; Montazeri, Mohammad Ali; Sheikh Zefreh, Mohammad

    In today's world, traffic jams waste hundreds of hours of our lives, and this has prompted many researchers to attack the problem with the idea of Intelligent Transportation Systems. For some applications, such as a travelling ambulance, it is important to reduce delay even by a second. In this paper, we propose a completely infrastructure-less approach for finding the shortest path and controlling traffic lights to provide absolute priority for an emergency vehicle. We use the idea of vehicular ad-hoc networking to reduce the imposed travelling time. We then simulate our proposed protocol and compare it with a centrally controlled traffic light system.

  5. Boosting with Averaged Weight Vectors

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.
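    As a concrete illustration of the orthogonality property discussed in this abstract, the sketch below (plain NumPy, illustrative variable names) verifies that the standard AdaBoost reweighting leaves the previous base model's weighted error at exactly 1/2, i.e. the new distribution is uncorrelated with that model's mistake vector:

```python
import numpy as np

# Sketch of the AdaBoost reweighting step: after the update, the previous
# model's weighted error is exactly 1/2, so the new distribution is
# "orthogonal to" (uncorrelated with) its mistake vector.
rng = np.random.default_rng(0)
n = 12
D = np.full(n, 1.0 / n)                 # current distribution over examples
mistakes = rng.random(n) < 0.3          # mistake vector of the base model
eps = np.sum(D[mistakes])               # weighted error, here in (0, 1/2)

# Standard AdaBoost update: scale mistaken examples up, correct ones down.
D_new = D * np.where(mistakes, 0.5 / eps, 0.5 / (1.0 - eps))
print(np.isclose(np.sum(D_new[mistakes]), 0.5))   # True: error becomes 1/2
print(np.isclose(D_new.sum(), 1.0))               # still a distribution
```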

  6. Contouring error compensation on a micro coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Fan, Kuang-Chao; Wang, Hung-Yu; Ye, Jyun-Kuan

    2011-12-01

    In recent years, three-dimensional measurement for nanotechnology research has received great attention worldwide. Given the demand for high accuracy, error compensation of the measuring machine is very important. In this study, a high precision Micro-CMM (coordinate measuring machine) has been developed, composed of a coplanar stage for reducing the Abbé error in the vertical direction, a linear diffraction grating interferometer (LDGI) as the position feedback sensor with nanometer resolution, and ultrasonic motors for position control. This paper presents the error compensation strategy, covering "home accuracy" and "position accuracy" in both axes. For home error compensation, we utilize a commercial DVD pick-up head and its S-curve principle to accurately locate the origin of each axis. For positioning error compensation, the absolute positions relative to the home are calibrated by laser interferometer and the error budget table is stored for feed-forward error compensation, as sketched below. Contouring error can thus be compensated when the X and Y positioning-error corrections are both applied. Experiments show the contouring accuracy can be controlled to within 50 nm after compensation.
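    The feed-forward scheme amounts to a calibrated lookup table applied to each commanded position. The following is a minimal sketch under that reading of the abstract; the table values, units, and function names are hypothetical:

```python
import numpy as np

# Minimal sketch of feed-forward positioning-error compensation, assuming a
# laser-interferometer calibration produced a per-axis error-budget table.
# Positions in mm, errors in nm; all values below are hypothetical.
calib_positions_mm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
x_error_nm = np.array([0.0, 12.0, 18.0, 9.0, -4.0])   # measured - commanded
y_error_nm = np.array([0.0, -7.0, -15.0, -11.0, 3.0])

def compensated_target(axis_errors_nm, commanded_mm):
    """Return the corrected command so the stage lands on the true position."""
    # Interpolate the calibrated error at the commanded position...
    predicted_error_nm = np.interp(commanded_mm, calib_positions_mm, axis_errors_nm)
    # ...and subtract it from the command (feed-forward correction), nm -> mm.
    return commanded_mm - predicted_error_nm * 1e-6

# Contouring: compensate both axes independently for each point on the path.
path = [(2.5, 7.0), (8.0, 12.5), (14.0, 3.0)]
corrected = [(compensated_target(x_error_nm, x),
              compensated_target(y_error_nm, y)) for x, y in path]
print(corrected)
```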

  7. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  8. Absolute photometric calibration of detectors to 0.3 mmag using amplitude-stabilized lasers and a helium-cooled absolute radiometer

    NASA Technical Reports Server (NTRS)

    Miller, Peter J.

    1988-01-01

    Laser sources whose intensity is determined with a cryogenic electrical substitution radiometer are described. Detectors are then calibrated against this known flux, with an overall error of 0.028 percent (0.3 mmag). Ongoing research has produced laser intensity stabilizers with flicker and drift of less than 0.01 percent. Recently, the useful wavelength limit of these stabilizers have been extended to 1.65 microns by using a new modular technology and InGaAs detector systems. Data from Si photodiode calibration using the method of Zalewski and Geist are compared against an absolute cavity radiometer calibration as an internal check on the calibration system.

  9. SU-E-T-152: Error Sensitivity and Superiority of a Protocol for 3D IMRT Quality Assurance

    SciTech Connect

    Gueorguiev, G; Cotter, C; Turcotte, J; Sharp, G; Crawford, B; Mah'D, M

    2014-06-01

    Purpose: To test whether the parameters included in our 3D QA protocol, with current tolerance levels, are able to detect certain errors, and to show the superiority of the 3D QA method over single ion chamber measurements and the 2D gamma test by detecting most of the introduced errors. The 3D QA protocol parameters are: average dose difference between TPS and measurement, a 3D gamma test with 3 mm DTA/3% test parameters, and the structure volume for which the TPS-predicted and measured absolute dose difference is greater than 6%. Methods: Two prostate and two thoracic step-and-shoot IMRT patients were investigated. The following errors were introduced into each original treatment plan: energy switched from 6 MV to 10 MV, linac jaws retracted to 15 cm x 15 cm, 1, 2, or 3 central MLC leaf pairs retracted behind the jaws, a single central MLC leaf put in or out of the treatment field, Monitor Units (MU) increased and decreased by 1 and 3%, collimator off by 5 and 15 degrees, detector shifted by 5 mm to the left and right, and gantry treatment angle off by 5 and 15 degrees. QA was performed on each plan using a single ion chamber, a 2D ion chamber array for 2D gamma analysis, and IBA's COMPASS system for 3D QA. Results: Of the three tested QA methods, the single ion chamber performed worst, failing to detect subtle errors. 3D QA proved to be the superior method, detecting all of the introduced errors except the 10 MV energy switch, the 1% MU change, and the MLC rotation (errors that none of the tested QA methods detected). Conclusion: As the way radiation is delivered evolves, so must the QA. We believe a diverse set of 3D statistical parameters applied to both OAR and target plan structures provides the highest level of QA.
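    A rough sketch of two of the protocol's three parameters on synthetic per-voxel dose arrays (the 3D gamma test is omitted for brevity); the array names and the interpretation of the 6% tolerance are assumptions made for illustration:

```python
import numpy as np

# Toy per-voxel doses (Gy) inside one structure: TPS prediction vs. a
# measurement with 2% multiplicative noise. Values are synthetic.
rng = np.random.default_rng(0)
tps_dose = rng.uniform(60.0, 70.0, size=10_000)
meas_dose = tps_dose * rng.normal(1.0, 0.02, size=tps_dose.shape)

# Parameter 1: average dose difference between TPS and measurement (%).
avg_diff_pct = 100.0 * (meas_dose.mean() - tps_dose.mean()) / tps_dose.mean()

# Parameter 3: fraction of the structure volume where the absolute dose
# difference exceeds 6% of the TPS dose (equal voxel volumes assumed).
rel_diff_pct = 100.0 * np.abs(meas_dose - tps_dose) / tps_dose
volume_over_6pct = 100.0 * np.mean(rel_diff_pct > 6.0)

print(f"mean dose difference: {avg_diff_pct:+.2f}%")
print(f"volume with |diff| > 6%: {volume_over_6pct:.1f}%")
```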

  10. Determination of the absolute contours of optical flats

    NASA Technical Reports Server (NTRS)

    Primak, W.

    1969-01-01

    Emerson's procedure is used to determine the true absolute contours of optical flats. Absolute contours of standard flats are determined, and a comparison is then made between standard and unknown flats. Contour differences are determined from the deviations of the Fizeau fringes.

  11. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  12. Standardization of the cumulative absolute velocity

    SciTech Connect

    O'Hara, T.F.; Jacobson, J.P. )

    1991-12-01

    EPRI NP-5930, "A Criterion for Determining Exceedance of the Operating Basis Earthquake," was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. In the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) acceleration. Therefore, it is necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value based upon applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method to standardize the CAV calculation is to window its calculation on a second-by-second basis for a given time history: a one-second interval contributes to the CAV only if the absolute acceleration exceeds 0.025g at some time during that interval. The earthquake records used in EPRI NP-5930 have been reanalyzed on this basis, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
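    A minimal sketch of the windowed CAV calculation described above, assuming a uniformly sampled acceleration record in units of g; variable names are illustrative:

```python
import numpy as np

# Standardized (windowed) CAV: integrate |a(t)| only over one-second windows
# in which the absolute acceleration exceeds 0.025 g at least once.
def standardized_cav(accel_g, dt):
    """accel_g: acceleration history in units of g; dt: sample interval (s)."""
    samples_per_sec = int(round(1.0 / dt))
    cav = 0.0
    for start in range(0, len(accel_g), samples_per_sec):
        window = accel_g[start:start + samples_per_sec]
        if np.max(np.abs(window)) > 0.025:      # non-damaging windows skipped
            cav += np.sum(np.abs(window)) * dt  # contribution in g-sec
    return cav  # compare against the adjusted threshold (0.16 g-sec)

# Example: synthetic 20 s record sampled at 100 Hz.
t = np.arange(0.0, 20.0, 0.01)
accel = 0.05 * np.sin(2 * np.pi * 2 * t) * np.exp(-0.2 * t)
print(f"CAV = {standardized_cav(accel, 0.01):.3f} g-sec")
```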

  13. Absolute rates of hole transfer in DNA.

    PubMed

    Senthilkumar, Kittusamy; Grozema, Ferdinand C; Guerra, Célia Fonseca; Bickelhaupt, F Matthias; Lewis, Frederick D; Berlin, Yuri A; Ratner, Mark A; Siebbeles, Laurens D A

    2005-10-26

    Absolute rates of hole transfer between guanine nucleobases separated by one or two A:T base pairs in stilbenedicarboxamide-linked DNA hairpins were obtained by improved kinetic analysis of experimental data. The charge-transfer rates in four different DNA sequences were calculated using a density-functional-based tight-binding model and a semiclassical superexchange model. Site energies and charge-transfer integrals were calculated directly as the diagonal and off-diagonal matrix elements of the Kohn-Sham Hamiltonian, respectively, for all possible combinations of nucleobases. Taking into account the Coulomb interaction between the negative charge on the stilbenedicarboxamide linker and the hole on the DNA strand as well as effects of base pair twisting, the relative order of the experimental rates for hole transfer in different hairpins could be reproduced by tight-binding calculations. To reproduce quantitatively the absolute values of the measured rate constants, the effect of the reorganization energy was taken into account within the semiclassical superexchange model for charge transfer. The experimental rates could be reproduced with reorganization energies near 1 eV. The quantum chemical data obtained were used to discuss charge carrier mobility and hole-transport equilibria in DNA. PMID:16231945

  14. Transient absolute robustness in stochastic biochemical networks.

    PubMed

    Enciso, German A

    2016-08-01

    Absolute robustness allows biochemical networks to sustain a consistent steady-state output in the face of protein concentration variability from cell to cell. This property is structural and can be determined from the topology of the network alone regardless of rate parameters. An important question regarding these systems is the effect of discrete biochemical noise in the dynamical behaviour. In this paper, a variable freezing technique is developed to show that under mild hypotheses the corresponding stochastic system has a transiently robust behaviour. Specifically, after finite time the distribution of the output approximates a Poisson distribution, centred around the deterministic mean. The approximation becomes increasingly accurate, and it holds for increasingly long finite times, as the total protein concentrations grow to infinity. In particular, the stochastic system retains a transient, absolutely robust behaviour corresponding to the deterministic case. This result contrasts with the long-term dynamics of the stochastic system, which eventually must undergo an extinction event that eliminates robustness and is completely different from the deterministic dynamics. The transiently robust behaviour may be sufficient to carry out many forms of robust signal transduction and cellular decision-making in cellular organisms. PMID:27581485

  15. Absolute Electron Extraction Efficiency of Liquid Xenon

    NASA Astrophysics Data System (ADS)

    Kamdin, Katayun; Mizrachi, Eli; Morad, James; Sorensen, Peter

    2016-03-01

    Dual phase liquid/gas xenon time projection chambers (TPCs) currently set the world's most sensitive limits on weakly interacting massive particles (WIMPs), a favored dark matter candidate. These detectors rely on extracting electrons from liquid xenon into gaseous xenon, where they produce proportional scintillation. The proportional scintillation from the extracted electrons serves to internally amplify the WIMP signal; even a single extracted electron is detectable. Credible dark matter searches can proceed with electron extraction efficiency (EEE) lower than 100%. However, electrons systematically left at the liquid/gas boundary are a concern. Possible effects include spontaneous single or multi-electron proportional scintillation signals in the gas, or charging of the liquid/gas interface or detector materials. Understanding EEE is consequently a serious concern for this class of rare event search detectors. Previous EEE measurements have mostly been relative, not absolute, assuming efficiency plateaus at 100%. I will present an absolute EEE measurement with a small liquid/gas xenon TPC test bed located at Lawrence Berkeley National Laboratory.

  16. Sentinel-2/MSI absolute calibration: first results

    NASA Astrophysics Data System (ADS)

    Lonjou, V.; Lachérade, S.; Fougnie, B.; Gamet, P.; Marcq, S.; Raynaud, J.-L.; Tremas, T.

    2015-10-01

    Sentinel-2 is an optical imaging mission devoted to the operational monitoring of land and coastal areas. It is developed in partnership between the European Commission and the European Space Agency. The Sentinel-2 mission is based on a satellite constellation deployed in polar sun-synchronous orbit. It will offer a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infra-red domains). CNES is involved in the instrument commissioning in collaboration with ESA. This paper reviews all the techniques that will be used to ensure an absolute calibration of the 13 spectral bands better than 5% (target 3%), and presents the first results where available. First, the nominal calibration technique, based on an on-board sun diffuser, is detailed. Then, we show how vicarious calibration methods based on acquisitions over natural targets (oceans, deserts, and Antarctica during winter) will be used to check and improve the accuracy of the absolute calibration coefficients. Finally, the verification scheme, exploiting in-situ photometer measurements over the La Crau plain, is described. A synthesis, including spectral coherence, inter-method agreement and temporal evolution, concludes the paper.

  17. Absolute Spectrophotometry of 237 Open Cluster Stars

    NASA Astrophysics Data System (ADS)

    Clampitt, L.; Burstein, D.

    1994-12-01

    We present absolute spectrophotometry of 237 stars in 7 nearby open clusters: Hyades, Pleiades, Alpha Persei, Praesepe, Coma Berenices, IC 4665, and M 39. The observations were taken using the Wampler single-channel scanner (Wampler 1966) on the Crossley 0.9 m telescope at Lick Observatory from July 1973 through December 1974. 21 bandpasses spanning the spectral range 3500 Angstroms to 7780 Angstroms were observed for each star, with bandwidths ranging from 32 Angstroms to 64 Angstroms. Data are standardized to the Hayes-Latham (1975) system. Our measurements are compared to filter colors on the Johnson BV, Stromgren ubvy, and Geneva U V B_1 B_2 V_1 G systems, as well as to spectrophotometry of a few stars published by Gunn, Stryker & Tinsley and in the Spectrophotometric Standards Catalog (Adelman; as distributed by the NSSDC). Both internal and external comparisons to the filter systems indicate a formal statistical accuracy per bandpass of 0.01 to 0.02 mag, with apparently larger (~0.03 mag) differences in absolute calibration between this data set and existing spectrophotometry. These data will comprise part of the spectrophotometry that will be used to calibrate the Beijing-Arizona-Taipei-Connecticut Color Survey of the Sky (see separate paper by Burstein et al. at this meeting).

  18. Scientific Impacts of Wind Direction Errors

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert

    2004-01-01

    An assessment was made of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak wind (less than 7 m/s) conditions; such weak winds prevail in long-term averages over most of the tropical, sub-tropical, and coastal oceans. Introduction of these errors in the semi-daily winds causes, on average, 5% changes in the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms for redistributing heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's function in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to conditions during an El Niño episode. Similar wind directional errors cause significant changes in sea-surface temperature and sea-level patterns in a coastal ocean model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere; when directional information below 7 m/s was withheld, approximately 40% of the improvement was lost.
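    For reference, the Ekman and Sverdrup volume transports mentioned above are standard wind-driven relations (a textbook sketch, not necessarily the authors' exact formulation), computed directly from the wind stress:

```latex
% Ekman and Sverdrup volume transports (per unit width) driven directly by
% the wind stress \boldsymbol{\tau}; \rho is the water density, f the
% Coriolis parameter, and \beta = \partial f / \partial y.
\[
  \mathbf{V}_{\mathrm{Ek}} = \frac{\boldsymbol{\tau} \times \hat{\mathbf{z}}}{\rho f},
  \qquad
  V_{\mathrm{Sv}} = \frac{\hat{\mathbf{z}} \cdot (\nabla \times \boldsymbol{\tau})}{\rho \beta}.
\]
```

    Because both quantities depend on the stress direction (and its curl), directional errors in weak winds propagate directly into the computed transports.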

  19. SAR image registration in absolute coordinates using GPS carrier phase position and velocity information

    SciTech Connect

    Burgett, S.; Meindl, M.

    1994-09-01

    It is useful in a variety of military and commercial applications to accurately register the position of synthetic aperture radar (SAR) imagery in absolute coordinates. The two basic SAR measurements, range and doppler, can be used to solve for the position of the SAR image. Imprecise knowledge of the SAR collection platform's position and velocity vectors introduces errors in the range and doppler measurements and can cause the apparent location of the SAR image on the ground to be in error by tens of meters. Recent advances in carrier phase GPS techniques can provide an accurate description of the collection vehicle's trajectory during the image formation process. In this paper, highly accurate carrier phase GPS trajectory information is used in conjunction with SAR imagery to demonstrate a technique for accurate registration of SAR images in WGS-84 coordinates. Flight test data are presented that demonstrate SAR image registration errors of less than 4 meters.
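    Up to sign conventions, the range and Doppler measurements constrain the image point r_t given the GPS-derived platform position r_p and velocity v_p; together with an Earth-surface constraint (the target lies on the WGS-84 ellipsoid) these close the geolocation problem. A textbook sketch, not the authors' exact formulation:

```latex
% Range and Doppler observation equations for the target position
% \mathbf{r}_t, given platform position \mathbf{r}_p and velocity
% \mathbf{v}_p from carrier-phase GPS; \lambda is the radar wavelength.
\[
  R = \lVert \mathbf{r}_t - \mathbf{r}_p \rVert,
  \qquad
  f_D = \frac{2}{\lambda}\,
        \frac{\mathbf{v}_p \cdot (\mathbf{r}_t - \mathbf{r}_p)}
             {\lVert \mathbf{r}_t - \mathbf{r}_p \rVert}.
\]
```

    Errors in r_p and v_p shift both equations, which is why tens-of-meters registration errors arise from an imprecise trajectory.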

  20. Absolute paleointensity from Hawaiian lavas younger than 35 ka

    USGS Publications Warehouse

    Valet, J.-P.; Tric, E.; Herrero-Bervera, E.; Meynadier, L.; Lockwood, J.P.

    1998-01-01

    Paleointensity studies have been conducted in air and in an argon atmosphere on nine lava flows with radiocarbon ages distributed between 3.3 and 28.2 ka from the Mauna Loa volcano on the Big Island of Hawaii. Determinations of paleointensity obtained at eight sites depict the same overall pattern as previous results for the same period in Hawaii, although the overall average field intensity appears to be lower. Since the present results were determined at higher temperatures than in previous studies, this discrepancy raises questions regarding the selection of low- versus high-temperature segments that is usually made for absolute paleointensity. The virtual dipole moments are similar to those displayed by the worldwide data set obtained from dated lava flows. When averaged within finite time intervals, the worldwide values nicely match the variations of the Sint-200 synthetic record of relative paleointensity and confirm the overall decrease of the dipole field intensity during most of this period. The convergence between the existing records at Hawaii and the rest of the world does not favour the presence of persistent strong non-dipole components beneath Hawaii for this period.

  1. Absolute and relative bioavailability of oral acetaminophen preparations.

    PubMed

    Ameer, B; Divoll, M; Abernethy, D R; Greenblatt, D J; Shargel, L

    1983-08-01

    Eighteen healthy volunteers received single 650-mg doses of acetaminophen by 5-min intravenous infusion, in tablet form by mouth in the fasting state, and in elixir form orally in the fasting state in a three-way crossover study. An additional eight subjects received two 325-mg tablets from two commercial vendors in a randomized crossover fashion. Concentrations of acetaminophen in multiple plasma samples collected during the 12-hr period after each dose were determined by high-performance liquid chromatography. Following a lag time averaging 3-4 min, absorption of oral acetaminophen was first order, with apparent absorption half-life values averaging 8.4 (elixir) and 11.4 (tablet) min. The mean time-to-peak concentration was significantly longer after tablet (0.75 hr) than after elixir (0.48 hr) administration. Peak plasma concentrations and elimination half-lives were similar following both preparations. Absolute systemic availability of the elixir (87%) was significantly greater than for the tablets (79%). Two commercially available tablet formulations did not differ significantly in peak plasma concentrations, time-to-peak, or total area under the plasma concentration curve and therefore were judged to be bioequivalent. PMID:6688635

  2. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  3. Measurement error revisited

    NASA Astrophysics Data System (ADS)

    Henderson, Robert K.

    1999-12-01

    It is widely accepted in the electronics industry that measurement gauge error variation should be no larger than 10% of the related specification window. In a previous paper, 'What Amount of Measurement Error is Too Much?', the author used a framework from the process industries to evaluate the impact of measurement error variation in terms of both customer and supplier risk (i.e., Non-conformance and Yield Loss). Application of this framework in its simplest form suggested that in many circumstances the 10% criterion might be more stringent than is reasonably necessary. This paper reviews the framework and results of the earlier work, then examines some of the possible extensions to this framework suggested in that paper, including variance component models and sampling plans applicable in the photomask and semiconductor businesses. The potential impact of imperfect process control practices will be examined as well.

  4. A Conceptual Approach to Absolute Value Equations and Inequalities

    ERIC Educational Resources Information Center

    Ellis, Mark W.; Bryson, Janet L.

    2011-01-01

    The absolute value learning objective in high school mathematics requires students to solve far more complex absolute value equations and inequalities. When absolute value problems become more complex, students often do not have sufficient conceptual understanding to make any sense of what is happening mathematically. The authors suggest that the…

  5. Using, Seeing, Feeling, and Doing Absolute Value for Deeper Understanding

    ERIC Educational Resources Information Center

    Ponce, Gregorio A.

    2008-01-01

    Using sticky notes and number lines, a hands-on activity is shared that anchors initial student thinking about absolute value. The initial point of reference should help students successfully evaluate numeric problems involving absolute value. They should also be able to solve absolute value equations and inequalities that are typically found in…

  6. 20 CFR 404.1205 - Absolute coverage groups.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    20 Employees' Benefits, 2010-04-01. FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ), Coverage of Employees of State and Local Governments, What Groups of Employees May Be Covered. § 404.1205 Absolute coverage groups. (a) General. An absolute coverage group is a...

  7. Averaging Robertson-Walker cosmologies

    SciTech Connect

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane E-mail: G.Robbers@thphys.uni-heidelberg.de

    2009-04-15

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.

  8. Ensemble averaging of acoustic data

    NASA Technical Reports Server (NTRS)

    Stefanski, P. K.

    1982-01-01

    A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.

  9. On-Orbit Absolute Radiance Standard for Future IR Remote Sensing Instruments

    NASA Astrophysics Data System (ADS)

    Best, F. A.; Adler, D. P.; Pettersen, C.; Revercomb, H. E.; Gero, P. J.; Taylor, J. K.; Knuteson, R. O.; Perepezko, J. H.

    2010-12-01

    Future NASA infrared remote sensing missions, including the climate benchmark CLARREO mission will require better absolute measurement accuracy than now available, and will most certainly rely on the emerging capability to fly SI traceable standards that provide irrefutable absolute measurement accuracy. As an example, instrumentation designed to measure spectrally resolved infrared radiances with an absolute brightness temperature error of better than 0.1 K will require high-emissivity (>0.999) calibration blackbodies with emissivity uncertainty of better than 0.06%, and absolute temperature uncertainties of better than 0.045K (3 sigma). Key elements of an On-Orbit Absolute Radiance Standard (OARS) meeting these stringent requirements have been demonstrated in the laboratory at the University of Wisconsin and are undergoing Technology Readiness Level (TRL) advancement under the NASA Instrument Incubator Program (IIP). We present the new technologies that underlie the OARS and the results of laboratory testing that demonstrate the required accuracy is being met. The underlying technologies include on-orbit absolute temperature calibration using the transient melt signatures of small quantities (<1g) of reference materials (gallium, water, and mercury) imbedded in the blackbody cavity; and on-orbit cavity spectral emissivity measurement using a heated halo. For these emissivity measurements, a carefully baffled heated cylinder is placed in front of a blackbody in the infrared spectrometer system, and the combined radiance of the blackbody and Heated Halo reflection is observed. Knowledge of key temperatures and the viewing geometry allow the blackbody cavity spectral emissivity to be calculated. This work will culminate with an integrated subsystem that can provide on-orbit end-to-end radiometric accuracy validation for infrared remote sensing instruments.
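    A hedged sketch of the Heated Halo principle described above: with the halo radiance known, the extra reflected signal yields the cavity reflectivity and hence its spectral emissivity. Geometry and view-factor terms are folded into an effective halo radiance L_halo for this sketch:

```latex
% Observed radiance with the heated halo in place: cavity emission plus the
% effective halo radiance L_{halo} reflected once off the cavity. B is the
% Planck radiance at the blackbody temperature T_{bb}.
\[
  L_{\mathrm{obs}}(\nu)
  = \varepsilon(\nu)\, B\!\left(\nu, T_{\mathrm{bb}}\right)
    + \bigl[1-\varepsilon(\nu)\bigr] L_{\mathrm{halo}}(\nu)
  \;\Longrightarrow\;
  \varepsilon(\nu)
  = \frac{L_{\mathrm{obs}}(\nu) - L_{\mathrm{halo}}(\nu)}
         {B\!\left(\nu, T_{\mathrm{bb}}\right) - L_{\mathrm{halo}}(\nu)}.
\]
```

    In practice the halo is heated well above the cavity temperature so that the reflected term, and hence the inferred (1 - emissivity), is measurable against the cavity's own emission.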

  10. On-Orbit Absolute Radiance Standard for the Next Generation of IR Remote Sensing Instruments

    NASA Astrophysics Data System (ADS)

    Best, F. A.; Adler, D. P.; Pettersen, C.; Revercomb, H. E.; Gero, P.; Taylor, J. K.; Knuteson, R. O.; Perepezko, J. H.

    2011-12-01

    The next generation of infrared remote sensing satellite instrumentation, including climate benchmark missions will require better absolute measurement accuracy than now available, and will most certainly rely on the emerging capability to fly SI traceable standards that provide irrefutable absolute measurement accuracy. As an example, instrumentation designed to measure spectrally resolved infrared radiances with an absolute brightness temperature error of better than 0.1 K will require high-emissivity (>0.999) calibration blackbodies with emissivity uncertainty of better than 0.06%, and absolute temperature uncertainties of better than 0.045K (k=3). Key elements of an On-Orbit Absolute Radiance Standard (OARS) meeting these stringent requirements have been demonstrated in the laboratory at the University of Wisconsin and are undergoing further refinement under the NASA Instrument Incubator Program (IIP). This work will culminate with an integrated subsystem that can provide on-orbit end-to-end radiometric accuracy validation for infrared remote sensing instruments. We present the new technologies that underlie the OARS and updated results of laboratory testing that demonstrate the required accuracy. The underlying technologies include on-orbit absolute temperature calibration using the transient melt signatures of small quantities (<1g) of reference materials (gallium, water, and mercury) imbedded in the blackbody cavity; and on-orbit cavity spectral emissivity measurement using a heated halo. For these emissivity measurements, a carefully baffled heated cylinder is placed in front of a blackbody in the infrared spectrometer system, and the combined radiance of the blackbody and Heated Halo reflection is observed. Knowledge of key temperatures and the viewing geometry allow the blackbody cavity spectral emissivity to be calculated.

  11. On-Orbit Absolute Radiance Standard for the Next Generation of IR Remote Sensing Instruments

    NASA Astrophysics Data System (ADS)

    Best, F. A.; Adler, D. P.; Pettersen, C.; Revercomb, H. E.; Gero, P. J.; Taylor, J. K.; Knuteson, R. O.; Perepezko, J. H.

    2012-12-01

    The next generation of infrared remote sensing satellite instrumentation, including climate benchmark missions will require better absolute measurement accuracy than now available, and will most certainly rely on the emerging capability to fly SI traceable standards that provide irrefutable absolute measurement accuracy. As an example, instrumentation designed to measure spectrally resolved infrared radiances with an absolute brightness temperature error of better than 0.1 K will require high-emissivity (>0.999) calibration blackbodies with emissivity uncertainty of better than 0.06%, and absolute temperature uncertainties of better than 0.045K (k=3). Key elements of an On-Orbit Absolute Radiance Standard (OARS) meeting these stringent requirements have been demonstrated in the laboratory at the University of Wisconsin and are undergoing further refinement under the NASA Instrument Incubator Program (IIP). This work will culminate with an integrated subsystem that can provide on-orbit end-to-end radiometric accuracy validation for infrared remote sensing instruments. We present the new technologies that underlie the OARS and updated results of laboratory testing that demonstrate the required accuracy. The underlying technologies include on-orbit absolute temperature calibration using the transient melt signatures of small quantities (<1g) of reference materials (gallium, water, and mercury) imbedded in the blackbody cavity; and on-orbit cavity spectral emissivity measurement using a heated halo. For these emissivity measurements, a carefully baffled heated cylinder is placed in front of a blackbody in the infrared spectrometer system, and the combined radiance of the blackbody and Heated Halo reflection is observed. Knowledge of key temperatures and the viewing geometry allow the blackbody cavity spectral emissivity to be calculated.

  12. Error Representation in Time For Compressible Flow Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    Time plays an essential role in most real world fluid mechanics problems, e.g. turbulence, combustion, acoustic noise, moving geometries, blast waves, etc. Time dependent calculations now dominate the computational landscape at the various NASA Research Centers, but the accuracy of these computations is often not well understood. In this presentation, we investigate error representation (and error control) for time-periodic problems as a prelude to investigating the feasibility of error control for stationary statistics and space-time averages. These statistics and averages (e.g. time-averaged lift and drag forces) are often the output quantities sought by engineers. For systems such as the Navier-Stokes equations, pointwise error estimates deteriorate rapidly with increasing Reynolds number, while statistics and averages may remain well behaved.

  13. Surprise beyond prediction error

    PubMed Central

    Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst

    2014-01-01

    Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400

  14. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1999-10-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm--to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.

  15. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1998-12-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm - to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.

  16. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  17. Absolute absorption on the potassium D lines: theory and experiment

    NASA Astrophysics Data System (ADS)

    Hanley, Ryan K.; Gregory, Philip D.; Hughes, Ifan G.; Cornish, Simon L.

    2015-10-01

    We present a detailed study of the absolute Doppler-broadened absorption of a probe beam scanned across the potassium D lines in a thermal vapour. Spectra using a weak probe were measured on the 4S → 4P transition and compared to the theoretical model of the electric susceptibility detailed by Zentile et al (2015 Comput. Phys. Commun. 189 162-74) in the code named ElecSus. Comparisons were also made on the 4S → 5P transition with an adapted version of ElecSus. This is the first experimental test of ElecSus on an atom with a ground state hyperfine splitting smaller than the Doppler width. Excellent agreement was found between ElecSus and experimental measurements at a variety of temperatures, with rms errors of ∼10^-3. We have also demonstrated the use of ElecSus as an atomic vapour thermometry tool, and present a possible new measurement technique for transition decay rates which we predict to have a precision of ∼3 kHz.

  18. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
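    A minimal sketch of the combined least-squares idea described above, assuming each drop keeps its own x0, v0, g while a single vertical gradient is shared across all drops; the exponentially decaying system-response terms are omitted for brevity, and all data are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

def drop_model(t, x0, v0, g, gamma):
    # Free-fall trajectory to first order in the gradient: g(x) = g + gamma*x.
    return (x0 + v0 * t + 0.5 * g * t**2
            + gamma * (x0 * t**2 / 2 + v0 * t**3 / 6 + g * t**4 / 24))

def residuals(params, drops):
    gamma = params[-1]                      # gradient shared across drops
    res = []
    for i, (t, x) in enumerate(drops):
        x0, v0, g = params[3 * i: 3 * i + 3]
        res.append(x - drop_model(t, x0, v0, g, gamma))
    return np.concatenate(res)

# Synthetic data: three drops sampled for 0.25 s at 1 kHz.
rng = np.random.default_rng(1)
true_gamma = -3.086e-6                      # free-air gradient, s^-2
drops = []
for k in range(3):
    t = np.linspace(0.0, 0.25, 250)
    x = drop_model(t, 0.0, 0.05 + 0.01 * k, 9.80123, true_gamma)
    drops.append((t, x + rng.normal(0.0, 1e-9, t.size)))

# The gradient is weakly constrained (it rides on t^3/t^4 terms), echoing
# the convergence difficulties noted in the abstract.
p0 = np.array([0.0, 0.0, 9.8] * len(drops) + [0.0])
fit = least_squares(residuals, p0, args=(drops,))
print("estimated gradient:", fit.x[-1], "(true:", true_gamma, ")")
```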

  19. Absolute Quantification of Individual Biomass Concentrations in a Methanogenic Coculture

    PubMed Central

    2014-01-01

    Identification of individual biomass concentrations is a crucial step towards an improved understanding of anaerobic digestion processes and mixed microbial conversions in general. The knowledge of individual biomass concentrations allows for the calculation of biomass specific conversion rates which form the basis of anaerobic digestion models. Only few attempts addressed the absolute quantification of individual biomass concentrations in methanogenic microbial ecosystems which has so far impaired the calculation of biomass specific conversion rates and thus model validation. This study proposes a quantitative PCR (qPCR) approach for the direct determination of individual biomass concentrations in methanogenic microbial associations by correlating the native qPCR signal (cycle threshold, Ct) to individual biomass concentrations (mg dry matter/L). Unlike existing methods, the proposed approach circumvents error-prone conversion factors that are typically used to convert gene copy numbers or cell concentrations into actual biomass concentrations. The newly developed method was assessed and deemed suitable for the determination of individual biomass concentrations in a defined coculture of Desulfovibrio sp. G11 and Methanospirillum hungatei JF1. The obtained calibration curves showed high accuracy, indicating that the new approach is well suited for any engineering applications where the knowledge of individual biomass concentrations is required. PMID:24949269
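    Correlating the native Ct signal with biomass concentration amounts to fitting a calibration curve on standards and inverting it for unknowns. A minimal sketch under the usual assumption that Ct is linear in log10 concentration; the standard values below are hypothetical:

```python
import numpy as np

# Hypothetical calibration standards: biomass (mg dry matter / L) vs. Ct.
standards_mg_per_l = np.array([0.1, 1.0, 10.0, 100.0])
standards_ct = np.array([30.1, 26.8, 23.4, 20.0])

# Ct is (approximately) linear in log10(concentration): Ct = a*log10(C) + b.
a, b = np.polyfit(np.log10(standards_mg_per_l), standards_ct, 1)

def biomass_from_ct(ct):
    """Invert the calibration curve for an unknown sample."""
    return 10.0 ** ((ct - b) / a)

print(f"slope {a:.2f} (ideally about -3.32 for 100% PCR efficiency)")
print(f"sample with Ct=25.0 -> {biomass_from_ct(25.0):.2f} mg/L")
```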

  20. Weighted Wilcoxon-type Smoothly Clipped Absolute Deviation Method

    PubMed Central

    Wang, Lan; Li, Runze

    2009-01-01

    Summary: Shrinkage-type variable selection procedures have recently seen increasing applications in biomedical research. However, their performance can be adversely influenced by outliers in either the response or the covariate space. This paper proposes a weighted Wilcoxon-type smoothly clipped absolute deviation (WW-SCAD) method, which deals with robust variable selection and robust estimation simultaneously. The new procedure can be conveniently implemented with the statistical software R. We establish that the WW-SCAD correctly identifies the set of zero coefficients with probability approaching one and estimates the nonzero coefficients at the rate n^{-1/2}. Moreover, with appropriately chosen weights the WW-SCAD is robust with respect to outliers in both the x and y directions. The important special case with constant weights yields an oracle-type estimator with high efficiency in the presence of heavier-tailed random errors. The robustness of the WW-SCAD is partly justified by its asymptotic performance under local shrinking contamination. We propose a BIC-type tuning parameter selector for the WW-SCAD. The performance of the WW-SCAD is demonstrated via simulations and by an application to a study that investigates the effects of personal characteristics and dietary factors on plasma beta-carotene level. PMID:18647294

  1. Absolute position total internal reflection microscopy with an optical tweezer

    PubMed Central

    Liu, Lulu; Woolf, Alexander; Rodriguez, Alejandro W.; Capasso, Federico

    2014-01-01

    A noninvasive, in situ calibration method for total internal reflection microscopy (TIRM) based on optical tweezing is presented, which greatly expands the capabilities of this technique. We show that by making only simple modifications to the basic TIRM sensing setup and procedure, a probe particle’s absolute position relative to a dielectric interface may be known with better than 10 nm precision out to a distance greater than 1 μm from the surface. This represents an approximate 10× improvement in error and 3× improvement in measurement range over conventional TIRM methods. The technique’s advantage is in the direct measurement of the probe particle’s scattering intensity vs. height profile in situ, rather than relying on assumptions, inexact system analogs, or detailed knowledge of system parameters for calibration. To demonstrate the improved versatility of the TIRM method in terms of tunability, precision, and range, we show our results for the hindered near-wall diffusion coefficient for a spherical dielectric particle. PMID:25512542

  2. Comparison of Using Relative and Absolute PCV Corrections in Short Baseline GNSS Observation Processing

    NASA Astrophysics Data System (ADS)

    Dawidowicz, Karol

    2011-01-01

    GNSS antenna phase center variations (PCV) are defined as shifts in position depending on the elevation angle and azimuth of the observed satellite. When identical antennas are used in relative measurements, the phase center variations cancel out, particularly over short baselines. When different antennas are used, even on short baselines, ignoring these phase center variations can lead to serious (up to 10 cm) vertical errors. The only way to avoid these errors when mixing different antenna types is to apply antenna phase center variation models in processing. Until 6 November 2006, the International GNSS Service used relative phase center models for GNSS receiver antennas. Then absolute calibration models, developed by the company "Geo++", came into use. These models showed significant differences on the scale of GNSS networks compared to VLBI and SLR measurements, due to the lack of calibration models for the GNSS satellite antennas. When this problem was sufficiently resolved, the IGS decided to switch from relative to absolute models for both satellites and receivers. This decision caused significant variations in the results of GNSS network solutions. The aim of this paper is to study the height differences in short-baseline GNSS observation processing when different calibration models are used. The analysis was done using GNSS data collected on short baselines measured with different receiver antennas. The results show that switching from relative to absolute receiver antenna PCV models has a significant effect on GNSS network solutions, particularly in high accuracy applications.

  3. VizieR Online Data Catalog: Absolute Proper motions Outside the Plane (APOP) (Qi+, 2015)

    NASA Astrophysics Data System (ADS)

    Qi, Z. X.; Yu, Y.; Bucciasrelli, B.; Lattanzi, M. G.; Smart, R. L.; Spagna, A.; McLean, B. J.; Tang, Z. H.; Jones, H. R. A.; Morbidelli, R.; Nicastro, L.; Vacchiato, A.

    2015-09-01

    APOP is an absolute proper motion catalog derived from the Digitized Sky Survey Schmidt plate data compiled by the GSC-II project, covering the sky outside the galactic plane (|b|>27°). The catalog covers 22,525 square degrees, with a mean density of 4473 objects/sq. deg. and a magnitude limit of around R=20.8 mag. The systematic errors of the absolute proper motions related to position, magnitude and color are practically all removed by using extragalactic objects. The zero-point error of the absolute proper motions is less than 0.6 mas/yr, and the accuracy is better than 4.0 mas/yr for objects brighter than R=18.5, rising to 9.0 mas/yr for fainter objects down to the magnitude limit. Accuracy is lower south of Declination -30°, where the epoch difference is only around 12 years, compared with about 45 years north of Declination -30°. The catalog remains suitable for statistical studies of objects with Declination < -30°, provided obviously incorrect entries are identified and removed. (1 data file).

  4. Use of Absolute and Comparative Performance Feedback in Absolute and Comparative Judgments and Decisions

    ERIC Educational Resources Information Center

    Moore, Don A.; Klein, William M. P.

    2008-01-01

    Which matters more--beliefs about absolute ability or ability relative to others? This study set out to compare the effects of such beliefs on satisfaction with performance, self-evaluations, and bets on future performance. In Experiment 1, undergraduate participants were told they had answered 20% correct, 80% correct, or were not given their…

  5. We need to talk about error: causes and types of error in veterinary practice.

    PubMed

    Oxtoby, C; Ferguson, E; White, K; Mossop, L

    2015-10-31

    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions which mitigate harm, such as the WHO's safe surgery checklist. No such evidence is available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence-based system for their classification. Causes of error were identified from a retrospective review of 678 claims records held by the profession's leading indemnity insurer and from nine focus groups (average N per group = 8) with vets, nurses and support staff, conducted using the critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between 2009 and 2013. The major classes of error causation were identified, with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills, and a systems approach to veterinary error. PMID:26489997

  6. Help prevent hospital errors

    MedlinePlus


  7. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  8. Novel method for computing reference wave error in optical surface metrology

    NASA Astrophysics Data System (ADS)

    Murphy, Paul E.; Fleig, Jon; Forbes, Greg; Dumas, Paul

    2003-05-01

    Despite advances in various metrology tools, interferometry remains the method of choice for measurements of optical surfaces. Fizeau interferometers can achieve precisions of λ/100 PV (and better) with proper environmental control. The quality of the reference surface, however, usually limits the uncalibrated accuracy to merely λ/10 PV or so. Various methods have been developed for "absolute" (unbiased) surface testing, including the N-position, 3-flat, 2-sphere, and random average tests. The basic principle of these tests is that the reference wave error remains invariant when the part is moved. These tests as a rule require multiple parts and/or measurements at different positions. Sub-aperture stitching requires measurements at multiple positions, and thus in principle can measure reference wave error. QED's stitching algorithm exploits this possibility to produce a measurement of the reference surface along with the stitched full-aperture phase. The precision mechanics of QED's stitching workstation make it an excellent platform for performing conventional reference wave calibrations as well. Results obtained from the QED stitching algorithm are compared with other calibration methods performed on the same workstation. The mean results and uncertainties of the various methods are evaluated, and limitations discussed.

  9. Effects of confining pressure, pore pressure and temperature on absolute permeability. SUPRI TR-27

    SciTech Connect

    Gobran, B.D.; Ramey, H.J. Jr.; Brigham, W.E.

    1981-10-01

    This study investigates absolute permeability of consolidated sandstone and unconsolidated sand cores to distilled water as a function of the confining pressure on the core, the pore pressure of the flowing fluid and the temperature of the system. Since permeability measurements are usually made in the laboratory under conditions very different from those in the reservoir, it is important to know the effect of various parameters on the measured value of permeability. All studies on the effect of confining pressure on absolute permeability have found that when the confining pressure is increased, the permeability is reduced. The studies on the effect of temperature have shown much less consistency. This work contradicts the past Stanford studies by finding no effect of temperature on the absolute permeability of unconsolidated sand or sandstones to distilled water. The probable causes of the past errors are discussed. It has been found that inaccurate measurement of temperature at ambient conditions and non-equilibrium of temperature in the core can lead to a fictitious permeability reduction with temperature increase. The results of this study on the effect of confining pressure and pore pressure support the theory that as confining pressure is increased or pore pressure decreased, the permeability is reduced. The effects of confining pressure and pore pressure changes on absolute permeability are given explicitly so that measurements made under one set of confining pressure/pore pressure conditions in the laboratory can be extrapolated to conditions more representative of the reservoir.

  10. Absolute surface metrology by differencing spatially shifted maps from a phase-shifting interferometer.

    PubMed

    Bloemhof, E E

    2010-07-15

    Surface measurements of precision optics are commonly made with commercially available phase-shifting Fizeau interferometers that provide data relative to flat or spherical reference surfaces whose unknown errors are comparable to those of the surface being tested. A number of ingenious techniques provide surface measurements that are "absolute," rather than relative to any reference surface. Generally, these techniques require numerous measurements and the introduction of additional surfaces, but still yield absolute information only along certain lines over the surface of interest. A very simple alternative is presented here, in which no additional optics are required beyond the surface under test and the transmission flat (or sphere) defining the interferometric reference surface. The optic under test is measured in three positions, two of which have small lateral shifts along orthogonal directions, nominally comparable to the transverse spatial resolution of the interferometer. The phase structure in the reference surface then cancels out when these measurements are subtracted in pairs, providing a grid of absolute surface height differences between neighboring resolution elements of the surface under test. The full absolute surface, apart from overall phase and tip/tilt, is then recovered by standard wavefront reconstruction techniques. PMID:20634825
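    The final reconstruction step can be posed as a least-squares integration of the two difference maps. A minimal sketch (piston fixed arbitrarily; the unknown overall tip/tilt noted in the abstract is ignored here), with hypothetical array shapes and a synthetic round-trip check:

```python
import numpy as np

# Recover a surface from maps of absolute height differences between
# neighboring resolution elements (reference errors cancelled by the
# lateral-shift subtraction) via least-squares integration.
def reconstruct(dx, dy):
    n_rows, n_cols = dy.shape[0] + 1, dx.shape[1] + 1
    n_pix = n_rows * n_cols
    idx = lambda r, c: r * n_cols + c
    rows, rhs = [], []
    # One equation per neighboring pair: s[plus] - s[minus] = measured diff.
    for r in range(n_rows):                     # horizontal differences
        for c in range(n_cols - 1):
            row = np.zeros(n_pix)
            row[idx(r, c + 1)], row[idx(r, c)] = 1.0, -1.0
            rows.append(row); rhs.append(dx[r, c])
    for r in range(n_rows - 1):                 # vertical differences
        for c in range(n_cols):
            row = np.zeros(n_pix)
            row[idx(r + 1, c)], row[idx(r, c)] = 1.0, -1.0
            rows.append(row); rhs.append(dy[r, c])
    s, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    s = s.reshape(n_rows, n_cols)
    return s - s.mean()     # differences leave overall piston undetermined

# Round-trip check on a synthetic surface.
true = np.add.outer(np.linspace(0, 1, 8) ** 2, np.linspace(0, 1, 10) ** 3)
dx = np.diff(true, axis=1)   # shape (8, 9): S[r, c+1] - S[r, c]
dy = np.diff(true, axis=0)   # shape (7, 10)
print(np.allclose(reconstruct(dx, dy), true - true.mean()))
```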

  11. In situ measurement of leaf chlorophyll concentration: analysis of the optical/absolute relationship.

    PubMed

    Parry, Christopher; Blonquist, J Mark; Bugbee, Bruce

    2014-11-01

    In situ optical meters are widely used to estimate leaf chlorophyll concentration, but non-uniform chlorophyll distribution causes optical measurements to vary widely among species for the same chlorophyll concentration. Over 30 studies have sought to quantify the in situ/in vitro (optical/absolute) relationship, but neither chlorophyll extraction nor measurement techniques for in vitro analysis have been consistent among studies. Here we: (1) review standard procedures for measurement of chlorophyll; (2) estimate the error associated with non-standard procedures; and (3) implement the most accurate methods to provide equations for conversion of optical to absolute chlorophyll for 22 species grown in multiple environments. Tests of five Minolta (model SPAD-502) and 25 Opti-Sciences (model CCM-200) meters, manufactured from 1992 to 2013, indicate that differences among replicate models are less than 5%. We thus developed equations for converting between units from these meter types. There was no significant effect of environment on the optical/absolute chlorophyll relationship. We derive the theoretical relationship between optical transmission ratios and absolute chlorophyll concentration and show how non-uniform distribution among species causes a variable, non-linear response. These results link in situ optical measurements with in vitro chlorophyll concentration and provide insight to strategies for radiation capture among diverse species. PMID:24635697
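
    The theoretical relationship referred to above follows from Beer-Lambert reasoning; a hedged sketch, assuming uniform pigment distribution and a near-infrared band unaffected by chlorophyll (exactly the assumptions that fail for real leaves):

        R = \log_{10}\frac{T_{\mathrm{NIR}}}{T_{\mathrm{red}}}, \qquad
        T_{\mathrm{red}} = T_{\mathrm{NIR}}\,10^{-\varepsilon\,[\mathrm{Chl}]}
        \;\Rightarrow\; R = \varepsilon\,[\mathrm{Chl}],

    where R is the optical meter reading, T_red and T_NIR are leaf transmittances at the chlorophyll-absorbing (red) and reference (near-infrared) wavelengths, and ε is an effective absorptivity. Non-uniform chlorophyll distribution invalidates the single-path Beer-Lambert step, which is why the observed optical/absolute response is non-linear and species-dependent.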

  12. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York University Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
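
    Schematically, the method amounts to the following (a hedged paraphrase of the general form, not the paper's exact parameterization; the coefficients a_i would be tabulated per filter and redshift bin):

        M = m - \mathrm{DM}(z) - K(z, C), \qquad
        K(z, C) \approx a_0(z) + a_1(z)\,C + a_2(z)\,C^{2},

    where m is the observed apparent magnitude, DM(z) the distance modulus, and C the single suitably chosen observed color.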

  13. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  14. Absolute calibration of ultraviolet filter photometry

    NASA Technical Reports Server (NTRS)

    Bless, R. C.; Fairchild, T.; Code, A. D.

    1972-01-01

    The essential features of the calibration procedure can be divided into three parts. First, the shape of the bandpass of each photometer was determined by measuring the transmissions of the individual optical components and also by measuring the response of the photometer as a whole. Secondly, each photometer was placed in the essentially-collimated synchrotron radiation bundle maintained at a constant intensity level, and the output signal was determined from about 100 points on the objective. Finally, two or three points on the objective were illuminated by synchrotron radiation at several different intensity levels covering the dynamic range of the photometers. The output signals were placed on an absolute basis by the electron counting technique described earlier.

  15. MAGSAT: Vector magnetometer absolute sensor alignment determination

    NASA Technical Reports Server (NTRS)

    Acuna, M. H.

    1981-01-01

    A procedure is described for accurately determining the absolute alignment of the magnetic axes of a triaxial magnetometer sensor with respect to an external, fixed, reference coordinate system. The method does not require that the magnetic field vector orientation, as generated by a triaxial calibration coil system, be known to better than a few degrees from its true position, and minimizes the number of positions through which a sensor assembly must be rotated to obtain a solution. Computer simulations show that accuracies of better than 0.4 seconds of arc can be achieved under typical test conditions associated with existing magnetic test facilities. The basic approach is similar in nature to that presented by McPherron and Snare (1978) except that only three sensor positions are required and the system of equations to be solved is considerably simplified. Applications of the method to the case of the MAGSAT Vector Magnetometer are presented, and the problems encountered are discussed.

  16. Absolute Measurement of Electron Cloud Density

    SciTech Connect

    Covo, M K; Molvik, A W; Cohen, R H; Friedman, A; Seidl, P A; Logan, G; Bieniosek, F; Baca, D; Vay, J; Orlando, E; Vujic, J L

    2007-06-21

    Beam interaction with background gas and walls produces ubiquitous clouds of stray electrons that frequently limit the performance of particle accelerators and storage rings. Counterintuitively, we obtained the electron cloud accumulation by measuring the expelled ions that originate from the beam-background gas interaction, rather than by measuring electrons that reach the walls. The kinetic ion energy measured with a retarding field analyzer (RFA) maps the depressed beam space-charge potential and provides the dynamic electron cloud density. Clearing electrode current measurements give the static electron cloud background that complements and corroborates the RFA measurements, providing an absolute measurement of electron cloud density during a 5 µs duration beam pulse in a drift region of the magnetic transport section of the High-Current Experiment (HCX) at LBNL.

  17. Absolute instability of a viscous hollow jet

    NASA Astrophysics Data System (ADS)

    Gañán-Calvo, Alfonso M.

    2007-02-01

    An investigation of the spatiotemporal stability of hollow jets in unbounded coflowing liquids, using a general dispersion relation previously derived, shows them to be absolutely unstable for all physical values of the Reynolds and Weber numbers. The roots of the symmetry breakdown with respect to the liquid jet case, and the validity of asymptotic models are here studied in detail. Asymptotic analyses for low and high Reynolds numbers are provided, showing that old and well-established limiting dispersion relations [J. W. S. Rayleigh, The Theory of Sound (Dover, New York, 1945); S. Chandrasekhar, Hydrodynamic and Hydromagnetic Stability (Dover, New York, 1961)] should be used with caution. In the creeping flow limit, the analysis shows that, if the hollow jet is filled with any finite density and viscosity fluid, a steady jet could be made arbitrarily small (compatible with the continuum hypothesis) if the coflowing liquid moves faster than a critical velocity.

  18. Swarm's Absolute Scalar Magnetometer metrological performances

    NASA Astrophysics Data System (ADS)

    Leger, J.; Fratter, I.; Bertrand, F.; Jager, T.; Morales, S.

    2012-12-01

    The Absolute Scalar Magnetometer (ASM) has been developed for the ESA Earth Observation Swarm mission, planned for launch in November 2012. Like its Overhauser magnetometer forerunners flown on the Oersted and Champ satellites, it will deliver high resolution scalar measurements for the in-flight calibration of the Vector Field Magnetometer manufactured by the Danish Technical University. The latest results of the ground tests carried out to fully characterize all parameters that may affect its accuracy, both at instrument and satellite level, will be presented. In addition to its baseline function, the ASM can be operated either at a much higher sampling rate (burst mode at 250 Hz) or in a dual mode where it also delivers vector field measurements as a by-product. The calibration procedure and the relevant vector performances will be discussed.

  19. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we obtain that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover, we show that, for a majority of functions, access to general nonsignaling resources boosts the success probability by a factor of two over classical ones for sufficiently large numbers of outputs. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  20. Absolute and Trend Accuracy of a New Regional Oximeter in Healthy Volunteers During Controlled Hypoxia

    PubMed Central

    Paidy, Samata; Kashif, Faisal

    2014-01-01

    BACKGROUND: Traditional patient monitoring may not detect cerebral tissue hypoxia, and typical interventions may not improve tissue oxygenation. Therefore, monitoring cerebral tissue oxygen status with regional oximetry is being increasingly used by anesthesiologists and perfusionists during surgery. In this study, we evaluated absolute and trend accuracy of a new regional oximetry technology in healthy volunteers. METHODS: A near-infrared spectroscopy sensor connected to a regional oximetry system (O3TM, Masimo, Irvine, CA) was placed on the subject’s forehead, to provide continuous measurement of regional oxygen saturation (rSo2). Reference blood samples were taken from the radial artery and internal jugular bulb vein, at baseline and after a series of increasingly hypoxic states induced by altering the inspired oxygen concentration while maintaining normocapnic arterial carbon dioxide pressure (Paco2). Absolute and trend accuracy of the regional oximetry system was determined by comparing rSo2 against reference cerebral oxygen saturation (Savo2), which is calculated by combining arterial and venous saturations of oxygen in the blood samples. RESULTS: Twenty-seven subjects were enrolled. Bias (test method mean error), standard deviation of error, standard error of the mean, and root mean square accuracy (ARMS) of rSo2 compared to Savo2 were 0.4%, 4.0%, 0.3%, and 4.0%, respectively. The limits of agreement were 8.4% (95% confidence interval, 7.6%–9.3%) to −7.6% (95% confidence interval, −8.4% to −6.7%). Trend accuracy analysis yielded a relative mean error of 0%, with a standard deviation of 2.1%, a standard error of 0.1%, and an ARMS of 2.1%. Multiple regression analysis showed that age and skin color did not affect the bias (all P > 0.1). CONCLUSIONS: Masimo O3 regional oximetry provided absolute root-mean-squared error of 4% and relative root-mean-squared error of 2.1% in healthy volunteers undergoing controlled hypoxia. PMID:25405692
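
    The error statistics quoted above follow from standard definitions; a minimal sketch with hypothetical readings (not the study's data):

        import numpy as np

        rso2 = np.array([68.0, 71.5, 63.2, 55.9])    # oximeter readings, %
        savo2 = np.array([67.1, 72.3, 62.0, 56.4])   # reference Savo2, %

        err = rso2 - savo2
        bias = err.mean()                        # test-method mean error
        sd = err.std(ddof=1)                     # standard deviation of error
        sem = sd / np.sqrt(err.size)             # standard error of the mean
        arms = np.sqrt(np.mean(err ** 2))        # root-mean-square accuracy (ARMS)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # limits of agreement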

  1. The Implications for Higher-Accuracy Absolute Measurements for NGS and its GRAV-D Project

    NASA Astrophysics Data System (ADS)

    Childers, V. A.; Winester, D.; Roman, D. R.; Eckl, M. C.; Smith, D. A.

    2013-12-01

    Absolute and relative gravity measurements play an important role in the work of NOAA's National Geodetic Survey (NGS). When NGS decided to replace the US national vertical datum, the Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project added a new dimension to the NGS gravity program. Airborne gravity collection would complement existing satellite and surface gravity data to allow the creation of a gravimetric geoid sufficiently accurate to form the basis of the new reference surface. To provide absolute gravity ties for the airborne surveys, initially new FG5 absolute measurements were made at existing absolute stations and relative measurements were used to transfer those measurements to excenters near the absolute mark and to the aircraft sensor height at the parking space. In 2011, NGS obtained a field-capable A10 absolute gravimeter from Micro-g LaCoste which became the basis of the support of the airborne surveys. Now A10 measurements are made at the aircraft location and transferred to sensor height. Absolute and relative gravity play other roles in GRAV-D. Comparison of surface data with new airborne collection will highlight surface surveys with bias or tilt errors and can provide enough information to repair or discard the data. We expect that areas of problem surface data may be re-measured. The GRAV-D project also plans to monitor the geoid in regions of rapid change and update the vertical datum when appropriate. Geoid change can result from glacial isostatic adjustment (GIA), tectonic change, and the massive drawdown of large scale aquifers. The NGS plan for monitoring these changes over time is still in its preliminary stages and is expected to rely primarily on the GRACE and GRACE Follow On satellite data in conjunction with models of GIA and tectonic change. We expect to make absolute measurements in areas of rapid change in order to verify model predictions. With the opportunities presented by rapid, highly accurate

  2. Total-pressure-tube averaging in pulsating flows.

    NASA Technical Reports Server (NTRS)

    Krause, L. N.

    1973-01-01

    A number of total-pressure tubes were tested in a nonsteady flow generator in which the fraction of the period during which pressure is at a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. The tests were performed at a pressure level of 1 bar, for Mach numbers up to near 1, and frequencies up to 3 kHz. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonances which further increased the indicated pressure were encountered within the tubes at discrete frequencies. No combination of tube diameter, length, and/or geometry variation used in the tests resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles.

  3. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. These measurements of the thermodynamic state are intended to initialize weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and a demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  4. Photocephalometry: errors of projection and landmark location.

    PubMed

    Phillips, C; Greer, J; Vig, P; Matteson, S

    1984-09-01

    A method called photocephalometry was recently described for the possible soft-tissue evaluation of orthognathic surgery patients by the superimposition of coordinated cephalographs and photographs. A grid analysis was performed to determine the accuracy of the superimposition method. In addition, the reliability of landmark identification was analyzed by the method error of Baumrind and Frantz, using three replicates of twelve patients' photographs. Comparison of twenty-one grid intervals showed that the magnification of the photographic image for any given grid plane is not correlated to that of the radiographic image. Accurate comparisons between soft- and hard-tissue anatomy by simply superimposing the images are not feasible because of the difference in the enlargement factors between the photographs and x-ray films. As was noted by Baumrind and Frantz, a wide range exists in the variability of estimating the location of landmarks. Sixty-six percent of the lateral photographic landmarks and 57% of the frontal landmarks had absolute mean errors for all twelve patients that were less than or equal to 2.0 mm. In general, the envelope of error for most landmarks was not circular. Although the photocephalometric apparatus as described by Hohl and colleagues does not yield the desired quantitative correlation between hard and soft tissues, valuable quantitative information on soft tissue can be easily obtained with the standardization and replication possible with the camera setup and enlarged photographs. PMID:6591803

  5. Correct averaging in transmission radiography: Analysis of the inverse problem

    NASA Astrophysics Data System (ADS)

    Wagner, Michael; Hampel, Uwe; Bieberle, Martina

    2016-05-01

    Transmission radiometry is frequently used in industrial measurement processes as a means to assess the thickness or composition of a material. A common problem encountered in such applications is the so-called dynamic bias error, which results from averaging beam intensities over time while the material distribution changes. We recently reported on a method to overcome the associated measurement error by solving an inverse problem, which in principle restores the exact average attenuation by considering the Poisson statistics of the underlying particle or photon emission process. In this paper we present a detailed analysis of the inverse problem and its optimal regularized numerical solution. As a result we derive an optimal parameter configuration for the inverse problem.
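
    The dynamic bias error is easy to reproduce numerically. In the sketch below (hypothetical attenuation values; the paper's Poisson-statistics inversion is not reproduced here), averaging intensities before taking the logarithm systematically underestimates the true average attenuation, by Jensen's inequality:

        import numpy as np

        rng = np.random.default_rng(2)
        i0 = 1.0e5                               # incident beam intensity
        mu = rng.uniform(0.5, 2.5, size=1000)    # attenuation values while the
                                                 # material distribution changes

        intensities = i0 * np.exp(-mu)           # what the detector records

        # Naive processing: average the intensities, then take the log.
        biased = -np.log(intensities.mean() / i0)

        # Correct target: the average of the attenuation values themselves.
        true_avg = mu.mean()

        print(biased, true_avg)   # biased < true_avg: the dynamic bias error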

  6. Absolute value optimization to estimate phase properties of stochastic time series

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1977-01-01

    Most existing deconvolution techniques are incapable of determining phase properties of wavelets from time series data; to assure a unique solution, minimum phase is usually assumed. It is demonstrated, for moving average processes of order one, that deconvolution filtering using the absolute value norm provides an estimate of the wavelet shape that has the correct phase character when the random driving process is nonnormal. Numerical tests show that this result probably applies to more general processes.

  7. Absolute intensity calibration of the 32-channel heterodyne radiometer on experimental advanced superconducting tokamak

    SciTech Connect

    Liu, X.; Zhao, H. L.; Liu, Y. Li, E. Z.; Han, X.; Ti, A.; Hu, L. Q.; Zhang, X. D.; Domier, C. W.; Luhmann, N. C.

    2014-09-15

    This paper presents the results of the in situ absolute intensity calibration for the 32-channel heterodyne radiometer on the experimental advanced superconducting tokamak. The hot/cold load method is adopted, and the coherent averaging technique is employed to improve the signal to noise ratio. Measured spectra and electron temperature profiles are compared with those from an independent calibrated Michelson interferometer, and there is a relatively good agreement between the results from the two different systems.
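
    The hot/cold load method reduces to a two-point linear calibration; a minimal sketch with illustrative (hypothetical) numbers:

        # Solve for gain and offset from two blackbody loads of known
        # temperature, then convert any radiometer output to temperature.
        t_hot, v_hot = 773.0, 2.41     # hot load: temperature (K), output (V)
        t_cold, v_cold = 77.0, 0.35    # cold load, e.g. liquid nitrogen

        gain = (t_hot - t_cold) / (v_hot - v_cold)
        offset = t_hot - gain * v_hot

        def to_temperature(v):
            """Map a radiometer output voltage to radiation temperature (K)."""
            return gain * v + offset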

  8. Absolute intensity calibration of the 32-channel heterodyne radiometer on experimental advanced superconducting tokamak.

    PubMed

    Liu, X; Zhao, H L; Liu, Y; Li, E Z; Han, X; Domier, C W; Luhmann, N C; Ti, A; Hu, L Q; Zhang, X D

    2014-09-01

    This paper presents the results of the in situ absolute intensity calibration for the 32-channel heterodyne radiometer on the experimental advanced superconducting tokamak. The hot/cold load method is adopted, and the coherent averaging technique is employed to improve the signal to noise ratio. Measured spectra and electron temperature profiles are compared with those from an independent calibrated Michelson interferometer, and there is a relatively good agreement between the results from the two different systems. PMID:25273727

  9. Absolute phase-assisted three-dimensional data registration for a dual-camera structured light system

    SciTech Connect

    Zhang Song; Yau Shingtung

    2008-06-10

    For a three-dimensional shape measurement system with a single projector and multiple cameras, registering patches from different cameras is crucial. Registration usually involves a complicated and time-consuming procedure. We propose a new method that can robustly match different patches via absolute phase without significantly increasing its cost. For y and z coordinates, the transformations from one camera to the other are approximated as third-order polynomial functions of the absolute phase. The x coordinates involve only translations and scalings. These functions are calibrated and only need to be determined once. Experiments demonstrated that the alignment error is within RMS 0.7 mm.
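
    A hedged sketch of the phase-indexed mapping (hypothetical calibration pairs; the paper fits such third-order polynomials once during calibration):

        import numpy as np

        # Matched points: absolute phase and the camera-B-to-camera-A
        # y-coordinate offset observed at each point (hypothetical values).
        phi = np.array([0.5, 1.1, 2.0, 3.2, 4.4, 5.0])
        dy = np.array([0.31, 0.28, 0.22, 0.15, 0.09, 0.07])

        coeffs = np.polyfit(phi, dy, deg=3)   # third-order polynomial of phase

        def y_b_to_a(y_b, phi_point):
            """Map a camera-B y coordinate into camera A's frame via phase."""
            return y_b + np.polyval(coeffs, phi_point)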

  10. Estimation of the absolute position of mobile systems by an optoelectronic processor

    NASA Technical Reports Server (NTRS)

    Feng, Liqiang; Fainman, Yeshaiahu; Koren, Yoram

    1992-01-01

    A method that determines the absolute position of a mobile system with a hybrid optoelectronic processor has been developed. Position estimates are based on an analysis of circular landmarks that are detected by a TV camera attached to the mobile system. The difference between the known shape of the landmark and its image provides the information needed to determine the absolute position of the mobile system. For robust operation, the parameters of the landmark image are extracted at high speeds using an optical processor that performs an optical Hough transform. The coordinates of the mobile system are computed from these parameters in a digital co-processor using fast algorithms. Different sources of position estimation errors have also been analyzed, and consequent algorithms to improve the navigation performance of the mobile system have been developed and evaluated by both computer simulation and experiments.

  11. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window.

    PubMed

    Bai, Yang; Lu, Yunfeng; Hu, Pengcheng; Wang, Gang; Xu, Jinxin; Zeng, Tao; Li, Zhengkun; Zhang, Zhonghua; Tan, Jiubin

    2016-01-01

    A simple differential capacitive sensor is provided in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to one translational degree of freedom (DOF) movement, and immune to the vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10(-4) pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range. PMID:27187393

  12. The solar absolute spectral irradiance 1150-3173 A - May 17, 1982

    NASA Technical Reports Server (NTRS)

    Mount, G. H.; Rottman, G. J.

    1983-01-01

    The full-disk solar spectral irradiance in the spectral range 1150-3173 A was obtained from a rocket observation above White Sands Missile Range, NM, on May 17, 1982, halfway in time between solar maximum and solar minimum. Comparison with measurements made during solar maximum in 1980 indicates a large decrease in the absolute solar irradiance at wavelengths below 1900 A to approximately solar minimum values. No change above 1900 A from solar maximum to this flight was observed to within the errors of the measurements. Irradiance values lower than the Broadfoot results are found in the 2100-2500 A spectral range, but there is excellent agreement with Broadfoot between 2500 and 3173 A. The absolute calibration of the instruments for this flight was accomplished at the National Bureau of Standards Synchrotron Radiation Facility, which significantly improves the calibration of solar measurements made in this spectral region.

  13. Absolute brightness temperature measurements at 3.5-mm wavelength. [of sun, Venus, Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Ulich, B. L.; Rhodes, P. J.; Davis, J. H.; Hollis, J. M.

    1980-01-01

    Careful observations have been made at 86.1 GHz to derive the absolute brightness temperatures of the sun (7914 ± 192 K), Venus (357.5 ± 13.1 K), Jupiter (179.4 ± 4.7 K), and Saturn (153.4 ± 4.8 K) with a standard error of about three percent. This is a significant improvement in accuracy over previous results at millimeter wavelengths. A stable transmitter and novel superheterodyne receiver were constructed and used to determine the effective collecting area of the Millimeter Wave Observatory (MWO) 4.9-m antenna relative to a previously calibrated standard gain horn. The thermal scale was set by calibrating the radiometer with carefully constructed and tested hot and cold loads. The brightness temperatures may be used to establish an absolute calibration scale and to determine the antenna aperture and beam efficiencies of other radio telescopes at 3.5-mm wavelength.

  14. Frequency-scanning interferometry for dynamic absolute distance measurement using Kalman filter.

    PubMed

    Tao, Long; Liu, Zhigang; Zhang, Weibo; Zhou, Yangli

    2014-12-15

    We propose a frequency-scanning interferometry using the Kalman filtering technique for dynamic absolute distance measurement. Frequency-scanning interferometry only uses a single tunable laser driven by a triangle waveform signal for forward and backward optical frequency scanning. The absolute distance and moving speed of a target can be estimated by the present input measurement of frequency-scanning interferometry and the previously calculated state based on the Kalman filter algorithm. This method not only compensates for movement errors in conventional frequency-scanning interferometry, but also achieves high-precision and low-complexity dynamic measurements. Experimental results of dynamic measurements under static state, vibration and one-dimensional movement are presented. PMID:25503050
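
    The state estimation described above can be sketched with a generic constant-velocity Kalman filter (all noise settings and the time step are hypothetical tuning values; the paper's exact model may differ):

        import numpy as np

        dt = 1e-3                        # time between distance estimates (s)
        F = np.array([[1.0, dt],         # constant-velocity state transition
                      [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])       # only distance is observed
        Q = np.diag([1e-10, 1e-6])       # process noise (tuning parameter)
        R = np.array([[1e-8]])           # measurement noise variance (m^2)

        x = np.array([[1.0], [0.0]])     # state: [distance (m), speed (m/s)]
        P = np.eye(2) * 1e-4             # state covariance

        def kalman_step(x, P, z):
            """One predict/update cycle for a scalar distance measurement z."""
            x = F @ x                        # predict state
            P = F @ P @ F.T + Q              # predict covariance
            y = z - H @ x                    # innovation
            S = H @ P @ H.T + R              # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
            return x, P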

  15. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window

    PubMed Central

    Bai, Yang; Lu, Yunfeng; Hu, Pengcheng; Wang, Gang; Xu, Jinxin; Zeng, Tao; Li, Zhengkun; Zhang, Zhonghua; Tan, Jiubin

    2016-01-01

    A simple differential capacitive sensor is provided in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to one translational degree of freedom (DOF) movement, and immune to the vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10−4 pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range. PMID:27187393

  16. Simple and accurate empirical absolute volume calibration of a multi-sensor fringe projection system

    NASA Astrophysics Data System (ADS)

    Gdeisat, Munther; Qudeisat, Mohammad; AlSa`d, Mohammed; Burton, David; Lilley, Francis; Ammous, Marwan M. M.

    2016-05-01

    This paper suggests a novel absolute empirical calibration method for a multi-sensor fringe projection system. The optical setup of the projector-camera sensor can be arbitrary. The term absolute calibration here means that the centre of the three dimensional coordinates in the resultant calibrated volume coincides with a preset centre to the three-dimensional real-world coordinate system. The use of a zero-phase fringe marking spot is proposed to increase depth calibration accuracy, where the spot centre is determined with sub-pixel accuracy. Also, a new method is proposed for transversal calibration. Depth and transversal calibration methods have been tested using both single sensor and three-sensor fringe projection systems. The standard deviation of the error produced by this system is 0.25 mm. The calibrated volume produced by this method is 400 mm×400 mm×140 mm.

  17. Absolute response of Fuji imaging plate detectors to picosecond-electron bunches.

    PubMed

    Zeil, K; Kraft, S D; Jochmann, A; Kroll, F; Jahr, W; Schramm, U; Karsch, L; Pawelke, J; Hidding, B; Pretzler, G

    2010-01-01

    The characterization of the absolute number of electrons generated by laser wakefield acceleration often relies on absolutely calibrated FUJI imaging plates (IP), although their validity in the regime of extreme peak currents is untested. Here, we present an extensive study on the dependence of the sensitivity of BAS-SR and BAS-MS IP to picosecond electron bunches of varying charge of up to 60 pC, performed at the electron accelerator ELBE, making use of about three orders of magnitude of higher peak intensity than in prior studies. We demonstrate that the response of the IPs shows no saturation effect and that the BAS-SR IP sensitivity of 0.0081 photostimulated luminescence per electron number confirms surprisingly well data from previous works. However, the use of the identical readout system and handling procedures turned out to be crucial and, if unnoticed, may be an important error source. PMID:20113093

  18. Absolute response of Fuji imaging plate detectors to picosecond-electron bunches

    SciTech Connect

    Zeil, K.; Kraft, S. D.; Jochmann, A.; Kroll, F.; Jahr, W.; Schramm, U.; Karsch, L.; Pawelke, J.; Hidding, B.; Pretzler, G.

    2010-01-15

    The characterization of the absolute number of electrons generated by laser wakefield acceleration often relies on absolutely calibrated FUJI imaging plates (IP), although their validity in the regime of extreme peak currents is untested. Here, we present an extensive study on the dependence of the sensitivity of BAS-SR and BAS-MS IP to picosecond electron bunches of varying charge of up to 60 pC, performed at the electron accelerator ELBE, making use of about three orders of magnitude of higher peak intensity than in prior studies. We demonstrate that the response of the IPs shows no saturation effect and that the BAS-SR IP sensitivity of 0.0081 photostimulated luminescence per electron number confirms surprisingly well data from previous works. However, the use of the identical readout system and handling procedures turned out to be crucial and, if unnoticed, may be an important error source.

  19. [The error, source of learning].

    PubMed

    Joyeux, Stéphanie; Bohic, Valérie

    2016-05-01

    The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. PMID:27155272

  20. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  1. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  2. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
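
    The interval idea itself is tiny; a minimal sketch in Python (not INTLAB, which is a MATLAB toolbox): carry every quantity as a [lo, hi] pair so that the final width is an automatic worst-case error bound.

        def add(a, b):
            """Interval sum: endpoints add."""
            return (a[0] + b[0], a[1] + b[1])

        def mul(a, b):
            """Interval product: take the extreme endpoint products."""
            p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
            return (min(p), max(p))

        r = (9.95, 10.05)   # resistance: 10 ohm +/- 0.05
        i = (1.99, 2.01)    # current: 2 A +/- 0.01
        v = mul(r, i)       # every possible voltage: (19.8005, 20.2005)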

  3. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  4. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  5. Absolute testing of surface based on sub-aperture stitching interferometry

    NASA Astrophysics Data System (ADS)

    Jia, Xin; Xu, Fuchao; Xie, Weimin; Xing, Tingwen

    2015-02-01

    Large-aperture optical elements are widely employed in high-power laser systems, astronomy, and outer-space technology. Sub-aperture stitching is an effective way to extend the lateral and vertical dynamic range of a conventional interferometer. Most commercially available sub-aperture stitching interferometers measure the surface with a standard lens that produces a reference wavefront, and the precision of the interferometer is generally limited by that standard lens. Higher test accuracy can be achieved by removing the error of the reference surface with an absolute testing method. In our paper we use the different sub-apertures as the different flats to obtain the profile of the reference lens; only two lenses are needed in the testing process, fewer than in the traditional three-flat method. In the test equipment, we add a reflective lens and a partially transmitting, partially reflecting lens to obtain the non-rotationally-symmetric errors of the test flat. An algorithm is presented that applies the absolute testing method to improve the accuracy of sub-aperture stitching interferometers by removing the errors caused by the reference surface.

  6. Performance of multi level error correction in binary holographic memory

    NASA Technical Reports Server (NTRS)

    Hanan, Jay C.; Chao, Tien-Hsin; Reyes, George F.

    2004-01-01

    At the Optical Computing Lab in the Jet Propulsion Laboratory (JPL) a binary holographic data storage system was designed and tested with methods of recording and retrieving the binary information. Levels of error correction were introduced to the system including pixel averaging, thresholding, and parity checks. Errors were artificially introduced into the binary holographic data storage system and were monitored as a function of the defect area fraction, which showed a strong influence on data integrity. Average area fractions exceeding one quarter of the bit area caused unrecoverable errors. Efficient use of the available data density was discussed.
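
    A toy illustration of the pixel-averaging and thresholding levels (hypothetical pixel values): each stored bit occupies a block of detector pixels, so averaging the block before thresholding tolerates defects covering part of the bit area, consistent with the quarter-area limit noted above.

        import numpy as np

        # One "1" bit stored as a 3x3 pixel block, with a defective pixel.
        block = np.array([[0.90, 0.10, 0.80],
                          [0.85, 0.90, 0.95],
                          [0.90, 0.88, 0.92]])

        bit = int(block.mean() > 0.5)   # pixel averaging + thresholding -> 1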

  7. Evaluating the Effect of Global Positioning System (GPS) Satellite Clock Error via GPS Simulation

    NASA Astrophysics Data System (ADS)

    Sathyamoorthy, Dinesh; Shafii, Shalini; Amin, Zainal Fitry M.; Jusoh, Asmariah; Zainun Ali, Siti

    2016-06-01

    This study is aimed at evaluating the effect of Global Positioning System (GPS) satellite clock error using GPS simulation. Two test conditions are used. Case 1: all the GPS satellites have clock errors within the normal range of 0 to 7 ns, corresponding to a pseudorange error range of 0 to 2.1 m. Case 2: one GPS satellite suffers from critical failure, resulting in a pseudorange error of up to 1 km. It is found that increasing GPS satellite clock error increases the average positional error, because larger pseudorange errors in the GPS satellite signals produce larger errors in the coordinates computed by the GPS receiver. Varying average positional error patterns are observed for each of the readings. This is because the GPS satellite constellation is dynamic: satellite geometry varies with location and time, making GPS accuracy location- and time-dependent. For Case 1, in general, the highest average positional error values are observed for readings with the highest PDOP values, while the lowest average positional error values are observed for readings with the lowest PDOP values. For Case 2, no correlation is observed between the average positional error values and PDOP, indicating that the error generated is random.
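
    The clock-error-to-pseudorange conversion used above is simply range error equals the speed of light times the clock error:

        \Delta\rho = c\,\Delta t, \qquad
        \Delta\rho_{\max} = (2.998\times 10^{8}\ \mathrm{m/s}) \times (7\ \mathrm{ns}) \approx 2.1\ \mathrm{m},

    which reproduces the 0 to 2.1 m pseudorange error range quoted for Case 1.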

  8. Manson's triple error.

    PubMed

    Delaporte, F

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  9. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.

  10. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  11. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    SciTech Connect

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward; Szczygieł, Dorota M.; Gould, Andrew; Sneden, Christopher; Dong, Subo

    2013-09-20

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V(RRc) = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = -1.59. This is to be compared with previous estimates for RRab stars (M_V(RRab) = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V(RRc) = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, -209.9, 3.0) km s^-1 in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s^-1. For the disk, we find (W_π, W_θ, W_z) = (13.0, -42.0, -27.3) km s^-1 relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s^-1. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.

  12. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  13. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
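
    The ensemble-based error bar amounts to using the spread of predictions across the functional ensemble; a minimal sketch with hypothetical numbers:

        import numpy as np

        # Predictions of one property (e.g., an adsorption energy in eV)
        # from an ensemble of functionals drawn from the BEEF distribution.
        ensemble_predictions = np.array([1.92, 2.05, 1.88, 2.11, 1.97, 2.02])

        best_estimate = ensemble_predictions.mean()
        error_bar = ensemble_predictions.std(ddof=1)   # ensemble spread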

  14. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  15. Medication error detection in two major teaching hospitals: What are the types of errors?

    PubMed Central

    Saghafi, Fatemeh; Zargarzadeh, Amir H

    2014-01-01

    Background: An increasing number of reports on medication errors and the resulting damage, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn by the help of pharmacy directors. Direct observation technique was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing, and administering errors. The dispensing error was studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 had complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering comprise most of the medication errors, improvements are needed in all four stages with regard to medication errors. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors. PMID:25364360

  16. Error Modeling and Calibration for Encoded Sun Sensors

    PubMed Central

    Fan, Qiaoyun; Zhang, Guangjun; Li, Jian; Wei, Xinguo; Li, Xiaoyang

    2013-01-01

    Error factors in the encoded sun sensor (ESS) are analyzed and simulated. Based on the analysis results, an ESS error compensation model containing structural errors and fine-code algorithm errors is established, and the corresponding calibration method for the model parameters is proposed. As external parameters, installation deviations between the ESS and the calibration equipment are introduced into the ESS calibration model, so that the model parameters can be calibrated accurately. The experimental results show that within plus/minus 60 degrees of incident angle, the ESS measurement accuracy after compensation is on average three times higher than that before compensation. PMID:23470486

  17. Prospects for the Moon as an SI-Traceable Absolute Spectroradiometric Standard for Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Cramer, C. E.; Stone, T. C.; Lykke, K.; Woodward, J. T.

    2015-12-01

    The Earth's Moon has many physical properties that make it suitable for use as a reference light source for radiometric calibration of remote sensing satellite instruments. Lunar calibration has been successfully applied to many imagers in orbit, including both MODIS instruments and NPP-VIIRS, using the USGS ROLO model to predict the reference exoatmospheric lunar irradiance. Sensor response trending was developed for SeaWIFS with a relative accuracy better than 0.1 % per year with lunar calibration techniques. However, the Moon rarely is used as an absolute reference for on-orbit calibration, primarily due to uncertainties in the ROLO model absolute scale of 5%-10%. But this limitation lies only with the models - the Moon itself is radiometrically stable, and development of a high-accuracy absolute lunar reference is inherently feasible. A program has been undertaken by NIST to collect absolute measurements of the lunar spectral irradiance with absolute accuracy <1 % (k=2), traceable to SI radiometric units. Initial Moon observations were acquired from the Whipple Observatory on Mt. Hopkins, Arizona, elevation 2367 meters, with continuous spectral coverage from 380 nm to 1040 nm at ~3 nm resolution. The lunar spectrometer acquired calibration measurements several times each observing night by pointing to a calibrated integrating sphere source. The lunar spectral irradiance at the top of the atmosphere was derived from a time series of ground-based measurements by a Langley analysis that incorporated measured atmospheric conditions and ROLO model predictions for the change in irradiance resulting from the changing Sun-Moon-Observer geometry throughout each night. Two nights were selected for further study. An extensive error analysis, which includes instrument calibration and atmospheric correction terms, shows a combined standard uncertainty under 1 % over most of the spectral range. Comparison of these two nights' spectral irradiance measurements with predictions
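
    A hedged sketch of the Langley step (hypothetical single-wavelength data; the actual analysis also folds in measured atmospheric conditions and the ROLO geometry correction): the logarithm of the measured signal is regressed against airmass, and the intercept at zero airmass gives the exoatmospheric signal.

        import numpy as np

        airmass = np.array([1.2, 1.5, 2.0, 2.6, 3.3])
        signal = np.array([0.82, 0.78, 0.71, 0.64, 0.56])   # measured (arb.)

        slope, intercept = np.polyfit(airmass, np.log(signal), 1)
        tau = -slope                      # retrieved optical depth
        toa_signal = np.exp(intercept)    # top-of-atmosphere irradiance (arb.)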

  18. Gyrokinetic Statistical Absolute Equilibrium and Turbulence

    SciTech Connect

    Jian-Zhou Zhu and Gregory W. Hammett

    2011-01-10

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, "On some statistical properties of hydrodynamical and magnetohydrodynamical fields," Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  19. Absolute surface energy for zincblende semiconductors

    NASA Astrophysics Data System (ADS)

    Zhang, S. B.; Wei, Su-Huai

    2003-03-01

    Recent advance in nanosciences requires the determination of surface (or facet) energy of semiconductors, which is often difficult due to the polar nature of some of the most important surfaces such as the (111)A/(111)B surfaces. Several approaches have been developed in the past [1-3] to deal with the problem but an unambiguous division of the polar surface energies is yet to come [2]. Here we show that an accurate division is indeed possible for the zincblende semiconductors and will present the results for GaAs, ZnSe, and CuInSe2 [4], respectively. A general trend emerges, relating the absolute surface energy to the ionicity of the bulk materials. [1] N. Chetty and R. M. Martin, Phys. Rev. B 45, 6074 (1992). [2] N. Moll, et al., Phys. Rev. B 54, 8844 (1996). [3] S. Mankefors, Phys. Rev. B 59, 13151 (1999). [4] S. B. Zhang and S.-H. Wei, Phys. Rev. B 65, 081402 (2002).

  20. Climate Absolute Radiance and Refractivity Observatory (CLARREO)

    NASA Technical Reports Server (NTRS)

    Leckey, John P.

    2015-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a mission, led and developed by NASA, that will measure a variety of climate variables with an unprecedented accuracy to quantify and attribute climate change. CLARREO consists of three separate instruments: an infrared (IR) spectrometer, a reflected solar (RS) spectrometer, and a radio occultation (RO) instrument. The mission will contain orbiting radiometers with sufficient accuracy, including on orbit verification, to calibrate other space-based instrumentation, increasing their respective accuracy by as much as an order of magnitude. The IR spectrometer is a Fourier Transform spectrometer (FTS) working in the 5 to 50 microns wavelength region with a goal of 0.1 K (k = 3) accuracy. The FTS will achieve this accuracy using phase change cells to verify thermistor accuracy and heated halos to verify blackbody emissivity, both on orbit. The RS spectrometer will measure the reflectance of the atmosphere in the 0.32 to 2.3 microns wavelength region with an accuracy of 0.3% (k = 2). The status of the instrumentation packages and potential mission options will be presented.

  1. Absolute decay width measurements in 16O

    NASA Astrophysics Data System (ADS)

    Wheldon, C.; Ashwood, N. I.; Barr, M.; Curtis, N.; Freer, M.; Kokalova, Tz; Malcolm, J. D.; Spencer, S. J.; Ziman, V. A.; Faestermann, Th; Krücken, R.; Wirth, H.-F.; Hertenberger, R.; Lutter, R.; Bergmaier, A.

    2012-09-01

    The reaction 12C(6Li, d)16O* at a 6Li bombarding energy of 42 MeV has been used to populate excited states in 16O. The deuteron ejectiles were measured using the high-resolution Munich Q3D spectrograph. A large-acceptance silicon-strip detector array was used to register the recoil and break-up products. This complete kinematic set-up has enabled absolute α-decay widths to be measured with high resolution in the 13.9 to 15.9 MeV excitation energy regime in 16O, many for the first time. This energy region spans the 14.4 MeV four-α breakup threshold. Monte-Carlo simulations of the detector geometry and break-up processes yield detection efficiencies for the two dominant decay modes of 40% and 37% for the α+12C(g.s.) and α+12C(2+1) break-up channels, respectively.

  2. Absolute spectrophotometry of northern compact planetary nebulae

    NASA Astrophysics Data System (ADS)

    Wright, S. A.; Corradi, R. L. M.; Perinotto, M.

    2005-06-01

    We present medium-dispersion spectra and narrowband images of six northern compact planetary nebulae (PNe): BoBn 1, DdDm 1, IC 5117, M 1-5, M 1-71, and NGC 6833. From broad-slit spectra, total absolute fluxes and equivalent widths were measured for all observable emission lines. The high signal-to-noise emission line fluxes of Hα, Hβ, [O III], [N II], and He I may serve as emission line flux standards for northern hemisphere observers. From narrow-slit spectra, we derive systemic radial velocities. For four PNe, emission line fluxes were measured with sufficient signal-to-noise to probe their electron densities, temperatures, and chemical abundances. BoBn 1 and DdDm 1, both type IV PNe, have Hβ fluxes more than three sigma away from previous measurements. We report the first abundance measurements of M 1-71. The measured radial velocity and galactic coordinates of NGC 6833 suggest that it is associated with the outer arm or possibly the galactic halo, and its low abundance ([O/H] = 1.3 × 10^-4) may be indicative of low metallicity within that region.

  3. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes must be measured to check the system quality and to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually measured with a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and the technological difficulty of producing the large ACF. Subaperture testing with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF, and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and its astigmatism will be accumulated and enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency (a minimal sketch of this step is given below). Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore the lateral positioning error of subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which is counterintuitive. At last, measurement noise can never be corrected, but it can be suppressed by means of averaging and
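
    The overlap-consistency correction described above can be illustrated with a small least-squares step: for a pair of overlapping subaperture maps on a common grid, fit and remove the relative piston/tip/tilt that best explains their disagreement. This is a minimal sketch, not the authors' full self-calibrated stitching algorithm (which also solves for the ACF surface error); the function names and the NaN-padded common-grid representation are assumptions.

        import numpy as np

        def fit_piston_tip_tilt(dz, x, y, mask):
            """Least-squares fit of piston + tip*x + tilt*y to the
            difference map dz over the overlap region given by mask."""
            A = np.column_stack([np.ones(mask.sum()), x[mask], y[mask]])
            coef, *_ = np.linalg.lstsq(A, dz[mask], rcond=None)
            return coef  # (piston, tip, tilt)

        def align_subaperture(w_ref, w_sub, x, y):
            """Remove the relative piston/tip/tilt of w_sub with respect
            to w_ref, estimated only where both maps are valid. Maps sit
            on one common grid, NaN outside each subaperture."""
            overlap = np.isfinite(w_ref) & np.isfinite(w_sub)
            p, tx, ty = fit_piston_tip_tilt(w_sub - w_ref, x, y, overlap)
            return w_sub - (p + tx * x + ty * y)

    Because all subapertures share one imaging plane, x and y need no per-subaperture registration, which is exactly why the lateral positioning error drops out of the stitched result.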

  4. Evaluating a medical error taxonomy.

    PubMed Central

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow to use human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy, which provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of its focus on the medical device and the format of reporting. PMID:12463789

  5. An absolute interval scale of order for point patterns

    PubMed Central

    Protonotarios, Emmanouil D.; Baum, Buzz; Johnston, Alan; Hunter, Ginger L.; Griffin, Lewis D.

    2014-01-01

    Human observers readily make judgements about the degree of order in planar arrangements of points (point patterns). Here, based on pairwise ranking of 20 point patterns by degree of order, we have been able to show that judgements of order are highly consistent across individuals and that the dimension of order has an interval scale structure spanning roughly 10 just-noticeable differences (jnd) between disorder and order. We describe a geometric algorithm that estimates order to an accuracy of half a jnd by quantifying the variability of the size and shape of spaces between points. The algorithm is 70% more accurate than the best available measures. By anchoring the output of the algorithm so that Poisson point processes score on average 0, perfect lattices score 10, and unit steps correspond closely to jnds, we construct an absolute interval scale of order. We demonstrate its utility in biology by using this scale to quantify order during the development of the pattern of bristles on the dorsal thorax of the fruit fly. PMID:25079866
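
    The idea of scoring order by the variability of the spaces between points can be sketched with a simple proxy statistic. This is not the published algorithm (which also anchors the scale in jnd units); it is an illustration of the underlying measurement, with Delaunay triangles standing in for "spaces between points".

        import numpy as np
        from scipy.spatial import Delaunay

        def disorder_proxy(points):
            """Coefficient of variation of Delaunay triangle areas:
            0 for a perfect lattice, large for disordered patterns."""
            tri = Delaunay(points)
            p = points[tri.simplices]                     # (n_tri, 3, 2)
            # triangle areas via the shoelace formula
            areas = 0.5 * np.abs(
                (p[:, 1, 0] - p[:, 0, 0]) * (p[:, 2, 1] - p[:, 0, 1])
                - (p[:, 2, 0] - p[:, 0, 0]) * (p[:, 1, 1] - p[:, 0, 1]))
            return areas.std() / areas.mean()

        # A Poisson (uniform random) pattern scores high; rescaling so
        # that Poisson maps to 0 and a lattice to 10, as in the paper,
        # would anchor the statistic to an absolute scale.
        rng = np.random.default_rng(0)
        print(disorder_proxy(rng.uniform(0, 1, (400, 2))))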

  6. Gravitational acceleration as a cue for absolute size and distance?

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Kaiser, M. K.; Banks, M. S.

    1996-01-01

    When an object's motion is influenced by gravity, as in the rise and fall of a thrown ball, the vertical component of acceleration is roughly constant at 9.8 m/sec2. In principle, an observer could use this information to estimate the absolute size and distance of the object (Saxberg, 1987a; Watson, Banks, von Hofsten, & Royden, 1992). In five experiments, we examined people's ability to utilize the size and distance information provided by gravitational acceleration. Observers viewed computer simulations of an object rising and falling on a trajectory aligned with the gravitational vector. The simulated objects were balls of different diameters presented across a wide range of simulated distances. Observers were asked to identify the ball that was presented and to estimate its distance. The results showed that observers were much more sensitive to average velocity than to the gravitational acceleration pattern. Likewise, verticality of the motion and visibility of the trajectory's apex had negligible effects on the accuracy of size and distance judgments.

  7. A rack-mounted precision waveguide-below-cutoff attenuator with an absolute electronic readout

    NASA Technical Reports Server (NTRS)

    Cook, C. C.

    1974-01-01

    A coaxial precision waveguide-below-cutoff attenuator is described which uses an absolute (unambiguous) electronic digital readout of displacement in inches in addition to the usual gear driven mechanical counter-dial readout in decibels. The attenuator is rack-mountable and has the input and output RF connectors in a fixed position. The attenuation rate for 55, 50, and 30 MHz operation is given along with a discussion of sources of errors. In addition, information is included to aid the user in making adjustments on the attenuator should it be damaged or disassembled for any reason.

  8. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    2015-12-01

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  9. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  10. The absolute disparity anomaly and the mechanism of relative disparities.

    PubMed

    Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne

    2016-06-01

    There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1). PMID:27248566

  11. The absolute disparity anomaly and the mechanism of relative disparities

    PubMed Central

    Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne

    2016-01-01

    There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1). PMID:27248566

  12. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently

  13. An Analysis of the Effect on the Data Processing of Korea GPS Network by the Absolute Phase Center Variations of GPS Antenna

    NASA Astrophysics Data System (ADS)

    Baek, Jeongho; Lim, Hyung-Chul; Jo, Jung Hyun; Cho, Sungki; Cho, Jung-Ho

    2006-12-01

    The International GNSS Service (IGS) has prepared for a transition from the relative phase center variation (PCV) to the absolute PCV, because the terrestrial scale problem of the absolute PCV was resolved by estimating the PCV of the GPS satellites. Thus, GPS data will be processed using the absolute PCV, which will become an IGS standard model in the near future. It is necessary to compare and analyze the results between the relative PCV and the absolute PCV to establish a reliable processing strategy. This research analyzes the effect of the absolute PCV via GPS network data processing. First, four IGS stations, Daejeon, Suwon, Beijing and Wuhan, are selected to form baselines longer than 1000 km, and are processed using the relative PCV and the absolute PCV to examine the effect of the antenna radome. The Beijing and Wuhan stations, whose baselines are longer than 1000 km, show an average difference of 1.33 cm in the vertical component, and 2.97 cm when the antenna radomes are considered. Second, 7 of the 9 permanent GPS stations operated by the Korea Astronomy and Space Science Institute are processed by applying the relative PCV and the absolute PCV, and their results are compared and analyzed. An insignificant effect of the absolute PCV is shown in the Korea regional network, with an average difference of 0.12 cm in the vertical component.

  14. Margins of Error

    ERIC Educational Resources Information Center

    Parsons, Joe

    2003-01-01

    Language-minority students are the fastest-growing population in U.S. public schools. During the 1990s, their numbers rose from 8 million to 15 million. These include new immigrant students as well as students from Native American and indigenous backgrounds. Research shows that the distribution of "extremely bright," "average" and "cognitively…

  15. a Portable Apparatus for Absolute Measurements of the Earth's Gravity.

    NASA Astrophysics Data System (ADS)

    Zumberge, Mark Andrew

    We have developed a new, portable apparatus for making absolute measurements of the acceleration due to the earth's gravity. We use the method of interferometrically determining the acceleration of a freely falling corner-cube prism. The falling object is surrounded by a chamber which is driven vertically inside a fixed vacuum chamber. This falling chamber is servoed to track the falling corner-cube to shield it from drag due to background gas. In addition, the drag-free falling chamber removes the need for a magnetic release, shields the falling object from electrostatic forces, and provides a means of both gently arresting the falling object and quickly returning it to its start position, to allow rapid acquisition of data. A synthesized long period isolation device reduces the noise due to seismic oscillations. A new type of Zeeman laser is used as the light source in the interferometer, and is compared with the wavelength of an iodine stabilized laser. The times of occurrence of 45 interference fringes are measured to within 0.2 nsec over a 20 cm drop and are fit to a quadratic by an on-line minicomputer. 150 drops can be made in ten minutes resulting in a value of g having a precision of 3 to 6 parts in 10^9. Systematic errors have been determined to be less than 5 parts in 10^9 through extensive tests. Three months of gravity data have been obtained with a reproducibility ranging from 5 to 10 parts in 10^9. The apparatus has been designed to be easily portable. Field measurements are planned for the immediate future. An accuracy of 6 parts in 10^9 corresponds to a height sensitivity of 2 cm. Vertical motions in the earth's crust and tectonic density changes that may precede earthquakes are to be investigated using this apparatus.
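
    The fit described here, fringe times against a quadratic trajectory, reduces to polynomial least squares. Below is a minimal sketch with synthetic data; the equal-displacement event spacing and the noise level are assumptions, since in the real instrument each timed event is a scaled fringe crossing (a fixed multiple of lambda/2).

        import numpy as np

        # Synthetic drop: 45 timed events at known displacement increments
        # over a 20 cm drop, as in the abstract.
        g_true = 9.8123456
        x = np.linspace(0.0, 0.20, 45)                 # event positions (m)
        t = np.sqrt(2.0 * x / g_true)                  # free-fall times from rest
        t = t + np.random.default_rng(1).normal(0.0, 0.2e-9, t.size)  # 0.2 ns jitter

        # Fit x(t) = x0 + v0*t + 0.5*g*t^2; g is twice the quadratic coefficient.
        c2, c1, c0 = np.polyfit(t, x, 2)
        print(f"recovered g = {2.0 * c2:.7f} m/s^2")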

  16. New identification method for Hammerstein models based on approximate least absolute deviation

    NASA Astrophysics Data System (ADS)

    Xu, Bao-Chang; Zhang, Ying-Dan

    2016-07-01

    Impulsive or peak noises and large disturbances can deteriorate the identification of Hammerstein non-linear models when the least-squares (LS) method is used. The least absolute deviation technique can resolve this problem; however, the absolute value function is not differentiable at zero, and differentiability is required by most algorithms. To improve robustness and resolve the non-differentiability problem, an approximate least absolute deviation (ALAD) objective function is established by introducing a deterministic function that exhibits the characteristics of the absolute value under certain conditions. A new identification method for Hammerstein models based on ALAD is thus developed in this paper. The basic idea of this method is to apply stochastic approximation theory in the derivation of the recursive equations. After identifying the parameter matrix of the Hammerstein model via the new algorithm, the product terms in the matrix are separated by calculating average values. Finally, algorithm convergence is proven by applying the ordinary differential equation method. The proposed algorithm is more robust than LS methods, particularly when abnormal points exist in the measured data. Furthermore, the proposed algorithm is easier to apply and converges faster. The simulation results demonstrate the efficacy of the proposed algorithm.
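
    A common way to realize such a differentiable stand-in for |e| is sqrt(e^2 + delta^2), whose gradient approaches sign(e) away from zero. The sketch below uses that surrogate in a stochastic-approximation update for a linear-in-parameters Hammerstein regressor; the toy system, the surrogate, and the constants delta and mu are illustrative assumptions, not the paper's exact deterministic function or tuning.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy Hammerstein system: static nonlinearity x = u + 0.5*u^2
        # followed by FIR dynamics y = 0.8*x(k) + 0.3*x(k-1).
        u = rng.uniform(-1.0, 1.0, 5000)
        x = u + 0.5 * u**2
        y = 0.8 * x + 0.3 * np.concatenate(([0.0], x[:-1]))
        y = y + 0.01 * rng.standard_t(df=1.5, size=y.size)   # heavy-tailed noise

        def phi(k):
            """Linear-in-parameters regressor over {u, u^2} at lags 0 and 1."""
            return np.array([u[k], u[k]**2, u[k-1], u[k-1]**2])

        theta = np.zeros(4)
        delta, mu = 0.05, 0.05
        for k in range(1, u.size):
            e = y[k] - phi(k) @ theta
            # gradient step on sqrt(e^2 + delta^2); the update saturates
            # for large |e|, which is what gives robustness to outliers
            theta += mu * phi(k) * e / np.sqrt(e**2 + delta**2)

        print(theta)   # approx [0.80, 0.40, 0.30, 0.15]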

  17. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  18. Absolutely lossless compression of medical images.

    PubMed

    Ashraf, Robina; Akbar, Muhammad

    2005-01-01

    Medical images contain very large amounts of data; compression is therefore essential for their storage and/or transmission. A method is proposed which provides high compression ratios for radiographic images with no loss of diagnostic quality. In this approach an image is first compressed at a high compression ratio but with loss, and the error image is then compressed losslessly. The resulting compression is not only strictly lossless, but also expected to yield a high compression ratio, especially if the lossy compression technique is good. A neural network vector quantizer (NNVQ) is used as the lossy compressor, while Huffman coding is used for lossless compression. Image quality is evaluated by comparison with standard compression techniques. PMID:17281110
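
    The lossy-plus-residual construction is easy to demonstrate end to end. In this sketch, coarse quantization stands in for the paper's NNVQ and zlib for its Huffman coder; both are substitutions chosen only to keep the example self-contained, and exact reconstruction holds regardless of the lossy stage.

        import numpy as np
        import zlib

        def encode(img):
            """Lossy stage (coarse quantization here, NNVQ in the paper)
            plus a losslessly coded residual (zlib here, Huffman there)."""
            lossy = (img // 16) * 16                                   # uint8
            residual = (img.astype(np.int16) - lossy).astype(np.int8)  # 0..15
            return (zlib.compress(lossy.tobytes()),
                    zlib.compress(residual.tobytes()),
                    img.shape)

        def decode(lossy_bytes, resid_bytes, shape):
            lossy = np.frombuffer(zlib.decompress(lossy_bytes), np.uint8).reshape(shape)
            resid = np.frombuffer(zlib.decompress(resid_bytes), np.int8).reshape(shape)
            return (lossy.astype(np.int16) + resid).astype(np.uint8)

        img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
        assert np.array_equal(decode(*encode(img)), img)   # strictly lossless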

  19. Orion Absolute Navigation System Progress and Challenge

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; D'Souza, Christopher

    2012-01-01

    The absolute navigation design of NASA's Orion vehicle is described. It has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a timely and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future exploration missions the use of star-tracker and optical navigation sources needs to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed, as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using a multi-rate architecture. The details of the rate groups and the data flow between the elements are discussed and evaluated.
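
    The Agee-Turner rank-one update mentioned here modifies the factors of P = UDU' in place when a term c*a*a' is added to the covariance, avoiding a full refactorization. The sketch below is the textbook form of that update (a standard algorithm, not Orion flight code); the check at the end verifies the factor identity numerically.

        import numpy as np

        def agee_turner(U, D, c, a):
            """Rank-one UDU' update: return factors of U D U' + c*a*a',
            with U unit upper triangular and D diagonal (1-D array)."""
            U, D, a = U.copy(), D.copy(), a.copy()
            n = D.size
            for j in range(n - 1, 0, -1):
                s = D[j] + c * a[j] ** 2    # updated diagonal entry
                b = c * a[j] / s            # column gain
                c = c * D[j] / s            # deflated scalar for next column
                D[j] = s
                a[:j] -= a[j] * U[:j, j]    # fold component j into a
                U[:j, j] += b * a[:j]
            D[0] += c * a[0] ** 2
            return U, D

        rng = np.random.default_rng(0)
        n = 4
        U = np.triu(rng.normal(size=(n, n)), 1) + np.eye(n)
        D = rng.uniform(0.5, 2.0, n)
        v = rng.normal(size=n)
        Un, Dn = agee_turner(U, D, 0.3, v)
        assert np.allclose(Un @ np.diag(Dn) @ Un.T,
                           U @ np.diag(D) @ U.T + 0.3 * np.outer(v, v))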

  20. Evaluation of the Absolute Regional Temperature Potential

    NASA Technical Reports Server (NTRS)

    Shindell, D. T.

    2012-01-01

    The Absolute Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90-28°S, 28°S-28°N, 28-60°N and 60-90°N) as a function of emissions based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within +/-20% of the actual responses, though there are some exceptions for 90-28°S and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the +/-20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39-45% and 9-39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.

  1. Absolute determination of local tropospheric OH concentrations

    NASA Technical Reports Server (NTRS)

    Armerding, Wolfgang; Comes, Franz-Josef

    1994-01-01

    Long path absorption (LPA) according to the Lambert-Beer law is a method to determine absolute concentrations of trace gases such as tropospheric OH. We have developed a LPA instrument which is based on rapid tuning of the light source, a frequency doubled dye laser. The laser is tuned across two or three OH absorption features around 308 nm with a scanning speed of 0.07 cm^-1/microsecond and a repetition rate of 1.3 kHz. This high scanning speed greatly reduces the fluctuation of the light intensity caused by the atmosphere. To obtain the required high sensitivity the laser output power is additionally made constant and stabilized by an electro-optical modulator. The present sensitivity is of the order of a few times 10^5 OH per cm^3 for an acquisition time of a minute and an absorption path length of only 1200 meters, so that a folding of the optical path in a multireflection cell was possible, leading to a lateral dimension of the cell of a few meters. This allows local measurements to be made. Tropospheric measurements were carried out in 1991, resulting in the determination of OH diurnal variation on specific days in late summer. Comparisons with model calculations have been made. Interferences are mainly due to SO2 absorption. The problem of OH self generation in the multireflection cell is of minor extent, as could be shown by using different experimental methods. The minimum-maximum signal to noise ratio is about 8 × 10^-4 for a single scan. Due to the small size of the absorption cell the realization of an open air laboratory is possible, in which, by use of an additional UV light source or additional fluxes of trace gases, the chemistry can be changed under controlled conditions, allowing kinetic studies of tropospheric photochemistry to be made in open air.
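
    The Lambert-Beer inversion behind LPA is a one-liner: n = ln(I0/I) / (sigma * L). The sketch below applies it with an assumed effective OH cross section near 308 nm (the true value depends on the chosen line and on temperature and pressure broadening) and the abstract's 1200 m folded path.

        import numpy as np

        def oh_concentration(I0, I, sigma_cm2=1.5e-16, L_cm=1.2e5):
            """Lambert-Beer inversion n = ln(I0/I) / (sigma * L).
            sigma_cm2 is an assumed effective OH cross section near
            308 nm; L_cm is the 1200 m folded path from the abstract."""
            return np.log(I0 / I) / (sigma_cm2 * L_cm)

        # A differential absorbance of ~2e-6 (reachable by averaging many
        # of the 1.3 kHz scans) maps to ~1e5 OH cm^-3 on these assumptions:
        print(oh_concentration(1.0, 1.0 - 2e-6))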

  2. Design of piezoresistive MEMS absolute pressure sensor

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Pant, B. D.

    2012-10-01

    MEMS pressure sensors are one of the most widely commercialized microsensors in the MEMS industry. They have a plethora of applications in various fields including the automobile, space, biomedical, aviation and military sectors. One of the simplest and most efficient methods for measuring pressure in MEMS sensors is to use the phenomenon of piezoresistance. The piezoresistive effect causes a change in the resistance of certain doped materials when they are subjected to stress, as a result of energy band deformation. Piezoresistive pressure sensors consist of piezoresistors placed over a thin diaphragm which deflects under the action of the pressure to be measured. This deflection stresses the piezoresistors, causing their resistance to change. The change is converted into electrical signals and measured in order to find the value of the applied pressure. In this work, a high-range (30 bar) pressure sensor is designed based on the principle of piezoresistivity. The inaccuracies in the analytical models that are generally used to model the pressure sensor diaphragm have also been analysed. Thus, the Finite Element Method (FEM) is adopted to optimize the pressure sensor for parameters like sensitivity and linearity. This is achieved by choosing the proper shape of the piezoresistor, the thickness of the diaphragm and the position of the piezoresistor on the diaphragm. For the square diaphragm, a sensitivity of 5.18 mV/V/bar and a linearity error of 0.02% are obtained. For the circular diaphragm, a sensitivity of 3.69 mV/V/bar and a linearity error of 0.011% are obtained.

  3. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-01

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed instrument design toward measuring the full reflection Mueller matrix of the sample with great precision. Mueller matrix ellipsometers have therefore emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic effects can distort the analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating-compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and by residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors. PMID:18545594

  4. Measurement of the Absolute Branching Fraction of D0 to K- pi+

    SciTech Connect

    Aubert, B.; Bona, M.; Boutigny, D.; Karyotakis, Y.; Lees, J.P.; Poireau, V.; Prudent, X.; Tisserand, V.; Zghiche, A.; Garra Tico, J.; Grauges, E.; Lopez, L.; Palano, A.; Eigen, G.; Ofte, I.; Stugu, B.; Sun, L.; Abrams, G.S.; Battaglia, M.; Brown, D.N.; Button-Shafer, J.; /LBL, Berkeley /Birmingham U. /Ruhr U., Bochum /Bristol U. /British Columbia U. /Brunel U. /Novosibirsk, IYF /UC, Irvine /UCLA /UC, Riverside /UC, San Diego /UC, Santa Barbara /UC, Santa Cruz /Caltech /Cincinnati U. /Colorado U. /Colorado State U. /Dortmund U. /Munich, Tech. U. /Ecole Polytechnique /Edinburgh U. /Ferrara U. /Frascati /Genoa U. /Harvard U. /Heidelberg U. /Imperial Coll., London /Iowa U. /Iowa State U. /Johns Hopkins U. /Karlsruhe U. /Orsay, LAL /LLNL, Livermore /Liverpool U. /Queen Mary, U. of London /Royal Holloway, U. of London /Louisville U. /Manchester U. /Maryland U. /Massachusetts U., Amherst /MIT, LNS /McGill U. /Maryland U. /INFN, Milan /Mississippi U. /Montreal U. /Mt. Holyoke Coll. /Naples U. /NIKHEF, Amsterdam /Notre Dame U. /Ohio State U. /Oregon U. /Padua U. /Paris U., VI-VII /Pennsylvania U. /Perugia U. /Pisa U. /Prairie View A-M /Princeton U. /INFN, Rome /Rostock U. /Rutherford /DSM, DAPNIA, Saclay /South Carolina U. /SLAC /Stanford U., Phys. Dept. /SUNY, Albany /Tennessee U. /Texas U. /Texas U., Dallas /Turin U. /Trieste U. /Valencia U., IFIC /Victoria U. /Warwick U. /Wisconsin U., Madison /Yale U.

    2007-04-25

    The authors measure the absolute branching fraction for D0 → K−π+ using partial reconstruction of B̄0 → D*+ X ℓ− ν̄ℓ decays, in which only the charged lepton and the pion from the decay D*+ → D0 π+ are used. Based on a data sample of 230 million BB̄ pairs collected at the Υ(4S) resonance with the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC, they obtain B(D0 → K−π+) = (4.007 ± 0.037 ± 0.070)%, where the first error is statistical and the second is systematic.

  5. Absolute Thermal SST Measurements over the Deepwater Horizon Oil Spill

    NASA Astrophysics Data System (ADS)

    Good, W. S.; Warden, R.; Kaptchen, P. F.; Finch, T.; Emery, W. J.

    2010-12-01

    Climate monitoring and natural disaster rapid assessment require baseline measurements that can be tracked over time to distinguish anthropogenic versus natural changes to the Earth system. Disasters like the Deepwater Horizon Oil Spill require constant monitoring to assess the potential environmental and economic impacts. Absolute calibration and validation of Earth-observing sensors is needed to allow for comparison of temporally separated data sets and provide accurate information to policy makers. The Ball Experimental Sea Surface Temperature (BESST) radiometer was designed and built by Ball Aerospace to provide a well-calibrated measure of sea surface temperature (SST) from an unmanned aerial system (UAS). Currently, emissive skin SST observed by satellite infrared radiometers is validated by shipborne instruments that are expensive to deploy and can only take a few data samples along the ship track that overlap within a single satellite pixel. Implementation on a UAS will allow BESST to map the full footprint of a satellite pixel and perform averaging to remove any local variability due to the difference in footprint size of the instruments. It also enables the capability to study this sub-pixel variability to determine if smaller-scale effects need to be accounted for in models to improve forecasting of ocean events. In addition to satellite sensor validation, BESST can distinguish meter-scale variations in SST, which could be used to remotely monitor and assess thermal pollution in rivers and coastal areas as well as study diurnal and seasonal changes to bodies of water that impact the ocean ecosystem. BESST was recently deployed on a conventional Twin Otter airplane for measurements over the Gulf of Mexico to assess the thermal properties of the ocean surface affected by the oil spill. Results of these measurements will be presented along with ancillary sensor data used to eliminate false signals, including UV and Synthetic Aperture Radar (SAR

  6. Mid-infrared absolute spectral responsivity scale based on an absolute cryogenic radiometer and an optical parametric oscillator laser

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Shi, Xueshun; Chen, Haidong; Liu, Yulong; Liu, Changming; Chen, Kunfeng; Li, Ligong; Gan, Haiyong; Ma, Chong

    2016-06-01

    We report on a laser-based absolute spectral responsivity scale in the mid-infrared spectral range. Using a mid-infrared tunable optical parametric oscillator as the laser source, the absolute responsivity scale has been established by calibrating thin-film thermopile detectors against an absolute cryogenic radiometer. The thin-film thermopile detectors can then be used as transfer standard detectors. The expanded uncertainty of the absolute spectral responsivity measurement has been analyzed to be 0.58%-0.68% (k = 2).

  7. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  8. Standard Errors for Matrix Correlations.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  9. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    SciTech Connect

    Edeling, W.N.; Cinnella, P.; Dwight, R.P.

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
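
    The collation step in BMSA amounts to mixture algebra over per-(model, scenario) posterior predictive moments. The sketch below shows only that algebra, under the simplifying assumption that model and scenario weights factorize; the paper's Bayesian calibration and its similarity-based scenario sensor are not reproduced, and all numbers are invented.

        import numpy as np

        def bmsa_moments(mu, var, w_model, w_scenario):
            """Mixture mean/variance of a QoI from per-(model, scenario)
            posterior predictive moments. mu, var have shape
            (n_models, n_scenarios); each weight vector sums to 1."""
            w = np.outer(w_model, w_scenario)      # joint weights (assumed to factorize)
            mean = np.sum(w * mu)
            second = np.sum(w * (var + mu**2))     # E[q^2] of the mixture
            return mean, second - mean**2

        # two closure models, three calibration scenarios
        mu = np.array([[1.02, 0.97, 1.10],
                       [0.95, 1.00, 1.05]])
        var = np.full_like(mu, 4e-4)
        print(bmsa_moments(mu, var, np.array([0.5, 0.5]),
                           np.array([0.2, 0.5, 0.3])))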

  10. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  11. Grammatical Errors and Communication Breakdown.

    ERIC Educational Resources Information Center

    Tomiyama, Machiko

    This study investigated the relationship between grammatical errors and communication breakdown by examining native speakers' ability to correct grammatical errors. The assumption was that communication breakdown exists to a certain degree if a native speaker cannot correct the error or if the correction distorts the information intended to be…

  12. Supplementary and Enrichment Series: Absolute Value. Teachers' Commentary. SP-25.

    ERIC Educational Resources Information Center

    Bridgess, M. Philbrick, Ed.

    This is one in a series of manuals for teachers using SMSG high school supplementary materials. The pamphlet includes commentaries on the sections of the student's booklet, answers to the exercises, and sample test questions. Topics covered include addition and multiplication in terms of absolute value, graphs of absolute value in the Cartesian…

  13. Supplementary and Enrichment Series: Absolute Value. SP-24.

    ERIC Educational Resources Information Center

    Bridgess, M. Philbrick, Ed.

    This is one in a series of SMSG supplementary and enrichment pamphlets for high school students. This series is designed to make material for the study of topics of special interest to students readily accessible in classroom quantity. Topics covered include absolute value, addition and multiplication in terms of absolute value, graphs of absolute…

  14. Novalis' Poetic Uncertainty: A "Bildung" with the Absolute

    ERIC Educational Resources Information Center

    Mika, Carl

    2016-01-01

    Novalis, the Early German Romantic poet and philosopher, had at the core of his work a mysterious depiction of the "absolute." The absolute is Novalis' name for a substance that defies precise knowledge yet calls for a tentative and sensitive speculation. How one asserts a truth, represents an object, and sets about encountering things…

  15. Absolute Humidity and the Seasonality of Influenza (Invited)

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.

    2010-12-01

    Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.
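
    A humidity-modulated epidemic model of the kind described here can be sketched as an SIRS system whose basic reproductive number declines with specific humidity. The exponential form of R0(q) and every constant below are assumptions for illustration, not the authors' fitted values.

        import numpy as np

        D_inf, L_imm = 4.0, 4 * 365.0          # infectious period, immunity (days)
        R0_min, R0_max, a = 1.2, 3.0, 180.0    # assumed transmission parameters

        def r0(q):
            """Transmission potential as a declining function of
            specific humidity q (kg/kg); form and constants assumed."""
            return R0_min + (R0_max - R0_min) * np.exp(-a * q)

        N, S, I = 1.0e6, 0.6e6, 10.0
        q_daily = 0.010 - 0.006 * np.cos(2 * np.pi * np.arange(365) / 365.0)
        I_hist = []
        for t in range(365):                    # forward-Euler, 1-day step
            new_inf = r0(q_daily[t]) / D_inf * S * I / N
            S += -new_inf + (N - S - I) / L_imm
            I += new_inf - I / D_inf
            I_hist.append(I)
        print(int(np.argmax(I_hist)))           # epidemic peaks in the low-q season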

  16. The successively temporal error concealment algorithm using error-adaptive block matching principle

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; Wu, Tsai-Hsing; Chen, Chao-Chyun

    2014-09-01

    Generally, temporal error concealment (TEC) adopts the blocks around the corrupted block (CB) as the search pattern to find the best-match block in the previous frame. Once the CB is recovered, it is referred to as the recovered block (RB). Although the RB can serve as the search pattern to find the best-match block of another CB, the RB is not the same as its original block (OB). The error between the RB and its OB limits the performance of TEC. The successively temporal error concealment (STEC) algorithm is proposed to alleviate this error. The STEC procedure consists of tier-1 and tier-2. Tier-1 divides a corrupted macroblock into four corrupted 8 × 8 blocks and generates a recovering order for them. The corrupted 8 × 8 block in first place of the recovering order is recovered in tier-1, and the remaining 8 × 8 CBs are recovered in tier-2 along the recovering order. In tier-2, the error-adaptive block matching principle (EA-BMP) is proposed for the RB as the search pattern to recover the remaining corrupted 8 × 8 blocks. The proposed STEC outperforms sophisticated TEC algorithms by at least 0.3 dB in average PSNR at a packet error rate of 20%.
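
    The basic TEC step that STEC builds on, matching the ring of pixels around a corrupted block against the previous frame, looks like the following. This is a minimal SAD-based baseline with invented block and search sizes; the paper's recovering order and error-adaptive weighting (EA-BMP) are not reproduced.

        import numpy as np

        def conceal_block(prev, cur, top, left, bs=8, search=8, pad=2):
            """Recover the corrupted bs x bs block at (top, left) of
            `cur` by matching the surrounding ring of valid pixels
            against the previous frame (sum of absolute differences)."""
            y0, x0 = top - pad, left - pad
            side = bs + 2 * pad
            patch = cur[y0:y0 + side, x0:x0 + side].astype(np.int32)
            ring = np.ones((side, side), dtype=bool)
            ring[pad:pad + bs, pad:pad + bs] = False   # exclude corrupted pixels

            best, best_sad = None, np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if (yy < 0 or xx < 0 or yy + side > prev.shape[0]
                            or xx + side > prev.shape[1]):
                        continue                       # candidate leaves the frame
                    cand = prev[yy:yy + side, xx:xx + side].astype(np.int32)
                    sad = np.abs(cand[ring] - patch[ring]).sum()
                    if sad < best_sad:
                        best_sad = sad
                        best = cand[pad:pad + bs, pad:pad + bs]
            return best.astype(cur.dtype)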

  17. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment that delivers radiation for therapeutic purposes should be checked often for the possibility of administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administrators should take proper care of this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. A series of radiation overdose cases has recently been reported, and the doctors held responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures that use radiation. Taxonomy may also help. PMID:24251304

  18. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was employed simultaneously. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  19. Medical device error.

    PubMed

    Goodman, Gerald R

    2002-12-01

    This article discusses principal concepts for the analysis, classification, and reporting of problems involving medical device technology. We define a medical device in regulatory terminology and define and discuss concepts and terminology used to distinguish the causes and sources of medical device problems. Database classification systems for medical device failure tracking are presented, as are sources of information on medical device failures. The importance of near-accident reporting is discussed to alert users that reported medical device errors are typically limited to those that have caused an injury or death. This can represent only a fraction of the true number of device problems. This article concludes with a summary of the most frequently reported medical device failures by technology type, clinical application, and clinical setting. PMID:12400632

  20. Karst Water System Investigated by Absolute Gravimetry

    NASA Astrophysics Data System (ADS)

    Quinif, Y.; Meus, P.; van Camp, M.; Kaufmann, O.; van Ruymbeke, M.; Vandiepenbeeck, M.; Camelbeeck, T.

    2006-12-01

    The highly anisotropic and heterogeneous hydrogeological characteristics of karst aquifers are difficult to characterize and present challenges for modeling of storage capacities. Little is known about the surface and groundwater interconnection, about the connection between the porous formations and the draining cave and conduits, and about the variability of groundwater volume within the system. Usually, an aquifer is considered as a black box, where water fluxes are monitored as input and output. However, water inflow and outflow are highly variable and cannot be measured directly. A recent project, begun in 2006, sought to constrain the water budget in a Belgian karst aquifer and to assess the porosity and water dynamics, combining absolute gravity (AG) measurements and piezometric levels around the Rochefort cave. The advantage of gravity measurements is that they integrate all the subsystems in the karst system. This is not the case with traditional geophysical tools like boreholes or monitoring wells, which are soundings affected by their near environment and its heterogeneity. The investigated cave results from the meander cutoff system of the Lomme River. The main inputs are swallow holes of the river crossing the limestone massif. The river is canalized and the karst system is partly disconnected from the hydraulic system. In February and March 2006, when the river spilled over its dyke and sank into the most important swallow hole, this resulted in dramatic and nearly instantaneous increases in the piezometric levels in the cave, reaching up to 13 meters. Meanwhile, gravity increased by 50 and 90 nm/s^2 in February and March, respectively. A first conclusion is that during these sudden floods, the pores and fine fissures were poorly connected with the enlarged fractures, cave, and conduits. With a rise of 13 meters in the water level and a 5% porosity, a gravity change of 250 nm/s^2 should have been expected. This moderate gravity variation suggests either a
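
    The expectation quoted in the abstract can be reproduced with the infinite Bouguer slab approximation delta_g = 2*pi*G*rho_w*porosity*dh; with textbook constants, a 13 m rise at 5% porosity gives roughly 270 nm/s^2, the same order as the abstract's 250 nm/s^2 (the small difference presumably reflects a more realistic geometry). A sketch:

        import math

        G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

        def slab_gravity_nms2(porosity, dh_m, rho_w=1000.0):
            """Gravity change (nm/s^2) from filling the pore space over
            a water-table rise dh_m, infinite Bouguer slab model."""
            return 2.0 * math.pi * G * rho_w * porosity * dh_m * 1e9

        print(slab_gravity_nms2(0.05, 13.0))   # ~273 nm/s^2 if fully connected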

  1. Absolute radiometric calibration of advanced remote sensing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1982-01-01

    The distinction between the uses of relative and absolute spectroradiometric calibration of remote sensing systems is discussed. The advantages of detector-based absolute calibration are described, and the categories of relative and absolute system calibrations are listed. The limitations and problems associated with three common methods used for the absolute calibration of remote sensing systems are addressed. Two methods are proposed for the in-flight absolute calibration of advanced multispectral linear array systems. One makes use of a sun-illuminated panel in front of the sensor, the radiance of which is monitored by a spectrally flat pyroelectric radiometer. The other uses a large, uniform, high-radiance reference ground surface. The ground and atmospheric measurements required as input to a radiative transfer program to predict the radiance level at the entrance pupil of the orbital sensor are discussed, and the ground instrumentation is described.

  2. Testing the quasi-absolute method in photon activation analysis

    SciTech Connect

    Sun, Z. J.; Wells, D.; Starovoitova, V.; Segebade, C.

    2013-04-19

    In photon activation analysis (PAA), relative methods are widely used because of their accuracy and precision. Absolute methods, which are conducted without any assistance from calibration materials, are seldom applied because of the difficulty of obtaining the photon flux during measurements. This research attempts a new absolute approach in PAA - the quasi-absolute method - by retrieving the photon flux in the sample through Monte Carlo simulation. With the simulated photon flux and a database of experimental cross sections, it is possible to calculate the concentration of target elements in the sample directly. The QA/QC procedures to solidify the research are discussed in detail. Our results show that the accuracy of the method for certain elements is close to a useful level in practice. Furthermore, future results from the quasi-absolute method can also serve as a validation technique for experimental cross-section data. The quasi-absolute method looks promising.
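
    Once the photon flux is known, the element inventory follows from inverting the standard activation equation A = N * phi * sigma * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_d). The sketch below does that inversion with placeholder numbers; in the quasi-absolute method phi would come from the Monte Carlo simulation and sigma from an evaluated database, and the flux-cross-section product is really an integral over the bremsstrahlung spectrum rather than a single effective value.

        import numpy as np

        def target_atoms(A_bq, flux, sigma_cm2, half_life_s, t_irr_s, t_decay_s):
            """Invert A = N*flux*sigma*(1 - exp(-lam*t_irr))*exp(-lam*t_d)
            for the number of target atoms N (flux*sigma treated as an
            effective product here)."""
            lam = np.log(2.0) / half_life_s
            saturation = 1.0 - np.exp(-lam * t_irr_s)
            return A_bq / (flux * sigma_cm2 * saturation * np.exp(-lam * t_decay_s))

        # placeholders: 1 kBq activity, 1e12 photons/cm^2/s, 10 mb cross
        # section, 1 h half-life, 30 min irradiation, 10 min decay
        print(target_atoms(1e3, 1e12, 1e-26, 3600.0, 1800.0, 600.0))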

  3. Learning in the temporal bisection task: Relative or absolute?

    PubMed

    de Carvalho, Marilia Pinheiro; Machado, Armando; Tonneau, François

    2016-01-01

    We examined whether temporal learning in a bisection task is absolute or relational. Eight pigeons learned to choose a red key after a t-seconds sample and a green key after a 3t-seconds sample. To determine whether they had learned a relative mapping (short→Red, long→Green) or an absolute mapping (t-seconds→Red, 3t-seconds→Green), the pigeons then learned a series of new discriminations in which either the relative or the absolute mapping was maintained. Results showed that the generalization gradient obtained at the end of a discrimination predicted the pattern of choices made during the first session of a new discrimination. Moreover, most acquisition curves and generalization gradients were consistent with the predictions of the learning-to-time model, a Spencean model that instantiates absolute learning with temporal generalization. In the bisection task, the basis of temporal discrimination seems to be absolute, not relational. PMID:26752233

  4. Errors and correction of precipitation measurements in China

    NASA Astrophysics Data System (ADS)

    Ren, Zhihua; Li, Mingqin

    2007-05-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the “horizontal precipitation gauge” was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a correlation of power function exists between the precipitation amount caught by the horizontal gauge and the absolute difference of observations implemented by the operational gauge and pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
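
    The power-function relation reported here, between the horizontal-gauge catch and the pit-minus-operational difference, can be fitted by ordinary least squares in log-log space. Only the functional form y = a*x^b comes from the abstract; the coefficients below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0.5, 30.0, 200)    # horizontal-gauge catch (mm)
        y = 0.15 * x ** 1.2 * rng.lognormal(0.0, 0.05, x.size)

        # fit log(y) = b*log(x) + log(a)
        b, log_a = np.polyfit(np.log(x), np.log(y), 1)
        a = np.exp(log_a)

        def corrected_precip(operational, horizontal):
            """Add the predicted wind-induced deficit back onto the
            operational gauge reading, approximating the pit gauge."""
            return operational + a * horizontal ** b

        print(a, b)   # recovers ~0.15 and ~1.2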

  5. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    PubMed Central

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  7. Effect of Body Mass Index on Magnitude of Setup Errors in Patients Treated With Adjuvant Radiotherapy for Endometrial Cancer With Daily Image Guidance

    SciTech Connect

    Lin, Lilie L.; Hertan, Lauren; Rengan, Ramesh; Teo, Boon-Keng Kevin

    2012-06-01

    Purpose: To determine the impact of body mass index (BMI) on daily setup variations and frequency of imaging necessary for patients with endometrial cancer treated with adjuvant intensity-modulated radiotherapy (IMRT) with daily image guidance. Methods and Materials: The daily shifts from a total of 782 orthogonal kilovoltage images from 30 patients who received pelvic IMRT between July 2008 and August 2010 were analyzed. The BMI, mean daily shifts, and random and systematic errors in each translational and rotational direction were calculated for each patient. Margin recipes were generated based on BMI. Linear regression and Spearman rank correlation analysis were performed. To simulate a less-than-daily image-guided radiotherapy (IGRT) protocol, the average shift of the first five fractions was applied to subsequent setups without IGRT for assessing the impact on setup error and margin requirements. Results: Median BMI was 32.9 (range, 23-62). Of the 30 patients, 16.7% (n = 5) were normal weight (BMI <25); 23.3% (n = 7) were overweight (BMI ≥25 to <30); 26.7% (n = 8) were mildly obese (BMI ≥30 to <35); and 33.3% (n = 10) were moderately to severely obese (BMI ≥35). On linear regression, mean absolute vertical, longitudinal, and lateral shifts positively correlated with BMI (p = 0.0127, p = 0.0037, and p < 0.0001, respectively). Systematic errors in the longitudinal and vertical direction were found to be positively correlated with BMI category (p < 0.0001 for both). IGRT for the first five fractions, followed by correction of the mean error for all subsequent fractions, led to a substantial reduction in setup error and resultant margin requirement overall compared with no IGRT. Conclusions: Daily shifts, systematic errors, and margin requirements were greatest in obese patients. For women who are normal weight or overweight, a planning target margin of 7 to 10 mm may be sufficient without IGRT, but for patients who are moderately or severely obese, this is insufficient.
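
    The abstract does not state which margin recipe was used; one widely cited choice is the van Herk formula, margin = 2.5*Sigma + 0.7*sigma, where Sigma is the systematic and sigma the random setup error. A minimal sketch with hypothetical per-BMI-group error values:

    ```python
    def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
        """Van Herk PTV margin recipe: 2.5 * Sigma + 0.7 * sigma (mm)."""
        return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

    # Hypothetical systematic/random setup errors (mm) by BMI group.
    groups = {
        "normal (<25)":       (2.0, 3.0),
        "overweight (25-30)": (2.5, 3.5),
        "obese (>=30)":       (4.0, 5.0),
    }
    for name, (sys_err, rand_err) in groups.items():
        print(f"{name}: margin = {van_herk_margin(sys_err, rand_err):.1f} mm")
    ```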

  8. Absolute frequency measurement at the 10^-16 level based on the international atomic time

    NASA Astrophysics Data System (ADS)

    Hachisu, H.; Fujieda, M.; Kumagai, M.; Ido, T.

    2016-06-01

    Referring to International Atomic Time (TAI), we measured the absolute frequency of the 87Sr lattice clock with an uncertainty of 1.1 × 10^-15. Unless an optical clock is operated continuously over the five-day grid of TAI, the dead time uncertainty must be evaluated in order to use the available five-day average of the local frequency reference. We distributed intermittent measurements homogeneously over the five-day grid of TAI, by which the dead time uncertainty was reduced to the low 10^-16 level. Three campaigns of five (or four)-day consecutive measurements resulted in an absolute frequency of the 87Sr clock transition of 429 228 004 229 872.85 (47) Hz, where the systematic uncertainty of the 87Sr optical frequency standard amounts to 8.6 × 10^-17.

  9. A Liquid-Helium-Cooled Absolute Reference Cold Load for Long-Wavelength Radiometric Calibration

    SciTech Connect

    Bensadoun, M.; Witebsky, C.; Smoot, George F.; De Amici, Giovanni; Kogut, A.; Levin, S.

    1990-05-01

    We describe a large (78-cm) diameter liquid-helium-cooled black-body absolute reference cold load for the calibration of microwave radiometers. The load provides an absolute calibration near the liquid helium (LHe) boiling point, accurate to better than 30 mK for wavelengths from 2.5 to 25 cm (12-1.2 GHz). The emission (from non-LHe temperature parts of the cold load) and reflection are small and well determined. Total corrections to the LHe boiling point temperature are ≤50 mK over the operating range. This cold load has been used at several wavelengths at the South Pole and at the White Mountain Research Station. In operation, the average LHe loss rate was ≤4.4 l/hr. Design considerations, radiometric and thermal performance and operational aspects are discussed. A comparison with other LHe-cooled reference loads, including the predecessor of this cold load, is given.

  10. Absolute Absorption Cross Sections from Photon Recoil in a Matter-Wave Interferometer

    NASA Astrophysics Data System (ADS)

    Eibenberger, Sandra; Cheng, Xiaxi; Cotter, J. P.; Arndt, Markus

    2014-06-01

    We measure the absolute absorption cross section of molecules using a matter-wave interferometer. A nanostructured density distribution is imprinted onto a dilute molecular beam through quantum interference. As the beam crosses the light field of a probe laser, some molecules absorb a single photon. These absorption events impart a momentum recoil which shifts the position of the molecule relative to the unperturbed beam. Averaging over the shifted and unshifted components within the beam leads to a reduction of the fringe visibility, enabling the absolute absorption cross section to be extracted with high accuracy. The technique is independent of the molecular density; it is minimally invasive and successfully eliminates many problems related to photon cycling, state mixing, photobleaching, photoinduced heating, fragmentation, and ionization. It can therefore be extended to a wide variety of neutral molecules, clusters, and nanoparticles.

  11. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  12. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
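
    A minimal sketch of step (3), assuming a simple linear error model and synthetic length measurements (both hypothetical): the model coefficients are fitted by least squares and can then be used to compensate systematic error anywhere in the work volume.

    ```python
    import numpy as np

    # Hypothetical setup: each measurement compares a calibrated artifact
    # length to the machine's reading at a probe position. Model the error
    # at a point as a linear function of position: e = c0 + c1*x + c2*y + c3*z.
    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 500, size=(30, 3))              # mm, probe positions
    true_c = np.array([0.002, 1e-5, -2e-5, 5e-6])        # hypothetical coefficients
    A = np.hstack([np.ones((30, 1)), pts])               # design matrix
    b = A @ true_c + rng.normal(0, 5e-4, 30)             # measured errors + noise

    # Step 3: optimize the model to the particular machine (least squares).
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("fitted error-map coefficients:", coef)

    # The fitted map can now compensate systematic error at any point:
    def predicted_error(p):
        return coef[0] + coef[1:] @ p

    print("compensation at (100, 200, 50):", predicted_error(np.array([100, 200, 50])))
    ```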

  13. Preliminary estimates of radiosonde thermistor errors

    NASA Technical Reports Server (NTRS)

    Schmidlin, F. J.; Luers, J. K.; Huffman, P. D.

    1986-01-01

    Radiosonde temperature measurements are subject to errors, not the least of which is the effect of long- and short-wave radiation. Methods of adjusting the daytime temperatures to a nighttime equivalent are used by some analysis centers. Other than providing consistent observations for analysis, this procedure does not provide a true correction. The literature discusses the problem of radiosonde temperature errors, but it is not apparent what effort, if any, has been made to quantify these errors. To accomplish the latter, radiosondes containing multiple thermistors with different coatings were flown at Goddard Space Flight Center/Wallops Flight Facility. The coatings employed had different spectral characteristics and, therefore, different absorption and emissivity properties. Discrimination of the recorded temperatures enabled day and night correction values to be determined for the US standard white-coated rod thermistor. The correction magnitudes are given, and US temperatures before and after correction are compared with temperatures measured with the Vaisala radiosonde. The corrections are in the proper direction, day and night, and reduce day-night temperature differences to less than 0.5 C between the surface and 30 hPa. The present uncorrected temperatures used with the Viz radiosonde have day-night differences that exceed 1 C at levels below 90 hPa. Additional measurements are planned to confirm these preliminary results and to determine the effect of solar elevation angle on the corrections. The technique used to obtain the corrections may also be used to recover a true absolute value and might be considered a valuable contribution to the meteorological community for use as a reference instrument.

  14. Absolute Timing of the Crab Pulsar: X-ray, Radio, and Optical Observations

    NASA Astrophysics Data System (ADS)

    Ray, P. S.; Wood, K. S.; Wolff, M. T.; Lovellette, M. N.; Sheikh, S.; Moon, D.-S.; Eikenberry, S. S.; Roberts, M.; Bloom, E. D.; Tournear, D.; Saz Parkinson, P.; Reilly, K.

    2002-12-01

    We report on multiwavelength observations of the Crab Pulsar and compare the pulse arrival time at radio, IR, optical, and X-ray wavelengths. Comparing absolute arrival times at multiple energies can provide clues to the magnetospheric structure and emission region geometry. Absolute time calibration of each observing system is of paramount importance for these observations and we describe how this is done for each system. We directly compare arrival time determinations for 2-10 keV X-ray observations made contemporaneously with the PCA on the Rossi X-ray Timing Explorer and the USA Experiment on ARGOS. These two X-ray measurements employ very different means of measuring time and satellite position and thus have different systematic error budgets. The comparison with other wavelengths requires additional steps such as dispersion measure corrections and a precise definition of the "peak" of the light curve since the light curve shape varies with observing wavelength. We will describe each of these effects and quantify the magnitude of the systematic error that each may contribute. Basic research on X-ray Astronomy at NRL is funded by NRL/ONR.

  15. A Simple Approximation for the Symbol Error Rate of Triangular Quadrature Amplitude Modulation

    NASA Astrophysics Data System (ADS)

    Duy, Tran Trung; Kong, Hyung Yun

    In this paper, we consider the error performance of the regular triangular quadrature amplitude modulation (TQAM). In particular, using an accurate exponential bound of the complementary error function, we derive a simple approximation for the average symbol error rate (SER) of TQAM over Additive White Gaussian Noise (AWGN) and fading channels. The accuracy of our approach is verified by some simulation results.
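
    The abstract does not reproduce the bound it uses; one commonly used exponential approximation of the Gaussian Q-function (equivalently, of the complementary error function) is the two-term Chiani bound, Q(x) ≈ (1/12)exp(-x²/2) + (1/4)exp(-2x²/3). The sketch below compares it against the exact Q-function to show the kind of closed-form ingredient such SER approximations build on.

    ```python
    import numpy as np
    from scipy.special import erfc

    def q_exact(x):
        """Gaussian Q-function via the complementary error function."""
        return 0.5 * erfc(x / np.sqrt(2))

    def q_chiani(x):
        """Chiani et al. two-term exponential approximation of Q(x)."""
        return np.exp(-x**2 / 2) / 12 + np.exp(-2 * x**2 / 3) / 4

    snr_db = np.arange(0, 21, 5)
    x = np.sqrt(10 ** (snr_db / 10))      # illustrative argument ~ sqrt(SNR)
    for s, qe, qa in zip(snr_db, q_exact(x), q_chiani(x)):
        print(f"SNR {s:2d} dB: Q = {qe:.3e}, approx = {qa:.3e}")
    ```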

  16. Social aspects of clinical errors.

    PubMed

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording, and policy development to enhance quality of service. Anecdotally, we are aware of narratives of minor errors that may well have been covered up and remain officially undisclosed, whilst major errors resulting in damage and death to patients alarm both professionals and the public, with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the strategies healthcare professionals use for managing such errors. PMID:19201405

  17. Absolute localization of ground robots by matching LiDAR and image data in dense forested environments

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan; Renner, Matthew; Iagnemma, Karl

    2014-06-01

    A method for the autonomous geolocation of ground vehicles in forest environments is discussed. The method provides an estimate of the global horizontal position of a vehicle strictly based on finding a geometric match between a map of observed tree stems, scanned in 3D by Light Detection and Ranging (LiDAR) sensors onboard the vehicle, and another stem map generated from the structure of tree crowns analyzed from high resolution aerial orthoimagery of the forest canopy. Extraction of stems from 3D data is achieved by using Support Vector Machine (SVM) classifiers and height-above-ground filters that separate ground points from vertical stem features. Identification of stems from overhead imagery is achieved by finding the centroids of tree crowns extracted using a watershed segmentation algorithm. Matching of the two maps is achieved by using a robust Iterative Closest Point (ICP) algorithm that determines the rotation and translation vectors to align the datasets. The alignment is used to calculate the absolute horizontal location of the vehicle. The method has been tested with real-world data and has estimated vehicle geoposition with an average error of less than 2 m. It is noted that the algorithm's accuracy is currently limited by the accuracy and resolution of the aerial orthoimagery used. The method can be used in real-time as a complement to the Global Positioning System (GPS) in areas where signal coverage is inadequate due to attenuation by the forest canopy, or due to intentional denied access. The method has two key properties that are significant: (i) it does not require a priori knowledge of the area surrounding the robot, and (ii) it uses the geometry of detected tree stems as the only input to determine horizontal geoposition.
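
    A minimal sketch of the alignment step, using hypothetical stem maps: plain point-to-point ICP with a closed-form SVD rigid fit, rather than the robust variant the authors use. The recovered translation plays the role of the vehicle's horizontal offset.

    ```python
    import numpy as np

    def icp_2d(src, dst, iters=20):
        """Minimal 2-D ICP: align src (LiDAR stem map) to dst (aerial stem map)."""
        R, t = np.eye(2), np.zeros(2)
        cur = src.copy()
        for _ in range(iters):
            # 1. Correspondences: nearest dst point for each src point.
            d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            matched = dst[d2.argmin(axis=1)]
            # 2. Best rigid transform via SVD of the cross-covariance.
            mu_s, mu_d = cur.mean(0), matched.mean(0)
            U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
            R_step = Vt.T @ U.T
            if np.linalg.det(R_step) < 0:          # guard against reflection
                Vt[-1] *= -1
                R_step = Vt.T @ U.T
            t_step = mu_d - R_step @ mu_s
            cur = cur @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step
        return R, t

    # Hypothetical stem maps: aerial map = rotated/translated LiDAR map.
    rng = np.random.default_rng(2)
    stems = rng.uniform(0, 50, size=(40, 2))
    th = np.deg2rad(5)
    R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    aerial = stems @ R_true.T + np.array([3.0, -1.5])
    R_est, t_est = icp_2d(stems, aerial)
    print("estimated translation (vehicle offset):", t_est)
    ```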

  18. Below-Average, Average, and Above-Average Readers Engage Different and Similar Brain Regions while Reading

    ERIC Educational Resources Information Center

    Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri

    2006-01-01

    Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…

  19. Discrete models of fluids: spatial averaging, closure and model reduction

    SciTech Connect

    Panchenko, Alexander; Tartakovsky, Alexandre M.; Cooper, Kevin

    2014-04-15

    We consider semidiscrete ODE models of single-phase fluids and two-fluid mixtures. In the presence of multiple fine-scale heterogeneities, the size of these ODE systems can be very large. Spatial averaging is then a useful tool for reducing computational complexity of the problem. The averages satisfy exact balance equations of mass, momentum, and energy. These equations do not form a satisfactory continuum model because evaluation of stress and heat flux requires solving the underlying ODEs. To produce continuum equations that can be simulated without resolving microscale dynamics, we recently proposed a closure method based on the use of regularized deconvolution. Here we continue the investigation of deconvolution closure with the long term objective of developing consistent computational upscaling for multiphase particle methods. The structure of the fine-scale particle solvers is reminiscent of molecular dynamics. For this reason we use nonlinear averaging introduced for atomistic systems by Noll, Hardy, and Murdoch-Bedeaux. We also consider a simpler linear averaging originally developed in large eddy simulation of turbulence. We present several simple but representative examples of spatially averaged ODEs, where the closure error can be analyzed. Based on this analysis we suggest a general strategy for reducing the relative error of approximate closure. For problems with periodic highly oscillatory material parameters we propose a spectral boosting technique that augments the standard deconvolution and helps to correctly account for dispersion effects. We also conduct several numerical experiments, one of which is a complete mesoscale simulation of a stratified two-fluid flow in a channel. In this simulation, the operation count per coarse time step scales sublinearly with the number of particles.
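
    As a minimal illustration of the deconvolution-closure idea (in 1-D, with a hypothetical Gaussian averaging kernel and regularization parameter), the sketch below spatially averages a fine-scale field and then approximately undoes the averaging by Tikhonov-regularized inversion in Fourier space:

    ```python
    import numpy as np

    n = 256
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    fine = np.sin(x) + 0.3 * np.sin(12 * x)          # fine-scale field

    # Spatial averaging: circular convolution with a Gaussian kernel.
    width = 8
    k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / width**2)
    k /= k.sum()
    K = np.fft.fft(np.fft.ifftshift(k))
    avg = np.real(np.fft.ifft(np.fft.fft(fine) * K))

    # Regularized deconvolution (Tikhonov): invert the filter while damping
    # the small-|K| modes that would otherwise amplify noise/closure error.
    lam = 1e-3
    recov = np.real(np.fft.ifft(np.fft.fft(avg) * np.conj(K) / (np.abs(K) ** 2 + lam)))

    print("rms error, averaged field  :", np.sqrt(np.mean((avg - fine) ** 2)))
    print("rms error, deconvolved field:", np.sqrt(np.mean((recov - fine) ** 2)))
    ```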

  20. Errors in potassium balance

    SciTech Connect

    Forbes, G.B.; Lantigua, R.; Amatruda, J.M.; Lockwood, D.H.

    1981-01-01

    Six overweight adult subjects given a low-calorie diet containing adequate amounts of nitrogen but subnormal amounts of potassium (K) were observed in the Clinical Research Center for periods of 29 to 40 days. Metabolic balance of potassium was measured together with frequent assays of total body K by 40K counting. Metabolic K balance underestimated body K losses by 11 to 87% (average 43%); the intersubject variability is such as to preclude the use of a single correction value for unmeasured losses in K balance studies.

  1. Dynamic and static error analyses of neutron radiography testing

    SciTech Connect

    Joo, H.; Glickstein, S.S.

    1999-03-01

    Neutron radiography systems are being used for real-time visualization of dynamic behavior as well as for time-averaged measurements of spatial vapor fraction distributions in two-phase fluids. The data, in the form of video images, are typically recorded on videotape at 30 frames per second. Image analysis of the video images is used to extract time-dependent or time-averaged data. The determination of the average vapor fraction requires averaging the logarithm of the time-dependent intensity measurements of the neutron beam (the gray-scale distribution of the image) that passes through the fluid. This can be significantly different from averaging the intensity of the transmitted beam and then taking the logarithm of that average. This difference is termed the dynamic error (the error in the time-averaged vapor fractions due to the inherent time dependence of the measured data) and is separate from the static error (statistical sampling uncertainty). Detailed analyses of both sources of error are discussed.
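
    The dynamic error is a consequence of Jensen's inequality: the time average of the logarithm of the intensity differs from the logarithm of the time-averaged intensity. A minimal numeric sketch with hypothetical frame intensities:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical transmitted beam intensity (normalized to the incident
    # beam), fluctuating as the vapor fraction changes, 30 frames/s.
    I = np.clip(rng.normal(0.6, 0.15, 900), 0.2, 1.0)

    # Attenuation is exponential, so path-integrated content ~ -ln(I).
    correct   = np.mean(-np.log(I))     # average the log of each frame
    incorrect = -np.log(np.mean(I))     # log of the time-averaged intensity

    print(f"mean of log : {correct:.4f}")
    print(f"log of mean : {incorrect:.4f}")
    print(f"dynamic error: {correct - incorrect:.4f}")  # nonzero (Jensen)
    ```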

  2. Fabrication of capacitive absolute pressure sensors by thin film vacuum encapsulation on SOI substrates

    NASA Astrophysics Data System (ADS)

    Belsito, Luca; Mancarella, Fulvio; Roncaglia, Alberto

    2016-09-01

    The paper reports on the fabrication and characterization of absolute capacitive pressure sensors fabricated by polysilicon low-pressure chemical vapour deposition vacuum packaging on silicon-on-insulator substrates. The fabrication process proposed is carried out at wafer level and yields a large number of miniaturized sensors per substrate on 1 × 2 mm² chips with high yield. The sensors present an average pressure sensitivity of 8.3 pF/bar and an average pressure resolution limit of 0.24 mbar within the measurement range 200–1200 mbar. The temperature drift of the sensor prototypes was also measured in the temperature range 25–45 °C, yielding an average temperature sensitivity of 67 fF/K at ambient pressure.

  3. Development of a graphite probe calorimeter for absolute clinical dosimetry

    SciTech Connect

    Renaud, James; Seuntjens, Jan; Sarfehnia, Arman; Marchington, David

    2013-02-15

    The aim of this work is to present the numerical design optimization, construction, and experimental proof of concept of a graphite probe calorimeter (GPC) conceived for dose measurement in the clinical environment (U.S. provisional patent 61/652,540). A finite element method (FEM) based numerical heat transfer study was conducted using a commercial software package to explore the feasibility of the GPC and to optimize the shape, dimensions, and materials used in its design. A functioning prototype was constructed in-house and used to perform dose to water measurements under a 6 MV photon beam at 400 and 1000 MU/min, in a thermally insulated water phantom. Heat loss correction factors were determined using FEM analysis while the radiation field perturbation and the graphite to water absorbed dose conversion factors were calculated using Monte Carlo simulations. The difference in the average measured dose to water for the 400 and 1000 MU/min runs using the TG-51 protocol and the GPC was 0.2% and 1.2%, respectively. Heat loss correction factors ranged from 1.001 to 1.002, while the product of the perturbation and dose conversion factors was calculated to be 1.130. The combined relative uncertainty was estimated to be 1.4%, with the largest contributors being the specific heat capacity of the graphite (type B, 0.8%) and the reproducibility, defined as the standard deviation of the mean measured dose (type A, 0.6%). By establishing the feasibility of using the GPC as a practical clinical absolute photon dosimeter, this work lays the foundation for further device enhancements, including the development of an isothermal mode of operation and an overall miniaturization, making it potentially suitable for use in small and composite radiation fields. It is anticipated that, through the incorporation of isothermal stabilization provided by temperature controllers, a subpercent overall uncertainty will be achieved.

  4. Uranium isotopic composition and absolute ages of Allende chondrules

    NASA Astrophysics Data System (ADS)

    Brennecka, G. A.; Budde, G.; Kleine, T.

    2015-11-01

    A handful of events, such as the condensation of refractory inclusions and the formation of chondrules, represent important stages in the formation and evolution of the early solar system and thus are critical to understanding its development. Compared to the refractory inclusions, chondrules appear to have a protracted period of formation that spans millions of years. As such, understanding chondrule formation requires a catalog of reliable ages, free from as many assumptions as possible. The Pb-Pb chronometer has this potential; however, because common individual chondrules have extremely low uranium contents, obtaining U-corrected Pb-Pb ages of individual chondrules is unrealistic in the vast majority of cases at this time. Thus, in order to obtain the most accurate 238U/235U ratio possible for chondrules, we separated and pooled thousands of individual chondrules from the Allende meteorite. In this work, we demonstrate that no discernible differences exist in the 238U/235U compositions between chondrule groups when separated by size and magnetic susceptibility, suggesting that no systematic U-isotope variation exists between groups of chondrules. Consequently, chondrules are likely to have a common 238U/235U ratio for any given meteorite. A weighted average of the six groups of chondrule separates from Allende results in a 238U/235U ratio of 137.786 ± 0.004 (±0.016 including propagated uncertainty on the U standard [Richter et al. 2010]). Although it is still possible that individual chondrules have significant U isotope variation within a given meteorite, this value represents our best estimate of the 238U/235U ratio for Allende chondrules and should be used for absolute dating of these objects, unless such chondrules can be measured individually.

  5. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  6. 40 CFR 76.11 - Emissions averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...

  7. RHIC BPM system average orbit calculations

    SciTech Connect

    Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.

    2009-05-04

    RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
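
    A minimal sketch of why averaging over whole perturbation periods helps, using a single hypothetical 10 Hz line and an assumed revolution frequency near 78 kHz: an average over an arbitrary number of turns leaves a sizeable residual, while an average over an integer number of 10 Hz periods cancels the oscillation in this noiseless toy model.

    ```python
    import numpy as np

    f_rev = 78e3            # Hz, approximate RHIC revolution frequency (assumption)
    f_pert = 10.0           # Hz, orbit perturbation line
    turns = np.arange(200_000)
    orbit = 1.0 * np.sin(2 * np.pi * f_pert * turns / f_rev)   # mm, 10 Hz oscillation

    turns_per_period = int(f_rev / f_pert)
    for n_avg in (10_000, turns_per_period, 20 * turns_per_period):
        residual = abs(orbit[:n_avg].mean())
        print(f"{n_avg:7d}-turn average residual: {residual:.2e} mm")
    ```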

  8. The 13 errors.

    PubMed

    Flower, J

    1998-01-01

    The reality is that most change efforts fail. McKinsey & Company carried out a fascinating research project on change to "crack the code" on creating and managing change in large organizations. One of the questions they asked--and answered--is why most organizations fail in their efforts to manage change. They found that 80 percent of these failures could be traced to 13 common errors. They are: (1) No winning strategy; (2) failure to make a compelling and urgent case for change; (3) failure to distinguish between decision-driven and behavior-dependent change; (4) over-reliance on structure and systems to change behavior; (5) lack of skills and resources; (6) failure to experiment; (7) leaders' inability or unwillingness to confront how they and their roles must change; (8) failure to mobilize and engage pivotal groups; (9) failure to understand and shape the informal organization; (10) inability to integrate and align all the initiatives; (11) no performance focus; (12) excessively open-ended process; and (13) failure to make the whole process transparent and meaningful to individuals. PMID:10351717

  9. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
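
    A minimal sketch of the preprocessing the abstract describes, binarizing gridded wind directions into onshore/offshore; the onshore sector here is a hypothetical choice (coast assumed to run north-south with the sea to the east), and the comparison score is a simple gridpoint agreement rather than the full CEM boundary matching.

    ```python
    import numpy as np

    def binarize_onshore(wind_dir_deg):
        """Map wind direction (deg, meteorological 'from') to 1 = onshore, 0 = offshore.

        Assumes a north-south coastline with the sea to the east, so winds
        blowing from the east (45..135 deg) are onshore.
        """
        return ((wind_dir_deg >= 45) & (wind_dir_deg <= 135)).astype(int)

    # Hypothetical 4 x 4 grid of observed wind directions at one time step.
    rng = np.random.default_rng(4)
    obs_dir = rng.uniform(0, 360, size=(4, 4))
    d = binarize_onshore(obs_dir)           # d(i, j; n): gridded observations
    D = np.ones_like(d)                     # D(i, j; n): forecast (all onshore)

    agreement = (D == d).mean()             # simple gridpoint agreement score
    print(d)
    print(f"forecast/observation agreement: {agreement:.2f}")
    ```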

  10. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
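
    A minimal sketch contrasting the two estimates, assuming a known exponential signal covariance and a hypothetical noise level: the composite average weights all observations equally, while the optimal (minimum-mean-squared-error) estimate solves a small linear system for the weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    tau = 5.0                                  # signal correlation time (days), assumed

    def cov(a, b):
        return np.exp(-np.abs(a[:, None] - b[None, :]) / tau)

    # Irregular observation times within a 30-day averaging period.
    t_obs = np.sort(rng.uniform(0, 30, 12))
    noise_var = 0.25
    t_fine = np.linspace(0, 30, 301)           # represents the true time average

    # Simulate one realization of the signal on obs + fine grids.
    t_all = np.concatenate([t_obs, t_fine])
    C_all = cov(t_all, t_all) + 1e-9 * np.eye(t_all.size)
    z = rng.multivariate_normal(np.zeros(t_all.size), C_all)
    y = z[:12] + rng.normal(0, np.sqrt(noise_var), 12)   # noisy observations
    true_avg = z[12:].mean()

    # Composite average: simple mean of the observations.
    composite = y.mean()

    # Optimal estimate: w = C_yy^{-1} c_ya, with c_ya = Cov(obs, time average).
    C_yy = cov(t_obs, t_obs) + noise_var * np.eye(12)
    c_ya = cov(t_obs, t_fine).mean(axis=1)
    w = np.linalg.solve(C_yy, c_ya)
    optimal = w @ y

    print(f"true average {true_avg:.3f}, composite {composite:.3f}, optimal {optimal:.3f}")
    ```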

  11. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at levels of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of field-dependent error at the single metrology gauge level is developed and linearly propagated to errors in interferometer delay. In this manner delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices

  12. Sampling errors in satellite estimates of tropical rain

    NASA Technical Reports Server (NTRS)

    Mcconnell, Alan; North, Gerald R.

    1987-01-01

    The GATE rainfall data set is used in a statistical study to estimate the sampling errors that might be expected for the type of snapshot sampling that a low earth-orbiting satellite makes. For averages over the entire 400-km square and for the duration of several weeks, strong evidence is found that sampling errors less than 10 percent can be expected in contributions from each of four rain rate categories which individually account for about one quarter of the total rain.

  13. Aerial measurement error with a dot planimeter: Some experimental estimates

    NASA Technical Reports Server (NTRS)

    Yuill, R. S.

    1971-01-01

    A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured provides the entire correlation with accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
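
    A minimal simulation in the spirit of the study, using a hypothetical circular test shape: the area is estimated from the fraction of grid dots falling inside, and the average absolute error shrinks as the number of dots grows.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    true_area = np.pi * 0.25 ** 2              # circle of radius 0.25 in a unit square

    def dot_planimeter(n_dots, trials=200):
        """Average absolute error of a randomly offset square dot grid."""
        m = int(round(np.sqrt(n_dots)))
        errs = []
        for _ in range(trials):
            off = rng.uniform(0, 1 / m, 2)     # random grid placement
            gx, gy = np.meshgrid(off[0] + np.arange(m) / m,
                                 off[1] + np.arange(m) / m)
            inside = (gx - 0.5) ** 2 + (gy - 0.5) ** 2 <= 0.25 ** 2
            errs.append(abs(inside.mean() - true_area))   # fraction = area estimate
        return np.mean(errs)

    for n in (25, 100, 400, 1600):
        print(f"{n:5d} dots: average absolute error = {dot_planimeter(n):.4f}")
    ```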

  14. Error analysis in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework, using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors, committed by the performer 'at the sharp end', occur in typical situations which are often brought about by built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  15. Weighted Average Consensus-Based Unscented Kalman Filtering.

    PubMed

    Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong

    2016-02-01

    In this paper, we investigate the consensus-based distributed state estimation problem for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. A weighted average consensus-based UKF algorithm is developed to estimate the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example. PMID:26168453
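
    A minimal sketch of the consensus building block on a connected undirected graph: each sensor repeatedly moves its value toward a weighted average of its neighbors' values. Metropolis weights are assumed here as a common choice; the abstract does not specify the weighting.

    ```python
    import numpy as np

    # Connected undirected graph on 4 sensor nodes (adjacency list).
    neighbors = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
    deg = {i: len(v) for i, v in neighbors.items()}

    def consensus_step(x):
        """One synchronous step with Metropolis weights w_ij = 1/(1 + max(d_i, d_j))."""
        x_new = x.copy()
        for i, nbrs in neighbors.items():
            for j in nbrs:
                w = 1.0 / (1 + max(deg[i], deg[j]))
                x_new[i] += w * (x[j] - x[i])
        return x_new

    x = np.array([1.0, 4.0, 2.0, 7.0])      # local state estimates (hypothetical)
    for _ in range(50):
        x = consensus_step(x)
    print("consensus values:", np.round(x, 4), "| initial mean:", np.mean([1, 4, 2, 7]))
    ```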

  16. Yearly average performance of the principal solar collector types

    NASA Astrophysics Data System (ADS)

    Rabl, A.

    1981-01-01

    The results of hour-by-hour simulations for 26 meteorological stations were used to derive universal correlations for the yearly total energy that can be delivered by the principal solar collector types: flat plate, evacuated tubes, CPC, single- and dual-axis tracking collectors, and central receiver. The correlations are first- and second-order polynomials in yearly average insolation, latitude, and threshold (= heat loss/optical efficiency). With these correlations, the yearly collectible energy can be found by multiplying the coordinates of a single graph by the collector parameters; this reproduces the results of hour-by-hour simulations with an accuracy (rms error) of 2% for flat plates and 2% to 4% for concentrators.

  17. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar database. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long-term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  18. Spectral averaging techniques for Jacobi matrices

    SciTech Connect

    Rio, Rafael del; Martinez, Carmen; Schulz-Baldes, Hermann

    2008-02-15

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  19. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  20. Estimating storm areal average rainfall intensity in field experiments

    NASA Astrophysics Data System (ADS)

    Peters-Lidard, Christa D.; Wood, Eric F.

    1994-07-01

    Estimates of areal mean precipitation intensity derived from rain gages are commonly used to assess the performance of rainfall radars and satellite rainfall retrieval algorithms. Areal mean precipitation time series collected during short-duration climate field studies are also used as inputs to water and energy balance models which simulate land-atmosphere interactions during the experiments. In two recent field experiments (1987 First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) and the Multisensor Airborne Campaign for Hydrology 1990 (MAC-HYDRO '90)) designed to investigate the climatic signatures of land-surface forcings and to test airborne sensors, rain gages were placed over the watersheds of interest. These gages provide the sole means for estimating storm precipitation over these areas, and the gage densities present during these experiments indicate that there is a large uncertainty in estimating areal mean precipitation intensity for single storm events. Using a theoretical model of time- and area-averaged space- time rainfall and a model rainfall generator, the error structure of areal mean precipitation intensity is studied for storms statistically similar to those observed in the FIFE and MAC-HYDRO field experiments. Comparisons of the error versus gage density trade-off curves to those calculated using the storm observations show that the rainfall simulator can provide good estimates of the expected measurement error given only the expected intensity, coefficient of variation, and rain cell diameter or correlation length scale, and that these errors can quickly become very large (in excess of 20%) for certain storms measured with a network whose size is below a "critical" gage density. Because the mean storm rainfall error is particularly sensitive to the correlation length, it is important that future field experiments include radar and/or dense rain gage networks capable of accurately characterizing the

  1. Mini-implants and miniplates generate sub-absolute and absolute anchorage

    PubMed Central

    Consolaro, Alberto

    2014-01-01

    The functional demand imposed on bone promotes changes in the spatial properties of osteocytes as well as in their extensions uniformly distributed throughout the mineralized surface. Once spatial deformation is established, osteocytes create the need for structural adaptations that result in bone formation and resorption to meet the functional demands. The endosteum and the periosteum are the effectors responsible for stimulating adaptive osteocytes on the inner and outer surfaces. Changes in shape, volume and position of the jaws as a result of skeletal correction of the maxilla and mandible require anchorage to allow bone remodeling to redefine morphology, esthetics and function as a result of spatial deformation conducted by orthodontic appliances. Examining the degree of change in shape, volume and structural relationship of the areas where mini-implants and miniplates are placed allows us to classify mini-implants as devices of sub-absolute anchorage and miniplates as devices of absolute anchorage. PMID:25162561

  2. Absolute brightness temperature measurements at 2.1-mm wavelength

    NASA Technical Reports Server (NTRS)

    Ulich, B. L.

    1974-01-01

    Absolute measurements of the brightness temperatures of the Sun, new Moon, Venus, Mars, Jupiter, Saturn, and Uranus, and of the flux density of DR21 at 2.1-mm wavelength are reported. Relative measurements at 3.5-mm wavelength are also presented which resolve the absolute calibration discrepancy between The University of Texas 16-ft radio telescope and the Aerospace Corporation 15-ft antenna. The use of the bright planets and DR21 as absolute calibration sources at millimeter wavelengths is discussed in the light of recent observations.

  3. Absolute Antenna Calibration at the US National Geodetic Survey

    NASA Astrophysics Data System (ADS)

    Mader, G. L.; Bilich, A. L.

    2012-12-01

    Geodetic GNSS applications routinely demand millimeter precision and extremely high levels of accuracy. To achieve these accuracies, measurement and instrument biases at the centimeter to millimeter level must be understood. One of these biases is the antenna phase center, the apparent point of signal reception for a GNSS antenna. It has been well established that phase center patterns differ between antenna models and manufacturers; additional research suggests that the addition of a radome or the choice of antenna mount can significantly alter those a priori phase center patterns. For the more demanding GNSS positioning applications and especially in cases of mixed-antenna networks, it is all the more important to know antenna phase center variations as a function of both elevation and azimuth in the antenna reference frame and incorporate these models into analysis software. Determination of antenna phase center behavior is known as "antenna calibration". Since 1994, NGS has computed relative antenna calibrations for more than 350 antennas. In recent years, the geodetic community has moved to absolute calibrations - the IGS adopted absolute antenna phase center calibrations in 2006 for use in their orbit and clock products, and NGS's CORS group began using absolute antenna calibration upon the release of the new CORS coordinates in IGS08 epoch 2005.00 and NAD 83(2011,MA11,PA11) epoch 2010.00. Although NGS relative calibrations can be and have been converted to absolute, it is considered best practice to independently measure phase center characteristics in an absolute sense. Consequently, NGS has developed and operates an absolute calibration system. These absolute antenna calibrations accommodate the demand for greater accuracy and for 2-dimensional (elevation and azimuth) parameterization. NGS will continue to provide calibration values via the NGS web site www.ngs.noaa.gov/ANTCAL, and will publish calibrations in the ANTEX format as well as the legacy ANTINFO

  4. Mapping DNA polymerase errors by single-molecule sequencing.

    PubMed

    Lee, David F; Lu, Jenny; Chang, Seungwoo; Loparo, Joseph J; Xie, Xiaoliang S

    2016-07-27

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases. PMID:27185891
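
    A minimal sketch of the barcoding logic with hypothetical reads: reads sharing a barcode are replicates of one replication product, so a per-position majority vote suppresses sequencing errors (present in a minority of reads) while retaining polymerase errors (present in all reads of a group).

    ```python
    from collections import Counter, defaultdict

    # Hypothetical (barcode, read) pairs: three reads per replication product.
    reads = [
        ("AAGT", "ACGTACGT"),   # sequencing error at position 5 in one read only
        ("AAGT", "ACGTAGGT"),
        ("AAGT", "ACGTACGT"),
        ("CCTA", "ACGAACGT"),   # polymerase error at position 3: in every read
        ("CCTA", "ACGAACGT"),
        ("CCTA", "ACGAACGT"),
    ]
    reference = "ACGTACGT"

    groups = defaultdict(list)
    for bc, read in reads:
        groups[bc].append(read)

    for bc, grp in groups.items():
        # Per-position majority vote across reads sharing the same barcode.
        consensus = "".join(Counter(col).most_common(1)[0][0] for col in zip(*grp))
        errors = [(i, r, c) for i, (r, c) in enumerate(zip(reference, consensus)) if r != c]
        print(f"barcode {bc}: consensus {consensus}, polymerase errors: {errors}")
    ```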

  5. Random errors in egocentric networks.

    PubMed

    Almquist, Zack W

    2012-10-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on facebook-friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
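
    A minimal simulation in the spirit of the paper, with hypothetical network size and error rates: false-positive and false-negative ties are injected into a ground-truth network, and a basic measure (mean degree) is compared before and after.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n, p = 50, 0.1
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1)
    A = A + A.T                                        # undirected ground truth

    def perturb(A, fp=0.05, fn=0.10):
        """Flip ties: add false-positive edges, drop true edges as false negatives."""
        U = np.triu(np.ones_like(A), 1).astype(bool)   # upper triangle only
        B = A.copy()
        add = (rng.random(A.shape) < fp) & (A == 0) & U
        drop = (rng.random(A.shape) < fn) & (A == 1) & U
        B[add] = 1
        B[drop] = 0
        return np.triu(B, 1) + np.triu(B, 1).T         # re-symmetrize

    B = perturb(A)
    print("true mean degree    :", A.sum(1).mean())
    print("observed mean degree:", B.sum(1).mean())
    ```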

  7. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377
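
    A minimal Rescorla-Wagner-style sketch of the concept: the prediction error delta = reward - prediction drives learning, is large and positive when reward is unexpected, decays toward zero as the reward becomes fully predicted, and turns negative when an expected reward is omitted.

    ```python
    # Prediction-error learning: delta = reward - prediction drives updates.
    alpha = 0.2           # learning rate
    V = 0.0               # current reward prediction

    for trial in range(1, 16):
        reward = 1.0 if trial <= 10 else 0.0   # reward omitted on trials 11-15
        delta = reward - V                     # dopamine-like prediction error
        V += alpha * delta
        print(f"trial {trial:2d}: prediction {V:.3f}, prediction error {delta:+.3f}")
    ```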

  8. Raingage network characteristics and runoff model error

    SciTech Connect

    Fontaine, T.A.

    1990-01-01

    Most runoff models require calibration using data from observed rainfall-runoff events. In situations where the existing raingage network is the sole source of rainfall data, the accuracy of the calibration can be very dependent on the characteristics of the gage network. The role of gage arrangement and density on the accuracy of basin average precipitation was evaluated using historic extreme rainfall data collected over very dense networks. It was found that data from networks with typical gage arrangements and densities could be a major source of error in situations commonly encountered in urban hydrology.

  9. Teratogenic inborn errors of metabolism.

    PubMed Central

    Leonard, J. V.

    1986-01-01

    Most children with inborn errors of metabolism are born healthy without malformations as the fetus is protected by the metabolic activity of the placenta. However, certain inborn errors of the fetus have teratogenic effects although the mechanisms responsible for the malformations are not generally understood. Inborn errors in the mother may also be teratogenic. The adverse effects of these may be reduced by improved metabolic control of the biochemical disorder. PMID:3540927

  10. An Absolute Proper motions and position catalog in the galaxy halos

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoxiang

    2015-08-01

    We present a new catalog of absolute proper motions and updated positions derived from the same Space Telescope Science Institute digitized Schmidt survey plates utilized for the construction of the Guide Star Catalog II. As special attention was devoted to the absolutization process and removal of position, magnitude and color dependent systematic errors through the use of both stars and galaxies, this release is solely based on plate data outside the galactic plane, i.e. |b| ≥ 27°. The resulting global zero point error is less than 0.6 mas/yr, and the precision better than 4.0 mas/yr for objects brighter than RF = 18.5, rising to 9.0 mas/yr for objects with magnitude in the range 18.5 < RF < 20.0. The catalog covers 22,525 square degrees and lists 100,777,385 objects to the limiting magnitude of RF ~ 20.8. Alignment with the International Celestial Reference System (ICRS) was made using 1288 objects common to the second realization of the International Celestial Reference Frame (ICRF2) at radio wavelengths. As a result, the coordinate axes realized by our astrometric data are believed to be aligned with the extragalactic radio frame to within ±0.2 mas at the reference epoch J2000.0. This makes our compilation one of the deepest and densest ICRF-registered astrometric catalogs outside the galactic plane. Although the Gaia mission is poised to set the new standard in catalog astronomy and will in many ways supersede this catalog, the methods and procedures reported here will prove useful to remove astrometric magnitude- and color-dependent systematic errors from the next generation of ground-based surveys reaching significantly deeper than the Gaia catalog.

  11. Averaging procedures for flow within vegetation canopies

    NASA Astrophysics Data System (ADS)

    Raupach, M. R.; Shaw, R. H.

    1982-01-01

    Most one-dimensional models of flow within vegetation canopies are based on horizontally averaged flow variables. This paper formalizes the horizontal averaging operation. Two averaging schemes are considered: pure horizontal averaging at a single instant, and time averaging followed by horizontal averaging. These schemes produce different forms for the mean and turbulent kinetic energy balances, and especially for the ‘wake production’ term describing the transfer of energy from large-scale motion to wake turbulence by form drag. The differences are primarily due to the appearance, in the covariances produced by the second scheme, of dispersive components arising from the spatial correlation of time-averaged flow variables. The two schemes are shown to coincide if these dispersive fluxes vanish.

  12. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  13. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    A method of computing the position of a user station receiving signals from Global Positioning System (GPS) navigational satellites compensates for most of the GPS ephemeris error, enabling the user station to reduce the error in its computed position substantially. The user station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in the neighborhood of the reference stations. The method is based on the fact that when GPS data are used to compute the baseline between a reference station and the user station, the vector error in the computed baseline is proportional to the ephemeris error and to the length of the baseline.

  14. Retransmission error control with memory

    NASA Technical Reports Server (NTRS)

    Sindhu, P. S.

    1977-01-01

    In this paper, an error control technique that is a basic improvement over automatic-repeat-request (ARQ) is presented. Erroneously received blocks in an ARQ system are used for error control. The technique is termed ARQ-with-memory (MRQ). The general MRQ system is described, and simple upper and lower bounds are derived on the throughput achievable by MRQ. The performance of MRQ with respect to throughput, message delay, and probability of error is compared to that of ARQ by simulating both systems using error data from a VHF satellite channel operated in the ALOHA packet-broadcasting mode.

  15. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  16. Physical examination. Frequently observed errors.

    PubMed

    Wiener, S; Nathanson, M

    1976-08-16

    A method allowing for direct observation of intern and resident physicians while interviewing and examining patients has been in use on our medical wards for the last five years. A large number of errors in the performance of the medical examination by young physicians were noted and a classification of these errors into those of technique, omission, detection, interpretation, and recording was made. An approach to detection and correction of each of these kinds of errors is presented, as well as a discussion of possible reasons for the occurrence of these errors in physician performance. PMID:947266

  17. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  18. Absolute calibration of sniffer probes on Wendelstein 7-X.

    PubMed

    Moseev, D; Laqua, H P; Marsen, S; Stange, T; Braune, H; Erckmann, V; Gellert, F; Oosterbeek, J W

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured. PMID:27587121

  19. Absolute Value Boundedness, Operator Decomposition, and Stochastic Media and Equations

    NASA Technical Reports Server (NTRS)

    Adomian, G.; Miao, C. C.

    1973-01-01

    The research accomplished during this period is reported. Published abstracts and technical reports are listed. Articles presented include: boundedness of absolute values of generalized Fourier coefficients, propagation in stochastic media, and stationary conditions for stochastic differential equations.

  20. The conditions of absolute summability of multiple trigonometric series

    NASA Astrophysics Data System (ADS)

    Bitimkhan, Samat; Akishev, Gabdolla

    2015-09-01

    In this work, necessary and sufficient conditions for the absolute summability of multiple trigonometric Fourier series of functions from anisotropic Lebesgue spaces are found in terms of their best approximations, the modulus of smoothness, and the mixed modulus of smoothness.

  1. Absolute calibration of sniffer probes on Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  2. Absolute beam flux measurement at NDCX-I using gold-melting calorimetry technique

    SciTech Connect

    Ni, P.A.; Bieniosek, F.M.; Lidia, S.M.; Welch, J.

    2011-04-01

    We report on an alternative way to measure the absolute beam flux at NDCX-I, the LBNL linear accelerator. To date, the beam flux has been determined from analysis of the beam-induced optical emission from a ceramic scintillator (Al-Si). The new approach is based on a calorimetric technique in which the energy flux is deduced from the melting dynamics of a gold foil. We estimate an average beam flux of 260 kW/cm2 over 5 μs, which is consistent with the values provided by the other methods. The described technique can be applied to various ion species and energies.
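
    A back-of-envelope check of the calorimetric idea using handbook constants for gold; the foil thickness is an assumption, chosen only to show that heating plus fusion of a micron-scale foil over 5 μs lands in the reported few-hundred kW/cm2 range.

        # Energy per unit area to heat and melt a thin gold foil, divided by
        # the pulse length, gives the required flux.  Material constants are
        # standard handbook values; thickness is an illustrative assumption.
        rho = 19300.0            # density of gold, kg/m^3
        c_p = 129.0              # specific heat, J/(kg K)
        L_f = 63.7e3             # latent heat of fusion, J/kg
        dT = 1337.0 - 300.0      # room temperature to melting point, K
        d = 3.0e-6               # assumed foil thickness, m
        tau = 5.0e-6             # pulse duration from the abstract, s

        energy_per_area = rho * d * (c_p * dT + L_f)    # J/m^2
        flux = energy_per_area / tau                    # W/m^2
        print(f"required flux ~ {flux * 1e-4 / 1e3:.0f} kW/cm^2")  # ~229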

  3. Study of the absolute bioavailability of citrocard, a new GABA derivative.

    PubMed

    Smirnova, L A; Perfilova, V N; Tyurenkov, I N; Ryabukha, A F; Suchkov, E A; Lebedeva, S A

    2013-08-01

    The main pharmacokinetic parameters attest to a short elimination half-life and mean retention time of a single citrocard molecule. The average rate of decrease of the compound's plasma concentration accounted for the small area under the pharmacokinetic curve. The steady-state distribution volume was low and only slightly surpassed the volume of extracellular body fluids in the rat, indicating a moderate capacity of citrocard for distribution and accumulation in the tissues; this low distribution volume also explains the low systemic clearance (Cl) despite the quick elimination of the compound. Absolute bioavailability was 64%. PMID:24143367
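
    For reference, absolute bioavailability is the dose-normalized ratio of the extravascular to the intravenous area under the concentration-time curve; the sketch below applies the standard formula with placeholder numbers, not data from the study.

        # Standard definition: F = (AUC_ev / Dose_ev) / (AUC_iv / Dose_iv).
        def absolute_bioavailability(auc_ev, dose_ev, auc_iv, dose_iv):
            return (auc_ev / dose_ev) / (auc_iv / dose_iv)

        # Illustrative values only; equal doses make F the plain AUC ratio.
        F = absolute_bioavailability(auc_ev=32.0, dose_ev=50.0,
                                     auc_iv=50.0, dose_iv=50.0)
        print(f"F = {F:.0%}")   # -> 64% with these made-up numbers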

  4. A posteriori error estimator and error control for contact problems

    NASA Astrophysics Data System (ADS)

    Weiss, Alexander; Wohlmuth, Barbara I.

    2009-09-01

    In this paper, we consider two error estimators for one-body contact problems. The first error estimator is defined in terms of H(div)-conforming stress approximations and equilibrated fluxes, while the second is a standard edge-based residual error estimator without any modification with respect to the contact. We show reliability and efficiency for both estimators. Moreover, the error is bounded by the first estimator with constant one, plus a higher-order data oscillation term, plus a term arising from the contact that is shown numerically to be of higher order. The second estimator is used in a control-based AFEM refinement strategy, and the decay of the error in the energy is shown. Several numerical tests demonstrate the performance of both estimators.
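
    To make the residual-estimator mechanics concrete, here is a minimal sketch on a 1D Poisson model problem rather than the contact problems analyzed in the paper; only the interior-residual part of the classical indicator is kept (derivative-jump terms are omitted for brevity), and the effectivity index estimate/error comes out as a mesh-independent constant, the kind of reliability-and-efficiency behavior the paper proves for its estimators.

        import numpy as np

        # P1 finite elements for -u'' = f on (0,1), u(0) = u(1) = 0, with
        # f = 1 and a uniform mesh of n elements (n is an assumption).
        n = 32
        h = 1.0 / n
        nodes = np.linspace(0.0, 1.0, n + 1)

        # Tridiagonal stiffness matrix and load vector, then solve.
        A = (np.diag(np.full(n - 1, 2.0))
             - np.diag(np.ones(n - 2), 1) - np.diag(np.ones(n - 2), -1)) / h
        b = np.full(n - 1, h)
        u = np.zeros(n + 1)
        u[1:-1] = np.linalg.solve(A, b)
        assert np.allclose(u, nodes * (1.0 - nodes) / 2.0)  # nodally exact in 1D

        # Interior residual indicators eta_K^2 = h_K^2 * ||f + u_h''||^2 on K;
        # u_h'' = 0 on each element, so eta_K^2 = h^3 for f = 1.
        eta = np.full(n, h ** 1.5)
        estimate = np.sqrt(np.sum(eta ** 2))

        true_err = h / np.sqrt(12.0)   # exact energy-norm error here
        print(f"estimate {estimate:.3e}, error {true_err:.3e}, "
              f"effectivity {estimate / true_err:.2f}")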

  5. Absolute and Convective Instability of a Liquid Jet in Microgravity

    NASA Technical Reports Server (NTRS)

    Lin, Sung P.; Vihinen, I.; Honohan, A.; Hudman, Michael D.

    1996-01-01

    The transition from convective to absolute instability is observed in the 2.2-second drop tower of the NASA Lewis Research Center. In convective instability the disturbance grows spatially as it is convected downstream. In absolute instability the disturbance propagates both downstream and upstream, and manifests itself as an expanding sphere. The transition Reynolds numbers are determined for two different Weber numbers by use of glycerin and a silicone oil. Preliminary comparisons with theory are made.

  6. Absolute biphoton meter of the quantum efficiency of photomultipliers

    NASA Astrophysics Data System (ADS)

    Ginzburg, V. M.; Keratishvili, N. G.; Korzhenevich, E. L.; Lunev, G. V.; Sapritskii, V. I.

    1992-07-01

    An absolute biphoton meter of photomultiplier quantum efficiency, based on spontaneous parametric down-conversion, is presented. Calculations and experimental results made it possible to choose the parameters of the setup that guarantee a linear dependence of wavelength on the Z coordinate (along the axicon axis). Results of a series of absolute measurements of the quantum efficiency of a specific photomultiplier (FEU-136) are presented.
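
    The principle behind such SPDC-based absolute calibration is the Klyshko method: photons are born in pairs, so detecting one photon in a trigger arm heralds its twin at the detector under test, and the quantum efficiency is the coincidence rate divided by the trigger singles rate after accidentals are removed. The count rates below are invented for illustration.

        # Klyshko heralded-efficiency estimate (toy numbers).
        N_trigger     = 120_000   # singles in the trigger arm, counts/s
        N_coincidence = 21_800    # measured coincidences, counts/s
        N_accidental  = 200       # estimated accidental coincidences, counts/s

        eta = (N_coincidence - N_accidental) / N_trigger
        print(f"quantum efficiency ~ {eta:.1%}")   # -> 18.0% here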

  7. Absolute/convective instability of planar viscoelastic jets

    NASA Astrophysics Data System (ADS)

    Ray, Prasun K.; Zaki, Tamer A.

    2015-01-01

    Spatiotemporal linear stability analysis is used to investigate the onset of local absolute instability in planar viscoelastic jets. The influence of viscoelasticity in dilute polymer solutions is modeled with the FENE-P constitutive equation, which requires the specification of a non-dimensional polymer relaxation time (the Weissenberg number, We), the maximum polymer extensibility, L, and the ratio of solvent and solution viscosities, β. A two-parameter family of velocity profiles is used as the base state, with the parameter S controlling the amount of co- or counter-flow while N⁻¹ sets the thickness of the jet shear layer. We examine how the variation of these fluid and flow parameters affects the minimum value of S at which the flow becomes locally absolutely unstable. Initially setting the Reynolds number to Re = 500, we find that the first varicose jet-column mode dictates the presence of absolute instability, and increasing the Weissenberg number produces important changes in the nature of the instability. The region of absolute instability shifts towards thin shear layers, and the amount of back-flow needed for absolute instability decreases (i.e., the influence of viscoelasticity is destabilizing). Additionally, when We is sufficiently large and N⁻¹ is sufficiently small, single-stream jets become absolutely unstable. Numerical experiments with approximate equations show that both the polymer and solvent contributions to the stress become destabilizing when the scaled shear rate, η = We (dŪ1/dx2)/L (where dŪ1/dx2 is the base-state velocity gradient), is sufficiently large. These qualitative trends are largely unchanged when the Reynolds number is reduced; however, the relative importance of the destabilizing stresses increases tangibly. Consequently, absolute instability is substantially enhanced, and single-stream jets become absolutely unstable over a sizable portion of the parameter space.

  8. Heat capacity and absolute entropy of iron phosphides

    SciTech Connect

    Dobrokhotova, Z.V.; Zaitsev, A.I.; Litvina, A.D.

    1994-09-01

    There is little or no data on the thermodynamic properties of iron phosphides despite their importance for several areas of science and technology. The information available is of a qualitative character and is based on assessments of the heat capacity and absolute entropy. In the present work, we measured the heat capacity over the temperature range of 113-873 K using a differential scanning calorimeter (DSC) and calculated the absolute entropy.
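
    The third-law evaluation implied here is S(T) = S(T0) + ∫ Cp/T dT over the measured range; the sketch below integrates a made-up smooth Cp(T) curve standing in for the 113-873 K DSC data, with an assumed starting entropy.

        import numpy as np

        T = np.linspace(113.0, 873.0, 500)    # K, spanning the measured range
        Cp = 40.0 + 0.02 * T                  # J/(mol K); invented smooth curve

        S0 = 30.0                             # assumed S at 113 K, J/(mol K)
        # Cumulative trapezoidal integration of Cp/T from 113 K upward.
        increments = np.diff(T) * 0.5 * (Cp[1:] / T[1:] + Cp[:-1] / T[:-1])
        S = S0 + np.concatenate(([0.0], np.cumsum(increments)))

        print(f"S(298 K) ~ {np.interp(298.0, T, S):.1f} J/(mol K)")
        print(f"S(873 K) ~ {S[-1]:.1f} J/(mol K)")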

  9. Global absolute gravity reference system as replacement of IGSN 71

    NASA Astrophysics Data System (ADS)

    Wilmes, Herbert; Wziontek, Hartmut; Falk, Reinhard

    2015-04-01

    The determination of precise gravity field parameters is of great importance in a period in which the earth sciences are achieving the accuracy necessary to monitor and document global change processes. This is the reason why experts from geodesy and metrology joined in a successful cooperation to make absolute gravity observations traceable to SI quantities, to improve the metrological kilogram definition, and to monitor mass movements and the smallest height changes for geodetic and geophysical applications. The international gravity datum is still defined by the International Gravity Standardization Net adopted in 1971 (IGSN 71). The network is based upon pendulum and spring gravimeter observations taken in the 1950s and 1960s, supported by the early free-fall absolute gravimeters. Its gravity values agreed in every case to better than 0.1 mGal. Today, more than 100 absolute gravimeters are in use worldwide. The series of repeated international comparisons confirms the traceability of absolute gravity measurements to SI quantities and the degree of equivalence of the gravimeters on the order of a few µGal. For applications in the geosciences where, e.g., gravity changes over time need to be analyzed, the temporal stability of an absolute gravimeter is most important. Therefore, the proposal is made to replace IGSN 71 with an up-to-date gravity reference system based upon repeated absolute gravimeter comparisons and a global network of well-controlled gravity reference stations.

  10. Revisiting absolute and relative judgments in the WITNESS model.

    PubMed

    Fife, Dustin; Perry, Colton; Gronlund, Scott D

    2014-04-01

    The WITNESS model (Clark in Applied Cognitive Psychology 17:629-654, 2003) provides a theoretical framework with which to investigate the factors that contribute to eyewitness identification decisions. One key factor involves the contributions of absolute versus relative judgments. An absolute contribution is determined by the degree of match between an individual lineup member and memory for the perpetrator; a relative contribution involves the degree to which the best-matching lineup member is a better match to memory than the remaining lineup members. In WITNESS, the proportional contributions of relative versus absolute judgments are governed by the values of the decision weight parameters. We conducted an exploration of the WITNESS model's parameter space to determine the identifiability of these relative/absolute decision weight parameters, and compared the results to a restricted version of the model that does not vary the decision weight parameters. This exploration revealed that the decision weights in WITNESS are difficult to identify: Data often can be fit equally well by setting the decision weights to nearly any value and compensating with a criterion adjustment. Clark, Erickson, and Breneman (Law and Human Behavior 35:364-380, 2011) claimed to demonstrate a theoretical basis for the superiority of lineup decisions that are based on absolute contributions, but the relationship between the decision weights and the criterion weakens this claim. These findings necessitate reconsidering the role of the relative/absolute judgment distinction in eyewitness decision making. PMID:23943556
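
    A toy sketch of a WITNESS-style decision rule as described above, identifying the best-matching lineup member when w·a_best + (1−w)·(a_best − a_next) exceeds a criterion c; the match-value distributions, lineup size, and parameter values are all assumptions. Re-tuning c at a fixed false-alarm rate probes how far a criterion shift can compensate for a change in the decision weights, which is the identifiability issue raised in the abstract.

        import numpy as np

        rng = np.random.default_rng(3)

        def decision_stat(w, guilty, trials=200_000):
            # Six-member lineup: five fillers ~ N(0,1); the suspect's match
            # value is shifted up when the lineup contains the perpetrator.
            fillers = rng.normal(0.0, 1.0, (trials, 5))
            suspect = rng.normal(1.0 if guilty else 0.0, 1.0, (trials, 1))
            a = np.sort(np.concatenate([fillers, suspect], axis=1), axis=1)
            best, runner_up = a[:, -1], a[:, -2]
            return w * best + (1.0 - w) * (best - runner_up)

        # For two very different decision weights, re-tune the criterion to
        # hold the false-alarm rate at 10%, then read off the hit rate.
        for w in (1.0, 0.3):
            c = np.quantile(decision_stat(w, guilty=False), 0.90)
            hit = np.mean(decision_stat(w, guilty=True) > c)
            print(f"w={w:.1f}: criterion c={c:.2f}, hit rate={hit:.3f}")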

  11. Spatial frequency domain error budget

    SciTech Connect

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst-case or RMS error on the workpiece. This procedure has limited ability to differentiate between low-spatial-frequency form errors and high-frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine
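
    A minimal sketch of what such a spatial-frequency error budget could look like, with invented source spectra: each error source enters as a power spectral density over spatial frequency, independent sources add, and band-limited RMS values for form, waviness, and finish follow by integration.

        import numpy as np

        f = np.logspace(-3, 1, 400)   # spatial frequency, cycles/mm (illustrative)

        # Invented source PSDs: low-frequency guideway form error, a mid-band
        # vibration tone, and a flat high-frequency finish floor.
        psd_form = 1e-4 / (1.0 + (f / 0.01) ** 2)
        psd_vibration = 1e-7 / (1.0 + ((f - 1.0) / 0.2) ** 2)
        psd_finish = np.full_like(f, 1e-9)
        psd_total = psd_form + psd_vibration + psd_finish  # independent sources add

        def band_rms(psd, f, lo, hi):
            # RMS error in a band = sqrt(integral of the PSD over the band).
            m = (f >= lo) & (f <= hi)
            return np.sqrt(np.sum(0.5 * (psd[m][1:] + psd[m][:-1]) * np.diff(f[m])))

        for name, lo, hi in (("form", 1e-3, 1e-2), ("waviness", 1e-2, 1.0),
                             ("finish", 1.0, 10.0)):
            print(f"{name:8s} band RMS ~ {band_rms(psd_total, f, lo, hi):.2e}")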

  12. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The required received signal-to-noise ratio for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
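
    As a small illustration of one piece of this chain, the sketch below shows how depth-5 block interleaving spreads a channel burst across Reed-Solomon codewords; the toy codeword length replaces the full (255,223) frame.

        import numpy as np

        # Symbols from 5 codewords are transmitted column-wise, so a channel
        # burst is split across codewords and each RS decoder sees at most
        # ceil(burst / depth) of the burst symbols.
        depth, length = 5, 20
        codewords = np.arange(depth * length).reshape(depth, length)

        interleaved = codewords.T.reshape(-1)    # write rows, read columns
        hit = np.zeros(depth * length, dtype=bool)
        hit[30:38] = True                        # an 8-symbol channel burst

        deinterleaved = hit.reshape(length, depth).T   # undo the interleaving
        print("burst symbols per codeword:", deinterleaved.sum(axis=1))
        # -> at most ceil(8/5) = 2 errored symbols per codeword, well within
        #    the 16-symbol correction capability of RS(255,223).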

  13. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The required received signal-to-noise ratio for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  14. On the absolute calibration of SO2 cameras

    USGS Publications Warehouse

    Lübcke, Peter; Bobrowski, Nicole; Illing, Sebastian; Kern, Christoph; Alvarez Nieves, Jose Manuel; Vogel, Leif; Zielcke, Johannes; Delgado Granados, Hugo; Platt, Ulrich

    2013-01-01

    This work investigates the uncertainty of results gained through the two commonly used, but quite different, calibration methods (DOAS and calibration cells). Measurements with three different instruments, an SO2 camera, an NFOV-DOAS system and an Imaging DOAS (I-DOAS), are presented. We compare the calibration-cell approach with the calibration from the NFOV-DOAS system. The respective results are compared with measurements from an I-DOAS to verify the calibration curve over the spatial extent of the image. The results show that calibration cells, while working fine in some cases, can lead to an overestimation of the SO2 CD by up to 60% compared with CDs from the DOAS measurements. Besides these calibration errors, radiative transfer effects (e.g. light dilution, multiple scattering) can significantly influence the results of both instrument types. The measurements presented in this work were taken at Popocatepetl, Mexico, between 1 March 2011 and 4 March 2011. Average SO2 emission rates between 4.00 and 14.34 kg/s were observed.
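
    A sketch of the DOAS-based calibration idea with synthetic numbers: apparent absorbances from the camera pixels inside the DOAS field of view are regressed against the DOAS SO2 column densities, and the fitted line converts whole-image absorbances to column densities.

        import numpy as np

        # Synthetic co-located measurements: camera apparent absorbance (AA)
        # inside the DOAS field of view vs. DOAS SO2 column densities (CDs).
        aa_doas_fov = np.array([0.02, 0.05, 0.09, 0.14, 0.18, 0.23])
        cd_doas = np.array([0.4, 1.1, 2.0, 3.2, 4.1, 5.3]) * 1e17  # molec/cm^2

        slope, intercept = np.polyfit(aa_doas_fov, cd_doas, 1)

        aa_pixel = 0.12                        # AA of some camera pixel
        cd_pixel = slope * aa_pixel + intercept
        print(f"calibrated CD ~ {cd_pixel:.2e} molec/cm^2")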

  15. Dynamic consensus estimation of weighted average on directed graphs

    NASA Astrophysics Data System (ADS)

    Li, Shuai; Guo, Yi

    2015-07-01

    Recent applications call for distributed weighted-average estimation over sensor networks, where sensor measurement accuracy or environmental conditions need to be taken into consideration in the final consensus group decision. In this paper, we propose a new dynamic consensus filter design to estimate, in a distributed fashion, the weighted average of sensors' inputs on directed graphs. Based on recent advances in the field, we modify the existing proportional-integral consensus filter protocol to remove the requirement of bi-directional gain exchange between neighbouring sensors, so that the algorithm works for directed graphs where bi-directional communications are not possible. To compensate for the asymmetric structure of the system introduced by such a removal, sufficient gain conditions are obtained for the filter protocols to guarantee convergence. It is rigorously proved that the proposed filter protocol converges to the weighted average of constant inputs asymptotically, and to the weighted average of time-varying inputs with a bounded error. Simulations verify the effectiveness of the proposed protocols.
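
    A minimal discrete-time sketch of a proportional-integral consensus filter in its classic form, not the modified protocol of the paper; note that the classic form couples through both L and its transpose, and access to the transpose is exactly the kind of requirement the paper works to remove on directed graphs. The weighted average is obtained here as the ratio of two ordinary average-consensus estimates; graph, gains, and inputs are assumptions.

        import numpy as np

        n = 5
        A = np.zeros((n, n))
        A[np.arange(n), (np.arange(n) + 1) % n] = 1.0  # directed cycle (balanced)
        L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian

        u = np.array([2.0, 4.0, 1.0, 3.0, 5.0])        # sensor inputs
        q = np.array([1.0, 2.0, 1.0, 3.0, 1.0])        # sensor weights

        def pi_consensus(inp, gamma=2.0, kp=2.0, ki=1.0, dt=0.01, steps=8000):
            # Euler-discretized PI average-consensus estimator: x tracks the
            # network-wide mean of inp, w is the integral (disagreement) state.
            x = np.zeros(n)
            w = np.zeros(n)
            for _ in range(steps):
                dx = gamma * (inp - x) - kp * (L @ x) + ki * (L.T @ w)
                dw = -ki * (L @ x)
                x, w = x + dt * dx, w + dt * dw
            return x

        # Weighted average = avg(q * u) / avg(q), each computed by consensus.
        est = pi_consensus(q * u) / pi_consensus(q)
        print("distributed estimate :", np.round(est, 3))
        print("true weighted average:", (q @ u) / q.sum())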

  16. Evaluating Methods for Constructing Average High-Density Electrode Positions

    PubMed Central

    Richards, John E.; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M.C.

    2014-01-01

    Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related-potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128 channel “Geodesic Sensor Net” (GSN; EGI, Inc.), 38 participants with the 128 channel “Hydrocel Geodesic Sensor Net” (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configurations. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants’ original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants). PMID:25234713
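
    A sketch of one plausible averaging pipeline of this kind (not necessarily the registration used in the study): each participant's electrode set is centered, rigidly aligned to a reference with an orthogonal Procrustes fit, and the aligned sets are averaged; synthetic digitizations stand in for real data.

        import numpy as np
        from scipy.linalg import orthogonal_procrustes

        rng = np.random.default_rng(4)
        n_sub, n_el = 30, 81

        # Ground-truth electrode layout on a unit sphere (synthetic).
        truth = rng.normal(size=(n_el, 3))
        truth /= np.linalg.norm(truth, axis=1, keepdims=True)

        def random_rotation():
            q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
            return q * np.sign(np.linalg.det(q))   # force det = +1

        # Each "participant": rotated, shifted, noisy copy of the truth.
        subjects = [(truth + 0.02 * rng.normal(size=(n_el, 3))) @ random_rotation().T
                    + rng.normal(size=3) for _ in range(n_sub)]

        ref = truth - truth.mean(axis=0)
        aligned = []
        for s in subjects:
            s0 = s - s.mean(axis=0)                 # remove translation
            R, _ = orthogonal_procrustes(s0, ref)   # rotation with s0 @ R ~ ref
            aligned.append(s0 @ R)

        avg = np.mean(aligned, axis=0)              # average configuration
        se = np.std(aligned, axis=0) / np.sqrt(n_sub)
        print(f"mean per-coordinate standard error: {se.mean():.4f}")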

  17. Evaluating methods for constructing average high-density electrode positions.

    PubMed

    Richards, John E; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M C

    2015-01-01

    Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related-potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128 channel "Geodesic Sensor Net" (GSN; EGI, Inc.), 38 participants with the 128 channel "Hydrocel Geodesic Sensor Net" (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configurations. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants' original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants). PMID:25234713

  18. A new absolute reference for atmospheric longwave irradiance measurements with traceability to SI units

    NASA Astrophysics Data System (ADS)

    Gröbner, J.; Reda, I.; Wacker, S.; Nyeki, S.; Behrens, K.; Gorman, J.

    2014-06-01

    Two independently designed and calibrated absolute radiometers measuring downwelling longwave irradiance were compared during two field campaigns in February and October 2013 at the Physikalisch-Meteorologisches Observatorium Davos/World Radiation Center (PMOD/WRC). One absolute cavity pyrgeometer (ACP) developed by NREL and up to four Integrating Sphere Infrared Radiometers (IRIS) developed by PMOD/WRC took part in these intercomparisons. The internal consistency of the IRIS radiometers and the agreement with the ACP were within ±1 W/m2, providing traceability of atmospheric longwave irradiance to the international system of units with unprecedented accuracy. Measurements performed during the two field campaigns and over the past 4 years have shown that the World Infrared Standard Group (WISG) of pyrgeometers underestimates clear-sky atmospheric longwave irradiance by 2 to 6 W/m2, depending on the amount of integrated water vapor (IWV). This behavior is an instrument-dependent feature and requires an individual sensitivity calibration of each pyrgeometer with respect to an absolute reference such as IRIS or ACP. For IWV larger than 10 mm, an average sensitivity correction of +6.5% should be applied to the WISG in order to be consistent with the longwave reference represented by the ACP and IRIS radiometers. A concerted effort at the international level will be needed to correct measurements of atmospheric downwelling longwave irradiance that are traceable to the WISG.

  19. Absolute protein quantification of the yeast chaperome under conditions of heat shock

    PubMed Central

    Mackenzie, Rebecca J.; Lawless, Craig; Holman, Stephen W.; Lanthaler, Karin; Beynon, Robert J.; Grant, Chris M.; Hubbard, Simon J.

    2016-01-01

    Chaperones are fundamental to regulating the heat shock response, mediating protein recovery from thermal‐induced misfolding and aggregation. Using the QconCAT strategy and selected reaction monitoring (SRM) for absolute protein quantification, we have determined copy per cell values for 49 key chaperones in Saccharomyces cerevisiae under conditions of normal growth and heat shock. This work extends a previous chemostat quantification study by including up to five Q‐peptides per protein to improve confidence in protein quantification. In contrast to the global proteome profile of S. cerevisiae in response to heat shock, which remains largely unchanged as determined by label‐free quantification, many of the chaperones are upregulated with an average two‐fold increase in protein abundance. Interestingly, eight of the significantly upregulated chaperones are direct gene targets of heat shock transcription factor‐1. By performing absolute quantification of chaperones under heat stress for the first time, we were able to evaluate the individual protein‐level response. Furthermore, this SRM data was used to calibrate label‐free quantification values for the proteome in absolute terms, thus improving relative quantification between the two conditions. This study significantly enhances the largely transcriptomic data available in the field and illustrates a more nuanced response at the protein level. PMID:27252046
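
    A sketch of the calibration step described in the last sentences, with synthetic values: label-free intensities of the SRM-anchored proteins are regressed against their copies-per-cell on log-log axes, and the fit converts every protein's label-free value to an absolute estimate.

        import numpy as np

        rng = np.random.default_rng(5)

        # 49 anchor proteins with SRM copies/cell and noisy label-free
        # intensities (all values synthetic).
        srm_copies = 10 ** rng.uniform(3, 6, 49)
        lf_anchor = 2.0e-3 * srm_copies * 10 ** rng.normal(0, 0.1, 49)

        # Log-log linear fit: log10(copies) ~ slope * log10(intensity) + b.
        slope, intercept = np.polyfit(np.log10(lf_anchor), np.log10(srm_copies), 1)

        # Apply the calibration to the whole label-free proteome.
        lf_proteome = 10 ** rng.uniform(0.5, 3.5, 4000)
        copies = 10 ** (intercept + slope * np.log10(lf_proteome))
        print(f"median estimate ~ {np.median(copies):.3g} copies/cell")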

  20. Absolute protein quantification of the yeast chaperome under conditions of heat shock.

    PubMed

    Mackenzie, Rebecca J; Lawless, Craig; Holman, Stephen W; Lanthaler, Karin; Beynon, Robert J; Grant, Chris M; Hubbard, Simon J; Eyers, Claire E

    2016-08-01

    Chaperones are fundamental to regulating the heat shock response, mediating protein recovery from thermal-induced misfolding and aggregation. Using the QconCAT strategy and selected reaction monitoring (SRM) for absolute protein quantification, we have determined copy per cell values for 49 key chaperones in Saccharomyces cerevisiae under conditions of normal growth and heat shock. This work extends a previous chemostat quantification study by including up to five Q-peptides per protein to improve confidence in protein quantification. In contrast to the global proteome profile of S. cerevisiae in response to heat shock, which remains largely unchanged as determined by label-free quantification, many of the chaperones are upregulated with an average two-fold increase in protein abundance. Interestingly, eight of the significantly upregulated chaperones are direct gene targets of heat shock transcription factor-1. By performing absolute quantification of chaperones under heat stress for the first time, we were able to evaluate the individual protein-level response. Furthermore, this SRM data was used to calibrate label-free quantification values for the proteome in absolute terms, thus improving relative quantification between the two conditions. This study significantly enhances the largely transcriptomic data available in the field and illustrates a more nuanced response at the protein level. PMID:27252046