Sample records for typical measurement errors

  1. Study on the three-station typical network deployments of workspace Measurement and Positioning System

    NASA Astrophysics Data System (ADS)

    Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.

    2013-10-01

    As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) offers good parallelism, a wide measurement range and high measurement accuracy, which make it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on measurement range and accuracy, and also constrains the cost of use, this paper studies methods for optimizing station deployment. First, a positioning error model was established. Then, focusing on a small network consisting of three stations, the typical deployments and their error distribution characteristics were studied. Finally, a simulated fuselage was measured with the typical deployments at an industrial site and the results were compared against a laser tracker. The comparison shows that, under existing prototype conditions, the I_3 typical deployment, in which the three stations lie on a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm over a range of 12 m, whereas the C_3 typical deployment, in which the three stations are distributed uniformly over the half-circumference of a circle, has an average error of 0.17 mm and a maximum error of 0.28 mm. The C_3 deployment therefore controls precision more effectively than the I_3 deployment. This work provides theoretical support for global measurement network optimization in future work.

  2. Word Recognition Error Analysis: Comparing Isolated Word List and Oral Passage Reading

    ERIC Educational Resources Information Center

    Flynn, Lindsay J.; Hosp, John L.; Hosp, Michelle K.; Robbins, Kelly P.

    2011-01-01

    The purpose of this study was to determine the relation between word recognition errors made at a letter-sound pattern level on a word list and on a curriculum-based measurement oral reading fluency measure (CBM-ORF) for typical and struggling elementary readers. The participants were second, third, and fourth grade typical and struggling readers…

  3. Effects of Measurement Errors on Individual Tree Stem Volume Estimates for the Austrian National Forest Inventory

    Treesearch

    Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens Schadauer

    2014-01-01

    National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...

  4. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in the acoustic-path angle or length results in a proportional measurement bias: an angle error of one degree typically produces a velocity error of 2%, and a path-length error of one meter in 100 meters produces an error of 1%. Ray bending (signal refraction) depends on path length and the density gradients present in the stream. Any deviation from a straight acoustic path between transducers changes the unique relation between path velocity and mean velocity and thereby introduces error in the computed mean velocity. Typically, for a 200-meter path length the resulting error is less than 1%, but for a 1,000-meter path length the error can exceed 10%. Recent laboratory and field tests have substantiated the assumed equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
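
    As a rough numerical check of the quoted sensitivities, the sketch below assumes the standard cross-path AVM relation V = L/(2 cos θ) · (1/t_down − 1/t_up); the path length, angle, flow velocity, and sound speed are illustrative values, not taken from the report.

    ```python
    import math

    def avm_velocity(path_length_m, angle_deg, t_up_s, t_down_s):
        """Mean streamwise velocity from an acoustic velocity meter (AVM),
        using the standard cross-path relation (an assumption of this sketch):
            V = L / (2*cos(theta)) * (1/t_down - 1/t_up)."""
        theta = math.radians(angle_deg)
        return path_length_m / (2.0 * math.cos(theta)) * (1.0 / t_down_s - 1.0 / t_up_s)

    # Illustrative geometry: 100 m path at 45 degrees to the flow, 1 m/s mean velocity.
    L_true, theta_true, v_true, c = 100.0, 45.0, 1.0, 1480.0
    v_path = v_true * math.cos(math.radians(theta_true))   # along-path velocity component
    t_down = L_true / (c + v_path)                         # downstream transit time
    t_up = L_true / (c - v_path)                           # upstream transit time

    v_angle_err = avm_velocity(L_true, theta_true + 1.0, t_up, t_down)   # 1 degree angle error
    v_length_err = avm_velocity(L_true + 1.0, theta_true, t_up, t_down)  # 1 m length error

    print(f"1 deg angle error -> {abs(v_angle_err - v_true) / v_true:.1%} bias")   # ~2%
    print(f"1 m length error  -> {abs(v_length_err - v_true) / v_true:.1%} bias")  # ~1%
    ```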

  5. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  6. Extinction measurements with low-power HSRL systems—error limits

    NASA Astrophysics Data System (ADS)

    Eloranta, Ed

    2018-04-01

    HSRL measurements of extinction are more difficult than backscatter measurements. This is particularly true for low-power, eye-safe systems. This paper looks at error sources that currently provide an error limit of 10^-5 m^-1 for boundary layer extinction measurements made with University of Wisconsin HSRL systems. These eye-safe systems typically use 300 mW transmitters and 40 cm diameter receivers with a 10^-4 radian field of view.

  7. Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields

    PubMed Central

    Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne

    2015-01-01

    Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789

  8. Measures of rowing performance.

    PubMed

    Smith, T Brett; Hopkins, Will G

    2012-04-01

    Accurate measures of performance are important for assessing competitive athletes in practical and research settings. We present here a review of rowing performance measures, focusing on the errors in these measures and the implications for testing rowers. The yardstick for assessing error in a performance measure is the random variation (typical or standard error of measurement) in an elite athlete's competitive performance from race to race: ∼1.0% for time in 2000 m rowing events. There has been little research interest in on-water time trials for assessing rowing performance, owing to logistic difficulties and environmental perturbations in performance time with such tests. Mobile ergometry via instrumented oars or rowlocks should reduce these problems, but the associated errors have not yet been reported. Measurement of boat speed to monitor on-water training performance is common; one device based on global positioning system (GPS) technology contributes negligible extra random error (0.2%) in speed measured over 2000 m, but extra error is substantial (1-10%) with other GPS devices or with an impeller, especially over shorter distances. The problems with on-water testing have led to widespread use of the Concept II rowing ergometer. The standard error of the estimate of on-water 2000 m time predicted by 2000 m ergometer performance was 2.6% and 7.2% in two studies, reflecting different effects of skill, body mass and environment in on-water versus ergometer performance. However, well trained rowers have a typical error in performance time of only ∼0.5% between repeated 2000 m time trials on this ergometer, so such trials are suitable for tracking changes in physiological performance and factors affecting it. Many researchers have used the 2000 m ergometer performance time as a criterion to identify other predictors of rowing performance. Standard errors of the estimate vary widely between studies even for the same predictor, but the lowest errors (~1-2%) have been observed for peak power output in an incremental test, some measures of lactate threshold and measures of 30-second all-out power. Some of these measures also have typical error between repeated tests suitably low for tracking changes. Combining measures via multiple linear regression needs further investigation. In summary, measurement of boat speed, especially with a good GPS device, has adequate precision for monitoring training performance, but adjustment for environmental effects needs to be investigated. Time trials on the Concept II ergometer provide accurate estimates of a rower's physiological ability to output power, and some submaximal and brief maximal ergometer performance measures can be used frequently to monitor changes in this ability. On-water performance measured via instrumented skiffs that determine individual power output may eventually surpass measures derived from the Concept II.

  9. Strength tests for elite rowers: low- or high-repetition?

    PubMed

    Lawton, Trent W; Cronin, John B; McGuigan, Michael R

    2014-01-01

    The purpose of this project was to evaluate the utility of low- and high-repetition maximum (RM) strength tests used to assess rowers. Twenty elite heavyweight males (age 23.7 ± 4.0 years) performed four tests (5 RM, 30 RM, 60 RM and 120 RM) using leg press and seated arm pulling exercise on a dynamometer. Each test was repeated on two further occasions; 3 and 7 days from the initial trial. Per cent typical error (within-participant variation) and intraclass correlation coefficients (ICCs) were calculated using log-transformed repeated-measures data. High-repetition tests (30 RM, 60 RM and 120 RM), involving seated arm pulling exercise are not recommended to be included in an assessment battery, as they had unsatisfactory measurement precision (per cent typical error > 5% or ICC < 0.9). Conversely, low-repetition tests (5 RM) involving leg press and seated arm pulling exercises could be used to assess elite rowers (per cent typical error ≤ 5% and ICC ≥ 0.9); however, only 5 RM leg pressing met criteria (per cent typical error = 2.7%, ICC = 0.98) for research involving small samples (n = 20). In summary, low-repetition 5 RM strength testing offers greater utility as assessments of rowers, as they can be used to measure upper- and lower-body strength; however, only the leg press exercise is recommended for research involving small squads of elite rowers.

  10. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1°, 3°, and 5° can, respectively, introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.

  11. A Reassessment of the Precision of Carbonate Clumped Isotope Measurements: Implications for Calibrations and Paleoclimate Reconstructions

    NASA Astrophysics Data System (ADS)

    Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.

    2017-12-01

    Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. We find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done in many laboratories. For instance, using published estimates of external reproducibilities we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect considering the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data should no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
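
    A minimal sketch of the reporting recommendation above: convert the scatter of n replicate Δ47 analyses into a margin of error at a stated confidence level rather than quoting a bare standard error. The external reproducibility value and the use of a Student t interval are assumptions of this sketch, not numbers from the paper.

    ```python
    from math import sqrt
    from scipy import stats

    def margin_of_error(external_sd_permil, n_replicates, confidence=0.95):
        """Margin of error for the mean of n replicates, assuming the long-term
        external reproducibility (1 SD) describes single-replicate scatter."""
        se = external_sd_permil / sqrt(n_replicates)
        t_crit = stats.t.ppf(0.5 + confidence / 2.0, df=n_replicates - 1)  # t interval, df = n-1 (assumed)
        return t_crit * se

    external_sd = 0.030  # permil; hypothetical external reproducibility of Δ47
    for n in (3, 10, 30):
        print(n, "replicates -> +/-", round(margin_of_error(external_sd, n), 4), "permil at 95% CL")
    ```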

  12. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    PubMed

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

    Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.

  13. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback

    PubMed Central

    Lee, Jackson C.; Mittelman, Talia; Stepp, Cara E.; Bohland, Jason W.

    2017-01-01

    Purpose Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Method Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. Results New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. Conclusions This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. Supplemental Material https://doi.org/10.23641/asha.5103067 PMID:28655038

  14. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
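
    The direct-beam part of this effect can be checked with a few lines, assuming the worst case in which the sensor is tilted within the solar azimuth plane; the diffuse component, which is largely insensitive to tilt, is ignored here, so these numbers land slightly above the totals quoted in the abstract.

    ```python
    import math

    def direct_tilt_error(sza_deg, tilt_deg):
        """Worst-case relative error in measured direct irradiance for a sensor
        tilted toward the sun in the solar azimuth plane (diffuse sky ignored)."""
        true_cos = math.cos(math.radians(sza_deg))
        tilted_cos = math.cos(math.radians(sza_deg - tilt_deg))
        return tilted_cos / true_cos - 1.0

    for tilt in (1, 3, 5):
        print(f"SZA 60 deg, tilt {tilt} deg: {direct_tilt_error(60, tilt):+.1%}")
    # Direct beam alone: roughly +3.0%, +8.9%, +14.7%; the smaller published values
    # include the tilt-insensitive diffuse contribution.
    ```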

  15. Improvement of the Error-detection Mechanism in Adults with Dyslexia Following Reading Acceleration Training.

    PubMed

    Horowitz-Kraus, Tzipi

    2016-05-01

    The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.

  17. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and the physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
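
    A short sketch applying the quoted variance model to attach realistic error bars to a simulated profile; the profile shape and the values of k and const. are placeholders, since in practice they are fitted to a specific experimental setup.

    ```python
    import numpy as np

    def saxs_sigma(q, intensity, k, const):
        """Standard deviation of the buffer-subtracted SAXS intensity following
        the model in the abstract: sigma^2(q) = (I(q) + const) / (k * q)."""
        return np.sqrt((intensity + const) / (k * q))

    q = np.linspace(0.01, 0.5, 200)               # momentum transfer, 1/Angstrom (illustrative)
    I_sim = 1e3 * np.exp(-(30.0 * q) ** 2 / 3.0)  # toy Guinier-like profile
    sigma = saxs_sigma(q, I_sim, k=5e4, const=10.0)
    I_noisy = I_sim + np.random.default_rng(0).normal(0.0, sigma)  # synthetic noisy profile
    ```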

  18. The use of a covariate reduces experimental error in nutrient digestion studies in growing pigs

    USDA-ARS?s Scientific Manuscript database

    Covariance analysis limits error, the degree of nuisance variation, and overparameterizing factors to accurately measure treatment effects. Data dealing with growth, carcass composition, and genetics often utilize covariates in data analysis. In contrast, nutritional studies typically do not. The ob...

  19. Measuring Seebeck Coefficient

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey (Inventor)

    2015-01-01

    A high-temperature Seebeck coefficient measurement apparatus and method with various features to minimize typical sources of error are described. Common sources of temperature and voltage measurement errors that may affect accuracy are identified and reduced. Applying the identified principles, a high-temperature Seebeck measurement apparatus and method employing a uniaxial, four-point geometry are described, operating from room temperature up to 1300 K. These techniques for non-destructive Seebeck coefficient measurements are simple to operate and are suitable for bulk samples with a broad range of physical types and shapes.

  20. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.

  1. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
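
    For context, EVM is commonly computed as the RMS error vector normalized by the RMS reference-symbol magnitude. The sketch below uses that convention with an ideal QPSK constellation and additive noise; it is purely illustrative and does not reproduce the study's measurement chain or normalization choices.

    ```python
    import numpy as np

    def evm_percent(received, reference):
        """RMS error vector magnitude relative to the RMS reference magnitude
        (one common EVM convention; the study's exact definition is not stated)."""
        err = received - reference
        return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

    rng = np.random.default_rng(1)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    ref = qpsk[rng.integers(0, 4, 1000)]                                     # ideal QPSK symbols
    rx = ref + 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))   # noisy received symbols
    print(f"EVM = {evm_percent(rx, ref):.1f}%")
    ```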

  2. The reliability of running economy expressed as oxygen cost and energy cost in trained distance runners.

    PubMed

    Shaw, Andrew J; Ingham, Stephen A; Fudge, Barry W; Folland, Jonathan P

    2013-12-01

    This study assessed the between-test reliability of oxygen cost (OC) and energy cost (EC) in distance runners, and contrasted it with the smallest worthwhile change (SWC) of these measures. OC and EC displayed similar levels of within-subject variation (typical error < 3.85%). However, the typical error (2.75% vs 2.74%) was greater than the SWC (1.38% vs 1.71%) for both OC and EC, respectively, indicating insufficient sensitivity to confidently detect small, but meaningful, changes in OC and EC.
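
    A sketch of how typical error and the smallest worthwhile change are commonly derived from test-retest data on a log scale (Hopkins' conventions); the abstract does not spell out its exact formulas, and the oxygen-cost data below are synthetic.

    ```python
    import numpy as np

    def typical_error_pct(test1, test2):
        """Typical (standard) error of measurement as a percent coefficient of
        variation, from log-transformed test-retest data (assumed convention)."""
        diff = np.log(test2) - np.log(test1)
        return 100.0 * (np.exp(np.std(diff, ddof=1) / np.sqrt(2)) - 1.0)

    def smallest_worthwhile_change_pct(values):
        """SWC taken as 0.2 x the between-subject SD (a common convention,
        not stated explicitly in the abstract)."""
        return 100.0 * (np.exp(0.2 * np.std(np.log(values), ddof=1)) - 1.0)

    rng = np.random.default_rng(2)
    true_oc = rng.normal(200.0, 14.0, 15)               # mL/kg/km, illustrative
    day1 = true_oc * np.exp(rng.normal(0.0, 0.02, 15))  # ~2% within-subject noise
    day2 = true_oc * np.exp(rng.normal(0.0, 0.02, 15))
    print("typical error %:", round(typical_error_pct(day1, day2), 2))
    print("SWC %          :", round(smallest_worthwhile_change_pct(np.r_[day1, day2]), 2))
    ```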

  3. Retention-error patterns in complex alphanumeric serial-recall tasks.

    PubMed

    Mathy, Fabien; Varré, Jean-Stéphane

    2013-01-01

    We propose a new method based on an algorithm usually dedicated to DNA sequence alignment in order to both reliably score short-term memory performance on immediate serial-recall tasks and analyse retention-error patterns. There can be considerable confusion on how performance on immediate serial list recall tasks is scored, especially when the to-be-remembered items are sampled with replacement. We discuss the utility of sequence-alignment algorithms to compare the stimuli to the participants' responses. The idea is that deletion, substitution, translocation, and insertion errors, which are typical in DNA, are also typical putative errors in short-term memory (respectively omission, confusion, permutation, and intrusion errors). We analyse four data sets in which alphanumeric lists included a few (or many) repetitions. After examining the method on two simple data sets, we show that sequence alignment offers 1) a compelling method for measuring capacity in terms of chunks when many regularities are introduced in the material (third data set) and 2) a reliable estimator of individual differences in short-term memory capacity. This study illustrates the difficulty of arriving at a good measure of short-term memory performance, and also attempts to characterise the primary factors underpinning remembering and forgetting.
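
    The core idea can be illustrated with a standard Needleman-Wunsch global alignment, in which gaps on the response side map onto omissions, gaps on the stimulus side onto intrusions, and mismatches onto confusions; the paper's actual scoring scheme, including its handling of translocations, is not reproduced here.

    ```python
    def align_score(stimulus, response, match=1, mismatch=-1, gap=-1):
        """Needleman-Wunsch global alignment score (DNA-style dynamic programming).
        Gap and mismatch penalties are illustrative, not the paper's values."""
        n, m = len(stimulus), len(response)
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if stimulus[i - 1] == response[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        return score[n][m]

    # A recalled list that omits one item and substitutes another
    print(align_score("7B4K9", "7B1K"))
    ```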

  4. Error analysis for relay type satellite-aided search and rescue systems

    NASA Technical Reports Server (NTRS)

    Marini, J. W.

    1977-01-01

    An analysis was made of the errors in the determination of the position of an emergency transmitter in a satellite aided search and rescue system. The satellite was assumed to be at a height of 820 km in a near circular near polar orbit. Short data spans of four minutes or less were used. The error sources considered were measurement noise, transmitter frequency drift, ionospheric effects and error in the assumed height of the transmitter. The errors were calculated for several different transmitter positions, data rates and data spans. The only transmitter frequency used was 406 MHz, but the results can be scaled to different frequencies. In a typical case, in which four Doppler measurements were taken over a span of two minutes, the position error was about 1.2 km.

  5. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  6. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution arc developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  7. The impact of experimental measurement errors on long-term viscoelastic predictions. [of structural materials

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1986-01-01

    The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 °C. The process of selection is described, and its individual steps are itemized.

  8. Cumulative uncertainty in measured streamflow and water quality data for small watersheds

    USGS Publications Warehouse

    Harmel, R.D.; Cooper, R.J.; Slade, R.M.; Haney, R.L.; Arnold, J.G.

    2006-01-01

    The scientific community has not established an adequate understanding of the uncertainty inherent in measured water quality data, which is introduced by four procedural categories: streamflow measurement, sample collection, sample preservation/storage, and laboratory analysis. Although previous research has produced valuable information on relative differences in procedures within these categories, little information is available that compares the procedural categories or presents the cumulative uncertainty in resulting water quality data. As a result, quality control emphasis is often misdirected, and data uncertainty is typically either ignored or accounted for with an arbitrary margin of safety. Faced with the need for scientifically defensible estimates of data uncertainty to support water resource management, the objectives of this research were to: (1) compile selected published information on uncertainty related to measured streamflow and water quality data for small watersheds, (2) use a root mean square error propagation method to compare the uncertainty introduced by each procedural category, and (3) use the error propagation method to determine the cumulative probable uncertainty in measured streamflow, sediment, and nutrient data. Best case, typical, and worst case "data quality" scenarios were examined. Averaged across all constituents, the calculated cumulative probable uncertainty (±%) contributed under typical scenarios ranged from 6% to 19% for streamflow measurement, from 4% to 48% for sample collection, from 2% to 16% for sample preservation/storage, and from 5% to 21% for laboratory analysis. Under typical conditions, errors in storm loads ranged from 8% to 104% for dissolved nutrients, from 8% to 110% for total N and P, and from 7% to 53% for TSS. Results indicated that uncertainty can increase substantially under poor measurement conditions and limited quality control effort. This research provides introductory scientific estimates of uncertainty in measured water quality data. The results and procedures presented should also assist modelers in quantifying the "quality" of calibration and evaluation data sets, determining model accuracy goals, and evaluating model performance.
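
    A sketch of the root mean square (square root of the sum of squares) error propagation used to combine the four procedural categories into a cumulative probable uncertainty; the category percentages below are illustrative values chosen within the typical ranges quoted above, not results from the paper.

    ```python
    from math import sqrt

    def cumulative_uncertainty_pct(*category_uncertainties_pct):
        """Root-sum-square propagation of independent procedural uncertainties."""
        return sqrt(sum(u ** 2 for u in category_uncertainties_pct))

    # streamflow, sample collection, preservation/storage, laboratory analysis (percent)
    print(round(cumulative_uncertainty_pct(10.0, 15.0, 5.0, 10.0), 1), "% cumulative uncertainty")
    ```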

  9. Outdoor surface temperature measurement: ground truth or lie?

    NASA Astrophysics Data System (ADS)

    Skauli, Torbjorn

    2004-08-01

    Contact surface temperature measurement in the field is essential in trials of thermal imaging systems and camouflage, as well as for scene modeling studies. The accuracy of such measurements is challenged by environmental factors such as sun and wind, which induce temperature gradients around a surface sensor and lead to incorrect temperature readings. In this work, a simple method is used to test temperature sensors under conditions representative of a surface whose temperature is determined by heat exchange with the environment. The tested sensors are different types of thermocouples and platinum thermistors typically used in field trials, as well as digital temperature sensors. The results illustrate that the actual measurement errors can be much larger than the specified accuracy of the sensors. The measurement error typically scales with the difference between surface temperature and ambient air temperature. Unless proper care is taken, systematic errors can easily reach 10% of this temperature difference, which is often unacceptable. Reasonably accurate readings are obtained using a miniature platinum thermistor. Thermocouples can perform well on bare metal surfaces if the connection to the surface is highly conductive. It is pointed out that digital temperature sensors have many advantages for field trials use.

  10. A toolkit for measurement error correction, with a focus on nutritional epidemiology

    PubMed Central

    Keogh, Ruth H; White, Ian R

    2014-01-01

    Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations, it is not feasible to observe the true exposure, but there may be available one or more repeated exposure measurements, for example, blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using the data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
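
    A minimal sketch of regression calibration with two replicate exposure measurements under a classical error model, the simplest of the settings covered by the toolkit; the simulated fibre-intake data and all parameter values are hypothetical.

    ```python
    import numpy as np

    def regression_calibration_slope(w1, w2, y):
        """Regression calibration sketch: w1, w2 are error-prone replicates of the
        true exposure X (classical error), y is a continuous outcome.
        Returns (naive slope on the replicate mean, calibrated slope)."""
        w_bar = (w1 + w2) / 2.0
        var_error = np.var(w1 - w2, ddof=1) / 2.0                       # within-person error variance
        var_x = max(np.var(w_bar, ddof=1) - var_error / 2.0, 1e-12)     # true-exposure variance
        lam = var_x / (var_x + var_error / 2.0)                         # reliability of the replicate mean
        x_hat = w_bar.mean() + lam * (w_bar - w_bar.mean())             # E[X | replicates]
        naive = np.polyfit(w_bar, y, 1)[0]
        calibrated = np.polyfit(x_hat, y, 1)[0]
        return naive, calibrated

    rng = np.random.default_rng(3)
    x = rng.normal(20.0, 5.0, 500)            # true fibre intake (hypothetical units)
    w1 = x + rng.normal(0.0, 4.0, 500)        # diet-diary replicate 1
    w2 = x + rng.normal(0.0, 4.0, 500)        # diet-diary replicate 2
    y = 0.5 * x + rng.normal(0.0, 2.0, 500)   # outcome with true slope 0.5
    print(regression_calibration_slope(w1, w2, y))  # naive slope attenuated, calibrated ~0.5
    ```

    In this toy example the naive slope is attenuated by the reliability factor while the calibrated slope recovers the underlying association, which is the motivation for the correction methods reviewed in the record above.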

  11. Dependence of Dynamic Modeling Accuracy on Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations.

  12. Variance Analysis of Unevenly Spaced Time Series Data

    DTIC Science & Technology

    1995-12-01

    Data were subsequently removed from each simulated data set using typical TWSTFT data patterns to create two unevenly spaced sets with average...and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets. INTRODUCTION Data points obtained from an...the possible data available. In TWSTFT, the task is less daunting: time transfers are typically measured on Monday, Wednesday, and Friday, so, in a

  13. Refining Field Measurements of Methane Flux Rates from Abandoned Oil and Gas Wells

    NASA Astrophysics Data System (ADS)

    Lagron, C. S.; Kang, M.; Riqueros, N. S.; Jackson, R. B.

    2015-12-01

    Recent studies in Pennsylvania demonstrate the potential for significant methane emissions from abandoned oil and gas wells. A subset of tested wells was high emitting, with methane flux rates up to seven orders of magnitude greater than natural fluxes (up to 10^5 mg CH4/hour, or about 2.5 LPM). These wells contribute disproportionately to the total methane emissions from abandoned oil and gas wells. The principles guiding the chamber design have been developed for lower flux rates, typically found in natural environments, and chamber design modifications may reduce uncertainty in flux rates associated with high-emitting wells. Kang et al. estimate errors of a factor of two in measured values based on previous studies. We conduct controlled releases of methane to refine error estimates and improve chamber design with a focus on high-emitters. Controlled releases of methane are conducted at 0.05 LPM, 0.50 LPM, 1.0 LPM, 2.0 LPM, 3.0 LPM, and 5.0 LPM, and at two chamber dimensions typically used in field measurement studies of abandoned wells. As most sources of error tabulated by Kang et al. tend to bias the results toward underreporting of methane emissions, a flux-targeted chamber design modification can reduce error margins and/or provide grounds for a potential upward revision of emission estimates.

  14. The vocabulary profile of Slovak children with primary language impairment compared to typically developing Slovak children measured by LITMUS-CLT.

    PubMed

    Kapalková, Svetlana; Slančová, Daniela

    2017-01-01

    This study compared a sample of children with primary language impairment (PLI) and typically developing age-matched children using the crosslinguistic lexical tasks (CLT-SK). We also compared the PLI children with typically developing language-matched younger children who were matched on the basis of receptive vocabulary. Overall, statistical testing showed that the vocabulary of the PLI children was significantly different from the vocabulary of the age-matched children, but not statistically different from the younger children who were matched on the basis of their receptive vocabulary size. Qualitative analysis of the correct answers revealed that the PLI children showed higher rigidity compared to the younger language-matched children who are able to use more synonyms or derivations across word class in naming tasks. Similarly, an examination of the children's naming errors indicated that the language-matched children exhibited more semantic errors, whereas PLI children showed more associative errors.

  15. Measurement-free implementations of small-scale surface codes for quantum-dot qubits

    NASA Astrophysics Data System (ADS)

    Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.

    2018-01-01

    The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that only corrects bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.

  16. Toward a new culture in verified quantum operations

    NASA Astrophysics Data System (ADS)

    Flammia, Steve

    Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grow. However, well-meaning scientists who report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only be exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey existing challenges presented by the current paradigm and offer some suggestions for solutions that can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and toward a culture that values full disclosure of methodologies and higher standards for data analysis.

  17. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with air temperature indefinitely, so within the anticipated bounds of climate warming extreme stream temperatures may not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (the standard deviate). Various KE values were explored; values of KE larger than 8 were found to be physically unreasonable. It is concluded that KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one-degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
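
    A minimal sketch of the standard deviate calculation described above: the extreme estimate is the mean of the partial maximum series plus KE standard deviations, with KE swept over the recommended range of 7 to 8. The partial-maximum series is invented for illustration.

    ```python
    import numpy as np

    def extreme_stream_temp_c(partial_maxima_c, ke):
        """Standard deviate estimate of the highest stream temperature:
        mean of the measured partial maximum series plus KE standard deviations."""
        t = np.asarray(partial_maxima_c, dtype=float)
        return t.mean() + ke * t.std(ddof=1)

    maxima = [24.1, 25.3, 23.8, 26.0, 24.9, 25.6, 24.4]  # degrees C, illustrative only
    for ke in (7.0, 7.5, 8.0):
        print(f"KE = {ke}: {extreme_stream_temp_c(maxima, ke):.1f} C")
    ```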

  18. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, E. M. C.; Reu, P. L.

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements, to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. In conclusion, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.

  19. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE PAGES

    Jones, E. M. C.; Reu, P. L.

    2017-11-28

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements, to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. In conclusion, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.

  20. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.

  1. Interpretation of Errors Made by Mandarin-Speaking Children on the Preschool Language Scales--5th Edition Screening Test

    ERIC Educational Resources Information Center

    Ren, Yonggang; Rattanasone, Nan Xu; Wyver, Shirley; Hinton, Amber; Demuth, Katherine

    2016-01-01

    We investigated typical errors made by Mandarin-speaking children when measured by the Preschool Language Scales-fifth edition, Screening Test (PLS-5 Screening Test). The intention was to provide preliminary data for the development of a guideline for early childhood educators and psychologists who use the test with Mandarin-speaking children.…

  2. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    PubMed

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.

  3. Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19

    ERIC Educational Resources Information Center

    Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2008-01-01

    Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the…

  4. Reliability and measurement error of active knee extension range of motion in a modified slump test position: a pilot study.

    PubMed

    Tucker, Neil; Reid, Duncan; McNair, Peter

    2007-01-01

    The slump test is a tool to assess the mechanosensitivity of the neuromeningeal structures within the vertebral canal. While some studies have investigated the reliability of aspects of this test within the same day, few have assessed the reliability across days. Therefore, the purpose of this pilot study was to investigate reliability when measuring active knee extension range of motion (AROM) in a modified slump test position within trials on a single day and across days. Ten male and ten female asymptomatic subjects, ages 20-49 (mean age 30.1, SD 6.4) participated in the study. Knee extension AROM in a modified slump position with the cervical spine in a flexed position and then in an extended position was measured via three trials on two separate days. Across three trials, knee extension AROM increased significantly with a mean magnitude of 2 degrees within days for both cervical spine positions (P>0.05). The findings showed that there was no statistically significant difference in knee extension AROM measurements across days (P>0.05). The intraclass correlation coefficients for the mean of the three trials across days were 0.96 (lower limit 95% CI: 0.90) with the cervical spine flexed and 0.93 (lower limit 95% CI: 0.83) with cervical extension. Measurement error was calculated by way of the typical error and 95% limits of agreement, and visually represented in Bland and Altman plots. The typical error for the cervical flexed and extended positions averaged across trials was 2.6 degrees and 3.3 degrees , respectively. The limits of agreement were narrow, and the Bland and Altman plots also showed minimal bias in the joint angles across days with a random distribution of errors across the range of measured angles. This study demonstrated that knee extension AROM could be reliably measured across days in subjects without pathology and that the measurement error was acceptable. Implications of variability over multiple trials are discussed. The modified set-up for the test using the Kincom dynamometer and elevated thigh position may be useful to clinical researchers in determining the mechanosensitivity of the nervous system.

  5. Reliability and Measurement Error of Active Knee Extension Range of Motion in a Modified Slump Test Position: A Pilot Study

    PubMed Central

    Tucker, Neil; Reid, Duncan; McNair, Peter

    2007-01-01

    The slump test is a tool to assess the mechanosensitivity of the neuromeningeal structures within the vertebral canal. While some studies have investigated the reliability of aspects of this test within the same day, few have assessed the reliability across days. Therefore, the purpose of this pilot study was to investigate reliability when measuring active knee extension range of motion (AROM) in a modified slump test position within trials on a single day and across days. Ten male and ten female asymptomatic subjects, ages 20–49 (mean age 30.1, SD 6.4) participated in the study. Knee extension AROM in a modified slump position with the cervical spine in a flexed position and then in an extended position was measured via three trials on two separate days. Across three trials, knee extension AROM increased significantly with a mean magnitude of 2° within days for both cervical spine positions (P>0.05). The findings showed that there was no statistically significant difference in knee extension AROM measurements across days (P>0.05). The intraclass correlation coefficients for the mean of the three trials across days were 0.96 (lower limit 95% CI: 0.90) with the cervical spine flexed and 0.93 (lower limit 95% CI: 0.83) with cervical extension. Measurement error was calculated by way of the typical error and 95% limits of agreement, and visually represented in Bland and Altman plots. The typical error for the cervical flexed and extended positions averaged across trials was 2.6° and 3.3°, respectively. The limits of agreement were narrow, and the Bland and Altman plots also showed minimal bias in the joint angles across days with a random distribution of errors across the range of measured angles. This study demonstrated that knee extension AROM could be reliably measured across days in subjects without pathology and that the measurement error was acceptable. Implications of variability over multiple trials are discussed. The modified set-up for the test using the Kincom dynamometer and elevated thigh position may be useful to clinical researchers in determining the mechanosensitivity of the nervous system. PMID:19066666

  6. Prediction and typicality in multiverse cosmology

    NASA Astrophysics Data System (ADS)

    Azhar, Feraz

    2014-02-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.

  7. Correcting intensity loss errors in the absence of texture-free reference samples during pole figure measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, Ahmed A., E-mail: asaleh@uow.edu.au

    Even with the use of X-ray polycapillary lenses, sample tilting during pole figure measurement results in a decrease in the recorded X-ray intensity. The magnitude of this error is affected by the sample size and/or the finite detector size. These errors are typically corrected by measuring the intensity loss as a function of the tilt angle using a texture-free reference sample (ideally made of the same alloy as the investigated material). Since texture-free reference samples are not readily available for all alloys, the present study employs an empirical procedure to estimate the correction curve for a particular experimental configuration. It involves the use of real texture-free reference samples that pre-exist in any X-ray diffraction laboratory to first establish the empirical correlations between X-ray intensity, sample tilt and their Bragg angles, and thereafter generate correction curves for any Bragg angle. It will be shown that the empirically corrected textures are in very good agreement with the experimentally corrected ones. - Highlights: •Sample tilting during X-ray pole figure measurement leads to intensity loss errors. •Texture-free reference samples are typically used to correct the pole figures. •An empirical correction procedure is proposed in the absence of reference samples. •The procedure relies on reference samples that pre-exist in any texture laboratory. •Experimentally and empirically corrected textures are in very good agreement.
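
    One way to realize an empirical correction of this kind is to fit a smooth curve to the normalized intensity of a reference sample as a function of tilt angle and then divide measured pole-figure intensities by that curve. The sketch below is a generic illustration with fabricated numbers and a simple polynomial fit; it does not reproduce the authors' procedure for extrapolating the correction to arbitrary Bragg angles.

      import numpy as np

      # Hypothetical defocusing data: normalized intensity of a texture-free reference
      # sample versus tilt angle chi (degrees) at one Bragg angle.
      chi = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80])
      intensity_ratio = np.array([1.00, 0.99, 0.97, 0.93, 0.86, 0.76, 0.62, 0.45, 0.27])

      # Fit a low-order polynomial correction curve I(chi)/I(0).
      correction = np.poly1d(np.polyfit(chi, intensity_ratio, deg=3))

      # Correct a measured pole-figure intensity recorded at chi = 55 degrees.
      measured_counts = 120.0                     # hypothetical
      corrected_counts = measured_counts / correction(55.0)
      print(f"correction factor at chi = 55 deg: {correction(55.0):.3f}")
      print(f"corrected intensity              : {corrected_counts:.1f}")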

  8. C-band radar calibration using GEOS-3

    NASA Technical Reports Server (NTRS)

    Krabill, W. B.; Martin, C. F.

    1978-01-01

    The various methods of determining tracking radar measurement error parameters are described, along with the projected accuracy of results. Typical examples and results for calibration of radars tracking the GEOS-3 satellite are presented.

  9. Differentiating School-Aged Children with and without Language Impairment Using Tense and Grammaticality Measures from a Narrative Task

    ERIC Educational Resources Information Center

    Guo, Ling-Yu; Schneider, Phyllis

    2016-01-01

    Purpose: To determine the diagnostic accuracy of the finite verb morphology composite (FVMC), number of errors per C-unit (Errors/CU), and percent grammatical C-units (PGCUs) in differentiating school-aged children with language impairment (LI) and those with typical language development (TL). Method: Participants were 61 six-year-olds (50 TL, 11…

  10. Lexical diversity and omission errors as predictors of language ability in the narratives of sequential Spanish-English bilinguals: a cross-language comparison.

    PubMed

    Jacobson, Peggy F; Walden, Patrick R

    2013-08-01

    This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Narrative samples were obtained from 48 bilingual children in both of their languages using the suggested narrative retell protocol and coding conventions as per Systematic Analysis of Language Transcripts (SALT; Miller & Iglesias, 2008) software. An additional lexical diversity measure, VocD, was also calculated. A series of hierarchical logistic regressions explored the utility of the number of different words, VocD statistic, and word and morpheme omissions in each language for predicting language status. Omission errors turned out to be the best predictors of bilingual language impairment at all ages, and this held true across languages. Although lexical diversity measures did not predict typical or atypical language status, the measures were significantly related to oral language proficiency in English and Spanish. The results underscore the significance of omission errors in bilingual language impairment while simultaneously revealing the limitations of lexical diversity measures as indicators of impairment. The relationship between lexical diversity and oral language proficiency highlights the importance of considering relative language proficiency in bilingual assessment.

  11. Measurement uncertainty relations: characterising optimal error bounds for qubits

    NASA Astrophysics Data System (ADS)

    Bullock, T.; Busch, P.

    2018-07-01

    In standard formulations of the uncertainty principle, two fundamental features are typically cast as impossibility statements: two noncommuting observables cannot in general both be sharply defined (for the same state), nor can they be measured jointly. The pioneers of quantum mechanics were acutely aware and puzzled by this fact, and it motivated Heisenberg to seek a mitigation, which he formulated in his seminal paper of 1927. He provided intuitive arguments to show that the values of, say, the position and momentum of a particle can at least be unsharply defined, and they can be measured together provided some approximation errors are allowed. Only now, nine decades later, a working theory of approximate joint measurements is taking shape, leading to rigorous and experimentally testable formulations of associated error tradeoff relations. Here we briefly review this new development, explaining the concepts and steps taken in the construction of optimal joint approximations of pairs of incompatible observables. As a case study, we deduce measurement uncertainty relations for qubit observables using two distinct error measures. We provide an operational interpretation of the error bounds and discuss some of the first experimental tests of such relations.

  12. Mathematical analysis study for radar data processing and enhancement. Part 2: Modeling of propagation path errors

    NASA Technical Reports Server (NTRS)

    James, R.; Brownlow, J. D.

    1985-01-01

    A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. This report is in two parts. This is part 2, a discussion of the modeling of propagation path errors.

  13. Boundary overlap for medical image segmentation evaluation

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina

    2017-03-01

    All medical image segmentation algorithms need to be validated and compared, and yet no evaluation framework is widely accepted within the imaging community. Collections of segmentation results often need to be compared and ranked by their effectiveness. Evaluation measures which are popular in the literature are based on region overlap or boundary distance. None of these are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, shape) but no single measure covers all error types. We introduce a new family of measures, with hybrid characteristics. These measures quantify similarity/difference of segmented regions by considering their overlap around the region boundaries. This family is more sensitive than other measures in the literature to combinations of segmentation error types. We compare measure performance on collections of segmentation results sourced from carefully compiled 2D synthetic data, and also on 3D medical image volumes. We show that our new measure: (1) penalises errors successfully, especially those around region boundaries; (2) gives a low similarity score when existing measures disagree, thus avoiding overly inflated scores; and (3) scores segmentation results over a wider range of values. We consider a representative measure from this family and the effect of its only free parameter on error sensitivity, typical value range, and running time.
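
    A boundary-overlap measure in the spirit described above can be sketched as a Dice coefficient restricted to narrow bands around each region's boundary; the band radius plays the role of the single free parameter mentioned in the abstract. This is a simplified stand-in rather than the authors' exact definition.

      import numpy as np
      from scipy.ndimage import binary_dilation, binary_erosion

      def boundary_band(mask, radius):
          """Pixels within `radius` of the region boundary (inside or outside)."""
          grown = binary_dilation(mask, iterations=radius)
          shrunk = binary_erosion(mask, iterations=radius)
          return grown & ~shrunk

      def boundary_overlap_dice(seg, ref, radius=2):
          """Dice coefficient computed only over the union of the two boundary bands."""
          band = boundary_band(seg, radius) | boundary_band(ref, radius)
          a, b = seg & band, ref & band
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      # Toy example: a square reference region and a slightly shifted segmentation.
      ref = np.zeros((64, 64), dtype=bool); ref[20:44, 20:44] = True
      seg = np.zeros_like(ref);             seg[22:46, 21:45] = True
      print(f"boundary-overlap Dice: {boundary_overlap_dice(seg, ref):.3f}")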

  14. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.

  15. Evaluation of Two Computational Techniques of Calculating Multipath Using Global Positioning System Carrier Phase Measurements

    NASA Technical Reports Server (NTRS)

    Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.

    1996-01-01

    Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier phase measurements due to certain multipath-producing objects. The first is a rigorous computational electromagnetics technique called the Geometric Theory of Diffraction (GTD); the other is a simple ray tracing method. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code compared to the measured results better than the ray tracing technique. The agreement was good, demonstrating that the phase errors due to multipath can be modeled and characterized using the GTD technique, and characterized to a lesser fidelity using the DECAT technique. However, some discrepancies were observed. Most of the discrepancies occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers, located near the GPS antennas, can produce phase shifts of greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.

  16. Growth models and the expected distribution of fluctuating asymmetry

    USGS Publications Warehouse

    Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John

    2003-01-01

    Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.
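
    The recommendation to use differences of logarithms under multiplicative error can be checked numerically: when individuals vary in size and growth error is lognormal, the raw left-right difference is leptokurtic (a scale mixture of near-normal differences), while the log-transformed difference is approximately normal. A small illustration with arbitrary parameters follows.

      import numpy as np
      from scipy.stats import kurtosis

      rng = np.random.default_rng(1)
      n = 100_000

      # Individuals differ in overall size; growth errors are multiplicative (lognormal)
      # and correlated between the left and right sides.
      size_mu = rng.normal(2.0, 0.5, n)                 # per-individual log-size
      sigma, rho = 0.10, 0.8
      cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
      err = rng.multivariate_normal([0.0, 0.0], cov, size=n)
      left = np.exp(size_mu + err[:, 0])
      right = np.exp(size_mu + err[:, 1])

      raw_asym = left - right                           # variance scales with size -> leptokurtic
      log_asym = np.log(left) - np.log(right)           # size cancels -> approximately normal

      print(f"excess kurtosis, l - r        : {kurtosis(raw_asym):6.2f}")
      print(f"excess kurtosis, log l - log r: {kurtosis(log_asym):6.2f}")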

  17. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  18. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  19. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
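
    The headline result that biases matter more than random errors for cumulative snow variables can be illustrated with a toy accumulation model: a persistent precipitation bias accumulates in peak snow water equivalent, while zero-mean random errors largely cancel over a season. This sketch is not the Utah Energy Balance model or the Sobol' analysis used in the study; it only demonstrates the bias-versus-noise contrast.

      import numpy as np

      rng = np.random.default_rng(2)
      days = 180
      precip = rng.gamma(shape=0.5, scale=4.0, size=days)     # hypothetical daily snowfall (mm)

      def peak_swe(p):
          """Toy accumulation-only model: peak snow water equivalent = total snowfall."""
          return p.sum()

      truth = peak_swe(precip)
      biased = peak_swe(precip * 1.2)                                        # +20% systematic bias
      noisy = peak_swe(precip * (1.0 + rng.normal(0.0, 0.2, days)))          # 20% zero-mean random error

      print(f"true peak SWE        : {truth:7.1f} mm")
      print(f"with +20% bias       : {biased:7.1f} mm ({100 * (biased / truth - 1):+.1f}%)")
      print(f"with 20% random error: {noisy:7.1f} mm ({100 * (noisy / truth - 1):+.1f}%)")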

  20. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  1. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder

    PubMed Central

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths’ performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD. PMID:29075227

  2. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder.

    PubMed

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD.

  3. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
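
    Much of the sensitivity of derived pressure to random velocity error enters through differentiation, which amplifies noise as the grid spacing shrinks. The sketch below shows only that propagation step on a synthetic one-dimensional velocity profile with central differences; it is not one of the pressure estimation techniques compared in the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      def gradient_rms_error(dx, noise_std, n=2000):
          """RMS error of a central-difference derivative of a noisy sinusoidal velocity profile."""
          x = np.arange(n) * dx
          u_true = np.sin(2 * np.pi * x / 50.0)                # arbitrary smooth profile
          u_meas = u_true + rng.normal(0.0, noise_std, n)      # PIV-like random error
          return np.sqrt(np.mean((np.gradient(u_meas, dx) - np.gradient(u_true, dx)) ** 2))

      for dx in (2.0, 1.0, 0.5, 0.25):
          err = gradient_rms_error(dx, noise_std=0.02)
          print(f"dx = {dx:4.2f} -> RMS gradient error = {err:.4f} (~ noise_std / (sqrt(2) * dx))")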

  4. Behavioural and neural basis of anomalous motor learning in children with autism.

    PubMed

    Marko, Mollie K; Crocetti, Deana; Hulst, Thomas; Donchin, Opher; Shadmehr, Reza; Mostofsky, Stewart H

    2015-03-01

    Autism spectrum disorder is a developmental disorder characterized by deficits in social and communication skills and repetitive and stereotyped interests and behaviours. Although not part of the diagnostic criteria, individuals with autism experience a host of motor impairments, potentially due to abnormalities in how they learn motor control throughout development. Here, we used behavioural techniques to quantify motor learning in autism spectrum disorder, and structural brain imaging to investigate the neural basis of that learning in the cerebellum. Twenty children with autism spectrum disorder and 20 typically developing control subjects, aged 8-12, made reaching movements while holding the handle of a robotic manipulandum. In random trials the reach was perturbed, resulting in errors that were sensed through vision and proprioception. The brain learned from these errors and altered the motor commands on the subsequent reach. We measured learning from error as a function of the sensory modality of that error, and found that children with autism spectrum disorder outperformed typically developing children when learning from errors that were sensed through proprioception, but underperformed typically developing children when learning from errors that were sensed through vision. Previous work had shown that this learning depends on the integrity of a region in the anterior cerebellum. Here we found that the anterior cerebellum, extending into lobule VI, and parts of lobule VIII were smaller than normal in children with autism spectrum disorder, with a volume that was predicted by the pattern of learning from visual and proprioceptive errors. We suggest that the abnormal patterns of motor learning in children with autism spectrum disorder, showing an increased sensitivity to proprioceptive error and a decreased sensitivity to visual error, may be associated with abnormalities in the cerebellum. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Calibration system for radon EEC measurements.

    PubMed

    Mostafa, Y A M; Vasyanovich, M; Zhukovsky, M; Zaitceva, N

    2015-06-01

    The measurement of radon equivalent equilibrium concentration (EECRn) is a very simple and quick technique for estimating radon progeny levels in dwellings or workplaces. The most typical methods of EECRn measurement are alpha radiometry and alpha spectrometry. In such techniques, the influence of alpha particle absorption in filters and filter effectiveness should be taken into account. In the authors' work, it is demonstrated that a more precise and less complicated calibration of EECRn-measuring equipment can be conducted by using a gamma spectrometer as a reference measuring device. It was demonstrated that for this calibration technique the systematic error does not exceed 3%. The random error of (214)Bi activity measurements is in the range 3-6%. In general, both of these errors can be decreased. The measurements of EECRn by gamma spectrometry and improved alpha radiometry are in good agreement, but a systematic shift between average values can be observed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Evaluation of a surface/vegetation parameterization using satellite measurements of surface temperature

    NASA Technical Reports Server (NTRS)

    Taconet, O.; Carlson, T.; Bernard, R.; Vidal-Madjar, D.

    1986-01-01

    Ground measurements of surface-sensible heat flux and soil moisture for a wheat-growing area of Beauce in France were compared with the values derived by inverting two boundary layer models with a surface/vegetation formulation using surface temperature measurements made from NOAA-AVHRR. The results indicated that the trends in the surface heat fluxes and soil moisture observed during the 5 days of the field experiment were effectively captured by the inversion method using the remotely measured radiative temperatures and either of the two boundary layer methods, both of which contain nearly identical vegetation parameterizations described by Taconet et al. (1986). The sensitivity of the results to errors in the initial sounding values or measured surface temperature was tested by varying the initial sounding temperature, dewpoint, and wind speed and the measured surface temperature by amounts corresponding to typical measurement error. In general, the vegetation component was more sensitive to error than the bare soil model.

  7. Validity and reliability of the Hexoskin® wearable biometric vest during maximal aerobic power testing in elite cyclists.

    PubMed

    Elliot, Catherine A; Hamlin, Michael J; Lizamore, Catherine A

    2017-07-28

    The purpose of this study was to investigate the validity and reliability of the Hexoskin® vest for measuring respiration and heart rate (HR) in elite cyclists during a progressive test to exhaustion. Ten male elite cyclists (age 28.8 ± 12.5 yr, height 179.3 ± 6.0 cm, weight 73.2 ± 9.1 kg, V̇O2max 60.7 ± 7.8 ml·kg−1·min−1; mean ± SD) conducted a maximal aerobic cycle ergometer test using a ramped protocol (starting at 100 W with 25 W increments each min to failure) on two separate occasions over a 3-4 day period. Compared to the criterion measure (Metamax 3B), the Hexoskin® vest showed mainly small typical errors (1.3-6.2%) for HR and breathing frequency (f), but larger typical errors (9.5-19.6%) for minute ventilation (V̇E) during the progressive test to exhaustion. The typical error indicating the reliability of the Hexoskin® vest at moderate intensity exercise between tests was small for HR (2.6-2.9%) and f (2.5-3.2%) but slightly larger for V̇E (5.3-7.9%). We conclude that the Hexoskin® vest is sufficiently valid and reliable for measurements of HR and f in elite athletes during high intensity cycling, but the calculated V̇E value the Hexoskin® vest produces during such exercise should be used with caution due to the lower validity and reliability of this variable.

  8. Horizontal electric fields from lightning return strokes

    NASA Technical Reports Server (NTRS)

    Thomson, E. M.; Medelius, P. J.; Rubinstein, M.; Uman, M. A.; Johnson, J.

    1988-01-01

    An experiment to measure simultaneously the wideband horizontal and vertical electric fields from lightning return strokes is described. Typical wave shapes of the measured horizontal and vertical fields are presented, and the horizontal fields are characterized. The measured horizontal fields are compared with calculated horizontal fields obtained by applying the wavetilt formula to the vertical fields. The limitations and sources of error in the measurement technique are discussed.

  9. Trajectory prediction for ballistic missiles based on boost-phase LOS measurements

    NASA Astrophysics Data System (ADS)

    Yeddanapudi, Murali; Bar-Shalom, Yaakov

    1997-10-01

    This paper addresses the problem of the estimation of the trajectory of a tactical ballistic missile using line of sight (LOS) measurements from one or more passive sensors (typically satellites). The major difficulties of this problem include: the estimation of the unknown time of launch, incorporation of (inaccurate) target thrust profiles to model the target dynamics during the boost phase and an overall ill-conditioning of the estimation problem due to poor observability of the target motion via the LOS measurements. We present a robust estimation procedure based on the Levenberg-Marquardt algorithm that provides both the target state estimate and error covariance taking into consideration the complications mentioned above. An important consideration in the defense against tactical ballistic missiles is the determination of the target position and error covariance at the acquisition range of a surveillance radar in the vicinity of the impact point. We present a systematic procedure to propagate the target state and covariance to a nominal time, when it is within the detection range of a surveillance radar, to obtain a cueing volume. Monte Carlo simulation studies on typical single- and two-sensor scenarios indicate that the proposed algorithms are accurate in terms of the estimates, and the estimator-calculated covariances are consistent with the errors.
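
    As a loose, much-simplified illustration of batch trajectory estimation from angle-only data, the sketch below fits a two-dimensional drag-free, constant-gravity trajectory with an unknown launch time to noisy line-of-sight elevation angles from a single fixed sensor, using SciPy's Levenberg-Marquardt solver. The dynamics, geometry, noise level, and initial guess are all assumptions; the real problem involves boost-phase thrust profiles and satellite sensors, and because of the observability issues discussed above, convergence depends strongly on the quality of the initial guess.

      import numpy as np
      from scipy.optimize import least_squares

      g = 9.81
      rng = np.random.default_rng(4)

      def position(params, t):
          """2-D drag-free trajectory; params = [x0, z0, vx, vz, t_launch]."""
          x0, z0, vx, vz, t0 = params
          dt = np.clip(t - t0, 0.0, None)
          return x0 + vx * dt, z0 + vz * dt - 0.5 * g * dt**2

      def los_residuals(params, t, angles, sensor):
          """Predicted minus measured line-of-sight elevation angles."""
          x, z = position(params, t)
          return np.arctan2(z - sensor[1], x - sensor[0]) - angles

      # Synthetic truth and noisy LOS measurements from a sensor at the origin.
      truth = np.array([40e3, 0.0, 300.0, 900.0, 5.0])
      t = np.linspace(10.0, 90.0, 60)
      sensor = (0.0, 0.0)
      x, z = position(truth, t)
      angles = np.arctan2(z - sensor[1], x - sensor[0]) + rng.normal(0.0, 1e-3, t.size)

      guess = np.array([30e3, 0.0, 200.0, 700.0, 0.0])
      fit = least_squares(los_residuals, guess, args=(t, angles, sensor), method="lm")
      print("estimated [x0, z0, vx, vz, t_launch]:", np.round(fit.x, 1))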

  10. Error and uncertainty in Raman thermal conductivity measurements

    DOE PAGES

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  11. Disrupted prediction errors index social deficits in autism spectrum disorder

    PubMed Central

    Balsters, Joshua H; Apps, Matthew A J; Bolis, Dimitris; Lehner, Rea; Gallagher, Louise; Wenderoth, Nicole

    2017-01-01

    Abstract Social deficits are a core symptom of autism spectrum disorder; however, the perturbed neural mechanisms underpinning these deficits remain unclear. It has been suggested that social prediction errors—coding discrepancies between the predicted and actual outcome of another’s decisions—might play a crucial role in processing social information. While the gyral surface of the anterior cingulate cortex signalled social prediction errors in typically developing individuals, this crucial social signal was altered in individuals with autism spectrum disorder. Importantly, the degree to which social prediction error signalling was aberrant correlated with diagnostic measures of social deficits. Effective connectivity analyses further revealed that, in typically developing individuals but not in autism spectrum disorder, the magnitude of social prediction errors was driven by input from the ventromedial prefrontal cortex. These data provide a novel insight into the neural substrates underlying autism spectrum disorder social symptom severity, and further research into the gyral surface of the anterior cingulate cortex and ventromedial prefrontal cortex could provide more targeted therapies to help ameliorate social deficits in autism spectrum disorder. PMID:28031223

  12. Use of autocorrelation scanning in DNA copy number analysis.

    PubMed

    Zhang, Liangcai; Zhang, Li

    2013-11-01

    Data quality is a critical issue in the analyses of DNA copy number alterations obtained from microarrays. It is commonly assumed that copy number alteration data can be modeled as piecewise constant and the measurement errors of different probes are independent. However, these assumptions do not always hold in practice. In some published datasets, we find that measurement errors are highly correlated between probes that interrogate nearby genomic loci, and the piecewise-constant model does not fit the data well. The correlated errors cause problems in downstream analysis, leading to a large number of DNA segments falsely identified as having copy number gains and losses. We developed a simple tool, called autocorrelation scanning profile, to assess the dependence of measurement error between neighboring probes. Autocorrelation scanning profile can be used to check data quality and refine the analysis of DNA copy number data, which we demonstrate in some typical datasets. Supplementary data are available at Bioinformatics online.
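
    A minimal version of such a scan is a sliding-window lag-1 autocorrelation of the probe-level log-ratios along the genome: windows in which the "noise" is strongly correlated flag regions where the independent-error assumption breaks down. This is a generic sketch, not the authors' implementation.

      import numpy as np

      def lag1_autocorr(x):
          """Lag-1 autocorrelation of a 1-D array."""
          x = x - x.mean()
          return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

      def autocorr_scan(log_ratios, window=100, step=25):
          """Sliding-window lag-1 autocorrelation along ordered probes."""
          starts = range(0, len(log_ratios) - window + 1, step)
          return np.array([lag1_autocorr(log_ratios[s:s + window]) for s in starts])

      # Synthetic example: independent probe noise versus spatially correlated noise.
      rng = np.random.default_rng(5)
      independent = rng.normal(0.0, 0.2, 2000)
      correlated = np.convolve(rng.normal(0.0, 0.2, 2000), np.ones(5) / 5, mode="same")

      print(f"median scan value, independent errors: {np.median(autocorr_scan(independent)):.2f}")
      print(f"median scan value, correlated errors : {np.median(autocorr_scan(correlated)):.2f}")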

  13. Semantic Typicality Effects in Acquired Dyslexia: Evidence for Semantic Impairment in Deep Dyslexia.

    PubMed

    Riley, Ellyn A; Thompson, Cynthia K

    2010-06-01

    BACKGROUND: Acquired deep dyslexia is characterized by impairment in grapheme-phoneme conversion and production of semantic errors in oral reading. Several theories have attempted to explain the production of semantic errors in deep dyslexia, some proposing that they arise from impairments in both grapheme-phoneme and lexical-semantic processing, and others proposing that such errors stem from a deficit in phonological production. Whereas both views have gained some acceptance, the limited evidence available does not clearly eliminate the possibility that semantic errors arise from a lexical-semantic input processing deficit. AIMS: To investigate semantic processing in deep dyslexia, this study examined the typicality effect in deep dyslexic individuals, phonological dyslexic individuals, and controls using an online category verification paradigm. This task requires explicit semantic access without speech production, focusing observation on semantic processing from written or spoken input. METHODS & PROCEDURES: To examine the locus of semantic impairment, the task was administered in visual and auditory modalities with reaction time as the primary dependent measure. Nine controls, six phonological dyslexic participants, and five deep dyslexic participants completed the study. OUTCOMES & RESULTS: Controls and phonological dyslexic participants demonstrated a typicality effect in both modalities, while deep dyslexic participants did not demonstrate a typicality effect in either modality. CONCLUSIONS: These findings suggest that deep dyslexia is associated with a semantic processing deficit. Although this does not rule out the possibility of concomitant deficits in other modules of lexical-semantic processing, this finding suggests a direction for treatment of deep dyslexia focused on semantic processing.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew

    Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST for a 1.5 MW turbine. The impact of lidar turbulence error on the predicted power from these different models is examined to determine the degree of turbulence measurement accuracy needed for accurate power prediction.

  15. Causal Inference for fMRI Time Series Data with Systematic Errors of Measurement in a Balanced On/Off Study of Social Evaluative Threat.

    PubMed

    Sobel, Michael E; Lindquist, Martin A

    2014-07-01

    Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates" in a linear mixed model that estimates both the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.
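
    The adjustment step described above can be sketched with a linear mixed model: regress the BOLD response on the task indicator plus measured noise covariates, with a random intercept per run. The data frame, column names, and effect sizes below are hypothetical, and the model is only a schematic of the approach, not the authors' estimation code.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(6)

      # Hypothetical single-subject data: runs of BOLD measurements, a task on/off indicator,
      # and one task-dependent "noise covariate" (e.g., a motion parameter).
      n_runs, n_vols = 4, 200
      df = pd.DataFrame({
          "run": np.repeat(np.arange(n_runs), n_vols),
          "task": np.tile(np.r_[np.zeros(n_vols // 2), np.ones(n_vols // 2)], n_runs),
      })
      df["motion"] = 0.6 * df["task"] + rng.normal(0.0, 1.0, len(df))   # stimulus-dependent systematic error source
      df["bold"] = 0.8 * df["task"] + 0.5 * df["motion"] + rng.normal(0.0, 1.0, len(df))

      # Random-intercept model; omitting `motion` here would bias the estimated task effect
      # because the systematic error is correlated with the task.
      result = smf.mixedlm("bold ~ task + motion", df, groups=df["run"]).fit()
      print(result.params[["task", "motion"]])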

  16. Error analysis of multi-needle Langmuir probe measurement technique.

    PubMed

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
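
    For context, the standard m-NLP analysis alluded to here derives electron density from the slope of the squared collected currents versus bias voltage for needles held at several fixed potentials (in the orbital-motion-limited picture, the squared current is approximately linear in bias, and density scales with the square root of that slope). The sketch below shows only that fitting step with made-up numbers and reports a relative density, omitting the geometry-dependent proportionality constant; it does not reproduce the paper's error analysis or its proposed adjustment.

      import numpy as np

      # Hypothetical m-NLP data: collected currents (A) at four fixed bias voltages (V).
      bias = np.array([2.5, 4.0, 5.5, 7.0])
      current = np.array([1.10e-6, 1.38e-6, 1.60e-6, 1.80e-6])

      # Fit I^2 vs. V; in the ideal OML picture electron density is proportional to sqrt(slope),
      # with a constant set by probe geometry and physical constants (omitted here).
      slope, intercept = np.polyfit(bias, current**2, 1)
      relative_density = np.sqrt(slope)

      print(f"dI^2/dV                 : {slope:.3e} A^2/V")
      print(f"electron density (rel.) : {relative_density:.3e} arbitrary units")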

  17. Error analysis of multi-needle Langmuir probe measurement technique

    NASA Astrophysics Data System (ADS)

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  18. What Randomized Benchmarking Actually Measures

    DOE PAGES

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...

    2017-09-28

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
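
    Whatever r is taken to mean, the number reported by RB comes from fitting an exponential decay to average survival probabilities versus sequence length. A minimal fitting sketch on synthetic single-qubit data follows; the conversion r = (1 - p)(d - 1)/d with d = 2 is the standard single-qubit convention.

      import numpy as np
      from scipy.optimize import curve_fit

      def rb_decay(m, A, B, p):
          """Standard RB model: average survival probability versus sequence length m."""
          return A * p**m + B

      # Synthetic survival probabilities for a single qubit (d = 2).
      rng = np.random.default_rng(7)
      lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
      p_true = 0.995
      survival = rb_decay(lengths, 0.5, 0.5, p_true) + rng.normal(0.0, 0.005, lengths.size)

      (A, B, p), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.5, 0.99])
      d = 2
      r = (1 - p) * (d - 1) / d
      print(f"fitted decay parameter p: {p:.4f}")
      print(f"RB error rate r         : {r:.2e}")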

  19. Determination of the optical properties of semi-infinite turbid media from frequency-domain reflectance close to the source.

    PubMed

    Kienle, A; Patterson, M S

    1997-09-01

    We investigate theoretically the errors in determining the reduced scattering and absorption coefficients of semi-infinite turbid media from frequency-domain reflectance measurements made at small distances between the source and the detector(s). The errors are due to the uncertainties in the measurement of the phase, the modulation and the steady-state reflectance as well as to the diffusion approximation which is used as a theoretical model to describe light propagation in tissue. Configurations using one and two detectors are examined for the measurement of the phase and the modulation and for the measurement of the phase and the steady-state reflectance. Three solutions of the diffusion equation are investigated. We show that measurements of the phase and the steady-state reflectance at two different distances are best suited for the determination of the optical properties close to the source. For this arrangement the errors in the absorption coefficient due to typical uncertainties in the measurement are greater than those resulting from the application of the diffusion approximation at a modulation frequency of 200 MHz. A Monte Carlo approach is also examined; this avoids the errors due to the diffusion approximation.

  20. Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality

    USGS Publications Warehouse

    Gaeuman, David; Jacobson, Robert B.

    2005-01-01

    When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by misalignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass misalignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
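
    The compass-misalignment effect quantified above can be reproduced with a short vector calculation. Assuming the platform velocity comes from an earth-referenced source (e.g., GPS) while the relative water velocity is rotated through the (erroneous) compass heading, the residual error grows with the instrument's speed relative to the water. The numbers below are arbitrary and the error model is deliberately simplified.

      import numpy as np

      def rotate(v, angle_deg):
          """Rotate a 2-D vector counter-clockwise by angle_deg."""
          a = np.radians(angle_deg)
          R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
          return R @ v

      # Hypothetical earth-referenced velocities (m/s): slow water, faster boat.
      v_water = np.array([0.30, 0.00])
      v_boat = np.array([1.00, 0.50])
      v_rel = v_water - v_boat          # water velocity relative to the moving instrument

      for err_deg in (0.5, 1.0, 2.0, 5.0):
          # A heading error mis-rotates the relative velocity before the boat velocity is added back.
          v_water_est = rotate(v_rel, err_deg) + v_boat
          err = np.linalg.norm(v_water_est - v_water)
          print(f"compass error {err_deg:3.1f} deg -> water velocity error {err:.3f} m/s "
                f"({100 * err / np.linalg.norm(v_water):.0f}% of water speed)")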

  1. Thermocouple Errors when Mounted on Cylindrical Surfaces in Abnormal Thermal Environments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakos, James T.; Suo-Anttila, Jill M.; Zepper, Ethan T.

    Mineral-insulated, metal-sheathed, Type-K thermocouples are used to measure the temperature of various items in high-temperature environments, often exceeding 1000 °C (1273 K). The thermocouple wires (chromel and alumel) are protected from the harsh environments by an Inconel sheath and magnesium oxide (MgO) insulation. The sheath and insulation are required for reliable measurements. Due to the sheath and MgO insulation, the temperature registered by the thermocouple is not the temperature of the surface of interest. In some cases, the error incurred is large enough to be of concern because these data are used for model validation, and thus the uncertainties of the data need to be well documented. This report documents the error using 0.062" and 0.040" diameter Inconel sheathed, Type-K thermocouples mounted on cylindrical surfaces (inside of a shroud, outside and inside of a mock test unit). After an initial transient, the thermocouple bias errors typically range only about ±1-2% of the reading in K. After all of the uncertainty sources have been included, the total uncertainty to 95% confidence, for shroud or test unit TCs in abnormal thermal environments, is about ±2% of the reading in K, lower than the ±3% typically used for flat shrouds. Recommendations are provided in Section 6 to facilitate interpretation and use of the results.

  2. The Analysis of Ratings Using Generalizability Theory for Student Outcome Assessment. AIR 1988 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Erwin, T. Dary

    Rating scales are a typical method for evaluating a student's performance in outcomes assessment. The analysis of the quality of information from rating scales poses special measurement problems when researchers work with faculty in their development. Generalizability measurement theory offers a set of techniques for estimating errors or…

  3. Droplet Sizing Research.

    DTIC Science & Technology

    1985-04-15

    A method to measure the size and velocity of individual particles in a flow is discussed. The measurement volume is defined by the intersection of apertures in front of two... Applications include aerosol studies, flue gas desulfurization, and spray drying. Results are presented for controlled monodisperse sprays and compared to flash photographs. Typical errors between predicted and measured sizes are less than 5%.

  4. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE PAGES

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...

    2017-01-07

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
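
    The way independent error components combine into a plot-level uncertainty can be illustrated with a small quadrature sketch (Python). The component values below are placeholders chosen for illustration, not the study's estimates; the study's actual variance shares are those quoted above.

    import math

    # Hypothetical relative (1-sigma) error components for one field plot
    components = {
        "measurement": 0.05,   # tree-level measurement error
        "allometric":  0.10,   # allometric model error
        "co_location": 0.15,   # field / remote-sensing spatial mismatch
        "temporal":    0.12,   # different acquisition dates
    }

    # Assuming the components are independent, their relative variances add
    total = math.sqrt(sum(v ** 2 for v in components.values()))
    shares = {k: v ** 2 / total ** 2 for k, v in components.items()}

    print(f"total relative uncertainty: {total:.1%}")
    for name, share in shares.items():
        print(f"  {name}: {share:.0%} of total variance")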

  5. Estimation of an accuracy index of a diagnostic biomarker when the reference biomarker is continuous and measured with error.

    PubMed

    Wu, Mixia; Zhang, Dianchen; Liu, Aiyi

    2016-01-01

    New biomarkers continue to be developed for the purpose of diagnosis, and their diagnostic performances are typically compared with an existing reference biomarker used for the same purpose. Considerable amounts of research have focused on receiver operating characteristic (ROC) curve analysis when the reference biomarker is dichotomous. In the situation where the reference biomarker is measured on a continuous scale and dichotomization is not practically appealing, an index was proposed in the literature to measure the accuracy of a continuous biomarker, which is essentially a linear function of the popular Kendall's tau. We consider the issue of estimating such an accuracy index when the continuous reference biomarker is measured with errors. We first investigate the impact of measurement errors on the accuracy index, and then propose methods to correct for the bias due to measurement errors. Simulation results show the effectiveness of the proposed estimator in reducing biases. The methods are exemplified with hemoglobin A1c measurements obtained from both a central lab and a local lab in a behavioral intervention study for families of youth with type 1 diabetes, where the aim was to evaluate the accuracy of mean readings from metered blood glucose monitoring against the centrally measured hemoglobin A1c.
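
    Because the index is described as a linear function of Kendall's tau, its attenuation under measurement error can be demonstrated with a short simulation (Python). The rescaling (tau + 1)/2 used below is only an illustrative mapping to [0, 1], not necessarily the index proposed in the paper, and the bias-correction step itself is omitted.

    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(0)
    n = 200
    reference = rng.normal(size=n)                             # continuous reference biomarker
    new_marker = reference + rng.normal(scale=0.5, size=n)     # correlated new biomarker

    tau_true, _ = kendalltau(new_marker, reference)            # error-free reference
    reference_obs = reference + rng.normal(scale=0.8, size=n)  # classical additive error
    tau_obs, _ = kendalltau(new_marker, reference_obs)

    index = lambda tau: (tau + 1) / 2                          # illustrative accuracy index
    print(index(tau_true), index(tau_obs))                     # measurement error attenuates the index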

  6. Attenuation Compensation of Ultrasonic Wave in Soft Tissue for Acoustic Impedance Measurement of In vivo Bone by Transducer Vibration Method

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Masasumi; Nakamura, Yuuta; Ishiguro, Masataka; Moriya, Tadashi

    2007-07-01

    In this paper, we describe a method of compensating for the attenuation of the ultrasound caused by soft tissue in the transducer vibration method for the measurement of the acoustic impedance of in vivo bone. In the in vivo measurement, the acoustic impedance of bone is measured through soft tissue; therefore, the amplitude of the ultrasound reflected from the bone is attenuated. This attenuation causes an error of the order of -20 to -30% when the acoustic impedance is determined from the measured signals. To compensate for the attenuation, the attenuation coefficient and length of the soft tissue are measured by the transducer vibration method. In an experiment using a phantom, this method allows the acoustic impedance to be measured with an error typically as small as -8 to 10%.
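
    A minimal sketch of the compensation step (Python), assuming a simple two-way exponential attenuation model with the attenuation coefficient and tissue path length obtained separately, as in the method above; the numbers are illustrative only.

    import math

    def compensate_amplitude(a_measured, alpha_np_per_m, path_length_m):
        # Undo the round-trip attenuation of an echo that crossed a soft-tissue
        # layer of thickness `path_length_m` with attenuation coefficient
        # `alpha_np_per_m` (nepers per metre).
        return a_measured * math.exp(2.0 * alpha_np_per_m * path_length_m)

    # Illustrative numbers: 3 cm of tissue, alpha = 5 Np/m
    print(compensate_amplitude(0.74, 5.0, 0.03))  # ~1.0, the unattenuated amplitude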

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
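
    For context, the conventional zeroth-order RB analysis that produces r fits the survival probabilities to an exponential and converts the decay parameter to an error rate; a minimal sketch (Python, synthetic data, single-qubit dimension d = 2 assumed) is shown below. The paper's argument concerns how this r should, and should not, be interpreted.

    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, B, f):
        # Zeroth-order randomized-benchmarking model: P(m) = A * f**m + B
        return A * f ** m + B

    lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    ideal = rb_decay(lengths, 0.5, 0.5, 0.995)
    probs = ideal + np.random.default_rng(1).normal(scale=0.003, size=lengths.size)

    (A, B, f), _ = curve_fit(rb_decay, lengths, probs, p0=(0.5, 0.5, 0.99))
    d = 2                              # single-qubit Hilbert-space dimension
    r = (d - 1) * (1 - f) / d          # conventional RB error rate
    print(f"decay parameter f = {f:.4f}, RB number r = {r:.2e}")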

  8. Influence of precision of emission characteristic parameters on model prediction error of VOCs/formaldehyde from dry building material.

    PubMed

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.
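
    The kind of sensitivity question asked here can be sketched with a one-at-a-time perturbation check (Python). The `toy_model` below is a deliberately arbitrary placeholder, not the mass-transfer model used in the paper; the point is only the mechanics of relating a relative error in C0, D, or K to a relative error in the predicted concentration.

    def sensitivity(model, params, rel_perturbation=0.10):
        # Relative change in the model output when each parameter is
        # perturbed by `rel_perturbation`, one at a time.
        base = model(params)
        result = {}
        for name in params:
            perturbed = dict(params)
            perturbed[name] *= 1.0 + rel_perturbation
            result[name] = (model(perturbed) - base) / base
        return result

    # Placeholder model (NOT the paper's): linear in C0 and D, inverse in K
    def toy_model(p):
        return p["C0"] * p["D"] / p["K"]

    print(sensitivity(toy_model, {"C0": 100.0, "D": 1e-10, "K": 0.5}))
    # approximately {'C0': 0.10, 'D': 0.10, 'K': -0.091}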

  9. Spirality: A Novel Way to Measure Spiral Arm Pitch Angle

    NASA Astrophysics Data System (ADS)

    Shields, Douglas W.; Boe, Benjamin; Henderson, Casey L.; Hartley, Matthew; Davis, Benjamin L.; Pour Imani, Hamed; Kennefick, Daniel; Kennefick, Julia D.

    2015-01-01

    We present the MATLAB code Spirality, a novel method for measuring spiral arm pitch angles by fitting galaxy images to spiral templates of known pitch. For a given pitch angle template, the mean pixel value is found along each of typically 1000 spiral axes. The fitting function, which shows a local maximum at the best-fit pitch angle, is the variance of these means. Error bars are found by varying the inner radius of the measurement annulus and finding the standard deviation of the best-fit pitches. Computation time is typically on the order of 2 minutes per galaxy, assuming at least 8 GB of working memory. We tested the code using 128 synthetic spiral images of known pitch. These spirals varied in the number of spiral arms, pitch angle, degree of logarithmicity, radius, SNR, inclination angle, bar length, and bulge radius. A correct result is defined as a result that matches the true pitch within the error bars, with error bars no greater than ±7°. For the non-logarithmic spiral sample, the correct answer is similarly defined, with the mean pitch as a function of radius in place of the true pitch. For all synthetic spirals, correct results were obtained so long as SNR > 0.25, the bar length was no more than 60% of the spiral's diameter (when the bar was included in the measurement), the input center of the spiral was no more than 6% of the spiral radius away from the true center, and the inclination angle was no more than 30°. The synthetic spirals were not deprojected prior to measurement. The code produced the correct result for all barred spirals when the measurement annulus was placed outside the bar. Additionally, we compared the code's results against 2DFFT results for 203 visually selected spiral galaxies in GOODS North and South. Among the entire sample, Spirality's error bars overlapped 2DFFT's error bars 64% of the time. Source code is available by email request from the primary author.
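
    The core of the fitting function described above can be sketched in a few lines (Python). This simplified version ignores deprojection and the details of the measurement annulus, and the parameter names are illustrative; it only shows the variance-of-means statistic computed along logarithmic spiral axes for one candidate pitch.

    import numpy as np

    def pitch_fit_statistic(image, center, pitch_deg, n_axes=1000,
                            r_in=10.0, r_out=100.0, n_samples=200):
        # Variance of the mean pixel value taken along each of `n_axes`
        # logarithmic spiral axes r = r_in * exp(theta * tan(pitch));
        # the best-fit pitch maximizes this statistic.
        cy, cx = center
        tan_p = np.tan(np.radians(pitch_deg))
        theta = np.linspace(0.0, np.log(r_out / r_in) / tan_p, n_samples)
        r = r_in * np.exp(theta * tan_p)
        means = []
        for phase in np.linspace(0.0, 2 * np.pi, n_axes, endpoint=False):
            y = np.clip((cy + r * np.sin(theta + phase)).astype(int), 0, image.shape[0] - 1)
            x = np.clip((cx + r * np.cos(theta + phase)).astype(int), 0, image.shape[1] - 1)
            means.append(image[y, x].mean())
        return np.var(means)

    # Usage: scan candidate pitches and keep the one that maximizes the statistic
    # pitches = np.arange(5.0, 45.0, 0.5)
    # best = pitches[np.argmax([pitch_fit_statistic(img, (cy, cx), p) for p in pitches])]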

  10. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude, or laser beam incidence angle increases. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
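
    A first-order propagation sketch (Python) shows why the modeled errors grow with scan angle, altitude, and terrain slope. It is a deliberately simplified geometry, not the full GPS/IMU/scanner/ranger model developed in the study, and the error magnitudes assigned below are illustrative only.

    import numpy as np

    def vertical_uncertainty(rng_m, theta_deg, sigma_range, sigma_angle_deg,
                             sigma_alt, slope_deg=0.0, sigma_horiz=0.0):
        # 1-sigma vertical uncertainty of a single return at range `rng_m`
        # and off-nadir scan angle `theta_deg`, combining (in quadrature)
        # ranging, pointing, and platform-altitude errors; on sloped terrain
        # a horizontal positioning error also maps into elevation error.
        th = np.radians(theta_deg)
        terms = [
            np.cos(th) * sigma_range,                          # ranging error, vertical component
            rng_m * np.sin(th) * np.radians(sigma_angle_deg),  # pointing/attitude error
            sigma_alt,                                         # GPS/IMU altitude error
            sigma_horiz * np.tan(np.radians(slope_deg)),       # slope-induced term
        ]
        return float(np.sqrt(sum(t ** 2 for t in terms)))

    # Illustrative values only (not the Optech ALTM 3100 error budget)
    altitude = 1200.0
    for theta in (0.0, 15.0):
        rng_m = altitude / np.cos(np.radians(theta))
        print("flat terrain, scan angle", theta,
              vertical_uncertainty(rng_m, theta, sigma_range=0.03,
                                   sigma_angle_deg=0.005, sigma_alt=0.05))
    print("30 deg slope, nadir        ",
          vertical_uncertainty(altitude, 0.0, 0.03, 0.005, 0.05,
                               slope_deg=30.0, sigma_horiz=0.20))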

  11. Analysis of a range estimator which uses MLS angle measurements

    NASA Technical Reports Server (NTRS)

    Downing, David R.; Linse, Dennis

    1987-01-01

    A concept that uses the azimuth signal from a microwave landing system (MLS) combined with onboard airspeed and heading data to estimate the horizontal range to the runway threshold is investigated. The absolute range error is evaluated for trajectories typical of General Aviation (GA) and commercial airline operations (CAO). These include constant intercept angles for GA and CAO, and complex curved trajectories for CAO. Range errors of 4000 to 6000 feet at entry into MLS coverage, decreasing to about 1000 feet at runway centerline intercept, are found to be possible for GA operations. For CAO, errors of 2000 feet at entry into MLS coverage, decreasing to 300 feet at runway centerline intercept, are possible.
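
    One simple way to obtain range from azimuth and dead-reckoned motion is bearings-only triangulation: two azimuth measurements plus the displacement flown between them (integrated from airspeed and heading) determine both ranges. The sketch below (Python) illustrates that geometry with made-up numbers; it is not necessarily the estimator analyzed in the report.

    import numpy as np

    def unit(bearing_deg):
        # Unit vector for a bearing measured clockwise from north (runway axis)
        b = np.radians(bearing_deg)
        return np.array([np.sin(b), np.cos(b)])

    def ranges_from_two_azimuths(az1_deg, az2_deg, displacement):
        # Solve R2 * u(az2) - R1 * u(az1) = displacement for the two ranges,
        # where `displacement` is the aircraft motion between the two azimuth
        # measurements, dead-reckoned from airspeed and heading.
        A = np.column_stack((-unit(az1_deg), unit(az2_deg)))
        return np.linalg.solve(A, displacement)

    # Aircraft at 20,000 ft range and 30 deg azimuth flies 3,000 ft due north
    p1 = 20_000 * unit(30.0)
    d = np.array([0.0, 3_000.0])
    p2 = p1 + d
    az2 = np.degrees(np.arctan2(p2[0], p2[1]))
    print(ranges_from_two_azimuths(30.0, az2, d))   # recovers both ranges (ft)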

  12. Profile of refractive errors in European Caucasian children with Autistic Spectrum Disorder; increased prevalence and magnitude of astigmatism.

    PubMed

    Anketell, Pamela M; Saunders, Kathryn J; Gallagher, Stephen; Bailey, Clare; Little, Julie-Anne

    2016-07-01

    Autistic Spectrum Disorder (ASD) is a common neurodevelopmental disorder characterised by impairment of communication, social interaction and repetitive behaviours. Only a small number of studies have investigated fundamental clinical measures of vision including refractive error. The aim of this study was to describe the refractive profile of a population of children with ASD compared to typically developing (TD) children. Refractive error was assessed using the Shin-Nippon NVision-K 5001 open-field autorefractor following the instillation of cyclopentolate hydrochloride 1% eye drops. A total of 128 participants with ASD (mean age 10.9 ± 3.3 years) and 206 typically developing participants (11.5 ± 3.1 years) were recruited. There was no significant difference in median refractive error, either by spherical equivalent or most ametropic meridian, between the ASD and TD groups (spherical equivalent, Mann-Whitney U(307) = 1.15, p = 0.25; most ametropic meridian, U(305) = 0.52, p = 0.60). Median refractive astigmatism was -0.50DC (range 0.00 to -3.50DC) for the ASD group and -0.50DC (range 0.00 to -2.25DC) for the TD group. Magnitude and prevalence of refractive astigmatism (defined as astigmatism ≥1.00DC) were significantly greater in the ASD group compared to the typically developing group (ASD 26%, TD 8%; magnitude U(305) = 3.86, p = 0.0001; prevalence χ²(1) = 17.71, p < 0.0001). This is the first study to describe the refractive profile of a population of European Caucasian children with ASD compared to a TD population of children. Unlike other neurodevelopmental conditions, there was no increased prevalence of spherical refractive errors in ASD but astigmatic errors were significantly greater in magnitude and prevalence. This highlights the need to examine refractive errors in this population. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.

  13. Line-of-Sight Data Link Test Set

    DTIC Science & Technology

    1976-06-01

    ...spheric layer model for layer refraction or a surface reflectivity model for ground reflection paths. Measurement of the channel impulse response... The model is exercised over a path consisting of only a constant direct component; the test would consist of measuring the modem demodulator bit... direct and a fading direct component; the test typically would consist of measuring the bit error rate over a range of average signal-to-noise ratios.

  14. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
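
    A minimal comparison of the two update rules (Python) is sketched below for a single gating variable with fixed x_inf and tau; under that assumption exponential Euler is exact, while the forward-Euler error grows with the step size. The paper's analysis goes further, treating voltage-dependent gating and the effect of voltage-measurement error, so this sketch only illustrates the update formulas themselves.

    import numpy as np

    def euler_step(x, x_inf, tau, dt):
        # Forward-Euler update of dx/dt = (x_inf - x) / tau
        return x + dt * (x_inf - x) / tau

    def exp_euler_step(x, x_inf, tau, dt):
        # Exponential-Euler update; exact when x_inf and tau are constant over the step
        return x_inf + (x - x_inf) * np.exp(-dt / tau)

    x0, x_inf, tau = 0.0, 1.0, 5e-3   # gating variable, steady state, time constant (s)
    for dt in (0.05e-3, 1e-3, 5e-3):  # time steps in seconds
        exact = x_inf + (x0 - x_inf) * np.exp(-dt / tau)
        print(dt,
              euler_step(x0, x_inf, tau, dt) - exact,
              exp_euler_step(x0, x_inf, tau, dt) - exact)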

  15. A novel device using the Nordic hamstring exercise to assess eccentric knee flexor strength: a reliability and retrospective injury study.

    PubMed

    Opar, David A; Piatkowski, Timothy; Williams, Morgan D; Shield, Anthony J

    2013-09-01

    Reliability and case-control injury study. To determine if a novel device designed to measure eccentric knee flexor strength via the Nordic hamstring exercise displays acceptable test-retest reliability; to determine normative values for eccentric knee flexor strength derived from the device in individuals without a history of hamstring strain injury (HSI); and to determine if the device can detect weakness in elite athletes with a previous history of unilateral HSI. HSI and reinjury are the most common cause of lost playing time in a number of sports. Eccentric knee flexor weakness is a major modifiable risk factor for future HSI. However, at present, there is a lack of easily accessible equipment to assess eccentric knee flexor strength. Thirty recreationally active males without a history of HSI completed the Nordic hamstring exercise on the device on 2 separate occasions. Intraclass correlation coefficients, typical error, typical error as a coefficient of variation, and minimal detectable change at a 95% confidence level were calculated. Normative strength data were determined using the most reliable measurement. An additional 20 elite athletes with a unilateral history of HSI within the previous 12 months performed the Nordic hamstring exercise on the device to determine if residual eccentric muscle weakness existed in the previously injured limb. The device displayed high to moderate reliability (intraclass correlation coefficient = 0.83-0.90; typical error, 21.7-27.5 N; typical error as a coefficient of variation, 5.8%-8.5%; minimal detectable change at a 95% confidence level, 60.1-76.2 N). Mean ± SD normative eccentric flexor strength in the uninjured group was 344.7 ± 61.1 N for the left and 361.2 ± 65.1 N for the right side. The previously injured limb was 15% weaker than the contralateral uninjured limb (mean difference, 50.3 N; 95% confidence interval: 25.7, 74.9; P<.01), 15% weaker than the normative left limb (mean difference, 50.0 N; 95% confidence interval: 1.4, 98.5; P = .04), and 18% weaker than the normative right limb (mean difference, 66.5 N; 95% confidence interval: 18.0, 115.1; P<.01). The experimental device offers a reliable method to measure eccentric knee flexor strength and strength asymmetry and to detect residual weakness in previously injured elite athletes.
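
    The reliability statistics quoted above can be reproduced from a pair of repeat trials with the usual formulas: the typical error is the standard deviation of the between-trial differences divided by the square root of 2, the %CV expresses it relative to the mean, and the minimal detectable change at 95% confidence is roughly 2.77 times the typical error. The sketch below (Python) uses made-up strength values, not the study's data.

    import numpy as np

    def test_retest_reliability(trial1, trial2):
        # Typical error (TE), TE as a coefficient of variation, and the
        # minimal detectable change at the 95% confidence level.
        trial1, trial2 = np.asarray(trial1, float), np.asarray(trial2, float)
        diff = trial2 - trial1
        te = diff.std(ddof=1) / np.sqrt(2)
        cv = 100.0 * te / np.mean((trial1 + trial2) / 2)
        mdc95 = 1.96 * np.sqrt(2) * te
        return te, cv, mdc95

    # Illustrative eccentric-strength values in newtons (not from the study)
    t1 = [310, 355, 402, 288, 371, 340, 395, 360]
    t2 = [318, 349, 410, 295, 380, 332, 401, 371]
    print(test_retest_reliability(t1, t2))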

  16. Error reduction study employing a pseudo-random binary sequence for use in acoustic pyrometry of gases

    NASA Astrophysics Data System (ADS)

    Ewan, B. C. R.; Ireland, S. N.

    2000-12-01

    Acoustic pyrometry uses the temperature dependence of sound speed in materials to measure temperature. This is normally achieved by measuring the transit time for a sound signal over a known path length and applying the material relation between temperature and velocity to extract an "average" temperature. Sources of error associated with the measurement of mean transit time are discussed in implementing the technique in gases, one of the principal causes being background noise in typical industrial environments. A number of transmitted signal and processing strategies which can be used in the area are examined and the expected error in mean transit time associated with each technique is quantified. Transmitted signals included pulses, pure frequencies, chirps, and pseudorandom binary sequences (prbs), while processing involves edge detection and correlation. Errors arise through the misinterpretation of the positions of edge arrival or correlation peaks due to instantaneous deviations associated with background noise and these become more severe as signal to noise amplitude ratios decrease. Population errors in the mean transit time are estimated for the different measurement strategies and it is concluded that PRBS combined with correlation can provide the lowest errors when operating in high noise environments. The operation of an instrument based on PRBS transmitted signals is described and test results under controlled noise conditions are presented. These confirm the value of the strategy and demonstrate that measurements can be made with signal to noise amplitude ratios down to 0.5.
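
    The PRBS-plus-correlation strategy can be sketched in a few lines (Python): generate a maximal-length sequence, add noise to a delayed copy, and take the lag of the cross-correlation peak as the transit-time estimate. The sample rate, delay, and noise level below are arbitrary illustrations; with the noise standard deviation set to twice the signal amplitude the peak is still recovered, consistent with operation at amplitude signal-to-noise ratios near 0.5.

    import numpy as np

    def prbs(n_bits, taps=(7, 6), seed=1):
        # Maximal-length sequence from a 7-bit linear-feedback shift register
        state = [(seed >> i) & 1 for i in range(max(taps))]
        out = []
        for _ in range(n_bits):
            feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
            out.append(state[-1])
            state = [feedback] + state[:-1]
        return 2.0 * np.array(out) - 1.0          # map {0, 1} -> {-1, +1}

    fs = 50_000.0                                 # sample rate in Hz (illustrative)
    tx = np.repeat(prbs(127), 8)                  # transmitted PRBS, 8 samples per chip
    true_delay = 237                              # delay in samples
    rng = np.random.default_rng(0)
    rx = np.roll(tx, true_delay) + rng.normal(scale=2.0, size=tx.size)

    # Circular cross-correlation via the FFT; the peak lag estimates the transit time
    xcorr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx))).real
    lag = int(np.argmax(xcorr))
    print(lag, lag / fs)                          # ~237 samples, converted to seconds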

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.

  18. Mathematical analysis study for radar data processing and enhancement. Part 1: Radar data analysis

    NASA Technical Reports Server (NTRS)

    James, R.; Brownlow, J. D.

    1985-01-01

    A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. The study is presented in two parts; this is part 1, an analysis of the radar data.

  19. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    PubMed Central

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978

  20. Multiple imputation to account for measurement error in marginal structural models

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  1. Accounting for Berkson and Classical Measurement Error in Radon Exposure Using a Bayesian Structural Approach in the Analysis of Lung Cancer Mortality in the French Cohort of Uranium Miners.

    PubMed

    Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie

    2017-02-01

    Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.
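
    The distinction between the two error types can be illustrated with a simple simulation (Python). For clarity this sketch uses additive, homoscedastic errors and an ordinary regression slope, whereas the cohort analysis above deals with multiplicative, heteroscedastic errors inside a Bayesian structural survival model; the qualitative message (classical error attenuates the estimated association, pure Berkson error largely does not) is the standard one.

    import numpy as np

    rng = np.random.default_rng(42)
    n, beta = 100_000, 0.5

    # Classical error: the measurement scatters around the true exposure
    x_true = rng.normal(size=n)
    y = beta * x_true + rng.normal(scale=0.5, size=n)
    x_classical = x_true + rng.normal(scale=1.0, size=n)

    # Berkson error: the true exposure scatters around the assigned value
    x_assigned = rng.normal(size=n)
    x_true_b = x_assigned + rng.normal(scale=1.0, size=n)
    y_b = beta * x_true_b + rng.normal(scale=0.5, size=n)

    slope = lambda x, y: np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    print(slope(x_classical, y), slope(x_assigned, y_b))
    # classical error attenuates the slope (~0.25 here); Berkson error does not (~0.5)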

  2. Influence of Familiarization and Competitive Level on the Reliability of Countermovement Vertical Jump Kinetic and Kinematic Variables.

    PubMed

    Nibali, Maria L; Tombleson, Tom; Brady, Philip H; Wagner, Phillip

    2015-10-01

    Understanding typical variation of vertical jump (VJ) performance and confounding sources of its typical variability (i.e., familiarization and competitive level) is pertinent in the routine monitoring of athletes. We evaluated the presence of systematic error (learning effect) and nonuniformity of error (heteroscedasticity) across VJ performances of athletes that differ in competitive level and quantified the reliability of VJ kinetic and kinematic variables relative to the smallest worthwhile change (SWC). One hundred thirteen high school athletes, 30 college athletes, and 35 professional athletes completed repeat VJ trials. Average eccentric rate of force development (RFD), average concentric (CON) force, CON impulse, and jump height measurements were obtained from vertical ground reaction force (VGRF) data. Systematic error was assessed by evaluating changes in the mean of repeat trials. Heteroscedasticity was evaluated by plotting the difference score (trial 2 - trial 1) against the mean of the trials. Variability of jump variables was calculated as the typical error (TE) and coefficient of variation (%CV). No substantial systematic error (effect size range: -0.07 to 0.11) or heteroscedasticity was present for any of the VJ variables. Vertical jump can be performed without the need for familiarization trials, and the variability can be conveyed as either the raw TE or the %CV. Assessment of VGRF variables is an effective and reliable means of assessing VJ performance. Average CON force and CON impulse are highly reliable (%CV: 2.7% ×/÷ 1.10), although jump height was the only variable to display a %CV ≤SWC. Eccentric RFD is highly variable yet should not be discounted from VJ assessments on this factor alone because it may be sensitive to changes in response to training or fatigue that exceed the TE.

  3. The validation of a swimming turn wall-contact-time measurement system: a touchpad application reliability study.

    PubMed

    Brackley, Victoria; Ball, Kevin; Tor, Elaine

    2018-05-12

    The effectiveness of the swimming turn is highly influential to overall performance in competitive swimming. The push-off or wall contact, within the turn phase, is directly involved in determining the speed at which the swimmer leaves the wall. Therefore, it is paramount to develop reliable methods to measure the wall-contact-time during the turn phase for training and research purposes. The aim of this study was to determine the concurrent validity and reliability of the Pool Pad App to measure wall-contact-time during the freestyle and backstroke tumble turn. The wall-contact-times of nine elite and sub-elite participants were recorded during their regular training sessions. Concurrent validity statistics included the standardised typical error estimate, linear analysis and effect sizes, while the intraclass correlation coefficient (ICC) was used for the reliability statistics. The standardised typical error estimate resulted in a moderate Cohen's d effect size with an R² value of 0.80, and the ICC between the Pool Pad and 2D video footage was 0.89. Despite these measurement differences, the results from these concurrent validity and reliability analyses demonstrated that the Pool Pad is suitable for measuring wall-contact-time during the freestyle and backstroke tumble turn within a training environment.

  4. Error Modeling of Multibaseline Optical Truss: Part 1: Modeling of System Level Performance

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Korechoff, R. E.; Zhang, L. D.

    2004-01-01

    Global astrometry is the measurement of stellar positions and motions. These are typically characterized by five parameters: two position parameters, two proper motion parameters, and parallax. The Space Interferometry Mission (SIM) will derive these parameters for a grid of approximately 1300 stars covering the celestial sphere to an accuracy of approximately 4 μas, representing a two orders of magnitude improvement over the most precise current star catalogues. Narrow angle astrometry will be performed to a 1 μas accuracy. A wealth of scientific information will be obtained from these accurate measurements, encompassing many aspects of both galactic and extragalactic science. SIM will be subject to a number of instrument errors that can potentially degrade performance. Many of these errors are systematic in that they are relatively static and repeatable with respect to the time frame and direction of the observation. This paper and its companion define the modeling of the contributing factors to these errors and the analysis of how they impact SIM's ability to perform astrometric science.

  5. Decision-Making Accuracy of CBM Progress-Monitoring Data

    ERIC Educational Resources Information Center

    Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.

    2018-01-01

    This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…

  6. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  7. Influence of Precision of Emission Characteristic Parameters on Model Prediction Error of VOCs/Formaldehyde from Dry Building Material

    PubMed Central

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C. PMID:24312497

  8. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models.

    PubMed

    Hoffmann, Sabine; Laurier, Dominique; Rage, Estelle; Guihenneuc, Chantal; Ancelet, Sophie

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies.

  9. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models

    PubMed Central

    Laurier, Dominique; Rage, Estelle

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862

  10. Mars approach navigation using Doppler and range measurements to surface beacons and orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Thurman, Sam W.; Estefan, Jeffrey A.

    1991-01-01

    Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies which might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory to the Martian equator for surface beacons and, for orbiters, of the inclination relative to the orbital plane. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.

  11. Down's syndrome and the acquisition of phonology by Cantonese-speaking children.

    PubMed

    So, L K; Dodd, B J

    1994-10-01

    The phonological abilities of two groups of 4-9-year-old intellectually impaired Cantonese-speaking children are described. Children with Down's syndrome did not differ from matched non-Down's syndrome controls in terms of a lexical comprehension measure, the size of their phoneme repertoires, the range of sounds affected by articulatory imprecision, or the number of consonants, vowels or tones produced in error. However, the types of errors made by the Down's syndrome children were different from those made by the control subjects. Cantonese-speaking children with Down's syndrome, as compared with controls, made a greater number of inconsistent errors, were more likely to produce non-developmental errors and were better in imitation than in spontaneous production. Despite extensive differences between the phonological structures of Cantonese and English, children with Down's syndrome acquiring these languages show the same characteristic pattern of speech errors. One unexpected finding was that the control group of non-Down's syndrome children failed to present with delayed phonological development typically reported for their English-speaking counterparts. The argument made is that cross-linguistic studies of intellectually impaired children's language acquisition provide evidence concerning language-specific characteristics of impairment, as opposed to those characteristics that, remaining constant across languages, are an integral part of the disorder. The results reported here support the hypothesis that the speech disorder typically associated with Down's syndrome arises from impaired phonological planning, i.e. a cognitive linguistic deficit.

  12. Kinematic markers dissociate error correction from sensorimotor realignment during prism adaptation.

    PubMed

    O'Shea, Jacinta; Gaveau, Valérie; Kandel, Matthieu; Koga, Kazuo; Susami, Kenji; Prablanc, Claude; Rossetti, Yves

    2014-03-01

    This study investigated the motor control mechanisms that enable healthy individuals to adapt their pointing movements during prism exposure to a rightward optical shift. In the prism adaptation literature, two processes are typically distinguished. Strategic motor adjustments are thought to drive the pattern of rapid endpoint error correction typically observed during the early stage of prism exposure. This is distinguished from so-called 'true sensorimotor realignment', normally measured with a different pointing task, at the end of prism exposure, which reveals a compensatory leftward 'prism after-effect'. Here, we tested whether each mode of motor compensation - strategic adjustments versus 'true sensorimotor realignment' - could be distinguished, by analyzing patterns of kinematic change during prism exposure. We hypothesized that fast feedforward versus slower feedback error corrective processes would map onto two distinct phases of the reach trajectory. Specifically, we predicted that feedforward adjustments would drive rapid compensation of the initial (acceleration) phase of the reach, resulting in the rapid reduction of endpoint errors typically observed early during prism exposure. By contrast, we expected visual-proprioceptive realignment to unfold more slowly and to reflect feedback influences during the terminal (deceleration) phase of the reach. The results confirmed these hypotheses. Rapid error reduction during the early stage of prism exposure was achieved by trial-by-trial adjustments of the motor plan, which were proportional to the endpoint error feedback from the previous trial. By contrast, compensation of the terminal reach phase unfolded slowly across the duration of prism exposure. Even after 100 trials of pointing through prisms, adaptation was incomplete, with participants continuing to exhibit a small rightward shift in both the reach endpoints and in the terminal phase of reach trajectories. Individual differences in the degree of adaptation of the terminal reach phase predicted the magnitude of prism after-effects. In summary, this study identifies distinct kinematic signatures of fast strategic versus slow sensorimotor realignment processes, which combine to adjust motor performance to compensate for a prismatic shift. © 2013 Elsevier Ltd. All rights reserved.

  13. Measuring mental illness stigma with diminished social desirability effects.

    PubMed

    Michaels, Patrick J; Corrigan, Patrick W

    2013-06-01

    For persons with mental illness, stigma diminishes employment and independent living opportunities as well as participation in psychiatric care. Public stigma interventions have sought to ameliorate these consequences. Evaluation of anti-stigma programs' impact is typically accomplished with self-report questionnaires. However, cultural mores encourage endorsement of answers that are socially preferred rather than one's true belief. This problem, social desirability, has been circumvented through the development of faux knowledge tests (KTs; i.e., error-choice tests) written to assess prejudice. Our KT uses error-choice test methodology to assess stigmatizing attitudes. Test content was derived from review of typical KTs for façade reinforcement. Answer endorsement suggests bias or stigma; such determinations were based on the empirical literature. KT psychometrics were examined in samples of college students, community members and mental health providers and consumers. Test-retest reliability ranged from fair (0.50) to good (0.70). Construct validity analyses of public stigma indicated a positive relationship with the Attribution Questionnaire and inverse relationships with Self-Determination and Empowerment Scales. No significant relationships were observed with self-stigma measures (recovery, empowerment). This psychometric evaluation study suggests that a self-administered questionnaire may circumvent social desirability and have merit as a stigma measurement tool.

  14. Error-Monitoring in Response to Social Stimuli in Individuals with Higher-Functioning Autism Spectrum Disorder

    PubMed Central

    McMahon, Camilla M.; Henderson, Heather A.

    2014-01-01

    Error-monitoring, or the ability to recognize one's mistakes and implement behavioral changes to prevent further mistakes, may be impaired in individuals with Autism Spectrum Disorder (ASD). Children and adolescents (ages 9-19) with ASD (n = 42) and typical development (n = 42) completed two face processing tasks that required discrimination of either the gender or affect of standardized face stimuli. Post-error slowing and the difference in Error-Related Negativity amplitude between correct and incorrect responses (ERNdiff) were used to index error-monitoring ability. Overall, ERNdiff increased with age. On the Gender Task, individuals with ASD had a smaller ERNdiff than individuals with typical development; however, on the Affect Task, there were no significant diagnostic group differences on ERNdiff. Individuals with ASD may have ERN amplitudes similar to those observed in individuals with typical development in more social contexts compared to less social contexts due to greater consequences for errors, more effortful processing, and/or reduced processing efficiency in these contexts. Across all participants, more post-error slowing on the Affect Task was associated with better social cognitive skills. PMID:25066088

  15. A study of GPS measurement errors due to noise and multipath interference for CGADS

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.

    1996-01-01

    This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms, an auto-regressive/least squares (AR-LS) method and a combined adaptive notch filter/least squares (ANF-ALS) method, are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.
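
    As a minimal illustration of the FFT approach to electrical phase estimation (Python; not the CGADS implementation, and the frequencies and noise level are arbitrary), the phase of a baseband tone can be read from the angle of its FFT bin when the tone falls on an integer bin:

    import numpy as np

    fs, f0 = 10_000.0, 250.0                 # sample rate and baseband tone (Hz)
    true_phase_deg = 37.0
    n = 4000                                 # 0.4 s record; f0 falls exactly on a bin
    t = np.arange(n) / fs
    clean = np.cos(2 * np.pi * f0 * t + np.radians(true_phase_deg))
    noisy = clean + np.random.default_rng(3).normal(scale=0.5, size=n)

    spectrum = np.fft.rfft(noisy)
    k = int(round(f0 * n / fs))              # index of the tone's frequency bin
    estimated_phase_deg = np.degrees(np.angle(spectrum[k]))
    print(estimated_phase_deg)               # close to 37 degrees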

  16. Forest Resource Measurements by Combination of Terrestrial Laser Scanning and Drone Use

    NASA Astrophysics Data System (ADS)

    Cheung, K.; Katoh, M.; Horisawa, M.

    2017-10-01

    Using terrestrial laser scanning (TLS), forest attributes such as diameter at breast height (DBH) and tree location can be measured accurately. However, due to low penetration of laser pulses to tree tops, tree height measurements are typically underestimated. In this study, data acquired by TLS and drones were combined; DBH and tree locations were determined by TLS, and tree heights were measured by drone use. The average tree height error and root mean square error (RMSE) of tree height were 0.8 and 1.2 m, respectively, for the combined method, and -0.4 and 1.7 m using TLS alone. The tree height differences were also compared against airborne laser scanning (ALS). Furthermore, a method to achieve a 100% tree detection rate based on TLS data is suggested in this study.

  17. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  18. Transient quantum fluctuation theorems and generalized measurements

    NASA Astrophysics Data System (ADS)

    Prasanna Venkatesh, B.; Watanabe, Gentaro; Talkner, Peter

    2014-01-01

    The transient quantum fluctuation theorems of Crooks and Jarzynski restrict and relate the statistics of work performed in forward and backward forcing protocols. So far, these theorems have been obtained under the assumption that the work is determined by two projective energy measurements, one at the end, and the other one at the beginning of each run of the protocol. We found that one can replace these two projective measurements only by special error-free generalized energy measurements with pairs of tailored, protocol-dependent post-measurement states that satisfy detailed balance-like relations. For other generalized measurements, the Crooks relation is typically not satisfied. For the validity of the Jarzynski equality, it is sufficient that the first energy measurements are error-free and the post-measurement states form a complete orthonormal set of elements in the Hilbert space of the considered system. Additionally, the effects of the second energy measurements must have unit trace. We illustrate our results by an example of a two-level system for different generalized measurements.
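
    For reference, the two fluctuation relations named in this abstract are usually written in the following standard forms (textbook versions for the two-projective-measurement scheme, not results specific to this paper's generalized-measurement analysis):

```latex
% Crooks relation and Jarzynski equality (standard two-measurement forms)
\frac{P_F(w)}{P_B(-w)} = e^{\beta\,(w - \Delta F)}, \qquad
\left\langle e^{-\beta w} \right\rangle_F = e^{-\beta \Delta F},
```

    where $P_F$ and $P_B$ are the work distributions of the forward and backward protocols, $\beta$ is the inverse temperature of the initial equilibrium state, and $\Delta F$ is the free-energy difference between the final and initial Hamiltonians.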

  19. Transient quantum fluctuation theorems and generalized measurements

    NASA Astrophysics Data System (ADS)

    Prasanna Venkatesh, B.; Watanabe, Gentaro; Talkner, Peter

    2014-05-01

    The transient quantum fluctuation theorems of Crooks and Jarzynski restrict and relate the statistics of work performed in forward and backward forcing protocols. So far, these theorems have been obtained under the assumption that the work is determined by two projective energy measurements, one at the end, and the other one at the beginning of each run of the protocol. We found that one can replace these two projective measurements only by special error-free generalized energy measurements with pairs of tailored, protocol-dependent post-measurement states that satisfy detailed balance-like relations. For other generalized measurements, the Crooks relation is typically not satisfied. For the validity of the Jarzynski equality, it is sufficient that the first energy measurements are error-free and the post-measurement states form a complete orthonormal set of elements in the Hilbert space of the considered system. Additionally, the effects of the second energy measurements must have unit trace. We illustrate our results by an example of a two-level system for different generalized measurements.

  20. English speech sound development in preschool-aged children from bilingual English-Spanish environments.

    PubMed

    Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D

    2008-07-01

    English speech acquisition by typically developing 3- to 4-year-old children with monolingual English was compared to English speech acquisition by typically developing 3- to 4-year-old children with bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time for phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences of error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points for all 3 groups was similar, suggesting that all will reach an adult-like system in English with exposure and practice.

  1. The Hubble Constant.

    PubMed

    Jackson, Neal

    2015-01-01

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in a combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72-74 km s^-1 Mpc^-1, with typical errors of 2-3 km s^-1 Mpc^-1. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km s^-1 Mpc^-1 and typical errors of 1-2 km s^-1 Mpc^-1. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
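
    The underlying relation is the Hubble law; the reminder below also shows how the quoted errors on H_0 propagate into the distance scale (standard relations, not results of this review).

```latex
% Hubble law and first-order error propagation (standard relations)
v = H_0\, d, \qquad d = \frac{v}{H_0}, \qquad
\frac{\sigma_d}{d} \approx \frac{\sigma_{H_0}}{H_0} \quad \text{(at fixed recession velocity } v\text{)},
```

    so a typical error of 2-3 km s^-1 Mpc^-1 on H_0 of about 73 km s^-1 Mpc^-1 corresponds to a distance-scale uncertainty of roughly 3-4 %.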

  2. Impact of specific language impairment and type of school on different language subsystems.

    PubMed

    Puglisi, Marina Leite; Befi-Lopes, Debora Maria

    2016-01-01

    This study aimed to explore quantitative and qualitative effects of type of school and specific language impairment (SLI) on different language abilities. 204 Brazilian children aged 4 to 6 years participated in the study. Children were selected to form three groups: 1) 63 typically developing children studying in private schools (TDPri); 2) 102 typically developing children studying in state schools (TDSta); and 3) 39 children with SLI studying in state schools (SLISta). All individuals were assessed regarding expressive vocabulary, number morphology and morphosyntactic comprehension. All language subsystems were vulnerable to both environmental (type of school) and biological (SLI) effects. The relationship between the three language measures was exactly the same for all groups: vocabulary growth correlated with age and with the development of morphological abilities and morphosyntactic comprehension. Children with SLI showed atypical errors in the comprehension test at the age of 4, but presented a pattern of errors that gradually resembled typical development. The effect of type of school was marked by quantitative differences, while the effect of SLI was characterised by both quantitative and qualitative differences.

  3. Bias Correction and Random Error Characterization for the Assimilation of HRDI Line-of-Sight Wind Measurements

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. This method is carried out on measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures Doppler shifts in the two different O2 absorption bands (alpha and B) and the retrieved products are the tangent point line-of-sight wind component (level 2 retrieval) and u, v winds (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line-of-sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme carried out on the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.

  4. Accuracy of an IFSAR-derived digital terrain model under a conifer forest canopy.

    Treesearch

    Hans-Erik Andersen; Stephen E. Reutebuch; Robert J. McGaughey

    2005-01-01

    Accurate digital terrain models (DTMs) are necessary for a variety of forest resource management applications, including watershed management, timber harvest planning, and fire management. Traditional methods for acquiring topographic data typically rely on aerial photogrammetry, where measurement of the terrain surface below forest canopy is difficult and error prone...

  5. SAR System for UAV Operation with Motion Error Compensation beyond the Resolution Cell

    PubMed Central

    González-Partida, José-Tomás; Almorox-González, Pablo; Burgos-García, Mateo; Dorta-Naranjo, Blas-Pablo

    2008-01-01

    This paper presents an experimental Synthetic Aperture Radar (SAR) system that is under development in the Universidad Politécnica de Madrid. The system uses Linear Frequency Modulated Continuous Wave (LFM-CW) radar with a two antenna configuration for transmission and reception. The radar operates in the millimeter-wave band with a maximum transmitted bandwidth of 2 GHz. The proposed system is being developed for Unmanned Aerial Vehicle (UAV) operation. Motion errors in UAV operation can be critical. Therefore, this paper proposes a method for focusing SAR images with movement errors larger than the resolution cell. Typically, this problem is solved using two processing steps: first, coarse motion compensation based on the information provided by an Inertial Measuring Unit (IMU); and second, fine motion compensation for the residual errors within the resolution cell based on the received raw data. The proposed technique tries to focus the image without using data of an IMU. The method is based on a combination of the well known Phase Gradient Autofocus (PGA) for SAR imagery and typical algorithms for translational motion compensation on Inverse SAR (ISAR). This paper shows the first real experiments for obtaining high resolution SAR images using a car as a mobile platform for our radar. PMID:27879884

  6. SAR System for UAV Operation with Motion Error Compensation beyond the Resolution Cell.

    PubMed

    González-Partida, José-Tomás; Almorox-González, Pablo; Burgos-Garcia, Mateo; Dorta-Naranjo, Blas-Pablo

    2008-05-23

    This paper presents an experimental Synthetic Aperture Radar (SAR) system that is under development in the Universidad Politécnica de Madrid. The system uses Linear Frequency Modulated Continuous Wave (LFM-CW) radar with a two antenna configuration for transmission and reception. The radar operates in the millimeter-wave band with a maximum transmitted bandwidth of 2 GHz. The proposed system is being developed for Unmanned Aerial Vehicle (UAV) operation. Motion errors in UAV operation can be critical. Therefore, this paper proposes a method for focusing SAR images with movement errors larger than the resolution cell. Typically, this problem is solved using two processing steps: first, coarse motion compensation based on the information provided by an Inertial Measuring Unit (IMU); and second, fine motion compensation for the residual errors within the resolution cell based on the received raw data. The proposed technique tries to focus the image without using data of an IMU. The method is based on a combination of the well known Phase Gradient Autofocus (PGA) for SAR imagery and typical algorithms for translational motion compensation on Inverse SAR (ISAR). This paper shows the first real experiments for obtaining high resolution SAR images using a car as a mobile platform for our radar.

  7. Reliability and Validity of a New Test of Agility and Skill for Female Amateur Soccer Players

    PubMed Central

    Kutlu, Mehmet; Yapici, Hakan; Yilmaz, Abdullah

    2017-01-01

    Abstract The aim of this study was to evaluate the Agility and Skill Test, which had been recently developed to assess agility and skill in female athletes. Following a 10 min warm-up, two trials to test the reliability and validity of the test were conducted one week apart. Measurements were collected to compare soccer players’ physical performance in a 20 m sprint, a T-Drill test, the Illinois Agility Run Test, change-of-direction and acceleration, as well as agility and skill. All tests were completed following the same order. Thirty-four amateur female soccer players were recruited (age = 20.8 ± 1.9 years; body height = 166 ± 6.9 cm; body mass = 55.5 ± 5.8 kg). To determine the reliability and usefulness of these tests, paired sample t-tests, intra-class correlation coefficients, typical error, coefficient of variation, and differences between the typical error and smallest worthwhile change statistics were computed. Test results showed no significant differences between the two sessions (p > 0.01). There were high intra-class correlations between the test and retest values (r = 0.94–0.99) for all tests. Typical error values were below the smallest worthwhile change, indicating ‘good’ usefulness for these tests. A near perfect Pearson correlation between test and retest scores on the Agility and Skill Test (r = 0.98) was found, and there were moderate-to-large levels of correlation between the Agility and Skill Test and other measures (r = 0.37 to r = 0.56). The results of this study suggest that the Agility and Skill Test is a reliable and valid test for female soccer players and has significant value for assessing the integrative agility and skill capability of soccer players. PMID:28469760
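
    The reliability statistics named in this abstract have widely used definitions; the sketch below computes them for made-up test-retest data, assuming the common conventions TE = SD(difference)/sqrt(2) and SWC = 0.2 x between-subject SD. These conventions and the data are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Hedged sketch of common test-retest reliability statistics.
# trial1/trial2 are hypothetical scores for the same athletes one week apart.
trial1 = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.0])
trial2 = np.array([10.4, 11.3, 10.1, 12.4, 10.7, 11.2])

diff = trial2 - trial1
typical_error = diff.std(ddof=1) / np.sqrt(2)      # TE = SD(diff)/sqrt(2)
grand_mean = np.concatenate([trial1, trial2]).mean()
cv_percent = 100 * typical_error / grand_mean      # CV as a percentage of the mean
between_sd = ((trial1 + trial2) / 2).std(ddof=1)   # between-subject SD
swc = 0.2 * between_sd                             # smallest worthwhile change

print(f"TE = {typical_error:.3f}, CV = {cv_percent:.1f}%, SWC = {swc:.3f}")
print("usefulness 'good' if TE < SWC:", typical_error < swc)
```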

  8. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
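
    The trade-off described above follows from the standard decomposition of the image mean-squared error into bias and variance terms, reproduced here for reference (a textbook identity, not a result of this paper):

```latex
% Bias-variance decomposition of the mean-squared error of an estimator
\mathrm{MSE}(\hat\mu) = \mathbb{E}\!\left[(\hat\mu - \mu)^2\right]
= \underbrace{\left(\mathbb{E}[\hat\mu] - \mu\right)^2}_{\text{bias}^2}
+ \underbrace{\mathrm{Var}(\hat\mu)}_{\text{variance}},
```

    which is why strong regularization (large bias, small variance) and weak regularization (small bias, large variance) bracket an intermediate optimum in the overall image MSE.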

  9. Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians

    NASA Astrophysics Data System (ADS)

    Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von

    2008-03-01

    Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments. In doing so, IDA addresses typical issues arising in fusion data analysis. In IDA, all information is consistently formulated as probability density functions quantifying uncertainties in the analysis within the Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions allowing the comparison and integration of different diagnostics results. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA capabilities of nonlinear error propagation, the inclusion of systematic effects and the comparison of different physics models. Applications range from outlier detection and background discrimination to model assessment and the design of diagnostics. In order to cope with next step fusion device requirements, appropriate techniques are explored for fast analysis applications.

  10. Sensitivity of thermal inertia calculations to variations in environmental factors. [in mapping of Earth's surface by remote sensing

    NASA Technical Reports Server (NTRS)

    Kahle, A. B.; Alley, R. E.; Schieldge, J. P.

    1984-01-01

    The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.

  11. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitude of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second, or less than 0.5% of a typical peak tidal discharge rate of 750 cubic meters per second.
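
    A minimal sketch of the index-velocity workflow summarized above is shown below: rate the index velocity against ADCP-derived mean channel velocity with a linear fit, form instantaneous discharge, then low-pass filter to suppress the tidal signal. The function names, the linear rating form, the moving-average filter, and all numbers are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

# Hedged sketch of an index-velocity rating and tidal low-pass filtering.
def rate_index_velocity(v_index, v_mean_adcp):
    """Least-squares linear rating: v_mean ~ a + b * v_index."""
    b, a = np.polyfit(v_index, v_mean_adcp, 1)
    return a, b

def net_discharge(v_index, area, a, b, window=149):
    """Instantaneous discharge from the rated velocity, then a crude tidal
    low-pass (moving average over ~25 h of 10-min samples; an assumption)."""
    q = (a + b * v_index) * area
    kernel = np.ones(window) / window
    return np.convolve(q, kernel, mode="same")

# Hypothetical calibration data (m/s): index velocity vs. ADCP mean channel velocity
v_idx_cal = np.array([-0.9, -0.4, 0.0, 0.5, 1.0])
v_adcp_cal = np.array([-0.85, -0.40, 0.02, 0.48, 0.95])
a, b = rate_index_velocity(v_idx_cal, v_adcp_cal)
```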

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvador Palau, A.; Eder, S. D., E-mail: sabrina.eder@uib.no; Kaltenbacher, T.

    Time-of-flight (TOF) is a standard experimental technique for determining, among others, the speed ratio S (velocity spread) of a molecular beam. The speed ratio is a measure for the monochromaticity of the beam and an accurate determination of S is crucial for various applications, for example, for characterising chromatic aberrations in focussing experiments related to helium microscopy or for precise measurements of surface phonons and surface structures in molecular beam scattering experiments. For both of these applications, it is desirable to have as high a speed ratio as possible. Molecular beam TOF measurements are typically performed by chopping the beam using a rotating chopper with one or more slit openings. The TOF spectra are evaluated using a standard deconvolution method. However, for higher speed ratios, this method is very sensitive to errors related to the determination of the slit width and the beam diameter. The exact sensitivity depends on the beam diameter, the number of slits, the chopper radius, and the chopper rotation frequency. We present a modified method suitable for the evaluation of TOF measurements of high speed ratio beams. The modified method is based on a systematic variation of the chopper convolution parameters so that a set of independent measurements that can be fitted with an appropriate function are obtained. We show that with this modified method, it is possible to reduce the error by typically one order of magnitude compared to the standard method.

  13. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
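
    As a rough illustration of the polynomial-style corrections compared above, the sketch below fits a low-order 2D polynomial to the phase in a mask of static background tissue and subtracts the fitted surface from the whole image. The function name, the first-order default, and the masking strategy are assumptions for illustration, not the study's implementation.

```python
import numpy as np

# Hedged sketch: background (eddy-current) phase correction by fitting a
# low-order 2D polynomial to static-tissue pixels and subtracting the fit.
def polynomial_phase_correction(phase, static_mask, order=1):
    ny, nx = phase.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Polynomial design terms up to the requested order: 1, x, y, (x^2, xy, y^2, ...)
    terms = [np.ones((ny, nx))]
    for p in range(1, order + 1):
        for k in range(p + 1):
            terms.append((xx ** (p - k)) * (yy ** k) * 1.0)
    A = np.stack([t[static_mask] for t in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, phase[static_mask].astype(float), rcond=None)
    background = sum(c * t for c, t in zip(coeffs, terms))
    return phase - background   # velocities in vessel pixels are left in place

# Usage on a hypothetical 64x64 phase image with a central "vessel" excluded
phase = 0.002 * np.add.outer(np.arange(64), np.arange(64)) + 1e-3 * np.random.randn(64, 64)
mask = np.ones((64, 64), dtype=bool)
mask[28:36, 28:36] = False
corrected = polynomial_phase_correction(phase, mask, order=1)
```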

  14. Fission cross section of 230Th and 232Th relative to 235U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meadows, J. W.

    1979-01-01

    The fission cross sections of 230Th and 232Th were measured relative to 235U from near threshold to near 10 MeV. The weights of the thorium samples were determined by isotopic dilution. The weight of the uranium deposit was based on specific activity measurements of a 234U-235U mixture and low geometry alpha counting. Corrections were made for thermal background, loss of fragments in the deposits, neutron scattering in the detector assembly, sample geometry, sample composition and the spectrum of the neutron source. Generally the systematic errors were approx. 1%. The combined systematic and statistical errors were typically 1.5%. 17 references.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  16. Artificial bias typically neglected in comparisons of uncertain atmospheric data

    NASA Astrophysics Data System (ADS)

    Pitkänen, Mikko R. A.; Mikkonen, Santtu; Lehtinen, Kari E. J.; Lipponen, Antti; Arola, Antti

    2016-09-01

    Publications in atmospheric sciences typically neglect biases caused by regression dilution (bias of the ordinary least squares line fitting) and regression to the mean (RTM) in comparisons of uncertain data. We use synthetic observations mimicking real atmospheric data to demonstrate how the biases arise from random data uncertainties of measurements, model output, or satellite retrieval products. Further, we provide examples of typical methods of data comparisons that have a tendency to pronounce the biases. The results show that data uncertainties can significantly bias data comparisons due to regression dilution and RTM, a fact that is known in statistics but disregarded in atmospheric sciences. Thus, we argue that these biases are often regarded as measurement or modeling errors, for instance, while they are in fact artificial. It is essential that atmospheric and geoscience communities become aware of and consider these features in research.
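
    The regression-dilution effect the authors describe is easy to reproduce synthetically: random error in the x variable attenuates the ordinary-least-squares slope toward zero by roughly the reliability ratio Var(x)/(Var(x)+Var(error)). The demonstration below is generic and is not the authors' synthetic-observation setup.

```python
import numpy as np

# Hedged sketch: regression dilution (slope attenuation) from noise in x.
rng = np.random.default_rng(0)
n = 100_000
true_slope = 1.0

x_true = rng.normal(0.0, 1.0, n)             # error-free quantity
y = true_slope * x_true + rng.normal(0.0, 0.3, n)
x_noisy = x_true + rng.normal(0.0, 0.7, n)   # "measured" x with random error

fitted_slope = np.polyfit(x_noisy, y, 1)[0]
reliability = 1.0 / (1.0 + 0.7 ** 2)         # Var(x) / (Var(x) + Var(error))
print(f"fitted slope ~ {fitted_slope:.3f}, predicted attenuation factor ~ {reliability:.3f}")
```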

  17. Uses and biases of volunteer water quality data

    USGS Publications Warehouse

    Loperfido, J.V.; Beyer, P.; Just, C.L.; Schnoor, J.L.

    2010-01-01

    State water quality monitoring has been augmented by volunteer monitoring programs throughout the United States. Although a significant effort has been put forth by volunteers, questions remain as to whether volunteer data are accurate and can be used by regulators. In this study, typical volunteer water quality measurements from laboratory and environmental samples in Iowa were analyzed for error and bias. Volunteer measurements of nitrate+nitrite were significantly lower (about 2-fold) than concentrations determined via standard methods in both laboratory-prepared and environmental samples. Total reactive phosphorus concentrations analyzed by volunteers were similar to measurements determined via standard methods in laboratory-prepared samples and environmental samples, but were statistically lower than the actual concentration in four of the five laboratory-prepared samples. Volunteer water quality measurements were successful in identifying and classifying most of the waters which violate United States Environmental Protection Agency recommended water quality criteria for total nitrogen (66%) and for total phosphorus (52%) with the accuracy improving when accounting for error and biases in the volunteer data. An understanding of the error and bias in volunteer water quality measurements can allow regulators to incorporate volunteer water quality data into total maximum daily load planning or state water quality reporting. © 2010 American Chemical Society.

  18. Modeling Security Aspects of Network

    NASA Astrophysics Data System (ADS)

    Schoch, Elmar

    With more and more widespread usage of computer systems and networks, dependability becomes a paramount requirement. Dependability typically denotes tolerance or protection against all kinds of failures, errors and faults. Sources of failures can basically be accidental, e.g., in case of hardware errors or software bugs, or intentional due to some kind of malicious behavior. These intentional, malicious actions are subject of security. A more complete overview on the relations between dependability and security can be found in [31]. In parallel to the increased use of technology, misuse also has grown significantly, requiring measures to deal with it.

  19. Approximations to Joint Distributions of Definite Quadratic Forms

    DTIC Science & Technology

    1989-11-21

    [OCR-garbled excerpt: equation (A.3), a partitioned-form expression with s' = s - 1 in which the bracketed term is the typical (u,v) element, is not recoverable. The remaining fragments are reference-list entries, including "...errors in cephalometric measurement of three-dimensional distances on the maxilla," Angle Orthodont., 36, 169-175, and [27] Pearson, K. (1900), "On a..."]

  20. Sensitivity of disease management decision aids to temperature input errors associated with out-of-canopy and reduced time-resolution measurements

    USDA-ARS?s Scientific Manuscript database

    Plant disease management decision aids typically require inputs of weather elements such as air temperature. Whereas many disease models are created based on weather elements at the crop canopy, and with relatively fine time resolution, the decision aids commonly are implemented with hourly weather...

  1. Using the Kernel Method of Test Equating for Estimating the Standard Errors of Population Invariance Measures

    ERIC Educational Resources Information Center

    Moses, Tim

    2008-01-01

    Equating functions are supposed to be population invariant, meaning that the choice of subpopulation used to compute the equating function should not matter. The extent to which equating functions are population invariant is typically assessed in terms of practical difference criteria that do not account for equating functions' sampling…

  2. New methodology for adjusting rotating shadowband irradiometer measurements

    NASA Astrophysics Data System (ADS)

    Vignola, Frank; Peterson, Josh; Wilbert, Stefan; Blanc, Philippe; Geuder, Norbert; Kern, Chris

    2017-06-01

    A new method is developed for correcting systematic errors found in rotating shadowband irradiometer measurements. Since the responsivity of photodiode-based pyranometers typically utilized for RSI sensors is dependent upon the wavelength of the incident radiation and the spectral distribution of the incident radiation is different for the Direct Normal Irradiance and the Diffuse Horizontal Irradiance, spectral effects have to be considered. These cause the most problematic errors when applying currently available correction functions to RSI measurements. Hence, direct normal and diffuse contributions are analyzed and modeled separately. An additional advantage of this methodology is that it provides a prescription for how to modify the adjustment algorithms to locations with different atmospheric characteristics from the location where the calibration and adjustment algorithms were developed. A summary of results and areas for future efforts are then discussed.

  3. CCD image sensor induced error in PIV applications

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.

    2014-06-01

    The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a modeling for the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.

  4. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert

    2011-01-01

    The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, Analog to Digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard 1976 Atmosphere model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. The aerodynamic model includes the pressure difference between the midbay compartment and the free stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
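
    The pressure-to-altitude conversion behind the "atmosphere modeling" error class above is, in the lowest atmospheric layer, the standard barometric formula. The sketch below implements the US Standard Atmosphere 1976 troposphere relation with its published constants; the function itself is only an illustration and is not the EFT-1 flight software.

```python
# Hedged sketch: pressure-to-altitude conversion for the lowest layer of the
# US Standard Atmosphere 1976 (valid below ~11 km). Constants are the standard
# published values; the function is illustrative, not flight code.
P0 = 101325.0      # sea-level standard pressure, Pa
T0 = 288.15        # sea-level standard temperature, K
L = 0.0065         # temperature lapse rate, K/m
R = 8.31446        # universal gas constant, J/(mol K)
G0 = 9.80665       # standard gravity, m/s^2
M = 0.0289644      # molar mass of dry air, kg/mol

def pressure_to_altitude(p_pa):
    """Geopotential altitude (m) from static pressure (Pa), troposphere only."""
    exponent = (R * L) / (G0 * M)              # ~0.1903
    return (T0 / L) * (1.0 - (p_pa / P0) ** exponent)

print(pressure_to_altitude(89874.0))   # ~1000 m under standard conditions
```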

  5. The Gnomon Experiment

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin

    2007-12-01

    A gnomon, or vertical pointed stick, can be used to determine the north-south direction at a site, as well as one's latitude. If one has accurate time and knows one's time zone, it is also possible to determine one's longitude. From observations on the first day of winter and the first day of summer one can determine the obliquity of the ecliptic. Since we can obtain accurate geographical coordinates from Google Earth or a GPS device, analysis of a set of shadow length measurements can be used by students to learn about astronomical coordinate systems, time systems, systematic errors, and random errors. Systematic latitude errors of student datasets are typically 30 nautical miles (0.5 degree) or more, but with care one can achieve systematic and random errors less than 8 nautical miles. One of the advantages of this experiment is that it can be carried out during the day. Also, it is possible to determine if a student has made up his data.
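
    For concreteness, the basic reduction of a noon shadow measurement looks like the sketch below: the Sun's noon altitude follows from the shadow length, and latitude follows from that altitude and the solar declination on the observation date. The shadow numbers and declination are made up, and the sign convention assumes a northern-hemisphere observer with the Sun due south at local noon.

```python
import math

# Hedged sketch: latitude from a local-noon gnomon shadow (northern hemisphere,
# Sun crossing the meridian to the south). All numbers are illustrative.
gnomon_height = 1.000      # m
shadow_length = 0.820      # m, shortest (local-noon) shadow (hypothetical)
declination = 10.5         # solar declination on the date, degrees (hypothetical)

altitude = math.degrees(math.atan2(gnomon_height, shadow_length))
latitude = 90.0 - altitude + declination
print(f"noon solar altitude = {altitude:.2f} deg, latitude ~ {latitude:.2f} deg N")
```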

  6. Optimising in situ gamma measurements to identify the presence of radioactive particles in land areas.

    PubMed

    Rostron, Peter D; Heathcote, John A; Ramsey, Michael H

    2014-12-01

    High-coverage in situ surveys with gamma detectors are the best means of identifying small hotspots of activity, such as radioactive particles, in land areas. Scanning surveys can produce rapid results, but the probabilities of obtaining false positive or false negative errors are often unknown, and they may not satisfy other criteria such as estimation of mass activity concentrations. An alternative is to use portable gamma-detectors that are set up at a series of locations in a systematic sampling pattern, where any positive measurements are subsequently followed up in order to determine the exact location, extent and nature of the target source. The preliminary survey is typically designed using settings of detector height, measurement spacing and counting time that are based on convenience, rather than using settings that have been calculated to meet requirements. This paper introduces the basis of a repeatable method of setting these parameters at the outset of a survey, for pre-defined probabilities of false positive and false negative errors in locating spatially small radioactive particles in land areas. It is shown that an un-collimated detector is more effective than a collimated detector that might typically be used in the field. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. The mean magnetic field of the sun: Observations at Stanford

    NASA Technical Reports Server (NTRS)

    Scherrer, P. H.; Wilcox, J. M.; Svalgaard, L.; Duvall, T. L., Jr.; Dittmer, P. H.; Gustafson, E. K.

    1977-01-01

    A solar telescope was built at Stanford University to study the organization and evolution of large-scale solar magnetic fields and velocities. The observations are made using a Babcock-type magnetograph which is connected to a 22.9 m vertical Littrow spectrograph. Sun-as-a-star integrated light measurements of the mean solar magnetic field have been made daily since May 1975. The typical mean field magnitude is about 0.15 gauss with a typical measurement error of less than 0.05 gauss. The mean field polarity pattern is essentially identical to the interplanetary magnetic field sector structure (seen near the earth with a 4 day lag). The differences in the observed structures can be understood in terms of a warped current sheet model.

  8. Patient safety: honoring advanced directives.

    PubMed

    Tice, Martha A

    2007-02-01

    Healthcare providers typically think of patient safety in the context of preventing iatrogenic injury. Prevention of falls and medication or treatment errors is the typical focus of adverse event analyses. If healthcare providers are committed to honoring the wishes of patients, then perhaps failures to honor advanced directives should be viewed as reportable medical errors.

  9. Computation Error Analysis: Students with Mathematics Difficulty Compared to Typically Achieving Students

    ERIC Educational Resources Information Center

    Nelson, Gena; Powell, Sarah R.

    2018-01-01

    Though proficiency with computation is highly emphasized in national mathematics standards, students with mathematics difficulty (MD) continue to struggle with computation. To learn more about the differences in computation error patterns between typically achieving students and students with MD, we assessed 478 third-grade students on a measure…

  10. Assessment and Calibration of Ultrasonic Measurement Errors in Estimating Weathering Index of Stone Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Keehm, Y.

    2011-12-01

    Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important to plan conservation and restoration. The ultrasonic measurement is one of the commonly-used techniques to evaluate the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically we use a portable ultrasonic device, PUNDIT with exponential sensors. However, there are many factors that cause errors in measurements, such as operators, sensor layouts or measurement directions. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and sensor directions (anisotropy). For operator bias, we found that there were not significant differences by the operator's sex, while the pressure an operator exerts can create larger error in measurements. Calibrating with a standard sample for each operator is very essential in this case. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since the direct measurement is difficult in most cases) gives lower velocity than the real one. We found that the correction coefficient is slightly different for different types of rocks: 1.50 for granite and sandstone and 1.46 for marble. From the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity measurement, though they are considered isotropic in macroscopic scale. Thus averaging four different directional measurements (0°, 45°, 90°, 135°) gives much smaller errors in measurements (the variance is 2-3 times smaller). In conclusion, we reported the error in ultrasonic measurement of stone cultural properties by various sources quantitatively and suggested the amount of correction and procedures to calibrate the measurements. Acknowledgement: This study, which forms a part of the project, has been achieved with the support of national R&D project, which has been hosted by National Research Institute of Cultural Heritage of Cultural Heritage Administration (No. NRICH-1107-B01F).

  11. Determination of wavefront structure for a Hartmann wavefront sensor using a phase-retrieval method.

    PubMed

    Polo, A; Kutchoukov, V; Bociort, F; Pereira, S F; Urbach, H P

    2012-03-26

    We apply a phase retrieval algorithm to the intensity pattern of a Hartmann wavefront sensor to measure with enhanced accuracy the phase structure of a Hartmann hole array. It is shown that the rms wavefront error achieved by phase reconstruction is one order of magnitude smaller than the one obtained from a typical centroid algorithm. Experimental results are consistent with a phase measurement performed independently using a Shack-Hartmann wavefront sensor.

  12. Valuing urban open space using the travel-cost method and the implications of measurement error.

    PubMed

    Hanauer, Merlin M; Reid, John

    2017-08-01

    Urbanization has placed pressure on open space within and adjacent to cities. In recent decades, a greater awareness has developed to the fact that individuals derive multiple benefits from urban open space. Given the location, there is often a high opportunity cost to preserving urban open space; thus, it is important for both public and private stakeholders to justify such investments. The goals of this study are twofold. First, we use detailed surveys and precise, accessible, mapping methods to demonstrate how travel-cost methods can be applied to the valuation of urban open space. Second, we assess the degree to which typical methods of estimating travel times, and thus travel costs, introduce bias to the estimates of welfare. The site we study is Taylor Mountain Regional Park, a 1100-acre space located immediately adjacent to Santa Rosa, California, which is the largest city (∼170,000 population) in Sonoma County and lies 50 miles north of San Francisco. We estimate that the average per trip access value (consumer surplus) is $13.70. We also demonstrate that typical methods of measuring travel costs significantly understate these welfare measures. Our study provides policy-relevant results and highlights the sensitivity of urban open space travel-cost studies to bias stemming from travel-cost measurement error. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Evaluation and attribution of OCO-2 XCO2 uncertainties

    NASA Astrophysics Data System (ADS)

    Worden, John R.; Doran, Gary; Kulawik, Susan; Eldering, Annmarie; Crisp, David; Frankenberg, Christian; O'Dell, Chris; Bowman, Kevin

    2017-07-01

    Evaluating and attributing uncertainties in total column atmospheric CO2 measurements (XCO2) from the OCO-2 instrument is critical for testing hypotheses related to the underlying processes controlling XCO2 and for developing quality flags needed to choose those measurements that are usable for carbon cycle science. Here we test the reported uncertainties of version 7 OCO-2 XCO2 measurements by examining variations of the XCO2 measurements and their calculated uncertainties within small regions (~100 km × 10.5 km) in which natural CO2 variability is expected to be small relative to variations imparted by noise or interferences. Over 39 000 of these small neighborhoods comprised of approximately 190 observations per neighborhood are used for this analysis. We find that a typical ocean measurement has a precision and accuracy of 0.35 and 0.24 ppm respectively for calculated precisions larger than ~0.25 ppm. These values are approximately consistent with the calculated errors of 0.33 and 0.14 ppm for the noise and interference error, assuming that the accuracy is bounded by the calculated interference error. The actual precision for ocean data becomes worse as the signal-to-noise increases or the calculated precision decreases below 0.25 ppm for reasons that are not well understood. A typical land measurement, both nadir and glint, is found to have a precision and accuracy of approximately 0.75 and 0.65 ppm respectively as compared to the calculated precision and accuracy of approximately 0.36 and 0.2 ppm. The differences in accuracy between ocean and land suggest that the accuracy of XCO2 data is likely related to interferences such as aerosols or surface albedo as they vary less over ocean than land. The accuracy as derived here is also likely a lower bound as it does not account for possible systematic biases between the regions used in this analysis.

  14. Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.

    PubMed

    Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J

    2015-02-01

    The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association between the underlying error-free measurement process and the hazard for survival is of scientific interest. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
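
    The joint model described in this tutorial is usually written as a linear mixed effects submodel for the longitudinal outcome linked to a proportional-hazards submodel through the current error-free value of the biomarker. A standard formulation is reproduced below for reference, in generic notation rather than the paper's exact specification:

```latex
% Standard shared-parameter joint model (generic notation)
\begin{aligned}
y_i(t) &= m_i(t) + \varepsilon_i(t), \qquad
m_i(t) = \mathbf{x}_i^{\top}(t)\,\boldsymbol\beta + \mathbf{z}_i^{\top}(t)\,\mathbf{b}_i, \qquad
\varepsilon_i(t) \sim N(0, \sigma^2), \\
h_i(t) &= h_0(t)\,\exp\!\left\{\boldsymbol\gamma^{\top}\mathbf{w}_i + \alpha\, m_i(t)\right\},
\end{aligned}
```

    where $m_i(t)$ is the underlying error-free biomarker trajectory (e.g. true eGFR), $\mathbf{b}_i$ are subject-specific random effects, and $\alpha$ quantifies the association between the current biomarker value and the hazard (e.g. for initiation of RRT); treating the noisy $y_i(t)$ directly as a time-varying covariate in a Cox model ignores $\varepsilon_i(t)$, which is the attenuation the authors report.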

  15. On the use of Lineal Energy Measurements to Estimate Linear Energy Transfer Spectra

    NASA Technical Reports Server (NTRS)

    Adams, David A.; Howell, Leonard W., Jr.; Adam, James H., Jr.

    2007-01-01

    This paper examines the error resulting from using a lineal energy spectrum to represent a linear energy transfer spectrum for applications in the space radiation environment. Lineal energy and linear energy transfer spectra are compared in three diverse but typical space radiation environments. Different detector geometries are also studied to determine how they affect the error. LET spectra are typically used to compute dose equivalent for radiation hazard estimation and single event effect rates to estimate radiation effects on electronics. The errors in the estimations of dose equivalent and single event rates that result from substituting lineal energy spectra for linear energy spectra are examined. It is found that this substitution has little effect on dose equivalent estimates in interplanetary quiet-time environment regardless of detector shape. The substitution has more of an effect when the environment is dominated by solar energetic particles or trapped radiation, but even then the errors are minor especially if a spherical detector is used. For single event estimation, the effect of the substitution can be large if the threshold for the single event effect is near where the linear energy spectrum drops suddenly. It is judged that single event rate estimates made from lineal energy spectra are unreliable and the use of lineal energy spectra for single event rate estimation should be avoided.
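
    The two quantities being compared have standard microdosimetric definitions, reproduced here for reference (textbook definitions, not results from this paper):

```latex
% Lineal energy versus linear energy transfer (standard definitions)
y = \frac{\varepsilon}{\bar{\ell}}, \qquad
\bar{\ell} = \frac{4V}{S} \;\; \text{(mean chord length of a convex site)}, \qquad
L = \frac{\mathrm{d}E}{\mathrm{d}x},
```

    so $y$ is a stochastic quantity defined per energy-deposition event in a finite site, while $L$ is a non-stochastic average along the particle track; the two spectra agree only approximately, which is the substitution error this paper quantifies.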

  16. Performances Study of Interferometric Radar Altimeters: from the Instrument to the Global Mission Definition

    PubMed Central

    Enjolras, Vivien; Vincent, Patrick; Souyris, Jean-Claude; Rodriguez, Ernesto; Phalippou, Laurent; Cazenave, Anny

    2006-01-01

    The main limitations of standard nadir-looking radar altimeters have long been known. They include the lack of coverage (intertrack distance of typically 150 km for the T/P / Jason tandem), and the spatial resolution (typically 2 km for T/P and Jason), expected to be a limiting factor for the determination of mesoscale phenomena in deep ocean. In this context, various solutions using off-nadir radar interferometry have been proposed by Rodriguez et al. to address oceanographic mission objectives. This paper addresses the performance study of this new generation of instruments and the dedicated mission. A first approach is based on the Wide-Swath Ocean Altimeter (WSOA), intended to be implemented onboard Jason-2 in 2004 but now abandoned. Every error domain has been checked: the physics of the measurement, its geometry, the impact of the platform and external errors like the tropospheric and ionospheric delays. We have especially shown the strong need to move to a sun-synchronous orbit and the non-negligible impact of propagation media errors in the swath, reaching a few centimetres in the worst case. Some changes in the parameters of the instrument have also been discussed to improve the overall error budget. The outcomes have led to the definition and the optimization of such an instrument and its dedicated mission.

  17. MO-FG-BRA-06: Electromagnetic Beacon Insertion in Lung Cancer Patients and Resultant Surrogacy Errors for Dynamic MLC Tumour Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardcastle, N; Booth, J; Caillet, V

    Purpose: To assess endo-bronchial electromagnetic beacon insertion and to quantify the geometric accuracy of using beacons as a surrogate for tumour motion in real-time multileaf collimator (MLC) tracking of lung tumours. Methods: The LIGHT SABR trial is a world-first clinical trial in which the MLC leaves move with lung tumours in real time on a standard linear accelerator. Tracking is performed based on implanted electromagnetic beacons (CalypsoTM, Varian Medical Systems, USA) as a surrogate for tumour motion. Five patients have been treated and have each had three beacons implanted endo-bronchially under fluoroscopic guidance. The centre of mass (C.O.M) has been used to adapt the MLC in real-time. The geometric error in using the beacon C.O.M as a surrogate for tumour motion was measured by measuring the tumour and beacon C.O.M in all phases of the respiratory cycle of a 4DCT. The surrogacy error was defined as the difference in beacon and tumour C.O.M relative to the reference phase (maximum exhale). Results: All five patients have had three beacons successfully implanted with no migration between simulation and end of treatment. Beacon placement relative to tumour C.O.M varied from 14 to 74 mm and in one patient spanned two lobes. Surrogacy error was measured in each patient on the simulation 4DCT and ranged from 0 to 3 mm. Surrogacy error as measured on 4DCT was subject to artefacts in mid-ventilation phases. Surrogacy error was a function of breathing phase and was typically larger at maximum inhale. Conclusion: Beacon placement and thus surrogacy error is a major component of geometric uncertainty in MLC tracking of lung tumours. Surrogacy error must be measured on each patient and incorporated into margin calculation. Reduction of surrogacy error is limited by airway anatomy, however should be taken into consideration when performing beacon insertion and planning. This research is funded by Varian Medical Systems via a collaborative research agreement.

  18. Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2011-01-01

    Errors generate typical brain responses, characterized by two successive event-related potentials (ERP) following incorrect action: the error-related negativity (ERN) and the positivity error (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…

  19. A Practical Methodology for Quantifying Random and Systematic Components of Unexplained Variance in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.

    2012-01-01

    This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.

  20. Statistically Controlling for Confounding Constructs Is Harder than You Think

    PubMed Central

    Westfall, Jacob; Yarkoni, Tal

    2016-01-01

    Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
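
    A minimal Monte Carlo sketch of the mechanism described above: two correlated constructs, only one of which truly predicts the outcome, are each observed with moderate reliability, and an ordinary multiple regression on the observed measures then flags the irrelevant one as "incrementally valid" far more often than the nominal 5%. All parameter values (correlation, reliability, sample size, number of simulations) are illustrative assumptions, not the settings used in the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def one_trial(n=500, rho=0.7, reliability=0.7):
          # Latent constructs T1, T2 with correlation rho.
          cov = np.array([[1.0, rho], [rho, 1.0]])
          T = rng.multivariate_normal([0.0, 0.0], cov, size=n)
          # Outcome depends on T1 only; T2 has no incremental effect.
          y = T[:, 0] + rng.normal(0.0, 1.0, n)
          # Observed measures under a classical error model with given reliability.
          err_sd = np.sqrt((1.0 - reliability) / reliability)
          x1 = T[:, 0] + rng.normal(0.0, err_sd, n)
          x2 = T[:, 1] + rng.normal(0.0, err_sd, n)
          # OLS of y on [1, x1, x2]; test H0: coefficient on x2 equals zero.
          X = np.column_stack([np.ones(n), x1, x2])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          dof = n - X.shape[1]
          sigma2 = resid @ resid / dof
          se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
          p = 2 * stats.t.sf(abs(beta[2] / se), dof)
          return p < 0.05   # false "incremental validity" claim for x2

      n_sim = 2000
      false_positives = sum(one_trial() for _ in range(n_sim))
      print(f"Empirical Type I error rate: {false_positives / n_sim:.3f} (nominal 0.05)")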

  1. Determination of the anaerobic threshold in the pre-operative assessment clinic: inter-observer measurement error.

    PubMed

    Sinclair, R C F; Danjoux, G R; Goodridge, V; Batterham, A M

    2009-11-01

    The variability between observers in the interpretation of cardiopulmonary exercise tests may impact upon clinical decision making and affect the risk stratification and peri-operative management of a patient. The purpose of this study was to quantify the inter-reader variability in the determination of the anaerobic threshold (V-slope method). A series of 21 cardiopulmonary exercise tests from patients attending a surgical pre-operative assessment clinic were read independently by nine experienced clinicians regularly involved in clinical decision making. The grand mean for the anaerobic threshold was 10.5 ml O(2).kg body mass(-1).min(-1). The technical error of measurement was 8.1% (circa 0.9 ml.kg(-1).min(-1); 90% confidence interval, 7.4-8.9%). The mean absolute difference between readers was 4.5% with a typical random error of 6.5% (6.0-7.2%). We conclude that the inter-observer variability for experienced clinicians determining the anaerobic threshold from cardiopulmonary exercise tests is acceptable.
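
    For readers unfamiliar with the statistic, the sketch below shows one common way to compute a multi-rater technical error of measurement and express it as a percentage of the grand mean. The patient-by-reader readings are simulated placeholders chosen only to mirror the scale of the study (21 tests, 9 readers, grand mean near 10.5 ml O2.kg-1.min-1); they are not the study data, and other TEM formulations exist.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated anaerobic-threshold readings (ml O2.kg-1.min-1):
      # rows = 21 tests, columns = 9 readers.  Placeholder values only.
      true_at = rng.normal(10.5, 2.0, size=(21, 1))
      readings = true_at + rng.normal(0.0, 0.85, size=(21, 9))

      n_tests, n_readers = readings.shape

      # Multi-rater technical error of measurement: within-test deviations from
      # each test's mean reading, pooled across tests.
      test_means = readings.mean(axis=1, keepdims=True)
      ss_within = np.sum((readings - test_means) ** 2)
      tem = np.sqrt(ss_within / (n_tests * (n_readers - 1)))

      grand_mean = readings.mean()
      print(f"TEM          : {tem:.2f} ml.kg-1.min-1")
      print(f"Relative TEM : {100 * tem / grand_mean:.1f}% of grand mean {grand_mean:.1f}")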

  2. Grazing Incidence Wavefront Sensing and Verification of X-Ray Optics Performance

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Rohrbach, Scott; Zhang, William W.

    2011-01-01

    Evaluation of interferometrically measured mirror metrology data and characterization of a telescope wavefront can be powerful tools for understanding the image characteristics of an x-ray optical system. In the development of the soft x-ray telescope for the International X-Ray Observatory (IXO), we have developed new approaches to support the telescope development process. Interferometric measurement of the optical components over all relevant spatial frequencies can be used to evaluate and predict the performance of an x-ray telescope. Typically, the mirrors are measured using a mount that minimizes mount- and gravity-induced errors. In the assembly and mounting process the shape of the mirror segments can change dramatically. We have developed wavefront sensing techniques suitable for the x-ray optical components to aid us in the characterization and evaluation of these changes. Hartmann sensing of a telescope and its components is a simple method that can be used to evaluate low-order mirror surface errors and alignment errors. Phase retrieval techniques can also be used to assess and estimate the low-order axial errors of the primary and secondary mirror segments. In this paper we describe the mathematical foundation of our Hartmann and phase retrieval sensing techniques. We show how these techniques can be used in the evaluation and performance prediction process of x-ray telescopes.

  3. Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals

    NASA Astrophysics Data System (ADS)

    Huerta, E. A.; Gair, Jonathan R.

    2009-04-01

    We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10 M⊙ compact object captured by a 10^6 M⊙ SMBH at a signal-to-noise ratio of 30) we expect to determine the two masses to within a fractional error of ~10^-4, measure the spin parameter q to ~10^-4.5, and determine the location of the source on the sky and the spin orientation to within 10^-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R<1 for all ten parameters in the model. For the few systems with larger errors typically R<3 and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.

  4. Development and Monte Carlo Study of a Procedure for Correcting the Standardized Mean Difference for Measurement Error in the Independent Variable

    ERIC Educational Resources Information Center

    Nugent, William Robert; Moore, Matthew; Story, Erin

    2015-01-01

    The standardized mean difference (SMD) is perhaps the most important meta-analytic effect size. It is typically used to represent the difference between treatment and control population means in treatment efficacy research. It is also used to represent differences between populations with different characteristics, such as persons who are…

  5. Improved guidance hardware study for the scout launch vehicle

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Salis, M. L.; Mueller, R.; Best, L. E.; Bradt, A. J.; Harrison, R.; Burrell, J. H.

    1972-01-01

    A market survey and evaluation of inertial guidance systems (inertial measurement units and digital computers) were made. Comparisons were made to determine the candidate systems for use in the Scout launch vehicle. Error analyses were made using typical Scout trajectories. A reaction control system was sized for the fourth stage. The guidance hardware to Scout vehicle interface was listed.

  6. Development of a two-dimensional dual pendulum thrust stand for Hall thrusters.

    PubMed

    Nagao, N; Yokota, S; Komurasaki, K; Arakawa, Y

    2007-11-01

    A two-dimensional dual pendulum thrust stand was developed to measure thrust vectors [axial and horizontal (transverse) direction thrusts] of a Hall thruster. A thruster with a steering mechanism is mounted on the inner pendulum, and thrust is measured from the displacement between the inner and outer pendulums, by which a thermal drift effect is canceled out. Two crossover knife-edges support each pendulum arm: one is set on the other at a right angle. They enable the pendulums to swing in two directions. Thrust calibration using a pulley and weight system showed that the measurement errors were less than 0.25 mN (1.4%) in the main thrust direction and 0.09 mN (1.4%) in its transverse direction. The thrust angle of the thrust vector was measured with the stand using the thruster. Consequently, a vector deviation from the main thrust direction of +/-2.3 degrees was measured with an error of +/-0.2 degrees under typical operating conditions for the thruster.
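
    The reported thrust-vector angle and its uncertainty follow from the two measured thrust components; a minimal sketch of that conversion with first-order error propagation is given below, using illustrative numbers of the same order as those in the abstract rather than the actual measurements.

      import numpy as np

      # Illustrative values only, of the same order as the abstract:
      F_axial, dF_axial = 18.0, 0.25    # mN: main-direction thrust and its error
      F_trans, dF_trans = 0.72, 0.09    # mN: transverse thrust and its error

      # Thrust-vector angle relative to the main thrust direction.
      theta = np.degrees(np.arctan2(F_trans, F_axial))

      # First-order error propagation for theta = atan(Ft / Fa):
      #   d(theta)/dFt =  Fa / (Fa^2 + Ft^2),   d(theta)/dFa = -Ft / (Fa^2 + Ft^2)
      denom = F_axial**2 + F_trans**2
      dtheta = np.degrees(np.sqrt((F_axial / denom * dF_trans) ** 2 +
                                  (F_trans / denom * dF_axial) ** 2))

      print(f"thrust angle: {theta:.2f} deg +/- {dtheta:.2f} deg")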

  7. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  8. Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2; Applications

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2005-01-01

    The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.

  9. Optical truss and retroreflector modeling for picometer laser metrology

    NASA Astrophysics Data System (ADS)

    Hines, Braden E.

    1993-09-01

    Space-based astrometric interferometer concepts typically have a requirement for the measurement of the internal dimensions of the instrument to accuracies in the picometer range. While this level of resolution has already been achieved for certain special types of laser gauges, techniques for picometer-level accuracy need to be developed to enable all the various kinds of laser gauges needed for space-based interferometers. Systematic errors due to retroreflector imperfections become important as soon as the retroreflector is allowed to either translate in position or articulate in angle away from its nominal zero-point. Also, when combining several laser interferometers to form a three-dimensional laser gauge (a laser optical truss), systematic errors due to imperfect knowledge of the truss geometry are important as the retroreflector translates away from its nominal zero-point. In order to assess the astrometric performance of a proposed instrument, it is necessary to determine how the effects of an imperfect laser metrology system impact the astrometric accuracy. This paper shows the development of an error propagation model from errors in the 1-D metrology measurements through the impact on the overall astrometric accuracy for OSI. Simulations based on this development are then presented, which were used to define a multiplier that determines the 1-D metrology accuracy required to produce a given amount of fringe position error.

  10. Characterization of a multi-axis ion chamber array.

    PubMed

    Simon, Thomas A; Kozelka, Jakub; Simon, William E; Kahler, Darren; Li, Jonathan; Liu, Chihray

    2010-11-01

    The aim of this work was to characterize a multi-axis ion chamber array (IC PROFILER; Sun Nuclear Corporation, Melbourne, FL, USA) that has the potential to simplify the acquisition of LINAC beam data. The IC PROFILER (or panel) measurement response was characterized with respect to radiation beam properties, including dose, dose per pulse, pulse rate frequency (PRF), and energy. Panel properties were also studied, including detector-calibration stability, power-on time, backscatter dependence, and the panel's agreement with water tank measurements [profiles, fractional depth dose (FDD), and output factors]. The panel's relative deviation was typically within (+/-) 1% of an independent (or nominal) response for all properties that were tested. Notable results were (a) a detectable relative field shape change of approximately 1% with linear accelerator PRF changes; (b) a large range in backscatter thickness had a minimal effect on the measured dose distribution (typically less than 1%); (c) the error spread in profile comparison between the panel and scanning water tank (Blue Phantom, CC13; IBA Schwarzenbruck, DE) was approximately (+/-) 0.75%. The ability of the panel to accurately reproduce water tank profiles, FDDs, and output factors is an indication of its abilities as a dosimetry system. The benefits of using the panel versus a scanning water tank are less setup time and less error susceptibility. The same measurements (including device setup and breakdown) for both systems took 180 min with the water tank versus 30 min with the panel. The time-savings increase as the measurement load is increased.

  11. Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio

    NASA Astrophysics Data System (ADS)

    Charonko, John J.; Vlachos, Pavlos P.

    2013-06-01

    Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost.
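
    The idea is to map each vector's correlation peak ratio to an uncertainty band through a calibration derived from synthetic data. The sketch below assumes a simple exponential-decay calibration with placeholder coefficients A, B, C; the published analytical model and its fitted coefficients differ, so treat this purely as an illustration of how such a mapping would be applied per vector.

      import numpy as np

      # Hypothetical calibration: 1-sigma displacement uncertainty (pixels) as a
      # decreasing function of the primary-to-secondary correlation peak ratio Q.
      # The exponential form and the coefficients A, B, C are placeholders, NOT
      # the published model.
      A, B, C = 0.30, 1.2, 0.02

      def displacement_uncertainty(peak_ratio, coverage=1.96):
          """Return a +/- uncertainty bound (pixels) at roughly 95% confidence."""
          q = np.asarray(peak_ratio, dtype=float)
          sigma = A * np.exp(-B * (q - 1.0)) + C   # assumed 1-sigma estimate
          return coverage * sigma

      # Peak ratios from a hypothetical PIV interrogation pass, one per vector.
      peak_ratios = np.array([1.1, 1.5, 2.0, 3.0, 5.0])
      for q, u in zip(peak_ratios, displacement_uncertainty(peak_ratios)):
          print(f"peak ratio {q:4.1f} -> displacement uncertainty +/- {u:.3f} px")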

  12. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
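
    The two skill scores used above are computed from a 2x2 contingency table after the predicted probabilities are dichotomized at a chosen threshold. The sketch below shows that bookkeeping on synthetic forecasts and observations; the thresholds (a stand-in climatological frequency and 0.5) and the random data are illustrative only.

      import numpy as np

      def contingency(forecast_yes, observed_yes):
          """Counts of hits, false alarms, misses, and correct negatives."""
          f = np.asarray(forecast_yes, dtype=bool)
          o = np.asarray(observed_yes, dtype=bool)
          return (np.sum(f & o), np.sum(f & ~o), np.sum(~f & o), np.sum(~f & ~o))

      def percent_correct(h, fa, m, cn):
          return (h + cn) / (h + fa + m + cn)

      def hanssen_kuipers(h, fa, m, cn):
          # Hit rate minus false-alarm rate.
          return h / (h + m) - fa / (fa + cn)

      # Synthetic, self-consistent forecasts and observations (illustration only).
      rng = np.random.default_rng(2)
      p_model = rng.uniform(0.0, 1.0, 5000)             # model probabilities
      observed = rng.uniform(0.0, 1.0, 5000) < p_model  # synthetic "truth"

      # Dichotomize at a stand-in climatological frequency and at 0.5.
      for threshold in (0.16, 0.5):
          h, fa, m, cn = contingency(p_model >= threshold, observed)
          print(f"threshold {threshold:.2f}: PC = {percent_correct(h, fa, m, cn):.3f}, "
                f"HKD = {hanssen_kuipers(h, fa, m, cn):.3f}")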

  13. The effect of learning on feedback-related potentials in adolescents with dyslexia: an EEG-ERP study.

    PubMed

    Kraus, Dror; Horowitz-Kraus, Tzipi

    2014-01-01

    Individuals with dyslexia exhibit associated learning deficits and impaired executive functions. The Wisconsin Card Sorting Test (WCST) is a learning-based task that relies heavily on executive functioning, in particular, attention shift and working memory. Performance during early and late phases of a series within the task represents learning and implementation of a newly learned rule. Here, we aimed to examine two event-related potentials associated with learning, feedback-related negativity (FRN)-P300 complex, in individuals with dyslexia performing the WCST. Adolescents with dyslexia and age-matched typical readers performed the Madrid card sorting test (MCST), a computerized version of the WCST. Task performance, reading measures, and cognitive measures were collected. FRN and the P300 complex were acquired using the event-related potentials methodology and were compared in early vs late errors within a series. While performing the MCST, both groups showed a significant reduction in average reaction times and a trend toward decreased error rates. Typical readers performed consistently better than individuals with dyslexia. FRN amplitudes in early phases were significantly smaller in dyslexic readers, but were essentially equivalent to typical readers in the late phase. P300 amplitudes were initially smaller among readers with dyslexia and tended to decrease further in late phases. Differences in FRN amplitudes for early vs late phases were positively correlated with those of P300 amplitudes in the entire sample. Individuals with dyslexia demonstrate a behavioral and electrophysiological change within single series of the MCST. However, learning patterns seem to differ between individuals with dyslexia and typical readers. We attribute these differences to the lower baseline performance of individuals with dyslexia. We suggest that these changes represent a fast compensatory mechanism, demonstrating the importance of learning strategies on reading among individuals with dyslexia.

  14. Impact of food and fluid intake on technical and biological measurement error in body composition assessment methods in athletes.

    PubMed

    Kerr, Ava; Slater, Gary J; Byrne, Nuala

    2017-02-01

    Two, three and four compartment (2C, 3C and 4C) models of body composition are popular methods to measure fat mass (FM) and fat-free mass (FFM) in athletes. However, the impact of food and fluid intake on measurement error has not been established. The purpose of this study was to evaluate standardised (overnight fasted, rested and hydrated) v. non-standardised (afternoon and non-fasted) presentation on technical and biological error on surface anthropometry (SA), 2C, 3C and 4C models. In thirty-two athletic males, measures of SA, dual-energy X-ray absorptiometry (DXA), bioelectrical impedance spectroscopy (BIS) and air displacement plethysmography (BOD POD) were taken to establish 2C, 3C and 4C models. Tests were conducted after an overnight fast (duplicate), about 7 h later after ad libitum food and fluid intake, and repeated 24 h later before and after ingestion of a specified meal. Magnitudes of changes in the mean and typical errors of measurement were determined. Mean change scores for non-standardised presentation and post meal tests for FM were substantially large in BIS, SA, 3C and 4C models. For FFM, mean change scores for non-standardised conditions produced large changes for BIS, 3C and 4C models, small for DXA, trivial for BOD POD and SA. Models that included a total body water (TBW) value from BIS (3C and 4C) were more sensitive to TBW changes in non-standardised conditions than 2C models. Biological error is minimised in all models with standardised presentation but DXA and BOD POD are acceptable if acute food and fluid intake remains below 500 g.

  15. Linguistic pattern analysis of misspellings of typically developing writers in grades 1-9.

    PubMed

    Bahr, Ruth Huntley; Silliman, Elaine R; Berninger, Virginia W; Dow, Michael

    2012-12-01

    A mixed-methods approach, evaluating triple word-form theory, was used to describe linguistic patterns of misspellings. Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in Grades 1-9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade-level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between Grades 4 and 5. Similar error types were noted across age groups, but the nature of linguistic feature error changed with age. Triple word-form theory was supported. By Grade 1, orthographic errors predominated, and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects nonlinear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling.

  16. Early Career Teachers' Ability to Focus on Typical Students Errors in Relation to the Complexity of a Mathematical Topic

    ERIC Educational Resources Information Center

    Pankow, Lena; Kaiser, Gabriele; Busse, Andreas; König, Johannes; Blömeke, Sigrid; Hoth, Jessica; Döhrmann, Martina

    2016-01-01

    The paper presents results from a computer-based assessment in which 171 early career mathematics teachers from Germany were asked to anticipate typical student errors on a given mathematical topic and identify them under time constraints. Fast and accurate perception and knowledge-based judgments are widely accepted characteristics of teacher…

  17. (How) do we learn from errors? A prospective study of the link between the ward's learning practices and medication administration errors.

    PubMed

    Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O

    2014-03-01

    Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. The aim was to test the effectiveness of four types of learning practices, namely non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of 4 hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and measured by a validated structured observation sheet. Wards' use of medication administration technologies, location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate=.03, p<.05 and estimate=-.17, p<.01, respectively), while workload was significantly linked to inflated medication administration errors (estimate=.04, p<.05). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate=-.04, p<.05). Integrated and patchy learning were significantly linked to higher levels of medication administration errors (estimate=-.03, p<.05 and estimate=-.04, p<.01, respectively). Non-integrated learning was not associated with medication administration errors (p>.05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technology risk factors. Head nurses can facilitate learning from errors by "management by walking around" and monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Consideration of measurement error when using commercial indoor radon determinations for selecting radon action levels

    USGS Publications Warehouse

    Reimer, G.M.; Szarzi, S.L.; Dolan, Michael P.

    1998-01-01

    An examination of year-long, in-home radon measurement in Colorado from commercial companies applying typical methods indicates that considerable variation in precision exists. This variation can have a substantial impact on any mitigation decisions, either voluntary or mandated by law, especially regarding property sale or exchange. Both long-term exposure (nuclear track, greater than 90 days) and short-term exposure (charcoal adsorption, 4-7 days) methods were used. In addition, periods of continuous monitoring with a highly calibrated alpha-scintillometer took place for accuracy calibration. The results of duplicate commercial analysis show that typical results are no better than ±25 percent with occasional outliers (up to 5 percent of all analyses) well beyond that limit. Differential seasonal measurements (winter/summer) by short-term methods provide equivalent information to single long-term measurements. Action levels in the U.S. for possible mitigation decisions should be selected so that they consider the measurement variability; specifically, they should reflect a concentration range similar to that adopted by the European Community.

  19. Speckle Interferometry at the Blanco and SOAR Telescopes in 2008 and 2009

    NASA Technical Reports Server (NTRS)

    Tokovinin, Andrei; Mason, Brian D.; Hartkopf, William I.

    2010-01-01

    The results of speckle interferometric measurements of binary and multiple stars conducted in 2008 and 2009 at the Blanco and Southern Astrophysical Research (SOAR) 4 m telescopes in Chile are presented. A total of 1898 measurements of 1189 resolved pairs or sub-systems and 394 observations of 285 un-resolved targets are listed. We resolved for the first time 48 new pairs, 21 of which are new sub-systems in close visual multiple stars. Typical internal measurement precision is 0.3 mas in both coordinates; typical companion detection capability is delta m approximately 4.2 at 0.15 arcsec separation. These data were obtained with a new electron-multiplication CCD camera; data processing is described in detail, including estimation of magnitude difference, observational errors, detection limits, and analysis of artifacts. We comment on some newly discovered pairs and objects of special interest.

  20. SPECKLE INTERFEROMETRY AT THE BLANCO AND SOAR TELESCOPES IN 2008 AND 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokovinin, Andrei; Mason, Brian D.; Hartkopf, William I.

    2010-02-15

    The results of speckle interferometric measurements of binary and multiple stars conducted in 2008 and 2009 at the Blanco and SOAR 4 m telescopes in Chile are presented. A total of 1898 measurements of 1189 resolved pairs or sub-systems and 394 observations of 285 un-resolved targets are listed. We resolved for the first time 48 new pairs, 21 of which are new sub-systems in close visual multiple stars. Typical internal measurement precision is 0.3 mas in both coordinates, typical companion detection capability is Δm ≈ 4.2 at 0.″15 separation. These data were obtained with a new electron-multiplication CCD camera; data processing is described in detail, including estimation of magnitude difference, observational errors, detection limits, and analysis of artifacts. We comment on some newly discovered pairs and objects of special interest.

  1. Performance of JT-60SA divertor Thomson scattering diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kajita, Shin, E-mail: kajita.shin@nagoya-u.jp; Hatae, Takaki; Tojo, Hiroshi

    2015-08-15

    For the satellite tokamak JT-60 Super Advanced (JT-60SA), a divertor Thomson scattering measurement system is planned to be installed. In this study, we improved the design of the collection optics based on the previous one, in which it was found that the solid angle of the collection optics became very small, mainly because of poor accessibility to the measurement region. With this improvement, the solid angle was increased by up to approximately five times. To accurately assess the measurement performance, background noise was assessed using the plasma parameters in two typical discharges in JT-60SA calculated from the SONIC code. Moreover, the influence of the reflection of bremsstrahlung radiation by the wall is simulated using a ray-tracing simulation. The errors in the temperature and the density are assessed based on the simulation results for three typical fields of view.

  2. Performance of JT-60SA divertor Thomson scattering diagnostics.

    PubMed

    Kajita, Shin; Hatae, Takaki; Tojo, Hiroshi; Enokuchi, Akito; Hamano, Takashi; Shimizu, Katsuhiro; Kawashima, Hisato

    2015-08-01

    For the satellite tokamak JT-60 Super Advanced (JT-60SA), a divertor Thomson scattering measurement system is planned to be installed. In this study, we improved the design of the collection optics based on the previous one, in which it was found that the solid angle of the collection optics became very small, mainly because of poor accessibility to the measurement region. With this improvement, the solid angle was increased by up to approximately five times. To accurately assess the measurement performance, background noise was assessed using the plasma parameters in two typical discharges in JT-60SA calculated from the SONIC code. Moreover, the influence of the reflection of bremsstrahlung radiation by the wall is simulated using a ray-tracing simulation. The errors in the temperature and the density are assessed based on the simulation results for three typical fields of view.

  3. Increased Perceptual and Conceptual Processing Difficulty Makes the Immeasurable Measurable: Negative Priming in the Absence of Probe Distractors

    ERIC Educational Resources Information Center

    Frings, Christian; Spence, Charles

    2011-01-01

    Negative priming (NP) refers to the finding that people's responses to probe targets previously presented as prime distractors are usually slower and more error prone than to unrepeated stimuli. In a typical NP experiment, each probe target is accompanied by a distractor. It is an accepted, albeit puzzling, finding that the NP effect depends on…

  4. Using the Kernel Method of Test Equating for Estimating the Standard Errors of Population Invariance Measures. Research Report. ETS RR-06-20

    ERIC Educational Resources Information Center

    Moses, Tim

    2006-01-01

    Population invariance is an important requirement of test equating. An equating function is said to be population invariant when the choice of (sub)population used to compute the equating function does not matter. In recent studies, the extent to which equating functions are population invariant is typically addressed in terms of practical…

  5. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
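
    The variance-based step of the analysis amounts to estimating first-order Sobol' indices by Monte Carlo sampling. The sketch below uses a pick-freeze (Saltelli-type) estimator on a placeholder four-parameter function standing in for the periodic-error model; the function, parameter ranges, and sample size are assumptions for illustration, not the interferometer model from the study.

      import numpy as np

      rng = np.random.default_rng(3)

      def model(x):
          """Placeholder response standing in for the periodic-error amplitude as
          a function of four setup parameters; NOT the interferometer model."""
          a, b, c, d = x.T
          return np.sin(a) + 0.5 * b**2 + 0.1 * c * np.sin(a) + 0.01 * d

      def first_order_sobol(f, n_params, n_samples=100_000):
          # Two independent sample matrices A and B, parameters scaled to [-1, 1].
          A = rng.uniform(-1.0, 1.0, (n_samples, n_params))
          B = rng.uniform(-1.0, 1.0, (n_samples, n_params))
          fA, fB = f(A), f(B)
          var_total = np.var(np.concatenate([fA, fB]), ddof=1)
          indices = []
          for i in range(n_params):
              ABi = A.copy()
              ABi[:, i] = B[:, i]              # "pick-freeze": column i from B
              fABi = f(ABi)
              Vi = np.mean(fB * (fABi - fA))   # Saltelli-type estimator
              indices.append(Vi / var_total)
          return np.array(indices)

      S = first_order_sobol(model, n_params=4)
      for name, s in zip(["a", "b", "c", "d"], S):
          print(f"first-order Sobol index S_{name} = {s:.3f}")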

  6. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.

  7. Sensitivity of C-Band Polarimetric Radar-Based Drop Size Distribution Measurements to Maximum Diameter Assumptions

    NASA Technical Reports Server (NTRS)

    Carey, Lawrence D.; Petersen, Walter A.

    2011-01-01

    The estimation of rain drop size distribution (DSD) parameters from polarimetric radar observations is accomplished by first establishing a relationship between differential reflectivity (Z(sub dr)) and the central tendency of the rain DSD such as the median volume diameter (D0). Since Z(sub dr) does not provide a direct measurement of DSD central tendency, the relationship is typically derived empirically from rain drop and radar scattering models (e.g., D0 = F[Z (sub dr)] ). Past studies have explored the general sensitivity of these models to temperature, radar wavelength, the drop shape vs. size relation, and DSD variability. Much progress has been made in recent years in measuring the drop shape and DSD variability using surface-based disdrometers, such as the 2D Video disdrometer (2DVD), and documenting their impact on polarimetric radar techniques. In addition to measuring drop shape, another advantage of the 2DVD over earlier impact type disdrometers is its ability to resolve drop diameters in excess of 5 mm. Despite this improvement, the sampling limitations of a disdrometer, including the 2DVD, make it very difficult to adequately measure the maximum drop diameter (D(sub max)) present in a typical radar resolution volume. As a result, D(sub max) must still be assumed in the drop and radar models from which D0 = F[Z(sub dr)] is derived. Since scattering resonance at C-band wavelengths begins to occur in drop diameters larger than about 5 mm, modeled C-band radar parameters, particularly Z(sub dr), can be sensitive to D(sub max) assumptions. In past C-band radar studies, a variety of D(sub max) assumptions have been made, including the actual disdrometer estimate of D(sub max) during a typical sampling period (e.g., 1-3 minutes), D(sub max) = C (where C is constant at values from 5 to 8 mm), and D(sub max) = M*D0 (where the constant multiple, M, is fixed at values ranging from 2.5 to 3.5). The overall objective of this NASA Global Precipitation Measurement Mission (GPM/PMM Science Team)-funded study is to document the sensitivity of DSD measurements, including estimates of D0, from C-band Z(sub dr) and reflectivity to this range of D(sub max) assumptions. For this study, GPM Ground Validation 2DVD's were operated under the scanning domain of the UAHuntsville ARMOR C-band dual-polarimetric radar. Approximately 7500 minutes of DSD data were collected and processed to create gamma size distribution parameters using a truncated method of moments approach. After creating the gamma parameter datasets the DSD's were then used as input to a T-matrix model for computation of polarimetric radar moments at C-band. All necessary model parameterizations, such as temperature, drop shape, and drop fall mode, were fixed at typically accepted values while the D(sub max) assumption was allowed to vary in sensitivity tests. By hypothesizing a DSD model with D(sub max) (fit) from which the empirical fit to D0 = F[Z(sub dr)] was derived via non-linear least squares regression and a separate reference DSD model with D(sub max) (truth), bias and standard error in D0 retrievals were estimated in the presence of Z(sub dr) measurement error and hypothesized mismatch in D(sub max) assumptions. 
Although the normalized standard error for D0 = F[Z(sub dr)] can increase slightly (as much as from 11% to 16% for all 7500 DSDs) when the D(sub max) (fit) does not match D(sub max) (truth), the primary impact of uncertainty in D(sub max) is a potential increase in normalized bias error in D0 (from 0% to as much as 10% over all 7500 DSDs, depending on the extent of the mismatch between D(sub max) (fit) and D(sub max) (truth)). For DSDs characterized by large Z(sub dr) (Z(sub dr) > 1.5 to 2.0 dB), the normalized bias error for D0 estimation at C-band is sometimes unacceptably large (> 10%), again depending on the extent of the hypothesized D(sub max) mismatch. Modeled errors in D0 retrievals from Z(sub dr) at C-band are demonstrated in detail and compared to similar modeled retrieval errors at S-band and X-band, where the sensitivity to D(sub max) is expected to be less. The impact of D(sub max) assumptions on the retrieval of other DSD parameters, such as Nw, the liquid-water-content-normalized intercept parameter, is also explored. Likely implications for DSD retrievals using C-band polarimetric radar for GPM are assessed by considering current community knowledge regarding D(sub max) and quantifying the statistical distribution of Z(sub dr) from ARMOR over a large variety of meteorological conditions. Based on these results and the prevalence of C-band polarimetric radars worldwide, a call for more emphasis on constraining our observational estimate of D(sub max) within a typical radar resolution volume is made.
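
    At its core, the retrieval error assessment fits an empirical D0 = F[Zdr] relation to a modeled DSD data set and then quantifies normalized bias and standard error against a reference. The sketch below reproduces only that bookkeeping on a synthetic data set with a placeholder power-law relation; the coefficients and noise levels are invented and do not represent the ARMOR/2DVD fit or its D(sub max) sensitivity.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic "truth": D0 (mm) and a hypothetical Zdr (dB) linked by a power
      # law plus scatter.  These stand in for the scattering-model data set; the
      # coefficients and noise level are invented.
      d0_true = rng.uniform(0.8, 3.0, 2000)
      zdr = 0.45 * d0_true**1.7 + rng.normal(0.0, 0.15, d0_true.size)   # dB
      zdr = np.clip(zdr, 0.05, None)

      # Fit D0 = a * Zdr**b by least squares in log space (a simple stand-in for
      # the non-linear regression used in the study).
      slope, intercept = np.polyfit(np.log(zdr), np.log(d0_true), 1)
      a, b = np.exp(intercept), slope
      d0_retrieved = a * zdr**b

      # Normalized bias and normalized standard error of the D0 retrieval.
      resid = d0_retrieved - d0_true
      nbias = np.mean(resid) / np.mean(d0_true)
      nse = np.std(resid, ddof=1) / np.mean(d0_true)
      print(f"fit: D0 = {a:.2f} * Zdr^{b:.2f}")
      print(f"normalized bias          : {100 * nbias:+.1f}%")
      print(f"normalized standard error: {100 * nse:.1f}%")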

  8. Placebo non-response measure in sequential parallel comparison design studies.

    PubMed

    Rybin, Denis; Doros, Gheorghe; Pencina, Michael J; Fava, Maurizio

    2015-07-10

    The Sequential Parallel Comparison Design (SPCD) is one of the novel approaches addressing placebo response. The analysis of SPCD data typically classifies subjects as 'placebo responders' or 'placebo non-responders'. Most current methods employed for analysis of SPCD data utilize only a part of the data collected during the trial. A repeated measures model was proposed for analysis of continuous outcomes that permitted the inclusion of information from all subjects into the treatment effect estimation. We describe here a new approach using a weighted repeated measures model that further improves the utilization of data collected during the trial, allowing the incorporation of information that is relevant to the placebo response, and dealing with the problem of possible misclassification of subjects. Our simulations show that when compared to the unweighted repeated measures model method, our approach performs as well or, under certain conditions, better, in preserving the type I error, achieving adequate power and minimizing the mean squared error. Copyright © 2015 John Wiley & Sons, Ltd.

  9. A method for optimizing the cosine response of solar UV diffusers

    NASA Astrophysics Data System (ADS)

    Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki

    2013-07-01

    Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 10^9 photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication as compared to a purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine errors—which indicate the relative measurement error caused by the nonideal angular response under isotropic sky radiance—of these two detectors were calculated to be f2=1.4% and 0.66%, respectively.
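
    The figure of merit quoted above, the integrated cosine error f2, weights the relative deviation from an ideal cosine response over the hemisphere assuming isotropic radiance. The sketch below evaluates one common (CIE-style) form of that integral for an invented angular response; both the exact definition used in the paper and the response values here should be treated as assumptions.

      import numpy as np

      def integrated_cosine_error(theta_deg, response):
          """Integrated cosine error f2 under isotropic radiance, using one common
          (CIE-style) definition:
              f2 = integral_0^{pi/2} | s(t) / (s(0) cos t) - 1 | sin(2t) dt
          """
          t = np.radians(np.asarray(theta_deg, dtype=float))
          s = np.asarray(response, dtype=float)
          rel_err = np.abs(s / (s[0] * np.cos(t)) - 1.0)
          integrand = rel_err * np.sin(2.0 * t)
          # Trapezoidal rule written out explicitly.
          return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))

      # Invented angular response of a planar diffuser: slightly below the ideal
      # cosine at large zenith angles.
      theta = np.arange(0, 90, 5)                    # stop short of 90 degrees
      ideal = np.cos(np.radians(theta))
      simulated = ideal * (1.0 - 0.04 * (theta / 85.0) ** 3)

      print(f"f2 = {100 * integrated_cosine_error(theta, simulated):.2f}%")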

  10. Feedback-tuned, noise resilient gates for encoded spin qubits

    NASA Astrophysics Data System (ADS)

    Bluhm, Hendrik

    Spin 1/2 particles form native two level systems and thus lend themselves as a natural qubit implementation. However, encoding a single qubit in several spins entails benefits, such as reducing the resources necessary for qubit control and protection from certain decoherence channels. While several varieties of such encoded spin qubits have been implemented, accurate control remains challenging, and leakage out of the subspace of valid qubit states is a potential issue. Optimal performance typically requires large pulse amplitudes for fast control, which is prone to systematic errors and prohibits standard control approaches based on Rabi flopping. Furthermore, the exchange interaction typically used to electrically manipulate encoded spin qubits is inherently sensitive to charge noise. I will discuss all-electrical, high-fidelity single qubit operations for a spin qubit encoded in two electrons in a GaAs double quantum dot. Starting from a set of numerically optimized control pulses, we employ an iterative tuning procedure based on measured error syndromes to remove systematic errors. Randomized benchmarking yields an average gate fidelity exceeding 98 % and a leakage rate into invalid states of 0.2 %. These gates exhibit a certain degree of resilience to both slow charge and nuclear spin fluctuations due to dynamical correction analogous to a spin echo. Furthermore, the numerical optimization minimizes the impact of fast charge noise. Both types of noise make relevant contributions to gate errors. The general approach is also adaptable to other qubit encodings and exchange based two-qubit gates.

  11. Application of acoustic-Doppler current profiler and expendable bathythermograph measurements to the study of the velocity structure and transport of the Gulf Stream

    NASA Technical Reports Server (NTRS)

    Joyce, T. M.; Dunworth, J. A.; Schubert, D. M.; Stalcup, M. C.; Barbour, R. L.

    1988-01-01

    The degree to which Acoustic-Doppler Current Profiler (ADCP) and expendable bathythermograph (XBT) data can provide quantitative measurements of the velocity structure and transport of the Gulf Stream is addressed. An algorithm is used to generate salinity from temperature and depth using an historical Temperature/Salinity relation for the NW Atlantic. Results have been simulated using CTD data and comparing real and pseudo salinity files. Errors are typically less than 2 dynamic cm for the upper 800 m out of a total signal of 80 cm (across the Gulf Stream). When combined with ADCP data for a near-surface reference velocity, transport errors in isopycnal layers are less than about 1 Sv (10 to the 6th power cu m/s), as is the difference in total transport for the upper 800 m between real and pseudo data. The method is capable of measuring the real variability of the Gulf Stream, and when combined with altimeter data, can provide estimates of the geoid slope with oceanic errors of a few parts in 10 to the 8th power over horizontal scales of 500 km.

  12. Configuration Analysis of the ERS Points in Large-Volume Metrology System

    PubMed Central

    Jin, Zhangjun; Yu, Cijun; Li, Jiangxiong; Ke, Yinglin

    2015-01-01

    In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers’ coordinate systems and the assembly coordinate system are calculated, by measuring the enhanced referring system (ERS) points. This article aims to understand the influence of the configuration of the ERS points that affect the transformation matrix errors, and then optimize the deployment of the ERS points to reduce the transformation matrix errors. To optimize the deployment of the ERS points, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by the experiment implemented in the factory floor. Based on the proposed model, a group of sensitivity coefficients are derived to evaluate the quality of the configuration of the ERS points, and then several typical configurations of the ERS points are analyzed in detail with the sensitivity coefficients. Finally general guidance is established to instruct the deployment of the ERS points in the aspects of the layout, the volume size and the number of the ERS points, as well as the position and orientation of the assembly coordinate system. PMID:26402685
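
    The transformation between a laser tracker frame and the assembly frame is typically estimated from the common ERS points by least-squares rigid registration. The sketch below uses the classical SVD-based (Kabsch) solution on invented ERS coordinates and then reports the residual registration error; it does not reproduce the explicit error-estimation model or the sensitivity coefficients derived in the paper.

      import numpy as np

      def best_fit_transform(P, Q):
          """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch)."""
          P, Q = np.asarray(P, float), np.asarray(Q, float)
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
          t = cQ - R @ cP
          return R, t

      rng = np.random.default_rng(5)

      # Invented ERS point coordinates (mm) in the assembly frame, and the same
      # points as seen by one laser tracker (rotated, translated, plus 50 um noise).
      ers_assembly = rng.uniform(-3000.0, 3000.0, (8, 3))
      ang = np.radians(20.0)
      R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                         [np.sin(ang),  np.cos(ang), 0.0],
                         [0.0,          0.0,         1.0]])
      t_true = np.array([1500.0, -800.0, 200.0])
      ers_tracker = (R_true @ ers_assembly.T).T + t_true + rng.normal(0.0, 0.05, (8, 3))

      # Estimate the tracker-to-assembly transform from the ERS points and report
      # the residual registration error.
      R_est, t_est = best_fit_transform(ers_tracker, ers_assembly)
      residuals = (R_est @ ers_tracker.T).T + t_est - ers_assembly
      rms = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
      print(f"RMS registration error over ERS points: {rms:.3f} mm")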

  13. Evaluation of probe-induced flow distortion of Campbell CSAT3 sonic anemometers by numerical simulation

    NASA Astrophysics Data System (ADS)

    Mauder, M.; Huq, S.; De Roo, F.; Foken, T.; Manhart, M.; Schmid, H. P. E.

    2017-12-01

    The Campbell CSAT3 sonic anemometer is one of the most widely used instruments for eddy-covariance measurement. However, conflicting estimates for the probe-induced flow distortion error of this instrument have been reported recently, and those error estimates range between 3% and 14% for the measurement of vertical velocity fluctuations. This large discrepancy between the different studies can probably be attributed to the different experimental approaches applied. In order to overcome the limitations of both field intercomparison experiments and wind tunnel experiments, we propose a new approach that relies on virtual measurements in a large-eddy simulation (LES) environment. In our experimental set-up, we generate horizontal and vertical velocity fluctuations at frequencies that typically dominate the turbulence spectra of the surface layer. The probe-induced flow distortion error of a CSAT3 is then quantified by this numerical wind tunnel approach while the statistics of the prescribed inflow signal are taken as reference or etalon. The resulting relative error is found to range from 3% to 7% and from 1% to 3% for the standard deviation of the vertical and the horizontal velocity component, respectively, depending on the orientation of the CSAT3 in the flow field. We further demonstrate that these errors are independent of the frequency of fluctuations at the inflow of the simulation. The analytical corrections proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol, 155, 371-395, 2015) are compared against our simulated results, and we find that they indeed reduce the error by up to three percentage points. However, these corrections fail to reproduce the azimuth-dependence of the error that we observe. Moreover, we investigate the general Reynolds number dependence of the flow distortion error by more detailed idealized simulations.

  14. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
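
    As a concrete illustration of the optimal estimator idea, the sketch below bins a single input parameter, takes the conditional mean of the target in each bin as the optimal estimator, and reports the resulting irreducible error. The synthetic data have a known irreducible variance of 0.04, so the small excess in the estimate is exactly the kind of spurious histogram contribution discussed above; with several input parameters that excess grows, which motivates the regression-based techniques. All numbers are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic data: target q depends on one parameter phi plus irreducible
      # scatter with variance 0.04 (all values invented for illustration).
      n = 200_000
      phi = rng.uniform(0.0, 1.0, n)
      q = np.sin(2.0 * np.pi * phi) + rng.normal(0.0, 0.2, n)

      # Histogram technique: the optimal estimator E[q | phi] is approximated by
      # the conditional mean of q within each phi bin.
      n_bins = 50
      bins = np.linspace(0.0, 1.0, n_bins + 1)
      idx = np.clip(np.digitize(phi, bins) - 1, 0, n_bins - 1)
      counts = np.bincount(idx, minlength=n_bins)
      cond_mean = np.bincount(idx, weights=q, minlength=n_bins) / counts

      # Irreducible error: mean squared deviation of q from the optimal estimator.
      irreducible = np.mean((q - cond_mean[idx]) ** 2)
      print(f"estimated irreducible error: {irreducible:.4f} (true value 0.0400)")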

  15. The Applicability of Standard Error of Measurement and Minimal Detectable Change to Motor Learning Research-A Behavioral Study.

    PubMed

    Furlan, Leonardo; Sterr, Annette

    2018-01-01

    Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While the traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the utilization of the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error, as opposed to learning. We suggest therefore that motor learning studies could complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
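
    The two statistics named above follow from standard formulas, and the short sketch below (illustrative numbers only, not data from the study) shows the computation assumed here: SEM = SD x sqrt(1 - ICC), using the baseline standard deviation and a test-retest reliability index, and MDC95 = 1.96 x sqrt(2) x SEM.

      # Sketch of the SEM / MDC95 computation described above (hypothetical data).
      import math

      baseline_scores = [12.1, 10.4, 11.8, 9.9, 13.2, 11.1, 10.7, 12.5]  # hypothetical
      icc = 0.88                                  # hypothetical test-retest reliability

      n = len(baseline_scores)
      mean = sum(baseline_scores) / n
      sd = math.sqrt(sum((s - mean) ** 2 for s in baseline_scores) / (n - 1))

      sem = sd * math.sqrt(1.0 - icc)             # standard error of measurement
      mdc95 = 1.96 * math.sqrt(2.0) * sem         # minimal detectable change (95%)

      print(f"SD = {sd:.2f}, SEM = {sem:.2f}, MDC95 = {mdc95:.2f}")
      # A pre-post change smaller than MDC95 cannot be distinguished from measurement error.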

  16. Evaluation of Probe-Induced Flow Distortion of Campbell CSAT3 Sonic Anemometers by Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Huq, Sadiq; De Roo, Frederik; Foken, Thomas; Mauder, Matthias

    2017-10-01

    The Campbell CSAT3 sonic anemometer is one of the most popular instruments for turbulence measurements in basic micrometeorological research and ecological applications. While measurement uncertainty has been characterized by field experiments and wind-tunnel studies in the past, there are conflicting estimates, which motivated us to conduct a numerical experiment using large-eddy simulation to evaluate the probe-induced flow distortion of the CSAT3 anemometer under controlled conditions, and with exact knowledge of the undisturbed flow. As opposed to wind-tunnel studies, we imposed oscillations in both the vertical and horizontal velocity components at the distinct frequencies and amplitudes found in typical turbulence spectra in the surface layer. The resulting flow-distortion errors for the standard deviations of the vertical velocity component range from 3 to 7%, and from 1 to 3% for the horizontal velocity component, depending on the azimuth angle. The magnitude of these errors is almost independent of the frequency of wind speed fluctuations, provided the amplitude is typical for surface-layer turbulence. A comparison of the corrections for transducer shadowing proposed by both Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol 155:371-395, 2015) shows that both methods compensate for a large part of the observed error, but do not sufficiently account for the azimuth dependency. Further numerical simulations could be conducted in the future to characterize the flow distortion induced by other existing types of sonic anemometers for the purposes of optimizing their geometry.

  17. Characterization of a multi-axis ion chamber array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Thomas A.; Kozelka, Jakub; Simon, William E.

    Purpose: The aim of this work was to characterize a multi-axis ion chamber array (IC PROFILER; Sun Nuclear Corporation, Melbourne, FL USA) that has the potential to simplify the acquisition of LINAC beam data. Methods: The IC PROFILER (or panel) measurement response was characterized with respect to radiation beam properties, including dose, dose per pulse, pulse rate frequency (PRF), and energy. Panel properties were also studied, including detector-calibration stability, power-on time, backscatter dependence, and the panel's agreement with water tank measurements [profiles, fractional depth dose (FDD), and output factors]. Results: The panel's relative deviation was typically within ±1% of an independent (or nominal) response for all properties that were tested. Notable results were (a) a detectable relative field shape change of ~1% with linear accelerator PRF changes; (b) a large range in backscatter thickness had a minimal effect on the measured dose distribution (typically less than 1%); (c) the error spread in profile comparison between the panel and scanning water tank (Blue Phantom, CC13; IBA Schwarzenbruck, DE) was approximately ±0.75%. Conclusions: The ability of the panel to accurately reproduce water tank profiles, FDDs, and output factors is an indication of its abilities as a dosimetry system. The benefits of using the panel versus a scanning water tank are less setup time and less error susceptibility. The same measurements (including device setup and breakdown) for both systems took 180 min with the water tank versus 30 min with the panel. The time-savings increase as the measurement load is increased.

  18. Linguistic Pattern Analysis of Misspellings of Typically Developing Writers in Grades 1 to 9

    PubMed Central

    Bahr, Ruth Huntley; Silliman, Elaine R.; Berninger, Virginia W.; Dow, Michael

    2012-01-01

    Purpose A mixed methods approach, evaluating triple word form theory, was used to describe linguistic patterns of misspellings. Method Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in grades 1–9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Results Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between grades 4–5. Similar error types were noted across age groups but the nature of linguistic feature error changed with age. Conclusions Triple word-form theory was supported. By grade 1, orthographic errors predominated and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects non-linear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling. PMID:22473834

  19. A High-Resolution Measurement of Ball IR Black Paint's Low-Temperature Emissivity

    NASA Technical Reports Server (NTRS)

    Tuttle, Jim; Canavan, Ed; DiPirro, Mike; Li, Xiaoyi; Franck, Randy; Green, Dan

    2011-01-01

    High-emissivity paints are commonly used on thermal control system components. The total hemispheric emissivity values of such paints are typically high (nearly 1) at temperatures above about 100 Kelvin, but they drop off steeply at lower temperatures. A precise knowledge of this temperature-dependence is critical to designing passively-cooled components with low operating temperatures. Notable examples are the coatings on thermal radiators used to cool space-flight instruments to temperatures below 40 Kelvin. Past measurements of low-temperature paint emissivity have been challenging, often requiring large thermal chambers and typically producing data with high uncertainties below about 100 Kelvin. We describe a relatively inexpensive method of performing high-resolution emissivity measurements in a small cryostat. We present the results of such a measurement on Ball InfraRed Black™ (BIRB™), a proprietary surface coating produced by Ball Aerospace and Technologies Corp (BATC), which is used in spaceflight applications. We also describe a thermal model used in the error analysis.

  20. Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2006-01-01

    The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m⁻¹ and a 5 V m⁻¹ error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m⁻¹, the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m⁻¹ of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch down method developed in Part I, and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.

  1. Design of a real-time two-color interferometer for MAST Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Gorman, T., E-mail: thomas.ogorman@ccfe.ac.uk; Naylor, G.; Scannell, R.

    2014-11-15

    A single chord two-color CO₂/HeNe (10.6/0.633 μm) heterodyne laser interferometer has been designed to measure the line integral electron density along the mid-plane of the MAST Upgrade tokamak, with a typical error of 1 × 10¹⁸ m⁻³ (∼2° phase error) at 4 MHz temporal resolution. To ensure this diagnostic system can be restored from any failures without stopping MAST Upgrade operations, it has been located outside of the machine area. The final design and initial testing of this system, including details of the optics, vibration isolation, and a novel phase detection scheme are discussed in this paper.
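
    As a plausibility check (not taken from the paper), the quoted phase and density errors are consistent with the standard plasma interferometry relation Δφ = r_e λ ∫n_e dl at the CO₂ wavelength; the abstract quotes the error in m⁻³, which for a chord of order 1 m is numerically close to the line-integrated value in m⁻² computed below.

      # Back-of-envelope check: ~2 degrees of CO2 phase error vs. ~1e18 line-integrated density.
      import math

      r_e = 2.818e-15          # classical electron radius, m
      lam_co2 = 10.6e-6        # CO2 wavelength, m

      phase_error = math.radians(2.0)                # ~2 degree phase error
      nl_error = phase_error / (r_e * lam_co2)       # line-integrated density error, m^-2
      print(f"density error ~ {nl_error:.2e} m^-2")  # ~1.2e18, same order as the quoted figure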

  2. [Measuring the effect of eyeglasses on determination of squint angle with Purkinje reflexes and the prism cover test].

    PubMed

    Barry, J C; Backes, A

    1998-04-01

    The alternating prism and cover test is the conventional test for the measurement of the angle of strabismus. The error induced by the prismatic effect of glasses is typically about 27-30%/10 D. Alternatively, the angle of strabismus can be measured with methods based on Purkinje reflex positions. This study examines the differences between three such options, taking into account the influence of glasses. The studied system comprised the eyes with or without glasses, a fixation object and a device for recording the eye position: in the case of the alternate prism and cover test, a prism bar was required; in the case of a Purkinje reflex based device, light sources for generation of reflexes and a camera for the documentation of the reflex positions were used. Measurements performed on model eyes and computer ray traces were used to analyze and compare the options. When a single corneal reflex is used, the misalignment of the corneal axis can be measured; the error in this measurement due to the prismatic effect of glasses was 7.6%/10 D, the smallest found in this study. The individual Hirschberg ratio can be determined by monocular measurements in three gaze directions. The angle of strabismus can be measured with Purkinje reflex based methods if the fundamental differences between these methods and the alternate prism and cover test, as well as the influence of glasses and other sources of error, are accounted for.

  3. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity, and the optimal ratio of quality to data volume in video encoding is a pressing problem because of the need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the bit rate required for transmission and storage. When television measuring systems are used, however, the uncertainties introduced by compression of the video signal must be taken into account. Many digital compression methods exist, and the aim of the proposed work is to study the influence of video compression on the measurement error of television systems. The measurement error of an object parameter is the main characteristic of a television measuring system; accuracy characterizes the difference between the measured value and the actual parameter value. Errors introduced by the optical system and by the processing of the received video signal are both sources of error in television measurements. With compression at a constant data rate, such errors lead to large distortions; with compression at constant quality, they increase the amount of data required to transmit or record an image frame. Intra-frame coding reduces the spatial redundancy within a frame (or field) of the television image, a redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of mutually uncorrelated coefficients, to which entropy coding can be applied to reduce the digital stream. A transformation can be chosen such that most of the coefficients are almost zero for typical images, and excluding these zero coefficients reduces the stream further. The discrete cosine transform is the most widely used of the possible orthogonal transformations. In this paper, the errors of television measuring systems and of data compression protocols are analyzed, the main characteristics of measuring systems and the sources of their errors are identified, the most effective video compression methods are determined, and the influence of video compression error on television measuring systems is investigated; the results will help increase the accuracy of such measuring systems. In a television image-quality measuring system, the distortions comprise those identical to distortions in analog systems and the specific distortions resulting from encoding/decoding the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging at sharp brightness transitions, color blur, false patterns, the "dirty window" effect, and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-frame prediction of individual fragments. The encoding/decoding process is non-linear in space and time, because the playback quality at the receiver depends on the preceding and succeeding frames, which can lead to inadequate reproduction of the sub-picture and of the corresponding measurement signal.

  4. Problem solving ability in children with intellectual disability as measured by the Raven's colored progressive matrices.

    PubMed

    Goharpey, Nahal; Crewther, David P; Crewther, Sheila G

    2013-12-01

    This study investigated the developmental trajectory of problem solving ability in children with intellectual disability (ID) of different etiologies (Down Syndrome, Idiopathic ID or low functioning Autism) as measured on the Raven's Colored Progressive Matrices test (RCPM). Children with typical development (TD) and children with ID were matched on total correct performance (i.e., non-verbal mental age) on the RCPM. RCPM total correct performance and the sophistication of error types were found to be associated with receptive vocabulary in all participants, suggesting that verbal ability plays a role in more sophisticated problem solving tasks. Children with ID made similar errors on the RCPM as younger children with TD as well as more positional error types. This result suggests that children with ID who are deficient in their cognitive processing resort to developmentally immature problem solving strategies when unable to determine the correct answer. Overall, the findings support the use of RCPM as a valid means of matching intellectual capacity of children with TD and ID. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Improvement of Parameter Estimations in Tumor Growth Inhibition Models on Xenografted Animals: Handling Sacrifice Censoring and Error Caused by Experimental Measurement on Larger Tumor Sizes.

    PubMed

    Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie

    2016-09-01

    The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using the stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, by not considering missing data induced by sacrificing the animal, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng·ml⁻¹, but the true value was 240 ng·ml⁻¹.

  6. Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras.

    PubMed

    Payne, Andrew D; Dorrington, Adrian A; Cree, Michael J; Carnegie, Dale A

    2010-08-10

    Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This mitigates the need to measure the system's response or calibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).

  7. Accuracy of an infrared LED device to measure heart rate and energy expenditure during rest and exercise.

    PubMed

    Lee, C Matthew; Gorelick, Mark; Mendoza, Albert

    2011-12-01

    The purpose of this study was to examine the accuracy of the ePulse Personal Fitness Assistant, a forearm-worn device that provides measures of heart rate and estimates energy expenditure. Forty-six participants engaged in 4-minute periods of standing, 2.0 mph walking, 3.5 mph walking, 4.5 mph jogging, and 6.0 mph running. Heart rate and energy expenditure were simultaneously recorded at 60-second intervals using the ePulse, an electrocardiogram (EKG), and indirect calorimetry. The heart rates obtained from the ePulse were highly correlated (intraclass correlation coefficients [ICCs] ≥0.85) with those from the EKG during all conditions. The typical errors progressively increased with increasing exercise intensity but were <5 bpm only during rest and 2.0 mph walking. Energy expenditure from the ePulse was poorly correlated with indirect calorimetry (ICCs: 0.01-0.36), and the typical errors for energy expenditure ranged from 0.69 to 2.97 kcal·min⁻¹, progressively increasing with exercise intensity. These data suggest that the ePulse Personal Fitness Assistant is a valid device for monitoring heart rate at rest and low-intensity exercise, but becomes less accurate as exercise intensity increases. However, it does not appear to be a valid device to estimate energy expenditure during exercise.
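
    For readers unfamiliar with the "typical error" statistic quoted above, the sketch below shows the usual computation (Hopkins' definition: the standard deviation of the paired device-criterion differences divided by sqrt(2)); the heart-rate pairs are invented and are not data from the study.

      # Sketch of bias and typical error for a device-vs-criterion comparison (hypothetical data).
      import numpy as np

      ekg    = np.array([72, 95, 118, 142, 165, 78, 101, 125], dtype=float)  # criterion, bpm
      device = np.array([74, 93, 121, 138, 170, 77, 104, 131], dtype=float)  # device, bpm

      diff = device - ekg
      bias = diff.mean()
      typical_error = diff.std(ddof=1) / np.sqrt(2.0)
      print(f"bias = {bias:.1f} bpm, typical error = {typical_error:.1f} bpm")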

  8. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
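
    A minimal sketch of the kind of Monte Carlo uncertainty propagation described above is given below. The three uncertain inputs and their distributions are invented for illustration and do not reproduce the foodborne-illness calculation in the paper.

      # Monte Carlo propagation of non-sampling uncertainty (illustrative inputs only).
      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      reported = rng.normal(50_000, 5_000, n)                  # reported cases per year
      underreport = rng.uniform(10, 40, n)                     # true cases per reported case
      fraction_foodborne = rng.triangular(0.2, 0.35, 0.5, n)   # attributable fraction

      total = reported * underreport * fraction_foodborne
      lo, mid, hi = np.percentile(total, [2.5, 50, 97.5])
      print(f"median {mid:,.0f}, 95% uncertainty interval {lo:,.0f} - {hi:,.0f}")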

  9. Development of a two-dimensional dual pendulum thrust stand for Hall thrusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagao, N.; Yokota, S.; Komurasaki, K.

    A two-dimensional dual pendulum thrust stand was developed to measure thrust vectors (axial and horizontal (transverse) direction thrusts) of a Hall thruster. A thruster with a steering mechanism is mounted on the inner pendulum, and thrust is measured from the displacement between inner and outer pendulums, by which a thermal drift effect is canceled out. Two crossover knife-edges support each pendulum arm: one is set on the other at a right angle. They enable the pendulums to swing in two directions. Thrust calibration using a pulley and weight system showed that the measurement errors were less than 0.25 mN (1.4%) in the main thrust direction and 0.09 mN (1.4%) in its transverse direction. The thrust angle of the thrust vector was measured with the stand using the thruster. Consequently, a vector deviation from the main thrust direction of ±2.3 deg. was measured with an error of ±0.2 deg. under the typical operating conditions for the thruster.

  10. Laser absorption-scattering technique applied to asymmetric evaporating fuel sprays for simultaneous measurement of vapor/liquid mass distributions

    NASA Astrophysics Data System (ADS)

    Gao, J.; Nishida, K.

    2010-10-01

    This paper describes an Ultraviolet-Visible Laser Absorption-Scattering (UV-Vis LAS) imaging technique applied to asymmetric fuel sprays. Continuing from the previous studies, the detailed measurement principle was derived. It is demonstrated that, by means of this technique, cumulative masses and mass distributions of vapor/liquid phases can be quantitatively measured no matter what shape the spray is. A systematic uncertainty analysis was performed, and the measurement accuracy was also verified through a series of experiments on the completely vaporized fuel spray. The results show that the Molar Absorption Coefficient (MAC) of the test fuel, which is typically pressure and temperature dependent, is the major error source. The measurement error in the vapor determination has been shown to be approximately 18% under the assumption of constant MAC of the test fuel. Two application examples of the extended LAS technique were presented for exploring the dynamics and physical insight of the evaporating fuel sprays: diesel sprays injected by group-hole nozzles and gasoline sprays impinging on an inclined wall.

  11. Optimal estimation of suspended-sediment concentrations in streams

    USGS Publications Warehouse

    Holtschlag, D.J.

    2001-01-01

    Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
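
    The sketch below illustrates the on-line (Kalman filter) part of the idea with a bare local-level model: a random-walk state updated only on days with a sample. The streamflow and seasonal regression terms of the actual estimator are omitted, and all variances and the data are invented for illustration.

      # Minimal local-level Kalman filter for sparsely sampled concentrations (illustrative).
      import numpy as np

      rng = np.random.default_rng(1)
      n_days = 120
      true = 100.0 + np.cumsum(rng.normal(0, 2.0, n_days))        # latent daily concentration
      obs = np.full(n_days, np.nan)
      obs[::7] = true[::7] + rng.normal(0, 5.0, len(true[::7]))    # weekly noisy samples

      q, r = 4.0, 25.0        # process and measurement variances (illustrative)
      x, p = 100.0, 100.0     # initial state estimate and its variance
      est = np.empty(n_days)
      for t in range(n_days):
          p += q                            # predict: random-walk state
          if not np.isnan(obs[t]):          # update only on sampled days
              k = p / (p + r)               # Kalman gain
              x += k * (obs[t] - x)
              p *= (1.0 - k)
          est[t] = x

      rmse = np.sqrt(np.mean((est - true) ** 2))
      print(f"RMSE of filtered daily estimates: {rmse:.2f}")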

  12. Improved uncertainty quantification in nondestructive assay for nonproliferation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Ken

    2016-12-01

    This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) Because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) Common practice overlooks that calibration implies a partitioning of total error into random and systematic error, and (3) In many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data that is generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components, and is more robust than current top-down methods to the underlying measurement error distributions.
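
    As a simplified illustration of the top-down (empirical) decomposition mentioned above, the sketch below applies a classic Grubbs-type estimator to paired operator/inspector measurements of the same items: the shared covariance estimates the item-to-item variance, and the excess variance of each party estimates its random error component. This is only a stand-in for the methods discussed in the paper; systematic (bias) components need additional structure and are not addressed, and the data are synthetic.

      # Grubbs-type split of paired measurements into item and random-error variances (synthetic).
      import numpy as np

      rng = np.random.default_rng(7)
      true = rng.normal(100.0, 5.0, 5000)             # true item values
      op  = true + rng.normal(0, 2.0, true.size)      # operator random error, SD = 2
      ins = true + rng.normal(0, 3.0, true.size)      # inspector random error, SD = 3

      cov = np.cov(op, ins)                           # 2x2 sample covariance matrix
      var_item = cov[0, 1]                            # shared (item) variance
      var_op  = cov[0, 0] - cov[0, 1]                 # operator random-error variance
      var_ins = cov[1, 1] - cov[0, 1]                 # inspector random-error variance
      print(f"item SD ~ {np.sqrt(var_item):.2f}, operator SD ~ {np.sqrt(var_op):.2f}, "
            f"inspector SD ~ {np.sqrt(var_ins):.2f}")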

  13. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  14. Experimental Study on the Axis Line Deflection of Ti6Al4V Titanium Alloy in Gun-Drilling Process

    NASA Astrophysics Data System (ADS)

    Li, Liang; Xue, Hu; Wu, Peng

    2018-01-01

    Titanium alloy is widely used in the aerospace industry, but it is also a typical difficult-to-cut material. During deep-hole drilling of the shaft parts of a certain large aircraft, problems of poor surface roughness, chip control, and axis deviation arise, so gun-drilling experiments on Ti6Al4V titanium alloy were carried out to measure the axis line deflection, diameter error, and surface integrity, and the causes of these errors were analyzed. Optimized process parameters were then obtained for gun-drilling of Ti6Al4V titanium alloy with a hole diameter of 17 mm. Finally, a deep hole of 860 mm was drilled with a comprehensive error smaller than 0.2 mm and a surface roughness of less than 1.6 μm.

  15. Laser frequency stabilization by combining modulation transfer and frequency modulation spectroscopy.

    PubMed

    Zi, Fei; Wu, Xuejian; Zhong, Weicheng; Parker, Richard H; Yu, Chenghui; Budker, Simon; Lu, Xuanhui; Müller, Holger

    2017-04-01

    We present a hybrid laser frequency stabilization method combining modulation transfer spectroscopy (MTS) and frequency modulation spectroscopy (FMS) for the cesium D2 transition. In a typical pump-probe setup, the error signal is a combination of the DC-coupled MTS error signal and the AC-coupled FMS error signal. This combines the long-term stability of the former with the high signal-to-noise ratio of the latter. In addition, we enhance the long-term frequency stability with laser intensity stabilization. By measuring the frequency difference between two independent hybrid spectroscopies, we investigate the short- and long-term stability. We find a long-term stability of 7.8 kHz, characterized by the standard deviation of the beat-frequency drift over the course of 10 h, and a short-term stability of 1.9 kHz, characterized by the Allan deviation at an integration time of 2 s.

  16. Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Zhang, Li-jie

    2017-10-01

    Measurement error of a sensor can be effectively compensated through prediction. Aiming at the large random drift error of a MEMS (Micro-Electro-Mechanical System) gyroscope, an improved learning algorithm for a Radial Basis Function (RBF) Neural Network (NN) based on K-means clustering and Orthogonal Least Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then generates candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-007 °/s and a prediction time of 2.4169e-006 s.
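
    A minimal sketch of the network structure described above (not the paper's exact algorithm): K-means supplies the RBF centres and the output weights are fitted by ordinary least squares; the orthogonal-least-squares selection of centres is simplified away, and the drift signal, time embedding, and RBF width are invented for illustration.

      # K-means centres + Gaussian RBF features + least-squares output weights (synthetic drift).
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      t = np.arange(2000) * 0.01
      drift = 0.05 * np.sin(0.3 * t) + 0.02 * np.cumsum(rng.normal(0, 0.01, t.size))

      lag = 5                                           # predict drift[k+lag] from 5 past samples
      X = np.stack([drift[i:i - lag] for i in range(lag)], axis=1)
      y = drift[lag:]

      centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
      width = 0.05                                      # RBF width (illustrative)
      phi = np.exp(-np.sum((X[:, None, :] - centers[None]) ** 2, axis=2) / (2 * width**2))
      w, *_ = np.linalg.lstsq(phi, y, rcond=None)       # ordinary least-squares weights

      pred = phi @ w
      print(f"training RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.2e} deg/s")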

  17. One-dimensional angular-measurement-based stitching interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    In this paper, we present one-dimensional stitching interferometry based on the angular measurement for high-precision mirror metrology. The tilt error introduced by the stage motion during the stitching process is measured by an extra angular measurement device. The local profile measured by the interferometer in a single field of view is corrected using the measured angle before the piston adjustment in the stitching process. Compared to the classical software stitching technique, the angle measuring stitching technique is more reliable and accurate in profiling a mirror surface at the nanometer level. Experimental results demonstrate the feasibility of the proposed stitching technique. Based on our measurements, the typical repeatability within a 200 mm scanning range is 0.5 nm RMS or less.
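
    The sketch below illustrates the stitching idea in one dimension with synthetic numbers: each sub-aperture profile is first corrected with the stage tilt (assumed here to be measured exactly by the external angle sensor), and only a piston offset is then solved from the overlap with the already stitched part. Geometry, noise levels, and step sizes are invented for illustration.

      # 1-D stitching with external tilt correction followed by piston adjustment (synthetic).
      import numpy as np

      rng = np.random.default_rng(3)
      x_full = np.linspace(0.0, 200e-3, 2001)                # 200 mm mirror, 0.1 mm sampling
      surface = 50e-9 * np.sin(2 * np.pi * x_full / 0.2)     # "true" profile, ~50 nm amplitude

      width, step = 400, 200                                 # 40 mm sub-aperture, 20 mm shift
      stitched = np.full_like(surface, np.nan)
      for start in range(0, x_full.size - width, step):
          sl = slice(start, start + width)
          xr = x_full[sl] - x_full[start]
          tilt = rng.normal(0, 5e-6)                         # stage pitch error, rad
          local = surface[sl] + tilt * xr + rng.normal(0, 0.2e-9, width)  # raw sub-aperture
          local -= tilt * xr                                 # remove externally measured tilt
          overlap = ~np.isnan(stitched[sl])
          piston = np.mean(stitched[sl][overlap] - local[overlap]) if overlap.any() else 0.0
          stitched[sl] = np.where(overlap, stitched[sl], local + piston)

      err = stitched - surface
      print(f"stitching error RMS: {np.nanstd(err):.2e} m")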

  18. One-dimensional angular-measurement-based stitching interferometry

    DOE PAGES

    Huang, Lei; Xue, Junpeng; Gao, Bo; ...

    2018-04-05

    In this paper, we present one-dimensional stitching interferometry based on the angular measurement for high-precision mirror metrology. The tilt error introduced by the stage motion during the stitching process is measured by an extra angular measurement device. The local profile measured by the interferometer in a single field of view is corrected using the measured angle before the piston adjustment in the stitching process. Compared to the classical software stitching technique, the angle measuring stitching technique is more reliable and accurate in profiling a mirror surface at the nanometer level. Experimental results demonstrate the feasibility of the proposed stitching technique. Based on our measurements, the typical repeatability within a 200 mm scanning range is 0.5 nm RMS or less.

  19. Simultaneous estimation of human and exoskeleton motion: A simplified protocol.

    PubMed

    Alvarez, M T; Torricelli, D; Del-Ama, A J; Pinto, D; Gonzalez-Vargas, J; Moreno, J C; Gil-Agudo, A; Pons, J L

    2017-07-01

    Adequate benchmarking procedures in the area of wearable robots are gaining importance in order to compare different devices on a quantitative basis, improve them, and support standardization and regulation procedures. Performance assessment usually focuses on the execution of locomotion tasks and is mostly based on kinematic-related measures. The typical drawbacks of marker-based motion capture systems, the gold standard for measuring human limb motion, become challenging when measuring limb kinematics due to the concomitant presence of the robot. This work answers the question of how to reliably assess the subject's body motion by placing markers over the exoskeleton. Focusing on the ankle joint, the proposed methodology showed that it is possible to reconstruct the trajectory of the subject's joint by placing markers on the exoskeleton, although foot flexibility during walking can impact the reconstruction accuracy. More experiments are needed to confirm this hypothesis, and more subjects and walking conditions are needed to better characterize the errors of the proposed methodology, although our results are promising, indicating small errors.

  20. Systematic errors in the determination of the spectroscopic g-factor in broadband ferromagnetic resonance spectroscopy: A proposed solution

    NASA Astrophysics Data System (ADS)

    Gonzalez-Fuentes, C.; Dumas, R. K.; García, C.

    2018-01-01

    A theoretical and experimental study of the influence of small offsets of the magnetic field (δH) on the measurement accuracy of the spectroscopic g-factor (g) and saturation magnetization (Ms) obtained by broadband ferromagnetic resonance (FMR) measurements is presented. The random nature of δH generates systematic, opposite-sign deviations of the values of g and Ms with respect to their true values. A δH on the order of a few Oe leads to a ˜10% error in g and Ms for a typical range of frequencies employed in broadband FMR experiments. We propose a simple experimental methodology to significantly minimize the effect of δH on the fitted values of g and Ms, eliminating their apparent dependence on the range of frequencies employed. Our method was successfully tested using broadband FMR measurements on a 5 nm thick Ni80Fe20 film for frequencies ranging between 3 and 17 GHz.
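
    A sketch of the kind of fit involved is given below (synthetic data, not the paper's method or values): broadband resonance-field/frequency pairs are fitted with the in-plane Kittel relation, once with the field offset dH as a free parameter and once with it fixed to zero, to show how the offset propagates into g and Meff. The Kittel form, SI units, and all parameter values are assumptions made for illustration.

      # Kittel fit of synthetic broadband FMR data with and without a field-offset term.
      import numpy as np
      from scipy.optimize import curve_fit

      MU0, MU_B, H_PLANCK = 4e-7 * np.pi, 9.274e-24, 6.626e-34

      def kittel_inplane(H, g, Meff, dH):
          Heff = H + dH                                  # A/m; dH models the field offset
          return (g * MU_B * MU0 / H_PLANCK) * np.sqrt(Heff * (Heff + Meff))

      rng = np.random.default_rng(0)
      H_res = np.linspace(2e4, 2.4e5, 15)                      # resonance fields, A/m
      f_meas = kittel_inplane(H_res, 2.11, 8.0e5, 240.0)       # 240 A/m ~ 3 Oe offset
      f_meas = f_meas + rng.normal(0, 2e7, H_res.size)         # 20 MHz scatter

      (g3, M3, d3), _ = curve_fit(kittel_inplane, H_res, f_meas, p0=[2.0, 7e5, 0.0])
      (g2, M2), _ = curve_fit(lambda H, g, M: kittel_inplane(H, g, M, 0.0),
                              H_res, f_meas, p0=[2.0, 7e5])
      print(f"dH free    : g = {g3:.3f}, Meff = {M3:.3e} A/m, dH = {d3:.0f} A/m")
      print(f"dH fixed 0 : g = {g2:.3f}, Meff = {M2:.3e} A/m")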

  1. Similarity Metrics for Closed Loop Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Yang, Lee C.; Bedrossian, Naz; Hall, Robert A.

    2008-01-01

    To what extent and in what ways can two closed-loop dynamic systems be said to be "similar?" This question arises in a wide range of dynamic systems modeling and control system design applications. For example, bounds on error models are fundamental to controller optimization with modern control design methods. Metrics such as the structured singular value are direct measures of the degree to which properties such as stability or performance are maintained in the presence of specified uncertainties or variations in the plant model. Similarly, controls-related areas such as system identification, model reduction, and experimental model validation employ measures of similarity between multiple realizations of a dynamic system. Each area has its tools and approaches, with each tool more or less suited for one application or the other. Similarity in the context of closed-loop model validation via flight test is subtly different from error measures in the typical controls-oriented application. Whereas similarity in a robust control context relates to plant variation and the attendant effect on stability and performance, in this context similarity metrics are sought that assess the relevance of a dynamic system test for the purpose of validating the stability and performance of a "similar" dynamic system. Similarity in the context of system identification is much more relevant than are robust control analogies in that errors between one dynamic system (the test article) and another (the nominal "design" model) are sought for the purpose of bounding the validity of a model for control design and analysis. Yet system identification typically involves open-loop plant models which are independent of the control system (with the exception of limited developments in closed-loop system identification which is nonetheless focused on obtaining open-loop plant models from closed-loop data). Moreover, the objectives of system identification are not the same as those of a flight test, and hence system identification error metrics are not directly relevant. In applications such as launch vehicles, where the open-loop plant is unstable, it is the similarity of the closed-loop system dynamics of a flight test that is relevant.

  2. Multistrip Western blotting: a tool for comparative quantitative analysis of multiple proteins.

    PubMed

    Aksamitiene, Edita; Hoek, Jan B; Kiyatkin, Anatoly

    2015-01-01

    The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical Western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip Western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip Western blotting increases data output per single blotting cycle up to tenfold; allows concurrent measurement of up to nine different total and/or posttranslationally modified protein expression obtained from the same loading of the sample; and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data and therefore is advantageous to apply in biomedical diagnostics, systems biology, and cell signaling research.

  3. The penta-prism LTP: A long-trace-profiler with stationary optical head and moving penta prism (abstract)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, S.; Jark, W.; Takacs, P.Z.

    1995-02-01

    Metrology requirements for optical components for third generation synchrotron sources are taxing the state-of-the-art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long Trace Profiler II (LTP II), and have demonstrated that, with some simple modifications, we can significantly reduce the effect of error sources and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.

  4. Mixed-effects location and scale Tobit joint models for heterogeneous longitudinal data with skewness, detection limits, and measurement errors.

    PubMed

    Lu, Tao

    2017-01-01

    The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for heteroscedasticity commonly observed in between- and within-subject variations. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limits of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data from many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.

  5. Extracting latent brain states--Towards true labels in cognitive neuroscience experiments.

    PubMed

    Porbadnigk, Anne K; Görnitz, Nico; Sannelli, Claudia; Binder, Alexander; Braun, Mikio; Kloft, Marius; Müller, Klaus-Robert

    2015-10-15

    Neuroscientific data is typically analyzed based on the behavioral response of the participant. However, the errors made may or may not be in line with the neural processing. In particular in experiments with time pressure or studies where the threshold of perception is measured, the error distribution deviates from uniformity due to the structure in the underlying experimental set-up. When we base our analysis on the behavioral labels as usually done, then we ignore this problem of systematic and structured (non-uniform) label noise and are likely to arrive at wrong conclusions in our data analysis. This paper contributes a remedy to this important scenario: we present a novel approach for a) measuring label noise and b) removing structured label noise. We demonstrate its usefulness for EEG data analysis using a standard d2 test for visual attention (N=20 participants). Copyright © 2015 Elsevier Inc. All rights reserved.

  6. The Greenwich Photo-heliographic Results (1874 - 1976): Initial Corrections to the Printed Publications

    NASA Astrophysics Data System (ADS)

    Erwin, E. H.; Coffey, H. E.; Denig, W. F.; Willis, D. M.; Henwood, R.; Wild, M. N.

    2013-11-01

    A new sunspot and faculae digital dataset for the interval 1874 - 1955 has been prepared under the auspices of the NOAA National Geophysical Data Center (NGDC). This digital dataset contains measurements of the positions and areas of both sunspots and faculae published initially by the Royal Observatory, Greenwich, and subsequently by the Royal Greenwich Observatory (RGO), under the title Greenwich Photo-heliographic Results (GPR), 1874 - 1976. Quality control (QC) procedures based on logical consistency have been used to identify the more obvious errors in the RGO publications. Typical examples of identifiable errors are North versus South errors in specifying heliographic latitude, errors in specifying heliographic (Carrington) longitude, errors in the dates and times, errors in sunspot group numbers, arithmetic errors in the summation process, and the occasional omission of solar ephemerides. Although the number of errors in the RGO publications is remarkably small, an initial table of necessary corrections is provided for the interval 1874 - 1917. Moreover, as noted in the preceding companion papers, the existence of two independently prepared digital datasets, which both contain information on sunspot positions and areas, makes it possible to outline a preliminary strategy for the development of an even more accurate digital dataset. Further work is in progress to generate an extremely reliable sunspot digital dataset, based on the long programme of solar observations supported first by the Royal Observatory, Greenwich, and then by the Royal Greenwich Observatory.

  7. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effects of accommodation errors on visual acuity are mitigated by pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye’s higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386

  8. Retinal image quality during accommodation.

    PubMed

    López-Gil, Norberto; Martin, Jesson; Liu, Tao; Bradley, Arthur; Díaz-Muñoz, David; Thibos, Larry N

    2013-07-01

    We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effects of accommodation errors on visual acuity are mitigated by pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.

  9. Short-range optical air data measurements for aircraft control using rotational Raman backscatter.

    PubMed

    Fraczek, Michael; Behrendt, Andreas; Schmitt, Nikolaus

    2013-07-15

    A first laboratory prototype of a novel concept for a short-range optical air data system for aircraft control and safety was built. The measurement methodology was introduced in [Appl. Opt. 51, 148 (2012)] and is based on lidar techniques detecting elastic and Raman backscatter from air. A wide range of flight-critical parameters, such as air temperature, molecular number density, and pressure, can be measured, and data on atmospheric particles and humidity can also be collected. In this paper, the experimental measurement performance achieved with the first laboratory prototype using 532 nm laser radiation with a pulse energy of 118 mJ is presented. Systematic measurement errors and statistical measurement uncertainties are quantified separately. The typical systematic temperature, density and pressure measurement errors obtained from the mean of 1000 averaged signal pulses are small, amounting to < 0.22 K, < 0.36% and < 0.31%, respectively, for measurements at air pressures varying from 200 hPa to 950 hPa but a constant air temperature of 298.95 K. The systematic measurement errors at air temperatures varying from 238 K to 308 K but a constant air pressure of 946 hPa are even smaller, at < 0.05 K, < 0.07% and < 0.06%, respectively. A focus is placed on the system performance at different virtual flight altitudes as a function of the laser pulse energy. The virtual flight altitudes are precisely generated with a custom-made atmospheric simulation chamber system. In this context, the minimum laser pulse energies and pulse numbers required by the measurement system to meet the temperature and pressure error demands specified in aviation standards are determined experimentally. The aviation error margins limit the allowable temperature errors to 1.5 K for all measurement altitudes and the pressure errors to 0.1% at 0 m and 0.5% at 13000 m. With regard to 100-pulse-averaged temperature measurements, the pulse energy using 532 nm laser radiation has to be larger than 11 mJ (35 mJ) for 1-σ (3-σ) uncertainties at all measurement altitudes. For 100-pulse-averaged pressure measurements, the laser pulse energy has to be larger than 95 mJ (355 mJ), respectively. Based on these experimental results, the laser pulse energy requirements are extrapolated to the ultraviolet wavelength region as well, resulting in significantly lower pulse energy demands of 1.5-3 mJ (4-10 mJ) and 12-27 mJ (45-110 mJ) for 1-σ (3-σ) 100-pulse-averaged temperature and pressure measurements, respectively.

  10. Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.

    ERIC Educational Resources Information Center

    Monagle, E. Brette

    The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…

  11. Effects of Head Rotation on Space- and Word-Based Reading Errors in Spatial Neglect

    ERIC Educational Resources Information Center

    Reinhart, Stefan; Keller, Ingo; Kerkhoff, Georg

    2010-01-01

    Patients with right hemisphere lesions often omit or misread words on the left side of a text or the beginning letters of single words, which is termed neglect dyslexia (ND). Two types of reading errors are typically observed in ND: omissions and word-based reading errors. The former are considered space-based omission errors on the…

  12. Atmospheric Dispersion Effects in Weak Lensing Measurements

    DOE PAGES

    Plazas, Andrés Alejandro; Bernstein, Gary

    2012-10-01

    The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in the g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in the i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in the g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but residuals remain as much as 5 times larger than the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.

  13. Beyond Error Patterns: A Sociocultural View of Fraction Comparison Errors in Students with Mathematical Learning Disabilities

    ERIC Educational Resources Information Center

    Lewis, Katherine E.

    2016-01-01

    Although many students struggle with fractions, students with mathematical learning disabilities (MLDs) experience pervasive difficulties because of neurological differences in how they process numerical information. These students make errors that are qualitatively different than their typically achieving and low-achieving peers. This study…

  14. Evaluation and mitigation of potential errors in radiochromic film dosimetry due to film curvature at scanning.

    PubMed

    Palmer, Antony L; Bradley, David A; Nisbet, Andrew

    2015-03-08

    This work considers a previously overlooked uncertainty present in film dosimetry which results from moderate curvature of films during the scanning process. Small film samples are particularly susceptible to film curling which may be undetected or deemed insignificant. In this study, we consider test cases with controlled induced curvature of film and with film raised horizontally above the scanner plate. We also evaluate the difference in scans of a film irradiated with a typical brachytherapy dose distribution with the film naturally curved and with the film held flat on the scanner. Typical naturally occurring curvature of film at scanning, giving rise to a maximum height 1 to 2 mm above the scan plane, may introduce dose errors of 1% to 4%, and considerably reduce gamma evaluation passing rates when comparing film-measured doses with treatment planning system-calculated dose distributions, a common application of film dosimetry in radiotherapy. The use of a triple-channel dosimetry algorithm appeared to mitigate the error due to film curvature compared to conventional single-channel film dosimetry. The change in pixel value and calibrated reported dose with film curling or height above the scanner plate may be due to variations in illumination characteristics, optical disturbances, or a Callier-type effect. There is a clear requirement for physically flat films at scanning to avoid the introduction of a substantial error source in film dosimetry. Particularly for small film samples, a compression glass plate above the film is recommended to ensure flat-film scanning. This effect has been overlooked to date in the literature.

  15. Near Heterophoria in Early Childhood

    PubMed Central

    Babinsky, Erin; Sreenivasan, Vidhyapriya; Candy, T. Rowan

    2015-01-01

    Purpose. The purpose of this study was to measure near heterophoria in young children to determine the impact of early growth and development on the alignment of the eyes. Methods. Fifty young children (≥2 and <7 years of age; range of spherical equivalent refractive error −1.25 diopters [D] to +3.75 D) and 13 adults participated. Their eye position and accommodation responses, in the absence of optical correction, were measured using simultaneous Purkinje image tracking and photorefraction technology (MCS PowerRefractor, PR). The resulting heterophorias, and both accommodative convergence/accommodation (AC/A) and convergence accommodation/convergence (CA/C) ratios were then computed as a function of age, refractive error, and an alternating cover test. Results. The mean heterophoria after approximately 60 seconds of dissociation at a 33-cm viewing distance was 5.0 prism diopters (pd) of exophoria (SD ± 3.7) in the children (78% of children > 2 pd exophoric) and 5.6 pd of exophoria (SD ± 4.7) in adults (69% of adults > 2 pd exophoric; a nonsignificant difference), with no effect of age between 2 and 6 years. In these children, heterophoria was not significantly correlated with AC/A (r = 0.25), CA/C (r = 0.12), or refractive error (r = 0.21). The mean difference between heterophoria measurements from the PR and the clinical cover test was −2.4 pd (SD = ±3.4), with an exophoric bias in the PR measurements. Conclusions. Despite developmental maturation of interpupillary distance, refractive error, and AC/A, in a typical sample of young children the predominant dissociated position is one of exophoria. PMID:25634983

  16. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
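
    The Kramers-Kronig route to the Raman signal described above can be sketched in a few lines: the phase of the CARS field is estimated as the Hilbert transform of the log-amplitude of the ratio between the measured CARS spectrum and the NRB reference. This is a generic, simplified illustration of KK phase retrieval (not the authors' released code); the phase-detrending correction advocated in the abstract would be applied to the retrieved phase afterwards.

```python
import numpy as np
from scipy.signal import hilbert

def kk_retrieve(i_cars, i_nrb):
    """Retrieve a Raman-like (imaginary) spectrum from a CARS spectrum
    using a Kramers-Kronig phase estimate.

    i_cars : measured CARS intensity spectrum (1D array)
    i_nrb  : nonresonant background reference spectrum (1D array)
    """
    amplitude = np.sqrt(i_cars / i_nrb)          # |chi_total| / |chi_NRB|
    # Hilbert transform of the log-amplitude gives the phase (KK relation);
    # np.imag(hilbert(x)) is the Hilbert transform of x.  The sign convention
    # may need flipping depending on the orientation of the spectral axis.
    phase = np.imag(hilbert(np.log(amplitude)))
    # Errors in the assumed NRB appear mainly as a slowly varying additive
    # phase; detrending it (e.g. with a low-order polynomial fit) is the
    # correction step described in the abstract.
    return amplitude * np.sin(phase), phase
```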

  17. Quantifying Carbon Flux Estimation Errors

    NASA Astrophysics Data System (ADS)

    Wesloh, D.

    2017-12-01

    Atmospheric Bayesian inversions have been used to estimate surface carbon dioxide (CO2) fluxes from global to sub-continental scales using atmospheric mixing ratio measurements. These inversions use an atmospheric transport model, coupled to a set of fluxes, in order to simulate mixing ratios that can then be compared to the observations. The comparison is then used to update the fluxes to better match the observations in a manner consistent with the uncertainties prescribed for each. However, inversion studies disagree with each other at continental scales, prompting further investigations to examine the causes of these differences. Inter-comparison studies have shown that the errors resulting from atmospheric transport inaccuracies are comparable to those from the errors in the prior fluxes. However, not as much effort has gone into studying the origins of the errors induced by errors in the transport as those induced by errors in the prior distribution. This study uses a mesoscale transport model to evaluate the effects of representation errors in the observations and of incorrect descriptions of the transport. To obtain realizations of these errors, we performed Observing System Simulation Experiments (OSSEs), with the transport model used for the inversion operating at two resolutions, one typical of a global inversion and the other of a mesoscale inversion, and with various prior flux distributions. Transport error covariances are inferred from an ensemble of perturbed mesoscale simulations, while flux error covariances are computed using prescribed distributions and magnitudes. We examine how these errors can be diagnosed in the inversion process using aircraft, ground-based, and satellite observations of meteorological variables and CO2.
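
    The flux-update step described above is, in its simplest linear-Gaussian form, a standard Bayesian (Kalman-type) update. A minimal sketch with synthetic numbers follows, showing only how a prior flux error covariance B, an observation/transport error covariance R, and a linearized transport operator H combine; none of these matrices is taken from the study.

```python
import numpy as np

def bayesian_flux_update(x_prior, B, H, y_obs, R):
    """Linear Bayesian inversion: posterior fluxes and covariance.

    x_prior : prior flux vector
    B       : prior flux error covariance
    H       : linearized transport operator (fluxes -> mixing ratios)
    y_obs   : observed mixing ratios
    R       : observation + transport (model) error covariance
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman gain
    x_post = x_prior + K @ (y_obs - H @ x_prior)   # pull fluxes toward the data
    A_post = (np.eye(len(x_prior)) - K @ H) @ B    # posterior covariance
    return x_post, A_post

# Toy example: 3 flux regions, 5 observations (all values synthetic).
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 3))
x_post, A_post = bayesian_flux_update(
    x_prior=np.zeros(3), B=np.eye(3), H=H,
    y_obs=H @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=0.1, size=5),
    R=0.01 * np.eye(5))
```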

  18. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers’ speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344

  19. The role of semantic complexity in treatment of naming deficits: training semantic categories in fluent aphasia by controlling exemplar typicality.

    PubMed

    Kiran, Swathi; Thompson, Cynthia K

    2003-06-01

    The effect of typicality of category exemplars on naming was investigated using a single subject experimental design across participants and behaviors in 4 patients with fluent aphasia. Participants received a semantic feature treatment to improve naming of either typical or atypical items within semantic categories, while generalization was tested to untrained items of the category. The order of typicality and category trained was counterbalanced across participants. Results indicated that patients trained on naming of atypical exemplars demonstrated generalization to naming of intermediate and typical items. However, patients trained on typical items demonstrated no generalized naming effect to intermediate or atypical examples. Furthermore, analysis of errors indicated an evolution of errors throughout training, from those with no apparent relationship to the target to primarily semantic and phonemic paraphasias. Performance on standardized language tests also showed changes as a function of treatment. Theoretical and clinical implications regarding the impact of considering semantic complexity on rehabilitation of naming deficits in aphasia are discussed.

  20. The role of semantic complexity in treatment of naming deficits: training semantic categories in fluent aphasia by controlling exemplar typicality.

    PubMed

    Kiran, Swathi; Thompson, Cynthia K

    2003-08-01

    The effect of typicality of category exemplars on naming was investigated using a single subject experimental design across participants and behaviors in 4 patients with fluent aphasia. Participants received a semantic feature treatment to improve naming of either typical or atypical items within semantic categories, while generalization was tested to untrained items of the category. The order of typicality and category trained was counterbalanced across participants. Results indicated that patients trained on naming of atypical exemplars demonstrated generalization to naming of intermediate and typical items. However, patients trained on typical items demonstrated no generalized naming effect to intermediate or atypical examples. Furthermore, analysis of errors indicated an evolution of errors throughout training, from those with no apparent relationship to the target to primarily semantic and phonemic paraphasias. Performance on standardized language tests also showed changes as a function of treatment. Theoretical and clinical implications regarding the impact of considering semantic complexity on rehabilitation of naming deficits in aphasia are discussed.

  1. Characterising a holographic modal phase mask for the detection of ocular aberrations

    NASA Astrophysics Data System (ADS)

    Corbett, A. D.; Leyva, D. Gil; Diaz-Santana, L.; Wilkinson, T. D.; Zhong, J. J.

    2005-12-01

    The accurate measurement of the double-pass ocular wave front has been shown to have a broad range of applications from LASIK surgery to adaptively corrected retinal imaging. The ocular wave front can be accurately described by a small number of Zernike circle polynomials. The modal wave front sensor was first proposed by Neil et al. and allows the coefficients of the individual Zernike modes to be measured directly. Typically the aberrations measured with the modal sensor are smaller than those seen in the ocular wave front. In this work, we investigated a technique for adapting a modal phase mask for the sensing of the ocular wave front. This involved extending the dynamic range of the sensor by increasing the pinhole size to 2.4 mm and optimising the mask bias to 0.75λ. This was found to decrease the RMS error by up to a factor of three for eye-like aberrations with amplitudes up to 0.2 μm. For aberrations taken from a sample of real-eye measurements a 20% decrease in the RMS error was observed.

  2. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
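
    The propagation of individual measurement uncertainties through a defining functional expression mentioned above is usually done with a first-order Taylor expansion. The sketch below is a generic illustration (not the authors' formulation), using a numerical Jacobian so that correlated measurement errors are handled through the full covariance matrix.

```python
import numpy as np

def propagate_uncertainty(f, x, cov_x, eps=1e-6):
    """First-order propagation: cov_y = J cov_x J^T for y = f(x).

    f     : function mapping measured quantities to derived parameters
    x     : nominal measured values (1D array)
    cov_x : covariance of the measurements (handles correlated errors)
    """
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):                      # forward-difference Jacobian
        dx = np.zeros_like(x)
        dx[j] = eps * max(1.0, abs(x[j]))
        J[:, j] = (np.atleast_1d(f(x + dx)) - y0) / dx[j]
    return y0, J @ cov_x @ J.T

# Example: dynamic pressure q = 0.5 * rho * v**2 with correlated rho, v errors
# (numbers are illustrative only).
q, cov_q = propagate_uncertainty(
    lambda p: 0.5 * p[0] * p[1] ** 2,
    x=np.array([1.2, 50.0]),
    cov_x=np.array([[1e-4, 5e-4], [5e-4, 0.25]]))
print(q, np.sqrt(cov_q[0, 0]))
```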

  3. Validity of an ultra-wideband local positioning system to measure locomotion in indoor sports.

    PubMed

    Serpiello, F R; Hopkins, W G; Barnes, S; Tavrou, J; Duthie, G M; Aughey, R J; Ball, K

    2018-08-01

    The validity of an Ultra-wideband (UWB) positioning system was investigated during linear and change-of-direction (COD) running drills. Six recreationally-active men performed ten repetitions of four activities (walking, jogging, maximal acceleration, and 45° COD) on an indoor court. Activities were repeated twice, in the centre of the court and on the side. Participants wore a receiver tag (Clearsky T6, Catapult Sports) and two reflective markers placed on the tag to allow for comparisons with the criterion system (Vicon). Distance, mean and peak velocity, acceleration, and deceleration were assessed. Validity was assessed via percentage least-square means difference (Clearsky-Vicon) with 90% confidence interval and magnitude-based inference; typical error was expressed as within-subject standard deviation. The mean differences for distance, mean/peak speed, and mean/peak accelerations in the linear drills were in the range of 0.2-12%, with typical errors between 1.2 and 9.3%. Mean and peak deceleration had larger differences and errors between systems. In the COD drill, moderate-to-large differences were detected for the activity performed in the centre of the court, increasing to large/very large on the side. When filtered and smoothed following a similar process, the UWB-based positioning system had acceptable validity, compared to Vicon, to assess movements representative of indoor sports.

  4. Experiments and 3D simulations of flow structures in junctions and their influence on location of flowmeters.

    PubMed

    Mignot, E; Bonakdari, H; Knothe, P; Lipeme Kouyi, G; Bessette, A; Rivière, N; Bertrand-Krajewski, J-L

    2012-01-01

    Open-channel junctions are common occurrences in sewer networks and flow rate measurement often occurs near these singularities. Local flow structures are 3D, impact on the representativeness of the local flow measurements and thus lead to deviations in the flow rate estimation. The present study aims (i) to measure and simulate the flow pattern in a junction flow, (ii) to analyse the impact of the junction on the velocity distribution according to the distance from the junction and thus (iii) to evaluate the typical error derived from the computation of the flow rate close to the junction.

  5. Improvements to photometry. Part 1: Better estimation of derivatives in extinction and transformation equations

    NASA Technical Reports Server (NTRS)

    Young, Andrew T.

    1988-01-01

    Atmospheric extinction in wideband photometry is examined both analytically and through numerical simulations. If the derivatives that appear in the Stromgren-King theory are estimated carefully, it appears that wideband measurements can be transformed to outside the atmosphere with errors no greater than a millimagnitude. A numerical analysis approach is used to estimate derivatives of both the stellar and atmospheric extinction spectra, avoiding previous assumptions that the extinction follows a power law. However, it is essential to satisfy the requirements of the sampling theorem to keep aliasing errors small. Typically, this means that band separations cannot exceed half of the full width at half-peak response. Further work is needed to examine higher order effects, which may well be significant.

  6. Variability of Retinal Thickness Measurements in Tilted or Stretched Optical Coherence Tomography Images

    PubMed Central

    Uji, Akihito; Abdelfattah, Nizar Saleh; Boyer, David S.; Balasubramanian, Siva; Lei, Jianqin; Sadda, SriniVas R.

    2017-01-01

    Purpose To investigate the level of inaccuracy of retinal thickness measurements in tilted and axially stretched optical coherence tomography (OCT) images. Methods A consecutive series of 50 eyes of 50 patients with age-related macular degeneration were included in this study, and Cirrus HD-OCT images through the foveal center were used for the analysis. The foveal thickness was measured in three ways: (1) parallel to the orientation of the A-scan (Tx), (2) perpendicular to the retinal pigment epithelium (RPE) surface in the instrument-displayed aspect ratio image (Ty), and (3) perpendicular to the RPE surface in a native aspect ratio image (Tz). Mathematical modeling was performed to estimate the measurement error. Results The measurement error was larger in tilted images with a greater angle of tilt. In the simulation, with axial stretching by a factor of 2, the Ty/Tz ratio was > 1.05 at tilt angles between 13° and 18° and between 72° and 77°, > 1.10 at tilt angles between 19° and 31° and between 59° and 71°, and > 1.20 at angles ranging from 32° to 58°. Of note, with even more axial stretching the Ty/Tz ratio is even larger. The Tx/Tz ratio was smaller than the Ty/Tz ratio at angles ranging from 0° to 54°. The actual patient data showed good agreement with the simulation. The Ty/Tz ratio was greater than 1.05 (5% error) at angles ranging from 13° to 18° and 72° to 77°, greater than 1.10 (10% error) at angles ranging from 19° to 31° and 59° to 71°, and greater than 1.20 (20% error) at angles ranging from 32° to 58° in the images axially stretched by a factor of 2 (b/a = 2), which is typical of most OCT instrument displays. Conclusions Retinal thickness measurements obtained perpendicular to the RPE surface were overestimated when using tilted and axially stretched OCT images. Translational Relevance If accurate measurements are to be obtained, images with a native aspect ratio similar to microscopy must be used. PMID:28299239

  7. Evaluation of alignment error due to a speed artifact in stereotactic ultrasound image guidance.

    PubMed

    Salter, Bill J; Wang, Brian; Szegedi, Martin W; Rassiah-Szegedi, Prema; Shrieve, Dennis C; Cheng, Roger; Fuss, Martin

    2008-12-07

    Ultrasound (US) image guidance systems used in radiotherapy are typically calibrated for soft tissue applications, thus introducing errors in depth-from-transducer representation when used in media with a different speed of sound propagation (e.g. fat). This error is commonly referred to as the speed artifact. In this study we utilized a standard US phantom to demonstrate the existence of the speed artifact when using a commercial US image guidance system to image through layers of simulated body fat, and we compared the results with calculated/predicted values. A general purpose US phantom (speed of sound (SOS) = 1540 m s⁻¹) was imaged on a multi-slice CT scanner at a 0.625 mm slice thickness and 0.5 mm × 0.5 mm axial pixel size. Target-simulating wires inside the phantom were contoured and later transferred to the US guidance system. Layers of various thickness (1-8 cm) of commercially manufactured fat-simulating material (SOS = 1435 m s⁻¹) were placed on top of the phantom to study the depth-related alignment error. In order to demonstrate that the speed artifact is not caused by adding additional layers on top of the phantom, we repeated these measurements in an identical setup using commercially manufactured tissue-simulating material (SOS = 1540 m s⁻¹) for the top layers. For the fat-simulating material used in this study, we observed the magnitude of the depth-related alignment errors resulting from the speed artifact to be 0.7 mm cm⁻¹ of fat imaged through. The measured alignment errors caused by the speed artifact agreed with the calculated values within one standard deviation for all of the different thicknesses of fat-simulating material studied here. We demonstrated the depth-related alignment error due to the speed artifact when using US image guidance for radiation treatment alignment and note that the presence of fat causes the target to be aliased to a depth greater than it actually is. For typical US guidance systems in use today, this will lead to delivery of the high dose region at a position slightly posterior to the intended region for a supine patient. When possible, care should be taken to avoid imaging through a thick layer of fat for larger patients in US alignments or, if unavoidable, the spatial inaccuracies introduced by the artifact should be considered by the physician during the formulation of the treatment plan.
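
    The 0.7 mm per cm figure follows directly from the speed-of-sound mismatch: the scanner converts echo time to depth assuming 1540 m/s, so a layer of 1435 m/s fat appears thicker than it is. A small sketch reproducing that arithmetic:

```python
SOS_ASSUMED = 1540.0  # m/s, soft-tissue calibration of the US system
SOS_FAT = 1435.0      # m/s, fat-simulating material in the study

def displayed_depth_error_mm(fat_thickness_cm):
    """Extra (aliased) depth per layer of fat traversed.

    Echo time through fat: t = 2*d/SOS_FAT; the scanner displays
    depth = SOS_ASSUMED*t/2 = d*SOS_ASSUMED/SOS_FAT, so the target appears
    deeper than it is by d*(SOS_ASSUMED/SOS_FAT - 1).
    """
    d_mm = fat_thickness_cm * 10.0
    return d_mm * (SOS_ASSUMED / SOS_FAT - 1.0)

print(displayed_depth_error_mm(1.0))  # ~0.73 mm per cm of fat, matching ~0.7 mm/cm
```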

  8. Techniques for precise energy calibration of particle pixel detectors

    NASA Astrophysics Data System (ADS)

    Kroupa, M.; Campbell-Ricketts, T.; Bahadori, A.; Empl, A.

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.
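
    Per-pixel Timepix energy calibration is commonly parameterized with a surrogate function that is linear at high energy and bends near threshold. The sketch below assumes the widely used form TOT(E) = a·E + b − c/(E − t), which is the standard Timepix time-over-threshold calibration in the literature; the calibration points and starting values are illustrative only and not necessarily the authors' exact parameterization.

```python
import numpy as np
from scipy.optimize import curve_fit

def tot_model(energy_keV, a, b, c, t):
    """Surrogate calibration: TOT = a*E + b - c/(E - t).

    Linear at high energy, with a characteristic bend near the threshold t.
    """
    return a * energy_keV + b - c / (energy_keV - t)

# Calibration points for one pixel: X-ray/gamma line energies (keV) vs
# measured mean TOT (counts).  Values below are illustrative only.
e_lines = np.array([5.9, 8.0, 13.9, 17.5, 59.5])
tot_meas = np.array([12.0, 19.0, 38.0, 49.0, 180.0])

popt, _ = curve_fit(tot_model, e_lines, tot_meas, p0=[3.0, 5.0, 50.0, 2.0])

# Inverting the fitted model converts measured TOT back to deposited energy;
# correcting each calibration line for charge lost to neighbouring pixels
# (estimated e.g. from simulations, as in the abstract) before the fit is what
# removes the ~5% systematic error discussed above.
```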

  9. Techniques for precise energy calibration of particle pixel detectors.

    PubMed

    Kroupa, M; Campbell-Ricketts, T; Bahadori, A; Empl, A

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.

  10. [Errors in medicine. Causes, impact and improvement measures to improve patient safety].

    PubMed

    Waeschle, R M; Bauer, M; Schmidt, C E

    2015-09-01

    The guarantee of quality of care and patient safety is of major importance in hospitals, even though increased economic pressure and work intensification are ubiquitously present. Nevertheless, adverse events still occur in 3-4% of hospital stays and of these 25-50% are estimated to be avoidable. The identification of possible causes of error and the development of measures for the prevention of medical errors are essential for patient safety. The implementation and continuous development of a constructive culture of error tolerance are fundamental. The origins of errors can be differentiated into systemic latent and individual active causes, and components of both categories are typically involved when an error occurs. Systemic causes are, for example, outdated structural environments, a lack of clinical standards and low personnel density. These causes arise far away from the patient, e.g. in management decisions, and can remain unrecognized for a long time. Individual causes involve, e.g., confirmation bias, errors of fixation and prospective memory failure. These causes have a direct impact on patient care and can result in immediate injury to patients. Stress, unclear information, complex systems and a lack of professional experience can promote individual causes. Awareness of possible causes of error is a fundamental precondition to establishing appropriate countermeasures. Error prevention should include actions directly affecting the causes of error and includes checklists and standard operating procedures (SOP) to avoid fixation and prospective memory failure and team resource management to improve communication and the generation of collective mental models. Critical incident reporting systems (CIRS) provide the opportunity to learn from previous incidents without resulting in injury to patients. Information technology (IT) support systems, such as the computerized physician order entry system, assist in the prevention of medication errors by providing information on dosage, pharmacological interactions, side effects and contraindications of medications. The major challenges for quality and risk management, for the heads of departments and the executive board are the implementation and support of the described actions and sustained guidance of the staff involved in the change management process. The global trigger tool is suitable for improving transparency and objectifying the frequency of medical errors.

  11. Application of Consider Covariance to the Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Lundberg, John B.

    1996-01-01

    The extended Kalman filter (EKF) is the basis for many applications of filtering theory to real-time problems where estimates of the state of a dynamical system are to be computed based upon some set of observations. The form of the EKF may vary somewhat from one application to another, but the fundamental principles are typically unchanged among these various applications. As is the case in many filtering applications, models of the dynamical system (differential equations describing the state variables) and models of the relationship between the observations and the state variables are created. These models typically employ a set of constants whose values are established by means of theory or experimental procedure. Since the estimates of the state are formed assuming that the models are perfect, any modeling errors will affect the accuracy of the computed estimates. Note that the modeling errors may be errors of commission (errors in terms included in the model) or omission (errors in terms excluded from the model). Consequently, it becomes imperative when evaluating the performance of real-time filters to evaluate the effect of modeling errors on the estimates of the state.
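
    A stripped-down illustration of the point being made: when the filter's model omits a term present in the true dynamics, the state estimate acquires a bias that the filter's own covariance does not reflect. This is a generic linear Kalman filter sketch (the simplest special case of an EKF), not code from the paper; the drift and noise values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
true_drift = 0.3         # un-modeled constant drift in the true dynamics
q, r = 1e-4, 0.5**2      # assumed process and measurement noise variances

x_true, x_est, p_est = 0.0, 0.0, 1.0
for _ in range(200):
    # Truth includes a drift term the filter's model omits (model error).
    x_true += true_drift + rng.normal(scale=np.sqrt(q))
    z = x_true + rng.normal(scale=np.sqrt(r))

    # Filter prediction uses the (wrong) model x_k = x_{k-1}.
    x_pred, p_pred = x_est, p_est + q
    k_gain = p_pred / (p_pred + r)
    x_est = x_pred + k_gain * (z - x_pred)
    p_est = (1.0 - k_gain) * p_pred

# The lag bias is far larger than the filter's reported 1-sigma uncertainty.
print(f"bias: {x_true - x_est:+.2f}, filter's 1-sigma: {np.sqrt(p_est):.2f}")
```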

  12. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.
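
    The class of inverse problems described, a time series convolved against an approximately known transfer function, reduces after discretization to a lower-triangular Toeplitz system. The sketch below shows how error in the assumed transfer function propagates into the recovered series; the exponential impulse responses, Tikhonov regularization and numbers are placeholders for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.linalg import toeplitz

def convolution_matrix(impulse_response, n):
    """Lower-triangular Toeplitz matrix implementing discrete convolution."""
    col = np.r_[impulse_response, np.zeros(max(0, n - len(impulse_response)))][:n]
    return toeplitz(col, np.zeros(n))

n, dt = 100, 1.0
t = np.arange(n) * dt
source_true = np.exp(-0.5 * ((t - 30) / 5) ** 2)   # true input history

h_true = np.exp(-t / 10.0) * dt                    # true impulse response
h_model = np.exp(-t / 12.0) * dt                   # approximate (erroneous) model

data = convolution_matrix(h_true, n) @ source_true  # "observed" output

# Tikhonov-regularized recovery using the *approximate* transfer function:
G = convolution_matrix(h_model, n)
lam = 1e-3
source_rec = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ data)

print("max recovery error due to model error:", np.max(np.abs(source_rec - source_true)))
```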

  13. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE PAGES

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    2017-10-28

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.

  14. Using MERRA Gridded Innovations for Quantifying Uncertainties in Analysis Fields and Diagnosing Observing System Inhomogeneities

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo; Redder, Christopher

    2010-01-01

    MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a feature commonly requested by users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies typically are restricted to long-term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms have the ability to produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data is usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets and not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum-likelihood estimates of background and observation errors, as well as global bias estimates. Starting with the joint PDF of innovations and analysis increments at observation locations we propose a technique for diagnosing bias among the observing systems, and document how these contextual biases have evolved during the satellite era covered by MERRA.
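
    The innovation-statistics idea mentioned above rests on a simple identity: if background and observation errors are unbiased and uncorrelated, the innovation variance is their sum, and the cross-statistics between analysis increments, analysis residuals and innovations separate the two contributions (a Desroziers-style diagnostic). A toy scalar sketch with synthetic numbers follows; it is not the MERRA processing code.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
sigma_b, sigma_o = 1.5, 1.0          # "true" background / observation errors

truth = rng.normal(size=n)
background = truth + rng.normal(scale=sigma_b, size=n)
obs = truth + rng.normal(scale=sigma_o, size=n)

innov = obs - background                          # observation-minus-forecast
k = sigma_b**2 / (sigma_b**2 + sigma_o**2)        # scalar optimal gain
analysis = background + k * innov
increment = analysis - background

# var(innovation) = sigma_b^2 + sigma_o^2; the cross-statistics split the two.
print("innovation variance:", innov.var(), "expected:", sigma_b**2 + sigma_o**2)
print("estimated obs error variance:", np.mean((obs - analysis) * innov))   # ~ sigma_o^2
print("estimated bkg error variance:", np.mean(increment * innov))          # ~ sigma_b^2
```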

  15. Using MERRA Gridded Innovation for Quantifying Uncertainties in Analysis Fields and Diagnosing Observing System Inhomogeneities

    NASA Astrophysics Data System (ADS)

    da Silva, A.; Redder, C. R.

    2010-12-01

    MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a feature commonly requested by users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies typically are restricted to long-term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms have the ability to produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data is usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets and not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum-likelihood estimates of background and observation errors, as well as global bias estimates. Starting with the joint PDF of innovations and analysis increments at observation locations we propose a technique for diagnosing bias among the observing systems, and document how these contextual biases have evolved during the satellite era covered by MERRA.

  16. Comparing the spelling and reading abilities of students with cochlear implants and students with typical hearing.

    PubMed

    Apel, Kenn; Masterson, Julie J

    2015-04-01

    The purpose of this study was to determine whether students with and without hearing loss (HL) differed in their spelling abilities and, specifically, in the underlying linguistic awareness skills that support spelling ability. Furthermore, we examined whether there were differences between the two groups in the relationship between reading and spelling. We assessed the spelling, word-level reading, and reading comprehension skills of nine students with cochlear implants and nine students with typical hearing who were matched for reading age. The students' spellings were analyzed to determine whether the misspellings were due to errors with phonemic awareness, orthographic pattern or morphological awareness, or poor mental graphemic representations. The students with HL demonstrated markedly less advanced spelling abilities than the students with typical hearing. For the students with HL, the misspellings were primarily due to deficiencies in orthographic pattern and morphological awareness. Correlations between measures of spelling and both real word reading and reading comprehension were lower for the students with HL. With additional investigations using a similar approach to spelling analysis that captures the underlying causes for spelling errors, researchers will better understand the linguistic awareness abilities that students with HL bring to the task of reading and spelling.

  17. Retrieval of carbon dioxide vertical profiles from solar occultation observations and associated error budgets for ACE-FTS and CASS-FTS

    NASA Astrophysics Data System (ADS)

    Sioris, C. E.; Boone, C. D.; Nassar, R.; Sutton, K. J.; Gordon, I. E.; Walker, K. A.; Bernath, P. F.

    2014-02-01

    An algorithm is developed to retrieve the vertical profile of carbon dioxide in the 5 to 25 km altitude range using mid-infrared solar occultation spectra from the main instrument of the ACE (Atmospheric Chemistry Experiment) mission, namely the Fourier Transform Spectrometer (FTS). The main challenge is to find an atmospheric phenomenon which can be used for accurate tangent height determination in the lower atmosphere, where the tangent heights (THs) calculated from geometric and timing information are not of sufficient accuracy. Error budgets for the retrieval of CO2 from ACE-FTS and the FTS on a potential follow-on mission named CASS (Chemical and Aerosol Sounding Satellite) are calculated and contrasted. Retrieved THs are typically within 60 m of those retrieved using the ACE version 3.x software after revisiting the temperature dependence of the N2 CIA (Collision-Induced Absorption) laboratory measurements and accounting for sulfate aerosol extinction. After correcting for the known residual high bias of ACE version 3.x THs expected from CO2 spectroscopic/isotopic inconsistencies, the remaining bias for tangent heights determined with the N2 CIA is -20 m. CO2 in the 5-13 km range in the 2009-2011 time frame is validated against aircraft measurements from CARIBIC, CONTRAIL and HIPPO, yielding typical biases of -1.7 ppm in the 5-13 km range. The standard error of these biases in this vertical range is 0.4 ppm. The multi-year ACE-FTS dataset is valuable in determining the seasonal variation of the latitudinal gradient which arises from the strong seasonal cycle in the Northern Hemisphere troposphere. The annual growth of CO2 in this time frame is determined to be 2.5 ± 0.7 ppm yr⁻¹, in agreement with the currently accepted global growth rate based on ground-based measurements.

  18. The effect of bandwidth on filter instrument total ozone accuracy

    NASA Technical Reports Server (NTRS)

    Basher, R. E.

    1977-01-01

    The effect of the width and shape of the New Zealand filter instrument's passbands on measured total-ozone accuracy is determined using a numerical model of the spectral measurement process. The model enables the calculation of corrections for the 'bandwidth-effect' error and shows that highly attenuating passband skirts and well-suppressed leakage bands are at least as important as narrow half-bandwidths. Over typical ranges of airmass and total ozone, the range in the bandwidth-effect correction is about 2% in total ozone for the filter instrument, compared with about 1% for the Dobson instrument.

  19. GRANULATION IN THE PHOTOSPHERE OF ζ CYGNI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, David F., E-mail: dfgray@uwo.ca

    2012-05-15

    A series of 35 high-resolution spectra are used to measure the third-signature plot of the G8 III star, ζ Cygni, which shows convective velocities only 8% larger than the Sun's. Bisector mapping yields a flux deficit, a measure of granulation contrast, typical of other giants. The observations also give radial velocities with errors ≈30 m s⁻¹ and allow the orbit to be refined. Velocity excursions relative to the smooth orbital motion, possibly from the granulation, have values exceeding 200 m s⁻¹. Temperature variations were looked for using line-depth ratios, but none were found.

  20. Typicality and Misinformation: Two Sources of Distortion

    ERIC Educational Resources Information Center

    Luna, Karlos; Migueles, Malen

    2008-01-01

    This study examined the effect of two sources of memory error: exposure to post-event information and extracting typical contents from schemata. Participants were shown a video of a bank robbery and presented with high-and low-typicality misinformation extracted from two normative studies. The misleading suggestions consisted of either changes in…

  1. Errors in Focus? Native and Non-Native Perceptions of Error Salience in Hong Kong Student English - A Case Study.

    ERIC Educational Resources Information Center

    Newbrook, Mark

    1990-01-01

    A study compared the perceptions of two experts from different cultural backgrounds concerning salience of a variety of errors typical of the English written by Hong Kong secondary and college students. A book on English error types written by a Hong-Kong born, fluent Chinese-English bilingual linguist was analyzed for its emphases, and a list of…

  2. Integrating different tracking systems in football: multiple camera semi-automatic system, local position measurement and GPS technologies.

    PubMed

    Buchheit, Martin; Allen, Adam; Poon, Tsz Kit; Modonutti, Mattia; Gregson, Warren; Di Salvo, Valter

    2014-12-01

    During the past decade, substantial development of computer-aided tracking technology has occurred. Therefore, we aimed to provide calibration equations to allow the interchangeability of different tracking technologies used in soccer. Eighty-two highly trained soccer players (U14-U17) were monitored during training and one match. Player activity was collected simultaneously with a semi-automatic multiple-camera system (Prozone), local position measurement (LPM) technology (Inmotio) and two global positioning systems (GPSports and VX). Data were analysed with respect to three different field dimensions (small, <30 m², to full pitch, match). Variables provided by the systems were compared, and calibration equations (linear regression models) between each system were calculated for each field dimension. Most metrics differed between the 4 systems, with the magnitude of the differences dependent on both pitch size and the variable of interest. Trivial-to-small between-system differences in total distance were noted. However, high-intensity running distance (>14.4 km·h⁻¹) was slightly-to-moderately greater when tracked with Prozone, and accelerations small-to-very-largely greater with LPM. For most of the equations, the typical error of the estimate was of a moderate magnitude. Interchangeability of the different tracking systems is possible with the provided equations, but care is required given their moderate typical error of the estimate.
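
    The calibration equations referred to are ordinary linear regressions between paired measurements from two systems, reported together with the typical error of the estimate. A minimal sketch of that form follows; the variable names and paired values are illustrative, not the published coefficients.

```python
import numpy as np

def calibration_equation(x_system_a, y_system_b):
    """Fit y ≈ slope*x + intercept between two tracking systems and report
    the typical error of the estimate (SD of the residuals, n-2 dof)."""
    slope, intercept = np.polyfit(x_system_a, y_system_b, 1)
    resid = y_system_b - (slope * x_system_a + intercept)
    tee = resid.std(ddof=2)
    return slope, intercept, tee

# Illustrative paired high-intensity running distances (m) from two systems:
gps = np.array([310.0, 285.0, 402.0, 355.0, 298.0, 441.0])
prozone = np.array([335.0, 300.0, 430.0, 378.0, 320.0, 468.0])
slope, intercept, tee = calibration_equation(gps, prozone)
print(f"Prozone ≈ {slope:.2f}*GPS + {intercept:.1f}  (TEE = {tee:.1f} m)")
```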

  3. Results of the NIST National Ball Plate Round Robin.

    PubMed

    Caskey, G W; Phillips, S D; Borchardt, B R

    1997-01-01

    This report examines the results of the ball plate round robin administered by NIST. The round robin was part of an effort to assess the current state of industry practices for measurements made using coordinate measuring machines. Measurements of a two-dimensional ball plate (240 mm by 240 mm) on 41 coordinate measuring machines were collected and analyzed. Typically, the deviations of the reported X and Y coordinates from the calibrated values were within ± 5 μm, with some coordinate deviations exceeding 20.0 μm. One of the most significant observations from these data was that over 75 % of the participants failed to correctly estimate their measurement error on one or more of the ball plate spheres.

  4. Insights from Synthetic Star-forming Regions. II. Verifying Dust Surface Density, Dust Temperature, and Gas Mass Measurements With Modified Blackbody Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de

    We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. We find from our pixel-by-pixel analysis of the measured dust surface density and dust temperature a worrisome error spread, especially close to star formation sites and in low-density regions, where for those “contaminated” pixels the surface densities can be under- or overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background, typical of the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, as χ² values can then no longer be used as a quality indicator of a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors, especially for the high-mass stellar feedback phase. In upcoming papers (Paper III; Paper IV) of this series we test the reliability of measured star formation rates with direct and indirect techniques.
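
    Modified-blackbody (greybody) fitting of far-infrared photometry, as used here, amounts to fitting each pixel's fluxes with S_ν ∝ Σ κ_ν B_ν(T) in the optically thin limit. The sketch below is generic and assumes a power-law dust opacity κ_ν = κ₀(ν/ν₀)^β; the specific κ₀, β, band set and noise level are placeholders and need not match the paper's choices.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23     # SI constants

def planck(nu, T):
    """Planck function B_nu(T) in SI units (W m^-2 Hz^-1 sr^-1)."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def greybody(nu, log_sigma, T, beta=2.0, kappa0=0.1, nu0=1e12):
    """Optically thin modified blackbody: S_nu ∝ kappa_nu * Sigma * B_nu(T).

    log_sigma is log10 of the dust surface density (arbitrary units here);
    kappa0, nu0 and beta define an assumed power-law dust opacity.
    """
    kappa = kappa0 * (nu / nu0) ** beta
    return 10**log_sigma * kappa * planck(nu, T)

# Herschel-like bands (160, 250, 350, 500 micron) expressed as frequencies:
nu = C / (np.array([160.0, 250.0, 350.0, 500.0]) * 1e-6)
flux = greybody(nu, log_sigma=0.5, T=18.0) \
    * (1 + 0.05 * np.random.default_rng(3).normal(size=4))   # synthetic "data"

popt, pcov = curve_fit(lambda n, ls, T: greybody(n, ls, T), nu, flux, p0=[0.0, 20.0])
print("fitted log10(Sigma), T:", popt)
```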

  5. Error Patterns in Research Papers by Pacific Rim Students.

    ERIC Educational Resources Information Center

    Crowe, Chris

    By looking for patterns of errors in the research papers of Asian students, educators can uncover pedagogical strategies to help students avoid repeating such errors. While a good deal of research has identified a number of sentence-level problems which are typical of Asian students writing in English, little attempt has been made to consider the…

  6. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
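
    One way to fit a weighted negative binomial model of the kind advocated here is with standard GLM tooling. The sketch below uses statsmodels' GLM with a negative binomial family, session length entered as an exposure term, and rater-reliability weights; the variable names, simulated data and weighting scheme are illustrative assumptions, not the authors' exact specification (they provide SPSS and R syntax).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "therapist_skill": rng.normal(size=n),           # predictor (measured with error)
    "session_minutes": rng.uniform(30, 90, size=n),  # conflating third variable
})
mu = np.exp(0.4 * df["therapist_skill"]) * df["session_minutes"] / 60.0
df["n_reflections"] = rng.poisson(mu * rng.gamma(2.0, 0.5, size=n))  # over-dispersed counts
weights = rng.uniform(0.5, 1.0, size=n)              # e.g. coder-reliability weights

X = sm.add_constant(df[["therapist_skill"]])
model = sm.GLM(df["n_reflections"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),
               exposure=df["session_minutes"],       # handles session-length conflation
               var_weights=weights)                  # down-weights less reliable codes
print(model.fit().summary())
```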

  7. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data

    PubMed Central

    Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.

    2015-01-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126

  8. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    NASA Astrophysics Data System (ADS)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
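
    As a minimal illustration of how a cell-level uncertainty surface can be used downstream (an assumed sketch, not NOAA NCEI's implementation), the DEM can be perturbed by its per-cell standard deviation in a Monte Carlo loop and the results summarized as an inundation probability; the grid, sigma values, and flood level below are made up.

        # Hedged sketch: propagate per-cell DEM uncertainty into an inundation
        # probability map via Monte Carlo sampling (illustrative values only).
        import numpy as np

        rng = np.random.default_rng(42)
        ny, nx = 50, 50
        dem = rng.uniform(-2.0, 3.0, size=(ny, nx))    # hypothetical elevations (m)
        sigma = rng.uniform(0.1, 0.5, size=(ny, nx))   # hypothetical 1-sigma DEM uncertainty (m)
        flood_level = 1.0                              # assumed water level (m)

        n_draws = 500
        hits = np.zeros((ny, nx))
        for _ in range(n_draws):
            realization = dem + rng.normal(0.0, sigma) # perturb each cell by its own uncertainty
            hits += (realization < flood_level)

        prob_inundated = hits / n_draws
        print("cells with >50% inundation probability:", int((prob_inundated > 0.5).sum()))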

  9. Frequency-domain gravitational waveform models for inspiraling binary neutron stars

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Kyohei; Kiuchi, Kenta; Kyutoku, Koutarou; Sekiguchi, Yuichiro; Shibata, Masaru; Taniguchi, Keisuke

    2018-02-01

    We develop a model for frequency-domain gravitational waveforms from inspiraling binary neutron stars. Our waveform model is calibrated by comparison with hybrid waveforms constructed from our latest high-precision numerical-relativity waveforms and the SEOBNRv2T waveforms in the frequency range of 10-1000 Hz. We show that the phase difference between our waveform model and the hybrid waveforms is always smaller than 0.1 rad for the binary tidal deformability Λ̃ in the range 300 ≲ Λ̃ ≲ 1900 and for a mass ratio between 0.73 and 1. We show that, for 10-1000 Hz, the distinguishability for the signal-to-noise ratio ≲ 50 and the mismatch between our waveform model and the hybrid waveforms are always smaller than 0.25 and 1.1 × 10⁻⁵, respectively. The systematic error of our waveform model in the measurement of Λ̃ is always smaller than 20 with respect to the hybrid waveforms for 300 ≲ Λ̃ ≲ 1900. The statistical error in the measurement of binary parameters is computed employing our waveform model, and we obtain results consistent with the previous studies. We show that the systematic error of our waveform model is always smaller than 20% (typically smaller than 10%) of the statistical error for events with a signal-to-noise ratio of 50.
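
    The mismatch quoted above is one minus the noise-weighted overlap between waveforms. A minimal sketch of that computation with synthetic stand-ins for the waveforms and detector noise (and with the time-shift maximization omitted for brevity) is:

        # Hedged sketch: noise-weighted overlap and mismatch between two
        # frequency-domain waveforms over 10-1000 Hz. The waveforms and PSD are
        # toy placeholders, not the hybrid or calibrated models of the paper.
        import numpy as np

        f = np.linspace(10.0, 1000.0, 4000)
        df = f[1] - f[0]
        psd = 1.0 + (f / 100.0) ** -4 + (f / 100.0) ** 2     # toy noise spectral shape

        def waveform(tidal_coeff):
            amp = f ** (-7.0 / 6.0)
            phase = tidal_coeff * (f / 1000.0) ** (5.0 / 3.0)  # crude phasing difference
            return amp * np.exp(1j * phase)

        def inner(a, b):
            return 4.0 * np.real(np.sum(a * np.conj(b) / psd) * df)

        h1, h2 = waveform(0.00), waveform(0.05)
        # Maximizing over a constant phase offset amounts to taking the modulus.
        overlap = np.abs(4.0 * np.sum(h1 * np.conj(h2) / psd) * df)
        overlap /= np.sqrt(inner(h1, h1) * inner(h2, h2))
        print("mismatch =", 1.0 - overlap)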

  10. Spatial distribution of errors associated with multistatic meteor radar

    NASA Astrophysics Data System (ADS)

    Hocking, W. K.

    2018-06-01

    With the recent increase in numbers of small and versatile low-power meteor radars, the opportunity exists to benefit from simultaneous application of multiple systems spaced by only a few hundred km and less. Transmissions from one site can be recorded at adjacent receiving sites using various degrees of forward scatter, potentially allowing atmospheric conditions in the mesopause regions between stations to be diagnosed. This can allow a better spatial overview of the atmospheric conditions at any time. Such studies have been carried out using a small network of so-called multistatic meteor radars (MMRs), e.g. Chau et al. (Radio Sci 52:811-828, 2017, https://doi.org/10.1002/2016rs006225). These authors were also able to make measurements of vorticity and divergence. However, measurement uncertainties arise that need to be considered in any application of such techniques. Some errors are so severe that they prohibit useful application of the technique in certain locations, particularly in zones near the midpoints between the radar sites. In this paper, software is developed to allow these errors to be determined, and examples of typical errors involved are discussed. The software should be of value to others who wish to optimize their own MMR systems.

  11. Parametric decadal climate forecast recalibration (DeFoReSt 1.0)

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Bhend, Jonas; Liniger, Mark A.; Rust, Henning W.; Müller, Wolfgang A.; Ulbrich, Uwe

    2018-01-01

    Near-term climate predictions such as decadal climate forecasts are increasingly being used to guide adaptation measures. For near-term probabilistic predictions to be useful, systematic errors of the forecasting systems have to be corrected. While methods for the calibration of probabilistic forecasts are readily available, these have to be adapted to the specifics of decadal climate forecasts, including the long time horizon of decadal climate forecasts, lead-time-dependent systematic errors (drift), and errors in the representation of long-term changes and variability. These features are compounded by small ensemble sizes to describe forecast uncertainty and a relatively short period for which pairs of reforecasts and observations are typically available to estimate calibration parameters. We introduce the Decadal Climate Forecast Recalibration Strategy (DeFoReSt), a parametric approach to recalibrate decadal ensemble forecasts that takes the above specifics into account. DeFoReSt optimizes forecast quality as measured by the continuous ranked probability score (CRPS). Using a toy model to generate synthetic forecast-observation pairs, we demonstrate the positive effect on forecast quality in situations with pronounced and limited predictability. Finally, we apply DeFoReSt to decadal surface temperature forecasts from the MiKlip prototype system and find consistent, and sometimes considerable, improvements in forecast quality compared with a simple calibration of the lead-time-dependent systematic errors.
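
    The score optimized by DeFoReSt, the CRPS, has a simple empirical form for an ensemble forecast: CRPS = mean_i |x_i - y| - 0.5 * mean_{i,j} |x_i - x_j|. A minimal sketch with made-up ensemble members (not the MiKlip data or the authors' code):

        # Hedged sketch: empirical CRPS of an ensemble forecast against one
        # observation (illustrative numbers only).
        import numpy as np

        def crps_ensemble(members, obs):
            members = np.asarray(members, dtype=float)
            term1 = np.mean(np.abs(members - obs))
            term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
            return term1 - term2

        ensemble = np.array([14.8, 15.1, 15.6, 14.5, 15.9, 15.2])  # hypothetical forecasts (degC)
        print("CRPS =", crps_ensemble(ensemble, 15.4))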

  12. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error.

    PubMed

    Carroll, Raymond J; Delaigle, Aurore; Hall, Peter

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  13. Evaluation and mitigation of potential errors in radiochromic film dosimetry due to film curvature at scanning

    PubMed Central

    Bradley, David A.; Nisbet, Andrew

    2015-01-01

    This work considers a previously overlooked uncertainty present in film dosimetry which results from moderate curvature of films during the scanning process. Small film samples are particularly susceptible to film curling which may be undetected or deemed insignificant. In this study, we consider test cases with controlled induced curvature of film and with film raised horizontally above the scanner plate. We also evaluate the difference in scans of a film irradiated with a typical brachytherapy dose distribution with the film naturally curved and with the film held flat on the scanner. Typical naturally occurring curvature of film at scanning, giving rise to a maximum height 1 to 2 mm above the scan plane, may introduce dose errors of 1% to 4%, and considerably reduce gamma evaluation passing rates when comparing film‐measured doses with treatment planning system‐calculated dose distributions, a common application of film dosimetry in radiotherapy. The use of a triple‐channel dosimetry algorithm appeared to mitigate the error due to film curvature compared to conventional single‐channel film dosimetry. The change in pixel value and calibrated reported dose with film curling or height above the scanner plate may be due to variations in illumination characteristics, optical disturbances, or a Callier‐type effect. There is a clear requirement for physically flat films at scanning to avoid the introduction of a substantial error source in film dosimetry. Particularly for small film samples, a compression glass plate above the film is recommended to ensure flat‐film scanning. This effect has been overlooked to date in the literature. PACS numbers: 87.55.Qr, 87.56.bg, 87.55.km PMID:26103181

  14. Developing a weighted measure of speech sound accuracy.

    PubMed

    Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J

    2011-02-01

    To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.

  15. The Trojan Lifetime Champions Health Survey: Development, Validity, and Reliability

    PubMed Central

    Sorenson, Shawn C.; Romano, Russell; Scholefield, Robin M.; Schroeder, E. Todd; Azen, Stanley P.; Salem, George J.

    2015-01-01

    Context: Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. Objective: To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Design: Descriptive laboratory study. Setting: A large National Collegiate Athletic Association Division I university. Patients or Other Participants: A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Intervention(s): Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Main Outcome Measure(s): Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Results: Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. Conclusions: These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL, among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations. PMID:25611315

  16. Evaluation of airborne topographic lidar for quantifying beach changes

    USGS Publications Warehouse

    Sallenger, A.H.; Krabill, W.B.; Swift, R.N.; Brock, J.; List, J.; Hansen, M.; Holman, R.A.; Manizade, S.; Sontag, J.; Meredith, A.; Morgan, K.; Yunkel, J.K.; Frederick, E.B.; Stockdon, H.

    2003-01-01

    A scanning airborne topographic lidar was evaluated for its ability to quantify beach topography and changes during the Sandy Duck experiment in 1997 along the North Carolina coast. Elevation estimates, acquired with NASA's Airborne Topographic Mapper (ATM), were compared to elevations measured with three types of ground-based measurements: (1) a differential-GPS-equipped all-terrain vehicle (ATV) that surveyed a 3-km reach of beach from the shoreline to the dune, (2) a GPS antenna mounted on a stadia rod used to intensely survey a different 100 m reach of beach, and (3) a second GPS-equipped ATV that surveyed a 70-km-long transect along the coast. Over 40,000 individual intercomparisons between ATM and ground surveys were calculated. RMS vertical differences associated with the ATM when compared to ground measurements ranged from 13 to 19 cm. Considering all of the intercomparisons together, RMS ≈ 15 cm. This RMS error represents a total error for individual elevation estimates, including uncertainties associated with random and mean errors. The latter was the largest source of error and was attributed to drift in differential GPS. The ≈15 cm vertical accuracy of the ATM is adequate to resolve beach-change signals typical of the impact of storms. For example, ATM surveys of Assateague Island (spanning the border of MD and VA) prior to and immediately following a severe northeaster showed vertical beach changes in places greater than 2 m, much greater than expected errors associated with the ATM. A major asset of airborne lidar is the high spatial data density. Measurements of elevation are acquired every few m² over regional scales of hundreds of kilometers. Hence, many scales of beach morphology and change can be resolved, from beach cusps tens of meters in wavelength to entire coastal cells comprising tens to hundreds of kilometers of coast. Topographic lidars similar to the ATM are becoming increasingly available from commercial vendors and should, in the future, be widely used in beach surveying.

  17. APOLLO clock performance and normal point corrections

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Murphy, T. W., Jr.; Colmenares, N. R.; Battat, J. B. R.

    2017-12-01

    The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has produced a large volume of high-quality lunar laser ranging (LLR) data since it began operating in 2006. For most of this period, APOLLO has relied on a GPS-disciplined, high-stability quartz oscillator as its frequency and time standard. The recent addition of a cesium clock as part of a timing calibration system initiated a comparison campaign between the two clocks. This has allowed correction of APOLLO range measurements—called normal points—during the overlap period, but also revealed a mechanism to correct for systematic range offsets due to clock errors in historical APOLLO data. Drift of the GPS clock on ∼1000 s timescales contributed typically 2.5 mm of range error to APOLLO measurements, and we find that this may be reduced to ∼1.6 mm on average. We present here a characterization of APOLLO clock errors, the method by which we correct historical data, and the resulting statistics.

  18. Lower limb muscle volume estimation from maximum cross-sectional area and muscle length in cerebral palsy and typically developing individuals.

    PubMed

    Vanmechelen, Inti M; Shortland, Adam P; Noble, Jonathan J

    2018-01-01

    Deficits in muscle volume may be a significant contributor to physical disability in young people with cerebral palsy. However, 3D measurements of muscle volume using MRI or 3D ultrasound may be difficult to make routinely in the clinic. We wished to establish whether accurate estimates of muscle volume could be made from a combination of anatomical cross-sectional area and length measurements in samples of typically developing young people and young people with bilateral cerebral palsy. Lower limb MRI scans were obtained from 21 individuals with cerebral palsy (14.7 ± 3 years, 17 male) and 23 typically developing individuals (16.8 ± 3.3 years, 16 male). The volume, length and anatomical cross-sectional area were estimated for six muscles of the left lower limb. Analysis of covariance demonstrated that the relationship between the length × cross-sectional area product and volume did not differ significantly between the subject groups. Linear regression analysis demonstrated that the product of anatomical cross-sectional area and length bore a strong and significant relationship to the measured muscle volume (R² values between 0.955 and 0.988) with low standard errors of the estimate of 4.8% to 8.9%. This study demonstrates that muscle volume may be estimated accurately in typically developing individuals and individuals with cerebral palsy by a combination of anatomical cross-sectional area and muscle length. 2D ultrasound may be a convenient method of making these measurements routinely in the clinic. Copyright © 2017 Elsevier Ltd. All rights reserved.
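
    The estimation step amounts to a simple linear regression of measured volume on the ACSA x length product; a sketch with synthetic values (not the study's MRI data) that also reports R^2 and the standard error of the estimate as a percentage:

        # Hedged sketch: regress muscle volume on anatomical CSA x length and
        # report R^2 and the standard error of the estimate (synthetic data).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 44
        acsa_cm2 = rng.uniform(5.0, 40.0, size=n)
        length_cm = rng.uniform(15.0, 40.0, size=n)
        volume_cm3 = 0.75 * acsa_cm2 * length_cm * (1.0 + rng.normal(0.0, 0.05, size=n))

        x = acsa_cm2 * length_cm
        slope, intercept = np.polyfit(x, volume_cm3, 1)
        pred = slope * x + intercept

        ss_res = np.sum((volume_cm3 - pred) ** 2)
        r2 = 1.0 - ss_res / np.sum((volume_cm3 - volume_cm3.mean()) ** 2)
        see = np.sqrt(ss_res / (n - 2))                 # standard error of the estimate
        print(f"R^2 = {r2:.3f}, SEE = {100 * see / volume_cm3.mean():.1f}% of mean volume")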

  19. ATM traffic experiments: A laboratory study of service interaction, loss fairness and loss characteristics

    NASA Astrophysics Data System (ADS)

    Helvik, B. E.; Stol, N.

    1995-04-01

    A reference measurement scenario is defined, where an ATM switch (OCTOPUS) is offered traffic from three source types representing the traffic resulting from typical services to be carried by an ATM network. These are high quality video (HQTV), high speed data (HSD) and constant bit rate transfer (CBR). In addition to being typical, these have widely different characteristics. Detailed definitions for these, and other actual source types, are made and entered into the Synthetic Traffic Generator (STG) database. Recommended traffic mixes of these sources are also made. Based on the above, laboratory measurements are carried out to study how the various kinds of traffic influence each other, how fairly the loss is distributed over services and connections, and what loss characteristics are experienced. (Due to a software error detected in the measurement equipment after the work was concluded, the measurements were carried out with an HSD source whose load was less 'aggressive' than intended.) The main findings are: Cell loss is very unfairly distributed among the various connections; a loss burst, which occurs less frequently than the duration of a typical connection, mainly affects one or a few connections. Cell loss is unfairly distributed among the services; loss ratios in the range HSD : HQTV : CBR = 5 : 1 : 0.85 are observed, and unfairness increases with decreasing load burstiness. The loss characteristics vary during a loss burst, from one burst to the next, and between services; hence, it does not seem feasible to use 'typical' loss statistics to study the impairments on various services. In addition, some supplementary work is reported.

  20. Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials

    PubMed Central

    Gilbert, Peter B.; Janes, Holly E.; Huang, Yunda

    2016-01-01

    In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with the detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods accounting for this issue, and the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials, and include an R package implementing the methods. PMID:27037797
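
    One way to see the role of non-biologically-relevant variability is that classical measurement error with reliability rho attenuates a standardized case/non-case biomarker difference by sqrt(rho), which lowers power. A rough two-sample approximation (an assumption for illustration; not the authors' case-cohort methodology or their R package):

        # Hedged sketch: power loss from biomarker measurement error, using a
        # two-sample t-test approximation. Effect size, sample sizes, and
        # reliabilities are made up.
        import numpy as np
        from statsmodels.stats.power import TTestIndPower

        true_d = 0.5                   # hypothetical true standardized mean difference
        n_cases, n_controls = 60, 120
        analysis = TTestIndPower()

        for rho in (1.0, 0.8, 0.6):    # reliability = var(true) / var(observed)
            observed_d = true_d * np.sqrt(rho)          # attenuation from technical noise
            power = analysis.power(effect_size=observed_d, nobs1=n_cases,
                                   alpha=0.05, ratio=n_controls / n_cases)
            print(f"reliability={rho:.1f}  observed d={observed_d:.2f}  power={power:.2f}")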

  1. Shuttle program: Ground tracking data program document shuttle OFT launch/landing

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1977-01-01

    The equations for processing ground tracking data during a space shuttle ascent or entry, or any nonfree flight phase of a shuttle mission, are given. The resulting computer program processes data from up to three stations simultaneously: C-band station number 1; C-band station number 2; and an S-band station. The C-band data consist of range, azimuth, and elevation angle measurements. The S-band data consist of range, two angles, and integrated Doppler data in the form of cycle counts. A nineteen-element state vector is used in a Kalman filter to process the measurements. The acceleration components of the shuttle are taken to be independent exponentially-correlated random variables. Nine elements of the state vector are the measurement bias errors associated with range and two angles for each tracking station. The biases are all modeled as exponentially-correlated random variables with a typical time constant of 108 seconds. All time constants are taken to be the same for all nine state variables. This simplifies the logic in propagating the state error covariance matrix ahead in time.
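
    The bias states described above are first-order Gauss-Markov (exponentially correlated) processes. A minimal sketch of the corresponding discrete-time prediction step, with illustrative numbers rather than the report's actual filter parameters:

        # Hedged sketch: Kalman prediction step for one exponentially correlated
        # (Gauss-Markov) bias state. Time constant and sigma are illustrative.
        import numpy as np

        def predict_gauss_markov(x, P, dt, tau, sigma):
            phi = np.exp(-dt / tau)              # state transition over dt
            q = sigma ** 2 * (1.0 - phi ** 2)    # process noise preserving steady-state variance
            return phi * x, phi ** 2 * P + q

        x, P = 0.0, 10.0 ** 2                    # initial bias estimate and variance
        for _ in range(5):
            x, P = predict_gauss_markov(x, P, dt=1.0, tau=108.0, sigma=10.0)
        print(x, P)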

  2. On Using Taylor's Hypothesis for Three-Dimensional Mixing Layers

    NASA Technical Reports Server (NTRS)

    LeBoeuf, Richard L.; Mehta, Rabindra D.

    1995-01-01

    In the present study, errors in using Taylor's hypothesis to transform measurements obtained in a temporal (or phase) frame onto a spatial one were evaluated. For the first time, phase-averaged ('real') spanwise and streamwise vorticity data measured on a three-dimensional grid were compared directly to those obtained using Taylor's hypothesis. The results show that even the qualitative features of the spanwise and streamwise vorticity distributions given by the two techniques can be very different. This is particularly true in the region of the spanwise roller pairing. The phase-averaged spanwise and streamwise peak vorticity levels given by Taylor's hypothesis are typically lower (by up to 40%) compared to the real measurements.
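
    For reference, the transformation being tested is simply x = -U_c t applied to a fixed-probe time record; a toy sketch (synthetic signal and an assumed convection velocity, not the experiment's phase-averaged data):

        # Hedged sketch: Taylor's frozen-turbulence mapping from a time record at a
        # fixed probe to a streamwise spatial record, x = -U_c * t.
        import numpy as np

        U_c = 10.0                                   # assumed convection velocity (m/s)
        t = np.linspace(0.0, 1.0, 2001)              # probe time record (s)
        u = np.sin(2.0 * np.pi * 25.0 * t)           # toy velocity fluctuation signal

        x = -U_c * t                                 # reconstructed streamwise coordinate (m)
        # Errors arise because real structures evolve while convecting, which is
        # exactly what the direct 3D comparison above quantifies.
        print(x[:3], u[:3])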

  3. Imaging issues for interferometry with CGH null correctors

    NASA Astrophysics Data System (ADS)

    Burge, James H.; Zhao, Chunyu; Zhou, Ping

    2010-07-01

    Aspheric surfaces, such as telescope mirrors, are commonly measured using interferometry with computer generated hologram (CGH) null correctors. The interferometers can be made with high precision and low noise, and CGHs can control wavefront errors to accuracy approaching 1 nm for difficult aspheric surfaces. However, such optical systems are typically poorly suited for high performance imaging. The aspheric surface must be viewed through a CGH that was intentionally designed to introduce many hundreds of waves of aberration. The imaging aberrations create difficulties for the measurements by coupling both geometric and diffraction effects into the measurement. These issues are explored here, and we show how the use of larger holograms can mitigate these effects.

  4. Simultaneous Multiwavelength Variability Characterization of the Free-floating Planetary-mass Object PSO J318.5‑22

    NASA Astrophysics Data System (ADS)

    Biller, Beth A.; Vos, Johanna; Buenzli, Esther; Allers, Katelyn; Bonnefoy, Mickaël; Charnay, Benjamin; Bézard, Bruno; Allard, France; Homeier, Derek; Bonavita, Mariangela; Brandner, Wolfgang; Crossfield, Ian; Dupuy, Trent; Henning, Thomas; Kopytova, Taisiya; Liu, Michael C.; Manjavacas, Elena; Schlieder, Joshua

    2018-02-01

    We present simultaneous Hubble Space Telescope (HST) WFC3+Spitzer IRAC variability monitoring for the highly variable young (∼20 Myr) planetary-mass object PSO J318.5‑22. Our simultaneous HST + Spitzer observations covered approximately two rotation periods with Spitzer and most of a rotation period with the HST. We derive a period of 8.6 ± 0.1 hr from the Spitzer light curve. Combining this period with the measured v sin i for this object, we find an inclination of 56.2° ± 8.1°. We measure peak-to-trough variability amplitudes of 3.4% ± 0.1% for Spitzer Channel 2 and 4.4%–5.8% (typical 68% confidence errors of ∼0.3%) in the near-IR bands (1.07–1.67 μm) covered by the WFC3 G141 prism—the mid-IR variability amplitude for PSO J318.5‑22 is one of the highest variability amplitudes measured in the mid-IR for any brown dwarf or planetary-mass object. Additionally, we detect phase offsets ranging from 200° to 210° (typical error of ∼4°) between synthesized near-IR light curves and the Spitzer mid-IR light curve, likely indicating depth-dependent longitudinal atmospheric structure in this atmosphere. The detection of similar variability amplitudes in wide spectral bands relative to absorption features suggests that the driver of the variability may be inhomogeneous clouds (perhaps a patchy haze layer over thick clouds), as opposed to hot spots or compositional inhomogeneities at the top-of-atmosphere level.

  5. A misleading review of response bias: comment on McGrath, Mitchell, Kim, and Hough (2010).

    PubMed

    Rohling, Martin L; Larrabee, Glenn J; Greiffenstein, Manfred F; Ben-Porath, Yossef S; Lees-Haley, Paul; Green, Paul; Greve, Kevin W

    2011-07-01

    In the May 2010 issue of Psychological Bulletin, R. E. McGrath, M. Mitchell, B. H. Kim, and L. Hough published an article entitled "Evidence for Response Bias as a Source of Error Variance in Applied Assessment" (pp. 450-470). They argued that response bias indicators used in a variety of settings typically have insufficient data to support such use in everyday clinical practice. Furthermore, they claimed that despite 100 years of research into the use of response bias indicators, "a sufficient justification for [their] use… in applied settings remains elusive" (p. 450). We disagree with McGrath et al.'s conclusions. In fact, we assert that the relevant and voluminous literature that has addressed the issues of response bias substantiates the validity of these indicators. In addition, we believe that response bias measures should be used in clinical and research settings on a regular basis. The empirical evidence for the use of response bias measures is strongest in clinical neuropsychology. We argue that McGrath et al.'s erroneous perspective on response bias measures is a result of 3 errors in their research methodology: (a) inclusion criteria for relevant studies that are too narrow; (b) errors in interpreting results of the empirical research they did include; and (c) evidence of a confirmatory bias in selectively citing the literature, as evidence of moderation appears to have been overlooked. Finally, consulting experts in the field, who might have highlighted these errors prior to publication, may have prevented such critiques during the review process.

  6. 2D/3D fetal cardiac dataset segmentation using a deformable model.

    PubMed

    Dindoyal, Irving; Lambrou, Tryphon; Deng, Jing; Todd-Pokropek, Andrew

    2011-07-01

    To segment the fetal heart in order to facilitate the 3D assessment of cardiac function and structure. Ultrasound acquisition typically results in drop-out artifacts of the chamber walls. The authors outline a level set deformable model to automatically delineate the small fetal cardiac chambers. The level set is penalized from growing into an adjacent cardiac compartment using a novel collision detection term. The region-based model allows simultaneous segmentation of all four cardiac chambers from a user-defined seed point placed in each chamber. The segmented boundaries are automatically penalized from intersecting at walls with signal dropout. Root mean square errors of the perpendicular distances between the algorithm's delineation and manual tracings are within 2 mm, which is less than 10% of the length of a typical fetal heart. The ejection fractions were determined from the 3D datasets. We validate the algorithm using a physical phantom and obtain volumes that are comparable to those from physically determined means. The algorithm segments volumes with an error of within 13% as determined using a physical phantom. Our original work in fetal cardiac segmentation compares automatic and manual tracings to a physical phantom and also measures inter-observer variation.

  7. Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luczak, Marcin; Dziedziech, Kajetan; Peeters, Bart

    2010-05-28

    The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters...) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.

  8. Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade

    NASA Astrophysics Data System (ADS)

    Luczak, Marcin; Dziedziech, Kajetan; Vivolo, Marianna; Desmet, Wim; Peeters, Bart; Van der Auweraer, Herman

    2010-05-01

    The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters…) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.

  9. Acquisition of Pragmatic Routines by Learners of L2 English: Investigating Common Errors and Sources of Pragmatic Fossilization

    ERIC Educational Resources Information Center

    Tajeddin, Zia; Alemi, Minoo; Pashmforoosh, Roya

    2017-01-01

    Unlike linguistic fossilization, pragmatic fossilization has received scant attention in fossilization research. To bridge this gap, the present study adopted a typical-error method of fossilization research to identify the most frequent errors in pragmatic routines committed by Persian-speaking learners of L2 English and explore the sources of…

  10. Speech abilities in preschool children with speech sound disorder with and without co-occurring language impairment.

    PubMed

    Macrae, Toby; Tyler, Ann A

    2014-10-01

    The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different tests of articulation/phonology, percent consonants correct, and the number of omission, substitution, distortion, typical, and atypical error patterns used in the production of different wordlists that had similar levels of phonetic and structural complexity. In comparison with children with SSD only, children with SSD and LI used similar numbers but different types of errors, including more omission patterns (p < .001, d = 1.55) and fewer distortion patterns (p = .022, d = 1.03). There were no significant differences in substitution, typical, and atypical error pattern use. Frequent omission error pattern use may reflect a more compromised linguistic system characterized by absent phonological representations for target sounds (see Shriberg et al., 2005). Research is required to examine the diagnostic potential of early frequent omission error pattern use in predicting later diagnoses of co-occurring SSD and LI and/or reading problems.

  11. Cognitive emotion regulation enhances aversive prediction error activity while reducing emotional responses.

    PubMed

    Mulej Bratec, Satja; Xie, Xiyao; Schmid, Gabriele; Doll, Anselm; Schilbach, Leonhard; Zimmer, Claus; Wohlschläger, Afra; Riedl, Valentin; Sorg, Christian

    2015-12-01

    Cognitive emotion regulation is a powerful way of modulating emotional responses. However, despite the vital role of emotions in learning, it is unknown whether the effect of cognitive emotion regulation also extends to the modulation of learning. Computational models indicate prediction error activity, typically observed in the striatum and ventral tegmental area, as a critical neural mechanism involved in associative learning. We used model-based fMRI during aversive conditioning with and without cognitive emotion regulation to test the hypothesis that emotion regulation would affect prediction error-related neural activity in the striatum and ventral tegmental area, reflecting an emotion regulation-related modulation of learning. Our results show that cognitive emotion regulation reduced emotion-related brain activity, but increased prediction error-related activity in a network involving ventral tegmental area, hippocampus, insula and ventral striatum. While the reduction of response activity was related to behavioral measures of emotion regulation success, the enhancement of prediction error-related neural activity was related to learning performance. Furthermore, functional connectivity between the ventral tegmental area and ventrolateral prefrontal cortex, an area involved in regulation, was specifically increased during emotion regulation and likewise related to learning performance. Our data, therefore, provide first-time evidence that beyond reducing emotional responses, cognitive emotion regulation affects learning by enhancing prediction error-related activity, potentially via tegmental dopaminergic pathways. Copyright © 2015 Elsevier Inc. All rights reserved.
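
    The regressor at the heart of such model-based analyses is a trial-wise prediction error; a minimal Rescorla-Wagner-style sketch (learning rate, trial sequence, and outcome coding are made up, and this is not the authors' exact computational model):

        # Hedged sketch: trial-wise prediction errors from a Rescorla-Wagner update,
        # of the kind used as parametric regressors in model-based fMRI.
        import numpy as np

        rng = np.random.default_rng(7)
        shocks = (rng.random(40) < 0.5).astype(float)   # hypothetical aversive outcomes
        alpha, value = 0.2, 0.0                         # assumed learning rate, initial expectation

        prediction_errors = np.zeros(shocks.size)
        for t, outcome in enumerate(shocks):
            delta = outcome - value                     # prediction error
            prediction_errors[t] = delta
            value += alpha * delta                      # expectation update

        # prediction_errors would then be convolved with an HRF and regressed on BOLD.
        print(prediction_errors[:5])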

  12. Automatic alignment for three-dimensional tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.

    2018-02-01

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.

  13. The penta-prism LTP: A long-trace-profiler with stationary optical head and moving penta prism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, S.; Jark, W.; Takacs, P.Z.

    1995-03-01

    Metrology requirements for optical components for third-generation synchrotron sources are taxing the state of the art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long-Trace-Profiler II, and have demonstrated that, with some simple modifications, we can significantly reduce the effect of error sources and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, as in the original pencil-beam interferometer design of von Bieren, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.

  14. Doppler Global Velocimetry at NASA Glenn Research Center: System Discussion and Results

    NASA Technical Reports Server (NTRS)

    Lant, Christian T.

    2003-01-01

    A ruggedized Doppler Global Velocimetry system has been built and tested at NASA Glenn Research Center. Single-component planar velocity measurements of subsonic and supersonic flows from an under-expanded free jet are reported, and they agree well with predicted values. An error analysis evaluates geometric and spectral error terms, and characterizes speckle noise in isotropic data. A multimode, fused fiber optic bundle is demonstrated to couple up to 650 mJ/pulse of laser light without burning or fiber ablation, and without evidence of Stimulated Brillouin Scattering or other spectral-broadening problems. Comparisons are made between spinning wheel data using illumination by free-space beam propagation and fiber optic beam delivery. The fiber bundle illumination is found to provide more spatially even and stable illumination than is typically available from pulsed Nd:YAG laser beams. The fiber bundle beam delivery is also a step toward making remote measurements and automatic real-time plume sectioning feasible in wind tunnel environments.

  15. Performance of a Space-Based Wavelet Compressor for Plasma Count Data on the MMS Fast Plasma Investigation

    NASA Technical Reports Server (NTRS)

    Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.

    2017-01-01

    Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images had significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable for data compression for plasma measurement instruments and can be recommended for future missions.
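
    The flight algorithm is the DWT/BPE standard; purely to illustrate the lossy step it relies on, here is a generic wavelet-transform-and-truncate sketch on a synthetic count map (an analogy, not the on-board implementation):

        # Hedged sketch of the generic DWT idea: transform a count map, keep only the
        # largest coefficients, reconstruct, and check the error. NOT the DWT/BPE
        # coder flown on MMS.
        import numpy as np
        import pywt

        rng = np.random.default_rng(11)
        sky_map = rng.poisson(lam=5.0, size=(32, 64)).astype(float)   # synthetic counts

        coeffs = pywt.wavedec2(sky_map, wavelet="haar", level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)

        threshold = np.quantile(np.abs(arr), 0.90)       # keep ~10% of coefficients
        arr = np.where(np.abs(arr) >= threshold, arr, 0.0)

        recon = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                              wavelet="haar")
        print("relative L1 error:", np.abs(recon - sky_map).sum() / sky_map.sum())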

  16. From the Lab to the real world : sources of error in UF {sub 6} gas enrichment monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lombardi, Marcie L.

    2012-03-01

    Safeguarding uranium enrichment facilities is a serious concern for the International Atomic Energy Agency (IAEA). Safeguards methods have changed over the years, most recently switching to an improved safeguards model that calls for new technologies to help keep up with the increasing size and complexity of today’s gas centrifuge enrichment plants (GCEPs). One of the primary goals of the IAEA is to detect the production of uranium at levels greater than those an enrichment facility may have declared. In order to accomplish this goal, new enrichment monitors need to be as accurate as possible. This dissertation will look at the Advanced Enrichment Monitor (AEM), a new enrichment monitor designed at Los Alamos National Laboratory. Specifically explored are various factors that could potentially contribute to errors in a final enrichment determination delivered by the AEM. There are many factors that can cause errors in the determination of uranium hexafluoride (UF₆) gas enrichment, especially during the period when the enrichment is being measured in an operating GCEP. To measure enrichment using the AEM, a passive 186-keV (kiloelectronvolt) measurement is used to determine the ²³⁵U content in the gas, and a transmission measurement or a gas pressure reading is used to determine the total uranium content. A transmission spectrum is generated using an x-ray tube and a “notch” filter. In this dissertation, changes that could occur in the detection efficiency and the transmission errors that could result from variations in pipe-wall thickness will be explored. Additional factors that could contribute to errors in the enrichment measurement will also be examined, including changes in the gas pressure, ambient and UF₆ temperature, instrumental errors, and the effects of uranium deposits on the inside of the pipe walls. The sensitivity of the enrichment calculation to these various parameters will then be evaluated. Previously, UF₆ gas enrichment monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.

  17. The Reliability of a Three-Dimensional Photo System- (3dMDface-) Based Evaluation of the Face in Cleft Lip Infants

    PubMed Central

    Ort, Rebecca; Metzler, Philipp; Kruse, Astrid L.; Matthews, Felix; Zemann, Wolfgang; Grätz, Klaus W.; Luebbers, Heinz-Theo

    2012-01-01

    Ample data exist about the high precision of three-dimensional (3D) scanning devices and their data acquisition of the facial surface. However, a question remains regarding which facial landmarks are reliable if identified in 3D images taken under clinical circumstances. Sources of error to be addressed could be technical, user dependent, or related to the patient's anatomy. Based on clinical 3D photos taken with the 3dMDface system, the intra-observer repeatability of 27 facial landmarks in six cleft lip (CL) infants and one non-CL infant was evaluated based on a total of over 1,100 measurements. Data acquisition was sometimes challenging but successful in all patients. The mean error was 0.86 mm, with a range of 0.39 mm (exocanthion) to 2.21 mm (soft gonion). Typically, landmarks provided a small mean error but still showed quite a high variance in measurements, for example, exocanthion from 0.04 mm to 0.93 mm. Conversely, relatively imprecise landmarks can still provide accurate data regarding specific spatial planes. One must be aware that the degree of precision depends on the landmarks and spatial planes in question. In clinical investigations, the degree of reliability of the landmarks evaluated should be taken into account. Additional reliability can be achieved via repeated measurements. PMID:22919476

  18. The Red Edge Problem in asteroid band parameter analysis

    NASA Astrophysics Data System (ADS)

    Lindsay, Sean S.; Dunn, Tasha L.; Emery, Joshua P.; Bowles, Neil E.

    2016-04-01

    Near-infrared reflectance spectra of S-type asteroids contain two absorptions at 1 and 2 μm (band I and II) that are diagnostic of mineralogy. A parameterization of these two bands is frequently employed to determine the mineralogy of S(IV) asteroids through the use of ordinary chondrite calibration equations that link the mineralogy to band parameters. The most widely used calibration study uses a Band II terminal wavelength point (red edge) at 2.50 μm. However, due to the limitations of the NIR detectors on prominent telescopes used in asteroid research, spectral data for asteroids are typically only reliable out to 2.45 μm. We refer to this discrepancy as "The Red Edge Problem." In this report, we evaluate the associated errors for measured band area ratios (BAR = Area BII/BI) and calculated relative abundance measurements. We find that the Red Edge Problem is often not the dominant source of error for the observationally limited red edge set at 2.45 μm, but it frequently is for a red edge set at 2.40 μm. The error, however, is one sided and therefore systematic. As such, we provide equations to adjust measured BARs to values with a different red edge definition. We also provide new ol/(ol+px) calibration equations for red edges set at 2.40 and 2.45 μm.
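
    The band parameters themselves are straightforward to compute: each band area is the area between the spectrum and a straight-line continuum, and BAR = Area BII / Area BI. A sketch on a synthetic spectrum, with illustrative (not the paper's) band-edge choices, showing how the red-edge definition changes the ratio:

        # Hedged sketch: band areas below linear continua and the band area ratio
        # (BAR = Area BII / Area BI) for two red-edge choices. Synthetic spectrum;
        # band edges are illustrative.
        import numpy as np

        wl = np.linspace(0.75, 2.50, 600)                 # wavelength (micron)
        refl = (1.0
                - 0.25 * np.exp(-0.5 * ((wl - 0.95) / 0.10) ** 2)   # ~1 micron band
                - 0.15 * np.exp(-0.5 * ((wl - 1.95) / 0.25) ** 2))  # ~2 micron band

        def band_area(start, stop):
            m = (wl >= start) & (wl <= stop)
            x, y = wl[m], refl[m]
            continuum = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])
            return np.trapz(continuum - y, x)

        band1 = band_area(0.80, 1.45)
        for red_edge in (2.40, 2.45):
            print(f"red edge {red_edge:.2f} um: BAR = {band_area(1.45, red_edge) / band1:.3f}")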

  19. BEAM-FORMING ERRORS IN MURCHISON WIDEFIELD ARRAY PHASED ARRAY ANTENNAS AND THEIR EFFECTS ON EPOCH OF REIONIZATION SCIENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neben, Abraham R.; Hewitt, Jacqueline N.; Dillon, Joshua S.

    2016-03-20

    Accurate antenna beam models are critical for radio observations aiming to isolate the redshifted 21 cm spectral line emission from the Dark Ages and the Epoch of Reionization (EOR) and unlock the scientific potential of 21 cm cosmology. Past work has focused on characterizing mean antenna beam models using either satellite signals or astronomical sources as calibrators, but antenna-to-antenna variation due to imperfect instrumentation has remained unexplored. We characterize this variation for the Murchison Widefield Array (MWA) through laboratory measurements and simulations, finding typical deviations of the order of ±10%–20% near the edges of the main lobe and in the sidelobes. We consider the ramifications of these results for image- and power spectrum-based science. In particular, we simulate visibilities measured by a 100 m baseline and find that using an otherwise perfect foreground model, unmodeled beam-forming errors severely limit foreground subtraction accuracy within the region of Fourier space contaminated by foreground emission (the “wedge”). This region likely contains much of the cosmological signal, and accessing it will require measurement of per-antenna beam patterns. However, unmodeled beam-forming errors do not contaminate the Fourier space region expected to be free of foreground contamination (the “EOR window”), showing that foreground avoidance remains a viable strategy.

  20. Verb inflection in monolingual Dutch and sequential bilingual Turkish-Dutch children with and without SLI.

    PubMed

    Blom, Elma; de Jong, Jan; Orgassa, Antje; Baker, Anne; Weerman, Fred

    2013-01-01

    Both children with specific language impairment (SLI) and children who acquire a second language (L2) make errors with verb inflection. This overlap between SLI and L2 raises the question if verb inflection can discriminate between L2 children with and without SLI. In this study we addressed this question for Dutch. The secondary goal of the study was to investigate variation in error types and error profiles across groups. Data were collected from 6-8-year-old children with SLI who acquire Dutch as their first language (L1), Dutch L1 children with a typical development (TD), Dutch L2 children with SLI, and Dutch L1 TD children who were on average 2 years younger. An experimental elicitation task was employed that tested use of verb inflection; context (3SG, 3PL) was manipulated and word order and verb type were controlled. Accuracy analyses revealed effects of impairment in both L1 and L2 children with SLI. However, individual variation indicated that there is no specific error profile for SLI. Verb inflection use as measured in our study discriminated fairly well in the L1 group but classification was less accurate in the L2 group. Between-group differences emerged furthermore for certain types of errors, but all groups also showed considerable variation in errors and there was not a specific error profile that distinguished SLI from TD. © 2013 Royal College of Speech and Language Therapists.

  1. Is the Speech Transmission Index (STI) a robust measure of sound system speech intelligibility performance?

    NASA Astrophysics Data System (ADS)

    Mapp, Peter

    2002-11-01

    Although RaSTI is a good indicator of the speech intelligibility capability of auditoria and similar spaces, during the past 2-3 years it has been shown that RaSTI is not a robust predictor of sound system intelligibility performance. Instead, it is now recommended, within both national and international codes and standards, that full STI measurement and analysis be employed. However, new research is reported that indicates that STI is not as flawless or robust as many believe. The paper highlights a number of potential error mechanisms. It is shown that the measurement technique and signal excitation stimulus can have a significant effect on the overall result and accuracy, particularly where DSP-based equipment is employed. It is also shown that in its current state of development, STI is not capable of appropriately accounting for a number of fundamental speech and system attributes, including typical sound system frequency response variations and anomalies. This is particularly shown to be the case when a system is operating under reverberant conditions. Comparisons between actual system measurements and corresponding word score data are reported, where errors of up to 50% are found. The implications for VA and PA system performance verification will be discussed.
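
    For orientation, the core STI arithmetic converts each measured modulation transfer value m into an apparent signal-to-noise ratio, clips it, and averages the resulting transmission indices. The sketch below shows only that step with made-up m values, deliberately omitting the octave-band weighting, redundancy, and masking corrections of the full standard (IEC 60268-16):

        # Hedged sketch of the core STI arithmetic only (no band weighting or
        # masking corrections); the modulation transfer values are made up.
        import numpy as np

        m = np.array([0.90, 0.80, 0.70, 0.55, 0.45, 0.35, 0.30,
                      0.28, 0.25, 0.22, 0.20, 0.18, 0.15, 0.12])   # 14 modulation frequencies

        snr_apparent = 10.0 * np.log10(m / (1.0 - m))   # apparent SNR in dB
        ti = (np.clip(snr_apparent, -15.0, 15.0) + 15.0) / 30.0
        print(f"simplified STI-like value: {ti.mean():.2f}")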

  2. Numerical correction of the phase error due to electromagnetic coupling effects in 1D EIT borehole measurements

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Zimmermann, E.; Huisman, J. A.; Treichel, A.; Wolters, B.; van Waasen, S.; Kemna, A.

    2012-12-01

    Spectral Electrical Impedance Tomography (EIT) allows obtaining images of the complex electrical conductivity for a broad frequency range (mHz to kHz). It has recently received increased interest in the field of near-surface geophysics and hydrogeophysics because of the relationships between complex electrical properties and hydrogeological and biogeochemical properties and processes observed in the laboratory with Spectral Induced Polarization (SIP). However, these laboratory results have also indicated that a high phase accuracy is required for surface and borehole EIT measurements because many soils and sediments are only weakly polarizable and show phase angles between 1 and 20 mrad. In the case of borehole EIT measurements, long cables and electrode chains (>10 meters) are typically used, which leads to undesired inductive coupling between the electric loops for current injection and potential measurement and capacitive coupling between the electrically conductive cable shielding and the soil. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurement to the mHz to Hz range. The aim of this study is i) to develop correction procedures for these coupling effects to extend the applicability of EIT to the kHz range and ii) to validate these corrections using controlled laboratory measurements and field measurements. In order to do so, the inductive coupling effect was modeled using electronic circuit models and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 2 mrad in the frequency range up to 10 kHz was achieved. In a field demonstration using a 25 m borehole chain with 8 electrodes with 1 m electrode separation, the corrections were also applied within a 1D inversion of the borehole EIT measurements. The results show that the correction methods increased the measurement accuracy considerably.

  3. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Freestate, David; Riley, Cameron

    2016-11-01

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.
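    As an aside for readers unfamiliar with transposition, the sketch below illustrates why transposed and directly measured POA irradiance can differ. It uses the simple isotropic-sky transposition formula with made-up numbers; it is not the transposition model or the SAM interface described in the paper, and the "measured" POA value is hypothetical.

```python
import numpy as np

def poa_isotropic(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
    """Plane-of-array irradiance from components using the simple isotropic-sky
    transposition: beam + isotropic sky-diffuse + ground-reflected terms."""
    tilt = np.radians(tilt_deg)
    aoi = np.radians(aoi_deg)
    beam = dni * np.maximum(np.cos(aoi), 0.0)
    sky_diffuse = dhi * (1.0 + np.cos(tilt)) / 2.0
    ground = ghi * albedo * (1.0 - np.cos(tilt)) / 2.0
    return beam + sky_diffuse + ground

# Hypothetical one-hour record: component data plus an independently measured POA value.
dni, dhi, ghi = 700.0, 120.0, 650.0           # W/m^2
poa_modeled = poa_isotropic(dni, dhi, ghi, tilt_deg=25.0, aoi_deg=30.0)
poa_measured = 745.0                          # W/m^2, hypothetical pyranometer reading

print(f"transposed POA: {poa_modeled:6.1f} W/m^2")
print(f"measured POA:   {poa_measured:6.1f} W/m^2")
print(f"transposition residual: {poa_modeled - poa_measured:+.1f} W/m^2")
# Feeding the measured POA directly into the performance model removes this residual.
```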

  4. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Freestate, David; Hobbs, William

    2016-11-21

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  5. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Freestate, David; Hobbs, William

    2016-06-05

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  6. Sampling problems: The small scale structure of precipitation

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1981-01-01

    The quantitative measurement of precipitation characteristics for any area on the surface of the Earth is not an easy task. Precipitation is rather variable in both space and time, and the distribution of surface rainfall at a given location is typically substantially skewed. There are a number of precipitation processes at work in the atmosphere, and few of them are well understood. The formal theory on sampling and estimating precipitation appears considerably deficient. Little systematic attention is given to nonsampling errors that always arise in utilizing any measurement system. Although the precipitation measurement problem is an old one, it continues to be one that is in need of systematic and careful attention. A brief history of the presently competing measurement technologies should aid us in understanding the problems inherent in this measurement task.

  7. Archie's law - a reappraisal

    NASA Astrophysics Data System (ADS)

    Glover, Paul W. J.

    2016-07-01

    When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
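    A short synthetic fit illustrates the compensation mechanism described above. The sketch assumes, purely for illustration, a pore-fluid salinity error that scales every apparent formation factor by a constant; the data, noise level and error magnitude are made up and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic core plugs obeying the theoretical Archie law: F = phi**(-m), i.e. a = 1.
m_true = 2.0
phi = rng.uniform(0.08, 0.30, 40)
F_true = phi ** (-m_true)

# Hypothetical systematic pore-fluid salinity error: the assumed brine resistivity is
# 15% too low, so every apparent formation factor is scaled by the same factor.
F_app = 1.15 * F_true * rng.lognormal(0.0, 0.02, phi.size)   # plus small random scatter

logF, logphi = np.log10(F_app), np.log10(phi)

# Fit 1: original Archie form, log F = -m log phi (a forced to 1, regression through origin).
m_forced = -np.sum(logphi * logF) / np.sum(logphi ** 2)

# Fit 2: Winsauer form, log F = log a - m log phi (a free).
slope, intercept = np.polyfit(logphi, logF, 1)
m_free, a_free = -slope, 10.0 ** intercept

print(f"true m            : {m_true:.2f}")
print(f"a forced to 1     : m = {m_forced:.2f}")                   # biased by the salinity error
print(f"a free (Winsauer) : m = {m_free:.2f}, a = {a_free:.2f}")   # a absorbs the error, m is recovered
```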

  8. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  9. Precision rectifier detectors for ac resistance bridge measurements with application to temperature control systems for irradiation creep experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, M. G.

    The suitability of several temperature measurement schemes for an irradiation creep experiment is examined. It is found that the specimen resistance can be used to measure and control the sample temperature if compensated for resistance drift due to radiation and annealing effects. A modified Kelvin bridge is presented that allows compensation for resistance drift by periodically checking the sample resistance at a controlled ambient temperature. A new phase-insensitive method for detecting the bridge error signals is presented. The phase-insensitive detector is formed by averaging the magnitude of two bridge voltages. Although this method is substantially less sensitive to stray reactances in the bridge than conventional phase-sensitive detectors, it is sensitive to gain stability and linearity of the rectifier circuits. Accuracy limitations of rectifier circuits are examined both theoretically and experimentally in great detail. Both hand analyses and computer simulations of rectifier errors are presented. Finally, the design of a temperature control system based on sample resistance measurement is presented. The prototype is shown to control a 316 stainless steel sample to within a 0.15 °C short-term (10 sec) and a 0.03 °C long-term (10 min) standard deviation at temperatures between 150 and 700 °C. The phase-insensitive detector typically contributes less than 10 ppm peak resistance measurement error (0.04 °C at 700 °C for 316 stainless steel or 0.005 °C at 150 °C for zirconium).

  10. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    PubMed

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
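    The flavour of a linear range-error model fitted by least squares, and its use to correct fresh UWB range estimates before localization, can be sketched as follows. The coefficients, noise level and variable names are illustrative assumptions, not the authors' measured model or dataset.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical calibration campaign: true anchor-tag distances and the UWB range
# estimates, which here follow a linear error model  d_hat = alpha*d + beta + noise.
d_true = np.linspace(1.0, 20.0, 60)                     # metres
alpha_true, beta_true = 1.03, 0.25
d_hat = alpha_true * d_true + beta_true + rng.normal(0.0, 0.05, d_true.size)

# Least-squares fit of the linear error model.
A = np.column_stack([d_true, np.ones_like(d_true)])
(alpha, beta), *_ = np.linalg.lstsq(A, d_hat, rcond=None)

# At run time, invert the fitted model to correct new range estimates.
def correct(d_measured, alpha=alpha, beta=beta):
    return (d_measured - beta) / alpha

raw = 12.60                      # a new, uncorrected UWB range estimate (m)
print(f"fitted alpha = {alpha:.3f}, beta = {beta:.3f} m")
print(f"raw range {raw:.2f} m  ->  corrected {correct(raw):.2f} m")
```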

  11. Masked and unmasked error-related potentials during continuous control and feedback

    NASA Astrophysics Data System (ADS)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor’s position by means of a joystick. The cursor’s position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor’s trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.

  12. A field technique for estimating aquifer parameters using flow log data

    USGS Publications Warehouse

    Paillet, Frederick L.

    2000-01-01

    A numerical model is used to predict flow along intervals between producing zones in open boreholes for comparison with measurements of borehole flow. The model gives flow under quasi-steady conditions as a function of the transmissivity and hydraulic head in an arbitrary number of zones communicating with each other along open boreholes. The theory shows that the amount of inflow to or outflow from the borehole under any one flow condition may not indicate relative zone transmissivity. A unique inversion for both hydraulic-head and transmissivity values is possible if flow is measured under two different conditions such as ambient and quasi-steady pumping, and if the difference in open-borehole water level between the two flow conditions is measured. The technique is shown to give useful estimates of water levels and transmissivities of two or more water-producing zones intersecting a single interval of open borehole under typical field conditions. Although the modeling technique involves some approximation, the principal limit on the accuracy of the method under field conditions is the measurement error in the flow log data. Flow measurements and pumping conditions are usually adjusted so that transmissivity estimates are most accurate for the most transmissive zones, and relative measurement error is proportionately larger for less transmissive zones. The most effective general application of the borehole-flow model results when the data are fit to models that systematically include more production zones of progressively smaller transmissivity values until model results show that all accuracy in the data set is exhausted.

  13. Evaluation of the depth-integration method of measuring water discharge in large rivers

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    1992-01-01

    The depth-integration method for measuring water discharge makes a continuous measurement of the water velocity from the water surface to the bottom at 20 to 40 locations or verticals across a river. It is especially practical for large rivers where river traffic makes it impractical to use boats attached to taglines strung across the river or to use current meters suspended from bridges. This method has the additional advantage over the standard two- and eight-tenths method in that a discharge-weighted suspended-sediment sample can be collected at the same time. When this method is used in large rivers such as the Missouri, Mississippi and Ohio, a microwave navigation system is used to determine the ship's position at each vertical sampling location across the river, and to make accurate velocity corrections to compensate for ship drift. An essential feature is a hydraulic winch that can lower and raise the current meter at a constant transit velocity so that the velocities at all depths are measured for equal lengths of time. Field calibration measurements show that: (1) the mean velocity measured on the upcast (bottom to surface) is within 1% of the standard mean velocity determined by 9-11 point measurements; (2) if the transit velocity is less than 25% of the mean velocity, then the average error in the mean velocity is 4% or less. The major source of bias error is a result of mounting the current meter above a sounding weight and sometimes above a suspended-sediment sampling bottle, which prevents measurement of the velocity all the way to the bottom. The measured mean velocity is slightly larger than the true mean velocity. This bias error in the discharge is largest in shallow water (approximately 8% for the Missouri River at Hermann, MO, where the mean depth was 4.3 m) and smallest in deeper water (approximately 3% for the Mississippi River at Vicksburg, MS, where the mean depth was 14.5 m). The major source of random error in the discharge is the natural variability of river velocities, which we assumed to be independent and random at each vertical. The standard error of the estimated mean velocity, at an individual vertical sampling location, may be as large as 9% for large sand-bed alluvial rivers. The computed discharge, however, is a weighted mean of these random velocities. Consequently, the standard error of the computed discharge is divided by the square root of the number of verticals, producing typical values between 1 and 2%. The discharges measured by the depth-integration method agreed within ±5% of those measured simultaneously by the standard two- and eight-tenths, six-tenths and moving-boat methods. © 1992.
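    The √N reduction of the discharge error quoted above follows from treating the discharge as a weighted sum of per-vertical velocities with independent errors. A minimal sketch with made-up cross-section numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cross-section sampled at 25 verticals (all numbers are made up).
n_verticals = 25
widths = np.full(n_verticals, 40.0)                  # m, width assigned to each vertical
depths = rng.uniform(3.0, 15.0, n_verticals)         # m
velocities = rng.uniform(0.8, 1.8, n_verticals)      # m/s, depth-integrated mean velocity

partial_q = widths * depths * velocities             # mid-section partial discharges
Q = partial_q.sum()                                  # total discharge, m^3/s

# If each vertical's mean velocity carries an independent random error of ~9%,
# the discharge (a weighted sum of those velocities) has a much smaller relative error.
rel_err_vertical = 0.09
weights = partial_q / Q
rel_err_Q = rel_err_vertical * np.sqrt(np.sum(weights ** 2))   # ~9% / sqrt(25) for similar weights

print(f"Q ~ {Q:,.0f} m^3/s")
print(f"relative standard error of Q ~ {100 * rel_err_Q:.1f}%")
```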

  14. Analysis of Children's Errors in Comprehension and Expression

    ERIC Educational Resources Information Center

    Hatcher, Ryan C.; Breaux, Kristina C.; Liu, Xiaochen; Bray, Melissa A.; Ottone-Cross, Karen L.; Courville, Troy; Luria, Sarah R.; Langley, Susan Dulong

    2017-01-01

    Children's oral language skills typically begin to develop sooner than their written language skills; however, the four language systems (listening, speaking, reading, and writing) then develop concurrently as integrated strands that influence one another. This research explored relationships between students' errors in language comprehension of…

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisk, William J.; Sullivan, Douglas

    This pilot scale study evaluated the counting accuracy of two people counting systems that could be used in demand controlled ventilation systems to provide control signals for modulating outdoor air ventilation rates. The evaluations included controlled challenges of the people counting systems using pre-planned movements of occupants through doorways and evaluations of counting accuracies when naive occupants (i.e., occupants unaware of the counting systems) passed through the entrance doors of the building or room. The two people counting systems had high counting accuracies, with errors typically less than 10 percent, for typical non-demanding counting events. However, counting errors were high in some highly challenging situations, such as multiple people passing simultaneously through a door. Counting errors, for at least one system, can be very high if people stand in the field of view of the sensor. Both counting systems have limitations and would need to be used only at appropriate sites and where the demanding situations that led to counting errors were rare.

  16. Uncharted territory: measuring costs of diagnostic errors outside the medical record.

    PubMed

    Schwartz, Alan; Weiner, Saul J; Weaver, Frances; Yudkowsky, Rachel; Sharma, Gunjan; Binns-Calvey, Amy; Preyss, Ben; Jordan, Neil

    2012-11-01

    In a past study using unannounced standardised patients (USPs), substantial rates of diagnostic and treatment errors were documented among internists. Because the authors know the correct disposition of these encounters and obtained the physicians' notes, they can identify necessary treatment that was not provided and unnecessary treatment. They can also discern which errors can be identified exclusively from a review of the medical records. The objective of this study was to estimate the avoidable direct costs incurred by physicians making errors in our previous study. In the study, USPs visited 111 internal medicine attending physicians. They presented variants of four previously validated cases that jointly manipulate the presence or absence of contextual and biomedical factors that could lead to errors in management if overlooked. For example, in a patient with worsening asthma symptoms, a complicating biomedical factor was the presence of reflux disease and a complicating contextual factor was inability to afford the currently prescribed inhaler. Costs of missed or unnecessary services were computed using Medicare cost-based reimbursement data. The settings comprised fourteen practice locations, including two academic clinics, two community-based primary care networks with multiple sites, a core safety net provider, and three Veterans Administration government facilities. The main outcome measure was the contribution of errors to costs of care. Overall, errors in care resulted in predicted costs of approximately $174,000 across 399 visits, of which only $8745 was discernible from a review of the medical records alone (without knowledge of the correct diagnoses). The median cost of error per visit with an incorrect care plan differed by case and by presentation variant within case. Chart reviews alone underestimate costs of care because they typically reflect appropriate treatment decisions conditional on (potentially erroneous) diagnoses. Important information about patient context is often entirely missing from medical records. Experimental methods, including the use of USPs, reveal the substantial costs of these errors.

  17. Weights and measures: a new look at bisection behaviour in neglect.

    PubMed

    McIntosh, Robert D; Schindler, Igor; Birchall, Daniel; Milner, A David

    2005-12-01

    Horizontal line bisection is a ubiquitous task in the investigation of visual neglect. Patients with left neglect typically make rightward errors that increase with line length and for lines at more leftward positions. For short lines, or for lines presented in right space, these errors may 'cross over' to become leftward. We have taken a new approach to these phenomena by employing a different set of dependent and independent variables for their description. Rather than recording bisection error, we record the lateral position of the response within the workspace. We have studied how this varies when the locations of the left and right endpoints are manipulated independently. Across 30 patients with left neglect, we have observed a characteristic asymmetry between the 'weightings' accorded to the two endpoints, such that responses are less affected by changes in the location of the left endpoint than by changes in the location of the right. We show that a simple endpoint weightings analysis accounts readily for the effects of line length and spatial position, including cross-over effects, and leads to an index of neglect that is more sensitive than the standard measure. We argue that this novel approach is more parsimonious than the standard model and yields fresh insights into the nature of neglect impairment.
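    The endpoint-weightings analysis amounts to regressing the lateral response position on the independently varied endpoint positions. The sketch below uses simulated data and hypothetical weights; the asymmetry index suggested in the final comment is an illustration, not necessarily the exact index used by the authors.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical bisection data: left/right endpoint positions (mm, workspace
# coordinates) varied independently, plus the patient's response position.
x_left = rng.uniform(-200, -20, 80)
x_right = rng.uniform(20, 200, 80)

# Simulate a neglect-like pattern: the right endpoint is weighted more heavily.
w_left_true, w_right_true = 0.30, 0.65
x_resp = w_left_true * x_left + w_right_true * x_right + rng.normal(0, 5, 80)

# Endpoint-weightings analysis: regress response position on the two endpoints.
A = np.column_stack([x_left, x_right, np.ones_like(x_left)])
(w_left, w_right, c), *_ = np.linalg.lstsq(A, x_resp, rcond=None)

print(f"estimated weights: left = {w_left:.2f}, right = {w_right:.2f}")
# A left weight well below the right weight quantifies the under-weighting of the
# left endpoint; an asymmetry index such as (w_right - w_left) / (w_right + w_left)
# could serve as a neglect measure of the kind described in the abstract.
```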

  18. Multistrip western blotting to increase quantitative data output.

    PubMed

    Kiyatkin, Anatoly; Aksamitiene, Edita

    2009-01-01

    The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip western blotting increases the data output per single blotting cycle up to tenfold, allows concurrent monitoring of up to nine different proteins from the same loading of the sample, and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data, and therefore is beneficial to apply in biomedical diagnostics, systems biology, and cell signaling research.

  19. A Pilot Study Assessing Performance and Visual Attention of Teenagers with ASD in a Novel Adaptive Driving Simulator.

    PubMed

    Wade, Joshua; Weitlauf, Amy; Broderick, Neill; Swanson, Amy; Zhang, Lian; Bian, Dayi; Sarkar, Medha; Warren, Zachary; Sarkar, Nilanjan

    2017-11-01

    Individuals with Autism Spectrum Disorder (ASD), compared to typically-developed peers, may demonstrate behaviors that are counter to safe driving. The current work examines the use of a novel simulator in two separate studies. Study 1 demonstrates statistically significant performance differences between individuals with (N = 7) and without ASD (N = 7) with regards to the number of turning-related driving errors (p < 0.01). Study 2 shows that both the performance-based feedback group (N = 9) and combined performance- and gaze-sensitive feedback group (N = 8) achieved statistically significant reductions in driving errors following training (p < 0.05). These studies are the first to present results of fine-grained measures of visual attention of drivers and an adaptive driving intervention for individuals with ASD.

  20. Surface code quantum communication.

    PubMed

    Fowler, Austin G; Wang, David S; Hill, Charles D; Ladd, Thaddeus D; Van Meter, Rodney; Hollenberg, Lloyd C L

    2010-05-07

    Quantum communication typically involves a linear chain of repeater stations, each capable of reliable local quantum computation and connected to their nearest neighbors by unreliable communication links. The communication rate of existing protocols is low as two-way classical communication is used. By using a surface code across the repeater chain and generating Bell pairs between neighboring stations with probability of heralded success greater than 0.65 and fidelity greater than 0.96, we show that two-way communication can be avoided and quantum information can be sent over arbitrary distances with arbitrarily low error at a rate limited only by the local gate speed. This is achieved by using the unreliable Bell pairs to measure nonlocal stabilizers and feeding heralded failure information into post-transmission error correction. Our scheme also applies when the probability of heralded success is arbitrarily low.

  1. Effects of stress typicality during speeded grammatical classification.

    PubMed

    Arciuli, Joanne; Cupples, Linda

    2003-01-01

    The experiments reported here were designed to investigate the influence of stress typicality during speeded grammatical classification of disyllabic English words by native and non-native speakers. Trochaic nouns and iambic verbs were considered to be typically stressed, whereas iambic nouns and trochaic verbs were considered to be atypically stressed. Experiments 1a and 2a showed that while native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading, there were no overall effects during classification of spoken stimuli. However, a subgroup of native speakers with high error rates did show a significant effect during classification of spoken stimuli. Experiments 1b and 2b showed that non-native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading. Typically stressed words were classified more accurately than atypically stressed words when the stimuli were spoken. Importantly, there was a significant relationship between error rates, vocabulary size and the size of the stress typicality effect in each experiment. We conclude that participants use information about lexical stress to help them distinguish between disyllabic nouns and verbs during speeded grammatical classification. This is especially so for individuals with a limited vocabulary who lack other knowledge (e.g., semantic knowledge) about the differences between these grammatical categories.

  2. INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE

    PubMed Central

    Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval

    2008-01-01

    SUMMARY: We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (typically unobserved variables, TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first-trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those when using birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077

  3. Weighted divergence correction scheme and its fast implementation

    NASA Astrophysics Data System (ADS)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2017-05-01

    Forcing the experimental volumetric velocity fields to satisfy mass conservation principles has proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light thickness direction is typically much larger than for the other two components. Such biased measurement errors would weaken the performance of traditional correction methods. The paper proposes a variant of the existing DCS by adding weighting coefficients to the three velocity components, termed the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS or WDCS is developed, making the correction process computationally inexpensive to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from the TPIV measurement of a turbulent boundary layer. This shows that WDCS achieves a better performance than DCS in improving some flow statistics.
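    The constrained weighted least-squares idea behind DCS/WDCS can be illustrated on a toy problem. The sketch below works on a small periodic 2D grid with a dense divergence operator and hand-picked weights, so it reflects neither the authors' fast algorithm nor their GCV-based weight selection; it only shows how weighting the components changes the divergence-removing correction.

```python
import numpy as np

n, h = 16, 1.0                      # small periodic 2D grid, spacing h
N = n * n

def shift(k):
    """Periodic shift operator: (shift(k) @ f)[i] = f[(i + k) % n]."""
    return np.roll(np.eye(n), k, axis=1)

d1 = (shift(1) - shift(-1)) / (2 * h)        # central-difference derivative
DX = np.kron(np.eye(n), d1)                  # d/dx on row-major flattened fields
DY = np.kron(d1, np.eye(n))                  # d/dy
D = np.hstack([DX, DY])                      # divergence operator acting on [u; v]

# Synthetic divergence-free field (a vortex) plus noise; the v component is noisier,
# mimicking the biased error level of the thick direction in TPIV.
rng = np.random.default_rng(0)
x = np.arange(n) * 2 * np.pi / n
X, Y = np.meshgrid(x, x)
u_true = np.concatenate([np.sin(X).ravel() * np.cos(Y).ravel(),
                         -np.cos(X).ravel() * np.sin(Y).ravel()])
sig = np.concatenate([np.full(N, 0.02), np.full(N, 0.08)])
u_meas = u_true + rng.normal(0.0, sig)

# Weighted correction: minimize (u - u_meas)^T W (u - u_meas) subject to D u = 0,
# with W = diag(1/sig^2); KKT solution u = u_meas - W^{-1} D^T (D W^{-1} D^T)^+ D u_meas.
w_inv = sig ** 2                                      # diagonal of W^{-1}
M = (D * w_inv) @ D.T                                 # D W^{-1} D^T
lam = np.linalg.lstsq(M, D @ u_meas, rcond=None)[0]   # M is rank-deficient; lstsq = pseudoinverse
u_corr = u_meas - w_inv * (D.T @ lam)

print("max |div| before:", np.abs(D @ u_meas).max())
print("max |div| after :", np.abs(D @ u_corr).max())
print("rms error before:", np.linalg.norm(u_meas - u_true) / np.sqrt(2 * N))
print("rms error after :", np.linalg.norm(u_corr - u_true) / np.sqrt(2 * N))
```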

  4. Evaluation of the measurement of refractive error by the PowerRefractor: a remote, continuous and binocular measurement system of oculomotor function

    PubMed Central

    Hunt, O A; Wolffsohn, J S; Gilmartin, B

    2003-01-01

    Background/aim: The technique of photoretinoscopy is unique in being able to measure the dynamics of the oculomotor system (ocular accommodation, vergence, and pupil size) remotely (working distance typically 1 metre) and objectively in both eyes simultaneously. The aim of this study was to evaluate clinically the measurement of refractive error by a recent commercial photoretinoscopic device, the PowerRefractor (PlusOptiX, Germany). Method: The validity and repeatability of the PowerRefractor was compared to: subjective (non-cycloplegic) refraction on 100 adult subjects (mean age 23.8 (SD 5.7) years) and objective autorefraction (Shin-Nippon SRW-5000, Japan) on 150 subjects (20.1 (4.2) years). Repeatability was assessed by examining the differences between autorefractor readings taken from each eye and by re-measuring the objective prescription of 100 eyes at a subsequent session. Results: On average the PowerRefractor prescription was not significantly different from the subjective refraction, although quite variable (difference +0.05 (0.63) D, p = 0.41) and more negative than the SRW-5000 prescription (by −0.20 (0.72) D, p<0.001). There was no significant bias in the accuracy of the instrument with regard to the type or magnitude of refractive error. The PowerRefractor was found to be repeatable over the prescription range of −8.75D to +4.00D (mean spherical equivalent) examined. Conclusion: The PowerRefractor is a useful objective screening instrument and because of its remote and rapid measurement of both eyes simultaneously is able to assess the oculomotor response in a variety of unrestricted viewing conditions and patient types. PMID:14660462

  5. Evaluation of the measurement of refractive error by the PowerRefractor: a remote, continuous and binocular measurement system of oculomotor function.

    PubMed

    Hunt, O A; Wolffsohn, J S; Gilmartin, B

    2003-12-01

    The technique of photoretinoscopy is unique in being able to measure the dynamics of the oculomotor system (ocular accommodation, vergence, and pupil size) remotely (working distance typically 1 metre) and objectively in both eyes simultaneously. The aim of this study was to evaluate clinically the measurement of refractive error by a recent commercial photoretinoscopic device, the PowerRefractor (PlusOptiX, Germany). The validity and repeatability of the PowerRefractor was compared to: subjective (non-cycloplegic) refraction on 100 adult subjects (mean age 23.8 (SD 5.7) years) and objective autorefraction (Shin-Nippon SRW-5000, Japan) on 150 subjects (20.1 (4.2) years). Repeatability was assessed by examining the differences between autorefractor readings taken from each eye and by re-measuring the objective prescription of 100 eyes at a subsequent session. On average the PowerRefractor prescription was not significantly different from the subjective refraction, although quite variable (difference +0.05 (0.63) D, p=0.41) and more negative than the SRW-5000 prescription (by -0.20 (0.72) D, p<0.001). There was no significant bias in the accuracy of the instrument with regard to the type or magnitude of refractive error. The PowerRefractor was found to be repeatable over the prescription range of -8.75D to +4.00D (mean spherical equivalent) examined. The PowerRefractor is a useful objective screening instrument and because of its remote and rapid measurement of both eyes simultaneously is able to assess the oculomotor response in a variety of unrestricted viewing conditions and patient types.

  6. Data entry errors and design for model-based tight glycemic control in critical care.

    PubMed

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Tight glycemic control (TGC) has shown benefits but has been difficult to achieve consistently. Model-based methods and computerized protocols offer the opportunity to improve TGC quality but require human data entry, particularly of blood glucose (BG) values, which can be significantly prone to error. This study presents the design and optimization of data entry methods to minimize error for a computerized and model-based TGC method prior to pilot clinical trials. To minimize data entry error, two tests were carried out to optimize a method with errors less than the 5%-plus reported in other studies. Four initial methods were tested on 40 subjects in random order, and the best two were tested more rigorously on 34 subjects. The tests measured entry speed and accuracy. Errors were reported as corrected and uncorrected errors, with the sum comprising a total error rate. The first set of tests used randomly selected values, while the second set used the same values for all subjects to allow comparisons across users and direct assessment of the magnitude of errors. These research tests were approved by the University of Canterbury Ethics Committee. The final data entry method tested reduced errors to less than 1-2%, a 60-80% reduction from reported values. The magnitude of errors was clinically significant, typically 10.0 mmol/liter or an order of magnitude, but occurred only for extreme values of BG < 2.0 mmol/liter or BG > 15.0-20.0 mmol/liter, both of which could be easily corrected with automated checking of extreme values for safety. The data entry method selected significantly reduced data entry errors in the limited design tests presented, and is in use on a clinical pilot TGC study. The overall approach and testing methods are easily performed and generalizable to other applications and protocols. © 2012 Diabetes Technology Society.
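    The automated extreme-value check mentioned at the end can be as simple as a plausibility band on entered BG values. A minimal sketch; the thresholds below are only motivated by the extreme values quoted in the abstract and are not a validated clinical rule.

```python
def check_bg_entry(bg_mmol_per_l, plausible_range=(2.0, 15.0)):
    """Flag blood glucose entries outside a plausibility band for confirmation or re-entry.

    The band is illustrative only, motivated by the extreme values (BG < 2.0 or
    BG > 15.0-20.0 mmol/l) mentioned in the abstract; a real protocol would set
    its own clinically validated limits.
    """
    lo, hi = plausible_range
    if bg_mmol_per_l < lo or bg_mmol_per_l > hi:
        return False, f"BG {bg_mmol_per_l:.1f} mmol/l outside {lo}-{hi}: confirm or re-enter"
    return True, "accepted"

for value in (5.6, 56.0, 1.2):          # 56.0 mimics a misplaced decimal point
    ok, message = check_bg_entry(value)
    print(value, "->", message)
```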

  7. Vibrational excitation functions for inelastic and superelastic electron scattering from the ground-electronic state in hot CO2

    NASA Astrophysics Data System (ADS)

    Kato, H.; Kawahara, H.; Hoshino, M.; Tanaka, H.; Campbell, L.; Brunger, M. J.

    2008-11-01

    We report inelastic and superelastic excitation function measurements for electron scattering from the ground vibrational quantum (0 0 0), the bending vibrational quantum (0 1 0) and the unresolved first bending overtone (0 2 0) and symmetric stretch (1 0 0) modes of the ground-electronic state in hot (700 K) carbon dioxide (CO2) molecules. The incident electron energy range of these measurements was 1-9 eV, with the relevant excitation functions being measured at the respective electron scattering angles of 30°, 60°, 90° and 120°. Where possible comparison is made to the often quite limited earlier data, with satisfactory agreement typically being found to within the cited experimental errors.

  8. An on-line calibration technique for improved blade by blade tip clearance measurement

    NASA Astrophysics Data System (ADS)

    Sheard, A. G.; Westerman, G. C.; Killeen, B.

    A description of a capacitance-based tip clearance measurement system which integrates a novel technique for calibrating the capacitance probe in situ is presented. The on-line calibration system allows the capacitance probe to be calibrated immediately prior to use, providing substantial operational advantages and maximizing measurement accuracy. The possible error sources when it is used in service are considered, and laboratory studies of performance to ascertain their magnitude are discussed. The 1.2-mm diameter FM capacitance probe is demonstrated to be insensitive to variations in blade tip thickness from 1.25 to 1.45 mm. Over typical compressor blading the probe's range was four times the variation in blade to blade clearance encountered in engine run components.

  9. Evaluation of electrolytic tilt sensors for wind tunnel model angle-of-attack (AOA) measurements

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.

    1991-01-01

    The results of a laboratory evaluation of three types of electrolytic tilt sensors as potential candidates for model attitude or angle of attack (AOA) measurements in wind tunnel tests are presented. Their performance was also compared with that from typical servo accelerometers used for AOA measurements. Model RG-37 electrolytic tilt sensors were found to have the highest overall accuracy among the three types. Compared with the servo accelerometer, their accuracies are about one order of magnitude worse and each of them costs about two-thirds less. Therefore, the sensors are unsuitable for AOA measurements although they are less expensive. However, the potential for other applications exists where the errors resulting from roll interaction, vibration, and response time are smaller and sensor temperature can be controlled.

  10. Spelling errors among children with ADHD symptoms: the role of working memory.

    PubMed

    Re, Anna Maria; Mirandola, Chiara; Esposito, Stefania Sara; Capodieci, Agnese

    2014-09-01

    Research has shown that children with attention deficit/hyperactivity disorder (ADHD) may present a series of academic difficulties, including spelling errors. Given that correct spelling is supported by the phonological component of working memory (PWM), the present study examined whether or not the spelling difficulties of children with ADHD are emphasized when children's PWM is overloaded. A group of 19 children with ADHD symptoms (between 8 and 11 years of age), and a group of typically developing children matched for age, schooling, gender, rated intellectual abilities, and socioeconomic status, were administered two dictation texts: one under typical conditions and one under a pre-load condition that required the participants to remember a series of digits while writing. The results confirmed that children with ADHD symptoms have spelling difficulties, produce a higher percentage of errors compared to the control group children, and that these difficulties are enhanced under a higher load of PWM. An analysis of errors showed that this holds true, especially for phonological errors. The increase in errors under the PWM condition was not due to a tradeoff between working memory and writing, as children with ADHD also performed more poorly in the PWM task. The theoretical and practical implications are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Teaching Statistics Online Using "Excel"

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2011-01-01

    As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…

  12. Single-ping ADCP measurements in the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

    2016-04-01

    In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is widely recommended to apply ensemble averaging of the single-ping measurements, in order to obtain reliable observations of the current speed. The random error related to the single-ping measurement is typically too high to be used directly, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored in the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging has been disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements has been collected every 36 seconds during a period of approximately 5 months. The huge amount of data was handled smoothly by the instrument, and no abnormal battery consumption has been recorded. On the other hand, a long and unique series of very high frequency current measurements has been collected. Results of this novel approach have been exploited in a dual way: from a statistical point of view, the availability of single-ping measurements allows a real estimate of the (a posteriori) ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ~2 cm s⁻¹ for a 50-ping ensemble, the value obtained by the a posteriori averaging is ~15 cm s⁻¹, with asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g. turbulence), of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by the ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained by the empirical computation of zonal Reynolds stress (along the predominant direction of the current) and rate of production and dissipation of turbulent kinetic energy. All the parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the outflow Mediterranean current.
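    A toy Monte Carlo makes the a-posteriori averaging idea concrete: with purely independent per-ping noise, the ensemble error falls as 1/√N, which is the baseline against which the larger error floor reported above (attributed to turbulence) stands out. All numbers below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy single-ping series: a "true" tidal-like current plus independent per-ping error.
n_pings = 50_000
t = np.arange(n_pings) * 36.0                          # one ping every 36 s
true_u = 0.8 * np.sin(2 * np.pi * t / 44714.0)         # semidiurnal-like signal, m/s
single_ping = true_u + rng.normal(0.0, 0.15, n_pings)  # ~15 cm/s single-ping scatter

# A-posteriori ensemble averaging in blocks of N pings.
for N in (1, 10, 50):
    m = n_pings // N
    ens_mean = single_ping[: m * N].reshape(m, N).mean(axis=1)
    ens_true = true_u[: m * N].reshape(m, N).mean(axis=1)
    err = np.std(ens_mean - ens_true)
    print(f"N = {N:3d}: ensemble std error ~ {100 * err:.1f} cm/s "
          f"(1/sqrt(N) prediction ~ {15 / np.sqrt(N):.1f} cm/s)")
```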

  13. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
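    The mechanics of fitting a Cole-Cole dispersion to data sampled on linear versus logarithmic frequency grids can be sketched as follows. The single-dispersion model, parameter values, frequency band and noise level are arbitrary assumptions, and the toy does not reproduce the paper's statistical framework or its conclusions; it only shows how the two grids are constructed and fitted.

```python
import numpy as np
from scipy.optimize import least_squares

def cole_cole(f_hz, eps_inf, d_eps, tau_ps, alpha):
    """Complex relative permittivity of a single Cole-Cole dispersion (tau in ps)."""
    w = 2 * np.pi * f_hz
    return eps_inf + d_eps / (1.0 + (1j * w * tau_ps * 1e-12) ** (1.0 - alpha))

TRUE = (4.0, 50.0, 10.0, 0.10)            # eps_inf, delta_eps, tau [ps], alpha (made up)

def residuals(p, f_hz, data):
    diff = cole_cole(f_hz, *p) - data
    return np.concatenate([diff.real, diff.imag])

def fit_on_grid(f_hz, rng):
    noisy = (cole_cole(f_hz, *TRUE)
             + rng.normal(0, 0.3, f_hz.size) + 1j * rng.normal(0, 0.3, f_hz.size))
    p0 = (3.0, 40.0, 5.0, 0.05)
    sol = least_squares(residuals, p0, args=(f_hz, noisy),
                        bounds=([1, 1, 0.1, 0.0], [20, 200, 100, 0.5]))
    return sol.x

rng = np.random.default_rng(11)
f_lin = np.linspace(0.5e9, 20e9, 101)                         # linear grid, 0.5-20 GHz
f_log = np.logspace(np.log10(0.5e9), np.log10(20e9), 101)     # logarithmic grid, same span

for name, grid in (("linear", f_lin), ("logarithmic", f_log)):
    eps_inf, d_eps, tau_ps, alpha = fit_on_grid(grid, rng)
    print(f"{name:11s} grid: eps_inf={eps_inf:4.1f}  d_eps={d_eps:5.1f}  "
          f"tau={tau_ps:5.2f} ps  alpha={alpha:.3f}")
```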

  14. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  15. Global height-resolved methane retrievals from the Infrared Atmospheric Sounding Interferometer (IASI) on MetOp

    NASA Astrophysics Data System (ADS)

    Siddans, Richard; Knappett, Diane; Kerridge, Brian; Waterfall, Alison; Hurley, Jane; Latter, Barry; Boesch, Hartmut; Parker, Robert

    2017-11-01

    This paper describes the global height-resolved methane (CH4) retrieval scheme for the Infrared Atmospheric Sounding Interferometer (IASI) on MetOp, developed at the Rutherford Appleton Laboratory (RAL). The scheme precisely fits measured spectra in the 7.9 micron region to allow information to be retrieved on two independent layers centred in the upper and lower troposphere. It also uses nitrous oxide (N2O) spectral features in the same spectral interval to directly retrieve effective cloud parameters to mitigate errors in retrieved methane due to residual cloud and other geophysical variables. The scheme has been applied to analyse IASI measurements between 2007 and 2015. Results are compared to model fields from the MACC greenhouse gas inversion and independent measurements from satellite (GOSAT), airborne (HIPPO) and ground (TCCON) sensors. The estimated error on methane mixing ratio in the lower- and upper-tropospheric layers ranges from 20 to 100 and from 30 to 40 ppbv, respectively, and error on the derived column-average ranges from 20 to 40 ppbv. Vertical sensitivity extends through the lower troposphere, though it decreases near to the surface. Systematic differences with the other datasets are typically < 10 ppbv regionally and < 5 ppbv globally. In the Southern Hemisphere, a bias of around 20 ppbv is found with respect to MACC, which is not explained by vertical sensitivity or found in comparison of IASI to TCCON. Comparisons to HIPPO and MACC support the assertion that two layers can be independently retrieved and provide confirmation that the estimated random errors on the column- and layer-averaged amounts are realistic. The data have been made publicly available via the Centre for Environmental Data Analysis (CEDA) data archive (Siddans, 2016).

  16. A video multitracking system for quantification of individual behavior in a large fish shoal: advantages and limits.

    PubMed

    Delcourt, Johann; Becco, Christophe; Vandewalle, Nicolas; Poncin, Pascal

    2009-02-01

    The capability of a new multitracking system to track a large number of unmarked fish (up to 100) is evaluated. This system extrapolates a trajectory from each individual and analyzes recorded sequences that are several minutes long. This system is very efficient in statistical individual tracking, where the individual's identity is important for a short period of time in comparison with the duration of the track. Individual identification is typically greater than 99%. Identification is largely efficient (more than 99%) when the fish images do not cross the image of a neighbor fish. When the images of two fish merge (occlusion), we consider that the spot on the screen has a double identity. Consequently, there are no identification errors during occlusions, even though the measurement of the positions of each individual is imprecise. When the images of these two merged fish separate (separation), individual identification errors are more frequent, but their effect is very low in statistical individual tracking. On the other hand, in complete individual tracking, where individual fish identity is important for the entire trajectory, each identification error invalidates the results. In such cases, the experimenter must observe whether the program assigns the correct identification, and, when an error is made, must edit the results. This work is not too costly in time because it is limited to the separation events, accounting for fewer than 0.1% of individual identifications. Consequently, in both statistical and rigorous individual tracking, this system allows the experimenter to gain time by measuring the individual position automatically. It can also analyze the structural and dynamic properties of an animal group with a very large sample, with precision and sampling that are impossible to obtain with manual measures.

  17. Investigating Systematic Errors of the Interstellar Flow Longitude Derived from the Pickup Ion Cutoff

    NASA Astrophysics Data System (ADS)

    Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Complementary to the direct neutral particle measurements performed by, e.g., IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool to investigate the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Due to this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUI's injection speed and thus the former neutral particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to a longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can lead to a shift of the result of up to 5°. In turn, this means that every value for the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.

  18. Exposure measurement error in PM2.5 health effects studies: A pooled analysis of eight personal exposure validation studies

    PubMed Central

    2014-01-01

    Background Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typically available surrogate exposures. Methods Daily personal and ambient PM2.5, and when available sulfate, measurements were compiled from nine cities, over 2 to 12 days. True exposure was defined as personal exposure to PM2.5 of ambient origin. Since PM2.5 of ambient origin could only be determined for five cities, personal exposure to total PM2.5 was also considered. Surrogate exposures were estimated as ambient PM2.5 at the nearest monitor or predicted outside subjects’ homes. We estimated calibration coefficients by regressing true on surrogate exposures in random effects models. Results When monthly-averaged personal PM2.5 of ambient origin was used as the true exposure, calibration coefficients equaled 0.31 (95% CI:0.14, 0.47) for nearest monitor and 0.54 (95% CI:0.42, 0.65) for outdoor home predictions. Between-city heterogeneity was not found for outdoor home PM2.5 for either true exposure. Heterogeneity was significant for nearest monitor PM2.5, for both true exposures, but not after adjusting for city-average motor vehicle number for total personal PM2.5. Conclusions Calibration coefficients were <1, consistent with previously reported chronic health risks using nearest monitor exposures being under-estimated when ambient concentrations are the exposure of interest. Calibration coefficients were closer to 1 for outdoor home predictions, likely reflecting less spatial error. Further research is needed to determine how our findings can be incorporated in future health studies. PMID:24410940
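
    As a rough, self-contained illustration of the calibration-coefficient idea described above (not the pooled random-effects model used in the study), the sketch below regresses a synthetic "true" personal exposure of ambient origin on a surrogate nearest-monitor concentration. The infiltration fraction, noise levels, and variable names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic example (not the pooled study data): the "true" exposure is personal
# PM2.5 of ambient origin, the surrogate is the nearest-monitor concentration.
ambient = rng.normal(12.0, 4.0, n)                 # nearest-monitor PM2.5 (ug/m3)
infiltration = 0.55                                # assumed fraction reaching the person
true_exposure = infiltration * ambient + rng.normal(0.0, 2.0, n)

# Calibration coefficient = slope from regressing true on surrogate exposure.
slope, intercept = np.polyfit(ambient, true_exposure, 1)
print(f"calibration coefficient: {slope:.2f} (1.0 would mean no attenuation)")
```

    A slope below 1 in this toy setup is qualitatively consistent with the conclusion above that health risks estimated from nearest-monitor exposures tend to be under-estimated when ambient concentrations are the exposure of interest.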

  19. Microstrip transmission line for soil moisture measurement

    NASA Astrophysics Data System (ADS)

    Chen, Xuemin; Li, Jing; Liang, Renyue; Sun, Yijie; Liu, C. Richard; Rogers, Richard; Claros, German

    2004-12-01

    Pavement life span is often affected by the amount of voids in the base and subgrade soils, and especially by the moisture content in the pavement. Most available moisture sensors are based on capacitive sensing using planar blades. Because the planar sensor blades are fabricated on the same surface to reduce the overall size of the sensor, such a structure cannot provide very high accuracy for moisture content measurement; as a consequence, a typical capacitive moisture sensor has an error on the order of 30%. A more accurate measurement is based on time domain reflectometry (TDR). However, a typical TDR system is expensive, large, and difficult to operate, which limits its use for moisture content measurement. In this paper, a novel moisture sensor based on a microstrip transmission line is presented. This sensor measures the phase shift of an RF signal travelling through a transmission line buried in the soil to be measured. Because the amplitude of the transmitted signal is a strong function of the conductivity (loss) of the medium and of the imaginary part of the dielectric constant, while the phase is mainly a strong function of the real part of the dielectric constant, measuring the phase shift in transmission mode directly yields the soil moisture information. The sensor was designed and implemented, and sensor networking was devised. Both lab and field data show that this sensor is sensitive and accurate.
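
    The phase-shift principle described above can be made concrete with a short calculation. The sketch below is a simplified, non-dispersive approximation in which the line is treated as fully embedded in the soil (a real microstrip would use an effective permittivity combining soil and substrate); the frequency, line length, and permittivity values are illustrative assumptions, not the sensor's actual design parameters.

```python
import math

C = 299_792_458.0   # speed of light (m/s)

def phase_shift_deg(freq_hz, length_m, eps_real):
    """Phase accumulated along a line of given length embedded in a medium with real
    relative permittivity eps_real (non-dispersive, low-loss approximation)."""
    return math.degrees(2.0 * math.pi * freq_hz * length_m * math.sqrt(eps_real) / C)

f, L = 100e6, 0.30   # assumed: 100 MHz test signal, 30 cm sensing section
for label, eps in [("dry soil", 4.0), ("moist soil", 15.0), ("wet soil", 25.0)]:
    print(f"{label:10s} eps' = {eps:5.1f}   phase = {phase_shift_deg(f, L, eps):7.1f} deg")
```

    Because water raises the real part of the permittivity far more than dry soil solids do, the phase difference between dry and wet soil is large and easy to resolve, which is the basis of the sensor's sensitivity.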

  20. A multi-segment foot model based on anatomically registered technical coordinate systems: method repeatability and sensitivity in pediatric planovalgus feet.

    PubMed

    Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B; D'Astous, Jacques L

    2013-01-01

    Several multisegment foot models have been proposed and some have been used to study foot pathologies. These models have been tested and validated on typically developing populations; however, application of such models to feet with significant deformities presents an additional set of challenges. For the first time, in this study, a multisegment foot model is tested for repeatability in a population of children with symptomatic abnormal feet. The results from this population are compared to the same metrics collected from an age-matched (8-14 years) typically developing population. The modified Shriners Hospitals for Children, Greenville (mSHCG) foot model was applied to ten typically developing children and eleven children with planovalgus feet by two clinicians. Five subjects in each group were retested by both clinicians after 4-6 weeks. Both intra-clinician and inter-clinician repeatability were evaluated using static and dynamic measures. A plaster mold method was used to quantify variability arising from marker placement error. Dynamic variability was measured by examining trial differences from the same subjects when multiple clinicians carried out the data collection multiple times. For hindfoot and forefoot angles, static and dynamic variability in both groups was found to be less than 4° and 6°, respectively. The mSHCG model strategy of minimal reliance on anatomical markers for dynamic measures and inherent flexibility enabled by separate anatomical and technical coordinate systems resulted in a model equally repeatable in typically developing and planovalgus populations. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.

    PubMed

    O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R

    2016-02-01

    Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and achieved confidence interval coverage close to the nominal 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals.
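
    The following sketch illustrates, on synthetic data, one plausible reading of the urinary-biomarker approach described above: covariate-adjusted standardization (rescaling the measured concentration by predicted-over-observed creatinine) combined with keeping creatinine as a covariate in the subsequent regression. All variables and coefficients are invented for the example, and no claim is made about reproducing the simulation results of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic, hypothetical variables (not data from the cited study): creatinine depends
# on covariates such as age plus hydration-related noise, and the measured urinary
# concentration scales with how concentrated the sample is.
age = rng.uniform(20.0, 70.0, n)
creatinine = np.exp(0.01 * age + rng.normal(0.0, 0.25, n))    # urinary creatinine
excreted_dose = rng.lognormal(0.0, 0.6, n)                    # chemical of interest
measured = excreted_dose * creatinine                         # what the lab reports

# Covariate-adjusted standardization: predict creatinine from covariates (here age),
# then rescale the measurement by predicted-over-observed creatinine.
X = np.column_stack([np.ones(n), age])
b, *_ = np.linalg.lstsq(X, np.log(creatinine), rcond=None)
creat_hat = np.exp(X @ b)
standardized = measured * creat_hat / creatinine

# The approach described in the abstract then also keeps creatinine as a covariate in
# the exposure-outcome regression, i.e. a design matrix along the lines of:
design = np.column_stack([np.ones(n), standardized, np.log(creatinine)])

print("columns in the outcome-model design matrix:", design.shape[1])
print("corr(raw measurement, creatinine):         ",
      round(float(np.corrcoef(measured, creatinine)[0, 1]), 2))
print("corr(standardized measurement, creatinine):",
      round(float(np.corrcoef(standardized, creatinine)[0, 1]), 2))
```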

  2. Novel maximum likelihood approach for passive detection and localisation of multiple emitters

    NASA Astrophysics Data System (ADS)

    Hernandez, Marcel

    2017-12-01

    In this paper, a novel target acquisition and localisation algorithm (TALA) is introduced that offers a capability for detecting and localising multiple targets using the intermittent "signals-of-opportunity" (e.g. acoustic impulses or radio frequency transmissions) they generate. The TALA is a batch estimator that addresses the complex multi-sensor/multi-target data association problem in order to estimate the locations of an unknown number of targets. The TALA is unique in that it does not require measurements to be of a specific type, and can be implemented for systems composed of either homogeneous or heterogeneous sensors. The performance of the TALA is demonstrated in simulated scenarios with a network of 20 sensors and up to 10 targets. The sensors generate angle-of-arrival (AOA), time-of-arrival (TOA), or hybrid AOA/TOA measurements. It is shown that the TALA is able to successfully detect 83-99% of the targets, with a negligible number of false targets declared. Furthermore, the localisation errors of the TALA are typically within 10% of the errors generated by a "genie" algorithm that is given the correct measurement-to-target associations. The TALA also performs well in comparison with an optimistic Cramér-Rao lower bound, with typical differences in performance of 10-20%, and differences in performance of 40-50% in the most difficult scenarios considered. The computational expense of the TALA is also controllable, which allows the TALA to maintain computational feasibility even in the most challenging scenarios considered. This allows the approach to be implemented in time-critical scenarios, such as in the localisation of artillery firing events. It is concluded that the TALA provides a powerful situational awareness aid for passive surveillance operations.
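
    The TALA itself handles the multi-sensor/multi-target data association problem, which is beyond a short example, but the maximum-likelihood principle it builds on can be shown for a single emitter. The sketch below forms an ML location estimate from noisy time-of-arrival measurements at a 20-sensor network by a simple grid search, profiling out the unknown emission time; the geometry, noise level, and propagation speed are assumptions chosen for illustration.

```python
import numpy as np

C = 343.0            # assumed propagation speed (m/s), e.g. acoustic impulses

rng = np.random.default_rng(2)
sensors = rng.uniform(0.0, 1000.0, size=(20, 2))   # 20 sensors over a 1 km x 1 km area
target = np.array([420.0, 650.0])                  # true (unknown) emitter position
t0, sigma = 0.1, 1e-3                              # unknown emission time (s), TOA noise (s)

toa = (t0 + np.linalg.norm(sensors - target, axis=1) / C
       + rng.normal(0.0, sigma, len(sensors)))

# Maximum-likelihood location under Gaussian TOA noise: minimise the residual sum of
# squares over candidate positions, profiling out the unknown emission time analytically.
best, best_cost = None, np.inf
for x in np.linspace(0.0, 1000.0, 201):
    for y in np.linspace(0.0, 1000.0, 201):
        pred = np.linalg.norm(sensors - np.array([x, y]), axis=1) / C
        t0_hat = np.mean(toa - pred)               # ML emission time for this candidate
        cost = np.sum((toa - pred - t0_hat) ** 2)
        if cost < best_cost:
            best, best_cost = (x, y), cost

print("estimated position:", best, " true position:", tuple(target))
```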

  3. Gaussian pre-filtering for uncertainty minimization in digital image correlation using numerically-designed speckle patterns

    NASA Astrophysics Data System (ADS)

    Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo

    2015-03-01

    This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.
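
    The core pre-processing step of the simulations described above, applying Gaussian digital filters of varying standard deviation to a regular "designer" speckle pattern, can be sketched as follows. The pattern geometry, noise level, and filter widths are illustrative assumptions; the DIC correlation step itself is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

# Synthetic "designer" speckle pattern: regularly spaced circular particles with a
# well-defined size and on-centre spacing (illustrative values, not the paper's).
size, spacing, radius = 256, 16, 4
y, x = np.mgrid[0:size, 0:size]
pattern = np.zeros((size, size))
for cy in range(spacing // 2, size, spacing):
    for cx in range(spacing // 2, size, spacing):
        pattern[(x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2] = 1.0
image = 255.0 * pattern + rng.normal(0.0, 5.0, pattern.shape)   # add camera-like noise

# Pre-process with Gaussian blurs of increasing standard deviation, as in the
# simulations described above; downstream, these images would feed a DIC code.
for sigma in (0.0, 0.5, 1.0, 2.0):
    blurred = gaussian_filter(image, sigma) if sigma > 0 else image
    print(f"sigma = {sigma:3.1f} px  ->  image grey-level std = {blurred.std():6.1f}")
```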

  4. Microionization chamber for reference dosimetry in IMRT verification: clinical implications on OAR dosimetric errors

    NASA Astrophysics Data System (ADS)

    Sánchez-Doblado, Francisco; Capote, Roberto; Leal, Antonio; Roselló, Joan V.; Lagares, Juan I.; Arráns, Rafael; Hartmann, Günther H.

    2005-03-01

    Intensity modulated radiotherapy (IMRT) has become a treatment of choice in many oncological institutions. Small fields or beamlets with sizes of 1 to 5 cm2 are now routinely used in IMRT delivery. Therefore, small ionization chambers (IC) with sensitive volumes ≤ 0.1 cm3 are generally used for dose verification of an IMRT treatment. The measurement conditions during verification may be quite different from reference conditions normally encountered in clinical beam calibration, so dosimetry of these narrow photon beams pertains to the so-called non-reference conditions for beam calibration. This work aims at estimating the error made when measuring the organ at risk's (OAR) absolute dose by a micro ion chamber (μIC) in a typical IMRT treatment. The dose error comes from the assumption that the dosimetric parameters determining the absolute dose are the same as for the reference conditions. We have selected two clinical cases, treated by IMRT, for our dose error evaluations. Detailed geometrical simulation of the μIC and the dose verification set-up was performed. The Monte Carlo (MC) simulation allows us to calculate the dose measured by the chamber as a dose averaged over the air cavity within the ion-chamber active volume (Dair). The absorbed dose to water (Dwater) is derived as the dose deposited inside the same volume, in the same geometrical position, filled and surrounded by water in the absence of the ion chamber. Therefore, the Dwater/Dair dose ratio is the MC estimator of the total correction factor needed to convert the absorbed dose in air into the absorbed dose in water. The dose ratio was calculated for the μIC located at the isocentre within the OARs for both clinical cases. The clinical impact of the calculated dose error was found to be negligible for the studied IMRT treatments.

  5. Collecting Kinematic Data on a Ski Track with Optoelectronic Stereophotogrammetry: A Methodological Study Assessing the Feasibility of Bringing the Biomechanics Lab to the Field.

    PubMed

    Spörri, Jörg; Schiefermüller, Christian; Müller, Erich

    2016-01-01

    In the laboratory, optoelectronic stereophotogrammetry is one of the most commonly used motion capture systems; particularly, when position- or orientation-related analyses of human movements are intended. However, for many applied research questions, field experiments are indispensable, and it is not a priori clear whether optoelectronic stereophotogrammetric systems can be expected to perform similarly to in-lab experiments. This study aimed to assess the instrumental errors of kinematic data collected on a ski track using optoelectronic stereophotogrammetry, and to investigate the magnitudes of additional skiing-specific errors and soft tissue/suit artifacts. During a field experiment, the kinematic data of different static and dynamic tasks were captured by the use of 24 infrared-cameras. The distances between three passive markers attached to a rigid bar were stereophotogrammetrically reconstructed and, subsequently, were compared to the manufacturer-specified exact values. While at rest or skiing at low speed, the optoelectronic stereophotogrammetric system's accuracy and precision for determining inter-marker distances were found to be comparable to those known for in-lab experiments (< 1 mm). However, when measuring a skier's kinematics under "typical" skiing conditions (i.e., high speeds, inclined/angulated postures and moderate snow spraying), additional errors were found to occur for distances between equipment-fixed markers (total measurement errors: 2.3 ± 2.2 mm). Moreover, for distances between skin-fixed markers, such as the anterior hip markers, additional artifacts were observed (total measurement errors: 8.3 ± 7.1 mm). In summary, these values can be considered sufficient for the detection of meaningful position- or orientation-related differences in alpine skiing. However, it must be emphasized that the use of optoelectronic stereophotogrammetry on a ski track is seriously constrained by limited practical usability, small-sized capture volumes and the occurrence of extensive snow spraying (which results in marker obscuration). The latter limitation possibly might be overcome by the use of more sophisticated cluster-based marker sets.

  6. Evaluation of airborne topographic lidar for quantifying beach changes

    USGS Publications Warehouse

    2003-01-01

    A scanning airborne topographic lidar was evaluated for its ability to quantify beach topography and changes during the Sandy Duck experiment in 1997 along the North Carolina coast. Elevation estimates, acquired with NASA's Airborne Topographic Mapper (ATM), were compared to elevations measured with three types of ground-based measurements: 1) a differential-GPS-equipped all-terrain vehicle (ATV) that surveyed a 3-km reach of beach from the shoreline to the dune, 2) a GPS antenna mounted on a stadia rod used to intensely survey a different 100 m reach of beach, and 3) a second GPS-equipped ATV that surveyed a 70-km-long transect along the coast. Over 40,000 individual intercomparisons between ATM and ground surveys were calculated. RMS vertical differences associated with the ATM when compared to ground measurements ranged from 13 to 19 cm. Considering all of the intercomparisons together, RMS ≃15 cm. This RMS error represents a total error for individual elevation estimates including uncertainties associated with random and mean errors. The latter was the largest source of error and was attributed to drift in differential GPS. The ≃15 cm vertical accuracy of the ATM is adequate to resolve beach-change signals typical of the impact of storms. For example, ATM surveys of Assateague Island (spanning the border of MD and VA) prior to and immediately following a severe northeaster showed vertical beach changes in places greater than 2 m, much greater than expected errors associated with the ATM. A major asset of airborne lidar is the high spatial data density. Measurements of elevation are acquired every few m2 over regional scales of hundreds of kilometers. Hence, many scales of beach morphology and change can be resolved, from beach cusps tens of meters in wavelength to entire coastal cells comprising tens to hundreds of kilometers of coast. Topographic lidars similar to the ATM are becoming increasingly available from commercial vendors and should, in the future, be widely used in beach surveys.
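
    The RMS statistic quoted above combines a mean (bias) component, attributed to differential-GPS drift, with a random component. The short sketch below shows the decomposition on synthetic lidar-minus-ground elevation differences; the 10 cm bias and 12 cm scatter are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic intercomparison: lidar-minus-ground elevation differences (metres), with an
# assumed 10 cm mean offset (e.g. differential-GPS drift) and 12 cm random scatter.
diff = rng.normal(0.10, 0.12, 40_000)

rms = np.sqrt(np.mean(diff ** 2))       # total error for individual elevation estimates
bias = diff.mean()                      # systematic (mean) component
scatter = diff.std(ddof=1)              # random component
print(f"RMS = {rms * 100:.1f} cm  (bias {bias * 100:.1f} cm, random {scatter * 100:.1f} cm)")
```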

  7. The Trojan Lifetime Champions Health Survey: development, validity, and reliability.

    PubMed

    Sorenson, Shawn C; Romano, Russell; Scholefield, Robin M; Schroeder, E Todd; Azen, Stanley P; Salem, George J

    2015-04-01

    Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Descriptive laboratory study. A large National Collegiate Athletic Association Division I university. A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL, among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations.

  8. Alignment methods: strategies, challenges, benchmarking, and comparative overview.

    PubMed

    Löytynoja, Ari

    2012-01-01

    Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.

  9. Design and simulation of sensor networks for tracking Wifi users in outdoor urban environments

    NASA Astrophysics Data System (ADS)

    Thron, Christopher; Tran, Khoi; Smith, Douglas; Benincasa, Daniel

    2017-05-01

    We present a proof-of-concept investigation into the use of sensor networks for tracking of WiFi users in outdoor urban environments. Sensors are fixed, and are capable of measuring signal power from users' WiFi devices. We derive a maximum likelihood estimate for user location based on instantaneous sensor power measurements. The algorithm takes into account the effects of power control, and is self-calibrating in that the signal power model used by the location algorithm is adjusted and improved as part of the operation of the network. Simulation results to verify the system's performance are presented. The simulation scenario is based on a 1.5 km2 area of lower Manhattan. The self-calibration mechanism was verified for initial rms (root mean square) errors of up to 12 dB in the channel power estimates: rms errors were reduced by over 60% in 300 track-hours, in systems with limited power control. Under typical operating conditions with (without) power control, location rms errors are about 8.5 (5) meters with 90% accuracy within 9 (13) meters, for both pedestrian and vehicular users. The distance error distributions for smaller distances (<30 m) are well-approximated by an exponential distribution, while the distributions for large distance errors have fat tails. The issue of optimal sensor placement in the sensor network is also addressed. We specify a linear programming algorithm for determining sensor placement for networks with a reduced number of sensors. In our test case, the algorithm produces a network with 18.5% fewer sensors and comparable estimation accuracy. Finally, we discuss future research directions for improving the accuracy and capabilities of sensor network systems in urban environments.
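
    As a minimal illustration of the maximum-likelihood idea (not the paper's full algorithm, which adds power control and self-calibration), the sketch below localises a single user from sensor power measurements under an assumed log-distance path-loss model with log-normal shadowing, using a grid search. All model parameters and the sensor layout are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

sensors = rng.uniform(0.0, 1200.0, size=(25, 2))   # hypothetical fixed sensors (m)
user = np.array([300.0, 870.0])                    # true user position (m)
P0, n_exp, sigma = -30.0, 3.0, 6.0                 # dBm at 1 m, path-loss exponent, shadowing (dB)

def mean_power_dbm(points, sensor):
    """Log-distance path-loss model for the mean received power at a sensor."""
    d = np.maximum(np.linalg.norm(points - sensor, axis=-1), 1.0)
    return P0 - 10.0 * n_exp * np.log10(d)

measured = (np.array([mean_power_dbm(user, s) for s in sensors])
            + rng.normal(0.0, sigma, len(sensors)))

# Maximum likelihood under i.i.d. Gaussian shadowing in dB: minimise the summed
# squared dB residuals over a grid of candidate user positions.
xs = ys = np.linspace(0.0, 1200.0, 241)
grid = np.array([[x, y] for x in xs for y in ys])
cost = np.zeros(len(grid))
for p_meas, s in zip(measured, sensors):
    cost += (p_meas - mean_power_dbm(grid, s)) ** 2
est = grid[np.argmin(cost)]
print("estimated:", est, " true:", user,
      " error (m):", round(float(np.linalg.norm(est - user)), 1))
```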

  10. Interactions of task and subject variables among continuous performance tests.

    PubMed

    Denney, Colin B; Rapport, Mark D; Chung, Kyong-Mee

    2005-04-01

    Contemporary models of working memory suggest that target paradigm (TP) and target density (TD) should interact as influences on error rates derived from continuous performance tests (CPTs). The present study evaluated this hypothesis empirically in a typically developing, ethnically diverse sample of children. The extent to which scores based on different combinations of these task parameters showed different patterns of relationship to age, intelligence, and gender was also assessed. Four continuous performance tests were derived by combining two target paradigms (AX and repeated letter target stimuli) with two levels of target density (8.3% and 33%). Variations in mean omission (OE) and commission (CE) error rates were examined within and across combinations of TP and TD. In addition, a nested series of structural equation models was utilized to examine patterns of relationship among error rates, age, intelligence, and gender. Target paradigm and target density interacted as influences on error rates. Increasing density resulted in higher OE and CE rates for the AX paradigm. In contrast, the high density condition yielded a decline in OE rates accompanied by a small increase in CEs using the repeated letter CPT. Target paradigms were also distinguishable on the basis of age when using OEs as the performance measure, whereas combinations of age and intelligence distinguished between density levels but not target paradigms using CEs as the dependent measure. Different combinations of target paradigm and target density appear to yield scores that are conceptually and psychometrically distinguishable. Consequently, developmentally appropriate interpretation of error rates across tasks may require (a) careful analysis of working memory and attentional resources required for successful performance, and (b) normative data bases that are differently stratified with respect to combinations of age and intelligence.

  11. Retrieval of carbon dioxide vertical profiles from solar occultation observations and associated error budgets for ACE-FTS and CASS-FTS

    NASA Astrophysics Data System (ADS)

    Sioris, C. E.; Boone, C. D.; Nassar, R.; Sutton, K. J.; Gordon, I. E.; Walker, K. A.; Bernath, P. F.

    2014-07-01

    An algorithm is developed to retrieve the vertical profile of carbon dioxide in the 5 to 25 km altitude range using mid-infrared solar occultation spectra from the main instrument of the ACE (Atmospheric Chemistry Experiment) mission, namely the Fourier transform spectrometer (FTS). The main challenge is to find an atmospheric phenomenon which can be used for accurate tangent height determination in the lower atmosphere, where the tangent heights (THs) calculated from geometric and timing information are not of sufficient accuracy. Error budgets for the retrieval of CO2 from ACE-FTS and the FTS on a potential follow-on mission named CASS (Chemical and Aerosol Sounding Satellite) are calculated and contrasted. Retrieved THs have typical biases of 60 m relative to those retrieved using the ACE version 3.x software after revisiting the temperature dependence of the N2 CIA (collision-induced absorption) laboratory measurements and accounting for sulfate aerosol extinction. After correcting for the known residual high bias of ACE version 3.x THs expected from CO2 spectroscopic/isotopic inconsistencies, the remaining bias for tangent heights determined with the N2 CIA is -20 m. CO2 in the 5-13 km range in the 2009-2011 time frame is validated against aircraft measurements from CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container), CONTRAIL (Comprehensive Observation Network for Trace gases by Airline), and HIPPO (HIAPER Pole-to-Pole Observations), yielding typical biases of -1.7 ppm. The standard error of these biases in this vertical range is 0.4 ppm. The multi-year ACE-FTS data set is valuable in determining the seasonal variation of the latitudinal gradient which arises from the strong seasonal cycle in the Northern Hemisphere troposphere. The annual growth of CO2 in this time frame is determined to be 2.6 ± 0.4 ppm per year, in agreement with the currently accepted global growth rate based on ground-based measurements.

  12. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    PubMed

    Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods.

  13. Evaluation of robotic training forces that either enhance or reduce error in chronic hemiparetic stroke survivors.

    PubMed

    Patton, James L; Stoykov, Mary Ellen; Kovic, Mark; Mussa-Ivaldi, Ferdinando A

    2006-01-01

    This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate "adaptive training." Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable "after-effect." A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands with a force proportional to the hand's speed and perpendicular to its direction of motion--either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task, we found error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.

  14. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    PubMed Central

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods. PMID:28182717

  15. Accommodative Performance of Children With Unilateral Amblyopia

    PubMed Central

    Manh, Vivian; Chen, Angela M.; Tarczy-Hornoch, Kristina; Cotter, Susan A.; Candy, T. Rowan

    2015-01-01

    Purpose. The purpose of this study was to compare the accommodative performance of the amblyopic eye of children with unilateral amblyopia to that of their nonamblyopic eye, and also to that of children without amblyopia, during both monocular and binocular viewing. Methods. Modified Nott retinoscopy was used to measure accommodative performance of 38 subjects with unilateral amblyopia and 25 subjects with typical vision from 3 to 13 years of age during monocular and binocular viewing at target distances of 50, 33, and 25 cm. The relationship between accommodative demand and interocular difference (IOD) in accommodative error was assessed in each group. Results. The mean IOD in monocular accommodative error for amblyopic subjects across all three viewing distances was 0.49 diopters (D) (95% confidence interval [CI], ±1.12 D) in the 180° meridian and 0.54 D (95% CI, ±1.27 D) in the 90° meridian, with the amblyopic eye exhibiting greater accommodative errors on average. Interocular difference in monocular accommodative error increased significantly with increasing accommodative demand; 5%, 47%, and 58% of amblyopic subjects had monocular errors in the amblyopic eye that fell outside the upper 95% confidence limit for the better eye of control subjects at viewing distances of 50, 33, and 25 cm, respectively. Conclusions. When viewing monocularly, children with unilateral amblyopia had greater mean accommodative errors in their amblyopic eyes than in their nonamblyopic eyes, and when compared with control subjects. This could lead to unintended retinal image defocus during patching therapy for amblyopia. PMID:25626970

  16. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
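
    The conventional baseline that the abstract contrasts against, choosing a subset of health parameters as tuners, can be illustrated with a toy, static stand-in for the under-determined estimation problem (more health parameters than sensors). The sketch below scores every candidate subset by Monte Carlo mean-squared estimation error; the actual NASA technique instead searches for an optimal linear combination of health parameters and minimises the theoretical Kalman-filter error, which is not reproduced here. All matrices and dimensions are invented for the example.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)

n_health, n_sensors, n_tuners = 8, 5, 5           # under-determined: 8 unknowns, 5 sensors
H = rng.normal(0.0, 1.0, (n_sensors, n_health))   # sensor sensitivities to health parameters
P = np.diag(rng.uniform(0.5, 2.0, n_health))      # prior covariance of health-parameter shifts
R = 0.05 * np.eye(n_sensors)                      # measurement-noise covariance

def mse_for_subset(subset, n_mc=400):
    """Monte Carlo mean-squared error in the full health vector when only the chosen
    subset is estimated (the rest implicitly held at zero), via least squares.
    A static stand-in for the steady-state Kalman-filter error the paper minimises."""
    idx = list(subset)
    Hs = H[:, idx]
    err = 0.0
    for _ in range(n_mc):
        h = rng.multivariate_normal(np.zeros(n_health), P)
        y = H @ h + rng.multivariate_normal(np.zeros(n_sensors), R)
        h_hat = np.zeros(n_health)
        h_hat[idx] = np.linalg.lstsq(Hs, y, rcond=None)[0]
        err += np.sum((h - h_hat) ** 2)
    return err / n_mc

best = min(itertools.combinations(range(n_health), n_tuners), key=mse_for_subset)
print("lowest-error tuner subset (health parameter indices):", best)
```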

  17. Combination volumetric and gravimetric sorption instrument for high accuracy measurements of methane adsorption

    NASA Astrophysics Data System (ADS)

    Burress, Jacob; Bethea, Donald; Troub, Brandon

    2017-05-01

    The accurate measurement of adsorbed gas up to high pressures (˜100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ˜0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.

  18. Combination volumetric and gravimetric sorption instrument for high accuracy measurements of methane adsorption.

    PubMed

    Burress, Jacob; Bethea, Donald; Troub, Brandon

    2017-05-01

    The accurate measurement of adsorbed gas up to high pressures (∼100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ∼0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.

  19. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1993-01-01

    The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme consisting of a Reed-Solomon outer code and a convolutional inner code, versus a Reed-Solomon-only scheme, has been investigated, as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
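
    The study used a Periodic Convolutional Interleaver within the CLEAN simulation; as a simpler stand-in, the sketch below uses a plain block (row/column) interleaver to show the general mechanism of interest, namely how a contiguous burst of channel errors is dispersed so that an outer block code sees at most a few errors per codeword. Sizes and the burst position are arbitrary choices for the example.

```python
import numpy as np

rows, cols = 8, 16                      # interleaver depth x span (illustrative sizes)
n = rows * cols
data = np.arange(n)                     # symbol indices stand in for coded symbols

# Block interleaver: write row-wise, read column-wise.
interleaved = data.reshape(rows, cols).T.reshape(-1)

# The channel corrupts a contiguous burst of 8 transmitted symbols.
burst = set(range(40, 48))
received_ok = np.array([k not in burst for k in range(n)])

# De-interleave the "received ok" flags back into original symbol order.
deinterleaved_ok = np.empty(n, dtype=bool)
deinterleaved_ok[interleaved] = received_ok

bad = np.flatnonzero(~deinterleaved_ok)
print("burst hit transmitted positions: ", sorted(burst))
print("errors land at original symbols: ", bad.tolist())
print("max errors in any 16-symbol codeword:",
      max(int(np.sum(~deinterleaved_ok[i:i + cols])) for i in range(0, n, cols)))
```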

  20. Resolution requirements for aero-optical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mani, Ali; Wang Meng; Moin, Parviz

    2008-11-10

    Analytical criteria are developed to estimate the error of aero-optical computations due to inadequate spatial resolution of refractive index fields in high Reynolds number flow simulations. The unresolved turbulence structures are assumed to be locally isotropic and at low turbulent Mach number. Based on the Kolmogorov spectrum for the unresolved structures, the computational error of the optical path length is estimated and linked to the resulting error in the computed far-field optical irradiance. It is shown that in the high Reynolds number limit, for a given geometry and Mach number, the spatial resolution required to capture aero-optics within a pre-specified error margin does not scale with Reynolds number. In typical aero-optical applications this resolution requirement is much lower than the resolution required for direct numerical simulation, and therefore, a typical large-eddy simulation can capture the aero-optical effects. The analysis is extended to complex turbulent flow simulations in which non-uniform grid spacings are used to better resolve the local turbulence structures. As a demonstration, the analysis is used to estimate the error of aero-optical computation for an optical beam passing through the turbulent wake of flow over a cylinder.

  1. Context-dependent sequential effects of target selection for action.

    PubMed

    Moher, Jeff; Song, Joo-Hyun

    2013-07-11

    Humans exhibit variation in behavior from moment to moment even when performing a simple, repetitive task. Errors are typically followed by cautious responses, minimizing subsequent distractor interference. However, less is known about how variation in the execution of an ultimately correct response affects subsequent behavior. We asked participants to reach toward a uniquely colored target presented among distractors and created two categories to describe participants' responses in correct trials based on analyses of movement trajectories; partial errors referred to trials in which observers initially selected a nontarget for action before redirecting the movement and accurately pointing to the target, and direct movements referred to trials in which the target was directly selected for action. We found that latency to initiate a hand movement was shorter in trials following partial errors compared to trials following direct movements. Furthermore, when the target and distractor colors were repeated, movement time and reach movement curvature toward distractors were greater following partial errors compared to direct movements. Finally, when the colors were repeated, partial errors were more frequent than direct movements following partial-error trials, and direct movements were more frequent following direct-movement trials. The dependence of these latter effects on repeated-task context indicates the involvement of higher-level cognitive mechanisms in an integrated attention-action system in which execution of a partial-error or direct-movement response affects memory representations that bias performance in subsequent trials. Altogether, these results demonstrate that whether a nontarget is selected for action or not has a measurable impact on subsequent behavior.

  2. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  3. Improving Word Learning in Children Using an Errorless Technique

    ERIC Educational Resources Information Center

    Warmington, Meesha; Hitch, Graham J.; Gathercole, Susan E.

    2013-01-01

    The current experiment examined the relative advantage of an errorless learning technique over an errorful one in the acquisition of novel names for unfamiliar objects in typically developing children aged between 7 and 9 years. Errorless learning led to significantly better learning than did errorful learning. Processing speed and vocabulary…

  4. Decision Aids for Multiple-Decision Disease Management as Affected by Weather Input Errors

    USDA-ARS?s Scientific Manuscript database

    Many disease management decision support systems (DSS) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation or estimation from off-site sources, may affect model calculations and manage...

  5. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    NASA Astrophysics Data System (ADS)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

    Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions proposed. But these solutions only allow partial bias corrections. To date, no systematic study of the bias error was conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, the centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ˜7 to values of ≲ 0.02 pix. The computational cost is typically twice of current cross-correlation algorithms.
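
    The peak-locking effect and its antisymmetric dependence on the true sub-pixel position can be reproduced with a few lines of code. The sketch below compares a 3-point parabola peak fit (biased for a Gaussian spot) with the same fit applied to the log intensity (exact for a noise-free Gaussian); the spot width, window size, and noise-free assumption are simplifications, and the numbers are not those of the cited study.

```python
import numpy as np

def spot(offset, sigma=1.0, n=9):
    """Noise-free Gaussian spot sampled at integer pixels, centred near the window middle."""
    x = np.arange(n)
    return np.exp(-((x - (n // 2 + offset)) ** 2) / (2.0 * sigma ** 2))

def parabola_peak(y):
    """Sub-pixel peak position from a 3-point parabola fit around the maximum pixel."""
    i = int(np.argmax(y))
    return i + 0.5 * (y[i - 1] - y[i + 1]) / (y[i - 1] - 2.0 * y[i] + y[i + 1])

def log_parabola_peak(y):
    """Same 3-point formula applied to log(intensity): exact for a noise-free Gaussian."""
    return parabola_peak(np.log(y))

# Peak-locking: the parabola estimator shows an antisymmetric bias that depends on the
# true sub-pixel position; the log-parabola estimator does not, for this idealised spot.
print(" true offset   parabola bias   log-parabola bias")
for d in np.linspace(-0.4, 0.4, 9):
    y = spot(d)
    centre = len(y) // 2 + d
    print(f"   {d:+.2f}        {parabola_peak(y) - centre:+.4f}          "
          f"{log_parabola_peak(y) - centre:+.4f}")
```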

  6. Impact of Data Assimilation on Cost-Accuracy Tradeoff in Multi-Fidelity Models at the Example of an Infiltration Problem

    NASA Astrophysics Data System (ADS)

    Sinsbeck, Michael; Tartakovsky, Daniel

    2015-04-01

    Infiltration into topsoil can be described by alternative models with different degrees of fidelity: the Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: Given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
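
    The cost-accuracy argument can be put in back-of-envelope form: under a fixed computational budget, the low-fidelity model carries a bias (model error) but affords many more Monte Carlo realizations, so its sampling error shrinks, and assimilating data reduces the bias further. The numbers below are purely illustrative assumptions, not results from the study.

```python
import math

budget = 1000.0    # total CPU time available (arbitrary units)

# (cost per realization, model error/bias, per-sample std of the quantity of interest)
cases = {
    "Richards equation (high fidelity)": (50.0, 0.000, 0.05),
    "Green-Ampt (low fidelity)":         (0.5,  0.010, 0.05),
    "Green-Ampt + data assimilation":    (0.5,  0.005, 0.05),
}

for name, (cost, model_err, sigma) in cases.items():
    n = int(budget / cost)                   # realizations affordable within the budget
    sampling_err = sigma / math.sqrt(n)      # Monte Carlo standard error
    total = math.sqrt(model_err ** 2 + sampling_err ** 2)
    print(f"{name:34s} n = {n:5d}  sampling = {sampling_err:.4f}  total = {total:.4f}")
```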

  7. (U) An Analytic Study of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-16

    We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.

  8. Stress errors in a case of developmental surface dyslexia in Filipino.

    PubMed

    Dulay, Katrina May; Hanley, J Richard

    2015-01-01

    This paper reports the case of a dyslexic boy (L.A.) whose impaired reading of Filipino is consistent with developmental surface dyslexia. Filipino has a transparent alphabetic orthography with stress typically falling on the penultimate syllable of multisyllabic words. However, exceptions to the typical stress pattern are not marked in the Filipino orthography. L.A. read words with typical stress patterns as accurately as controls, but made many more stress errors than controls when reading Filipino words with atypical stress. He regularized the pronunciation of many of these words by incorrectly placing the stress on the penultimate syllable. Since he also read nonwords as accurately and quickly as controls and performed well on tests of phonological awareness, L.A. appears to present a clear case of developmental surface dyslexia in a transparent orthography.

  9. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
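
    The dependence of depth resolution on intercamera distance can be illustrated with a back-of-the-envelope triangulation estimate. This sketch uses the simpler parallel-camera approximation (Z = f·b/d), not the converged-camera geometry analyzed in the paper, and the focal length and baselines are hypothetical.

```python
# Parallel-camera approximation: Z = f*b/d  =>  dZ ~= Z**2 * d_disp / (f * b),
# so a larger baseline b gives finer depth resolution at the same range.
def depth_resolution(z_m, baseline_m, focal_px, disparity_step_px=1.0):
    """Depth change corresponding to one disparity step at range z_m."""
    return z_m**2 * disparity_step_px / (focal_px * baseline_m)

for baseline in (0.05, 0.10, 0.20):            # intercamera distances [m], hypothetical
    dz = depth_resolution(z_m=1.4, baseline_m=baseline, focal_px=1500.0)
    print(f"baseline {baseline:.2f} m -> depth resolution ~ {dz * 100:.2f} cm")
```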

  10. Analytic Perturbation Method for Estimating Ground Flash Fraction from Satellite Lightning Observations

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2013-01-01

    An analytic perturbation method is introduced for estimating the lightning ground flash fraction in a set of N lightning flashes observed by a satellite lightning mapper. The value of N is large, typically in the thousands, and the observations consist of the maximum optical group area produced by each flash. The method is tested using simulated observations that are based on Optical Transient Detector (OTD) and Lightning Imaging Sensor (LIS) data. National Lightning Detection Network™ (NLDN) data are used to determine the flash-type (ground or cloud) of the satellite-observed flashes, and provide the ground flash fraction truth for the simulation runs. It is found that the mean ground flash fraction retrieval errors are below 0.04 across the full range 0-1 under certain simulation conditions. In general, it is demonstrated that the retrieval errors depend on many factors (i.e., the number, N, of satellite observations, the magnitude of random and systematic measurement errors, and the number of samples used to form certain climate distributions employed in the model).

  11. Alcohol effects on performance monitoring and adjustment: affect modulation and impairment of evaluative cognitive control.

    PubMed

    Bartholow, Bruce D; Henry, Erika A; Lust, Sarah A; Saults, J Scott; Wood, Phillip K

    2012-02-01

    Alcohol is known to impair self-regulatory control of behavior, though mechanisms for this effect remain unclear. Here, we tested the hypothesis that alcohol's reduction of negative affect (NA) is a key mechanism for such impairment. This hypothesis was tested by measuring the amplitude of the error-related negativity (ERN), a component of the event-related brain potential (ERP) posited to reflect the extent to which behavioral control failures are experienced as distressing, while participants completed a laboratory task requiring self-regulatory control. Alcohol reduced both the ERN and error positivity (Pe) components of the ERP following errors and impaired typical posterror behavioral adjustment. Structural equation modeling indicated that effects of alcohol on both the ERN and posterror adjustment were significantly mediated by reductions in NA. Effects of alcohol on Pe amplitude were unrelated to posterror adjustment, however. These findings indicate a role for affect modulation in understanding alcohol's effects on self-regulatory impairment and more generally support theories linking the ERN with a distress-related response to control failures.

  12. Results of scatterometer systems analysis for NASA/MSC Earth observation sensor evaluation program

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Vlahos, N.; Brandt, O.; Graybeal, G.

    1970-01-01

    A systems evaluation of the 13.3 GHz scatterometer system is presented. The effects of phase error between the scatterometer channels, antenna pattern deviations, aircraft attitude deviations, environmental changes, and other related factors such as processing errors, system repeatability, and propeller modulation, are established. Furthermore, the reduction in system errors and calibration improvement is investigated by taking into account these parameter deviations. Typical scatterometer data samples are presented.

  13. CHARACTERIZATION OF THE MILLIMETER-WAVE POLARIZATION OF CENTAURUS A WITH QUaD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemcov, M.; Bock, J.; Leitch, E.

    2010-02-20

    Centaurus (Cen) A represents one of the best candidates for an isolated, compact, highly polarized source that is bright at typical cosmic microwave background (CMB) experiment frequencies. We present measurements of the 4° × 2° region centered on Cen A with QUaD, a CMB polarimeter whose absolute polarization angle is known to an accuracy of 0.5°. Simulations are performed to assess the effect of misestimation of the instrumental parameters on the final measurement, and systematic errors due to the field's background structure and temporal variability from Cen A's nuclear region are determined. The total (Q, U) of the inner lobe region is (1.00 ± 0.07 (stat.) ± 0.04 (sys.), -1.72 ± 0.06 ± 0.05) Jy at 100 GHz and (0.80 ± 0.06 ± 0.06, -1.40 ± 0.07 ± 0.08) Jy at 150 GHz, leading to polarization angles and total errors of -30.0° ± 1.1° and -29.1° ± 1.7°. These measurements will allow the use of Cen A as a polarized calibration source for future millimeter experiments.

  14. Robustness of reliable change indices to variability in Parkinson's disease with mild cognitive impairment.

    PubMed

    Turner, T H; Renfroe, J B; Elm, J; Duppstadt-Delambo, A; Hinson, V K

    2016-01-01

    Ability to identify change is crucial for measuring response to interventions and tracking disease progression. Beyond psychometrics, investigations of Parkinson's disease with mild cognitive impairment (PD-MCI) must consider fluctuating medication, motor, and mental status. One solution is to employ 90% reliable change indices (RCIs) from test manuals to account for measurement error and practice effects. The current study examined robustness of 90% RCIs for 19 commonly used executive function tests in 14 PD-MCI subjects assigned to the placebo arm of a 10-week randomized controlled trial of atomoxetine in PD-MCI. Using 90% RCIs, the typical participant showed spurious improvement on one measure, and spurious decline on another. Reliability estimates from healthy adult standardization samples and PD-MCI were similar. In contrast to healthy adult samples, practice effects were minimal in this PD-MCI group. Separate 90% RCIs based on the PD-MCI sample did not further reduce error rate. In the present study, application of 90% RCIs based on healthy adult standardization samples effectively reduced misidentification of change in a sample of PD-MCI. Our findings support continued application of 90% RCIs when using executive function tests to assess change in neurological populations with fluctuating status.
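
    A hedged sketch of how a 90% reliable change index is commonly computed from a baseline standard deviation and a test-retest reliability coefficient; test manuals differ in how they handle practice effects, and the numbers below are hypothetical rather than values from this study.

```python
# Standard RCI construction: SEM from reliability, SE of the difference score,
# and a 90% (z = 1.645) cutoff for reliable change.
import math

def rci_90_threshold(baseline_sd, test_retest_r):
    """Change (in raw-score units) that must be exceeded for 90% reliable change."""
    sem = baseline_sd * math.sqrt(1.0 - test_retest_r)   # standard error of measurement
    se_diff = sem * math.sqrt(2.0)                       # SE of the test-retest difference
    return 1.645 * se_diff

threshold = rci_90_threshold(baseline_sd=10.0, test_retest_r=0.80)   # hypothetical values
practice_effect = 1.5                                                # hypothetical retest gain
observed_change = 6.0
adjusted = observed_change - practice_effect
print("reliable change" if abs(adjusted) > threshold else "within measurement error")
```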

  15. LOGISMOS-B for primates: primate cortical surface reconstruction and thickness measurement

    NASA Astrophysics Data System (ADS)

    Oguz, Ipek; Styner, Martin; Sanchez, Mar; Shi, Yundi; Sonka, Milan

    2015-03-01

    Cortical thickness and surface area are important morphological measures with implications for many psychiatric and neurological conditions. Automated segmentation and reconstruction of the cortical surface from 3D MRI scans is challenging due to the variable anatomy of the cortex and its highly complex geometry. While many methods exist for this task in the context of the human brain, these methods are typically not readily applicable to the primate brain. We propose an innovative approach based on our recently proposed human cortical reconstruction algorithm, LOGISMOS-B, and the Laplace-based thickness measurement method. Quantitative evaluation of our approach was performed based on a dataset of T1- and T2-weighted MRI scans from 12-month-old macaques where labeling by our anatomical experts was used as independent standard. In this dataset, LOGISMOS-B has an average signed surface error of 0.01 ± 0.03 mm and an unsigned surface error of 0.42 ± 0.03 mm over the whole brain. Excluding the rather problematic temporal pole region further improves unsigned surface distance to 0.34 ± 0.03 mm. This high level of accuracy reached by our algorithm even in this challenging developmental dataset illustrates its robustness and its potential for primate brain studies.

  16. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and automate such assessment, procedures for data collection and simulation study of a thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing the existing calibration techniques and developing new efficient ones has been suggested and tested. A type of software has been studied that allows generating instrument calibration reports automatically, monitoring their proper configuration, processing measurement results, and assessing instrument validity. The use of such software reduces the man-hours spent finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  17. The effect of surface anisotropy and viewing geometry on the estimation of NDVI from AVHRR

    USGS Publications Warehouse

    Meyer, David; Verstraete, M.; Pinty, B.

    1995-01-01

    Since terrestrial surfaces are anisotropic, all spectral reflectance measurements obtained with a small instantaneous field of view instrument are specific to these angular conditions, and the value of the corresponding NDVI, computed from these bidirectional reflectances, is relative to the particular geometry of illumination and viewing at the time of the measurement. This paper documents the importance of these geometric effects through simulations of the AVHRR data acquisition process, and investigates the systematic biases that result from the combination of ecosystem-specific anisotropies with instrument-specific sampling capabilities. Typical errors in the value of NDVI are estimated, and strategies to reduce these effects are explored. -from Authors

  18. Improved estimation of heavy rainfall by weather radar after reflectivity correction and accounting for raindrop size distribution variability

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2015-04-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.
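
    For context on the second error group, the reflectivity-to-rain-rate step is usually the inversion of a power-law Z-R relationship whose coefficients depend on the drop size distribution. The sketch below shows only that inversion, with the commonly quoted Marshall-Palmer defaults and a made-up alternative parameter set; it does not reproduce the paper's DSD-based linkage of the Z-R and Z-k parameters.

```python
# Invert Z = a * R**b for rain rate R [mm/h] from reflectivity in dBZ.
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Marshall-Palmer defaults; a and b shift with the drop size distribution."""
    z_linear = 10.0 ** (dbz / 10.0)          # dBZ -> mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

dbz_scan = np.array([20.0, 35.0, 45.0, 55.0])
print(rain_rate_from_dbz(dbz_scan))                   # default parameters
print(rain_rate_from_dbz(dbz_scan, a=300.0, b=1.4))   # hypothetical DSD-adjusted parameters
```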

  19. The impact of reflectivity correction and accounting for raindrop size distribution variability to improve precipitation estimation by weather radar for an extreme low-land mesoscale convective system

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-11-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z - R) and radar reflectivity-specific attenuation (Z - k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.

  20. The Impact of Soil Sampling Errors on Variable Rate Fertilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. L. Hoskinson; R C. Rope; L G. Blackwood

    2004-07-01

    Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil’s characteristics. Most often, spatial variability in the soil’s fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil’s fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may need to be done, or more samples collected, to ensure that the soil measurements are truly representative of the field’s spatial variability.

  1. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.

  2. A five-collector system for the simultaneous measurement of argon isotope ratios in a static mass spectrometer

    USGS Publications Warehouse

    Stacey, J.S.; Sherrill, N.D.; Dalrymple, G.B.; Lanphere, M.A.; Carpenter, N.V.

    1981-01-01

    A system is described that utilizes five separate Faraday-cup collector assemblies, aligned along the focal plane of a mass spectrometer, to collect simultaneous argon ion beams at masses 36-40. Each collector has its own electrometer amplifier and analog-to-digital measuring channel, the outputs of which are processed by a minicomputer that also controls the mass spectrometer. The mass spectrometer utilizes a 90° sector magnetic analyzer with a radius of 23 cm, in which some degree of z-direction focussing is provided for all the ion beams by the fringe field of the magnet. Simultaneous measurement of the ion beams helps to eliminate mass-spectrometer memory as a significant source of measurement error during an analysis. Isotope ratios stabilize between 7 and 9 s after sample admission into the spectrometer, and thereafter changes in the measured ratios are linear, typically to within ±0.02%. Thus the multi-collector arrangement permits very short extrapolation times for computation of initial ratios, and also provides the advantages of simultaneous measurement of the ion currents in that errors due to variations in ion beam intensity are minimized. A complete analysis takes less than 10 min, so that sample throughput can be greatly enhanced. In this instrument, the factor limiting analytical precision now lies in short-term apparent variations in the interchannel calibration factors. © 1981.
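
    The initial-ratio computation implied above (a linear drift in the measured ratio, extrapolated back to the admission time) can be sketched with a simple least-squares fit. The drift rate, noise level, and ratio values below are synthetic and only illustrate the procedure.

```python
# Fit the linear portion of the measured ratio and extrapolate to t = 0.
import numpy as np

t = np.linspace(9.0, 60.0, 30)                      # seconds after sample admission
ratio_40_36 = 295.5 * (1.0 + 2e-4 * t)              # hypothetical linear drift
ratio_40_36 += np.random.default_rng(0).normal(0, 0.02, t.size)  # measurement noise

slope, intercept = np.polyfit(t, ratio_40_36, 1)    # least-squares line
print(f"initial 40Ar/36Ar at t=0: {intercept:.2f} (drift {slope:.2e} per s)")
```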

  3. Hyperspectral imaging spectro radiometer improves radiometric accuracy

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Bouchard, Robert; Bullis, Ritchie D.; Roy, Claude; Vallières, Christian; Levesque, Luc

    2013-06-01

    Reliable and accurate infrared characterization is necessary to measure the specific spectral signatures of aircraft and associated infrared countermeasure protections (i.e., flares). Infrared characterization is essential to improve countermeasure efficiency, improve friend-foe identification, and reduce the risk of friendly fire. Typical infrared characterization measurement setups include a variety of panchromatic cameras and spectroradiometers. Each instrument brings essential information; cameras measure the spatial distribution of targets and spectroradiometers provide the spectral distribution of the emitted energy. However, the combination of separate instruments introduces possible radiometric errors and uncertainties that can be reduced with hyperspectral imagers. These instruments combine spectral and spatial information into the same data, measuring both distributions simultaneously and thus ensuring the temporal and spatial cohesion of the collected information. This paper presents a quantitative analysis of the main contributors to radiometric uncertainty and shows how a hyperspectral imager can reduce these uncertainties.

  4. Application of Blue Laser Triangulation Sensors for Displacement Measurement Through Fire.

    PubMed

    Hoehler, Matthew S; Smith, Christopher M

    2016-11-01

    This paper explores the use of blue laser triangulation sensors to measure displacement of a target located behind or in the close proximity of natural gas diffusion flames. This measurement is critical for providing high-quality data in structural fire tests. The position of the laser relative to the flame envelope can significantly affect the measurement scatter, but has little influence on the mean values. We observe that the measurement scatter is normally distributed and increases linearly with the distance of the target from the flame along the beam path. Based on these observations, we demonstrate how time-averaging can be used to achieve a standard uncertainty associated with the displacement error of less than 0.1 mm, which is typically sufficient for structural fire testing applications. Measurements with the investigated blue laser sensors were not impeded by the thermal radiation emitted from the flame or the soot generated from the relatively clean-burning natural gas.
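
    The time-averaging argument above follows from the standard result that, for roughly normal and uncorrelated scatter, the standard uncertainty of the mean falls as sigma/sqrt(N). The sketch below uses hypothetical per-sample scatter and displacement values, not data from the paper.

```python
# How many samples are needed for the uncertainty of the mean to drop below 0.1 mm?
import numpy as np

rng = np.random.default_rng(1)
sigma_single = 0.5     # mm, hypothetical per-sample scatter through the flame
target = 12.30         # mm, hypothetical true displacement

n_needed = int(np.ceil((sigma_single / 0.1) ** 2))   # N for u_mean <= 0.1 mm
samples = rng.normal(target, sigma_single, n_needed)
print(f"N = {n_needed}, mean = {samples.mean():.3f} mm, "
      f"u_mean ~ {sigma_single / np.sqrt(n_needed):.3f} mm")
```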

  5. Results of x-ray mirror round-robin metrology measurements at the APS, ESRF, and SPring-8 optical metrology laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Assoufid, L.; Rommeveaux, A.; Ohashi, H.

    2005-01-01

    This paper presents the first series of round-robin metrology measurements of x-ray mirrors organized at the Advanced Photon Source (APS) in the USA, the European Synchrotron Radiation Facility in France, and the Super Photon Ring (SPring-8) (in a collaboration with Osaka University, ) in Japan. This work is part of the three institutions' three-way agreement to promote a direct exchange of research information and experience amongst their specialists. The purpose of the metrology round robin is to compare the performance and limitations of the instrumentation used at the optical metrology laboratories of these facilities and to set the basis formore » establishing guidelines and procedures to accurately perform the measurements. The optics used in the measurements were selected to reflect typical, as well as state of the art, in mirror fabrication. The first series of the round robin measurements focuses on flat and cylindrical mirrors with varying sizes and quality. Three mirrors (two flats and one cylinder) were successively measured using long trace profilers. Although the three facilities' LTPs are of different design, the measurements were found to be in excellent agreement. The maximum discrepancy of the rms slope error values is 0.1 {micro}rad, that of the rms shape error was 3 nm, and they all relate to the measurement of the cylindrical mirror. The next round-robin measurements will deal with elliptical and spherical optics.« less

  6. Broadband EIT borehole measurements with high phase accuracy using numerical corrections of electromagnetic coupling effects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Zimmermann, E.; Huisman, J. A.; Treichel, A.; Wolters, B.; van Waasen, S.; Kemna, A.

    2013-08-01

    Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest for accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now be made in the mHz to kHz frequency range. This increased accuracy in the kHz range will allow a more accurate field characterization of the complex electrical conductivity of soils and sediments, which may lead to the improved estimation of saturated hydraulic conductivity from electrical properties. Although the correction methods have been developed for a custom-made EIT system, they also have potential to improve the phase accuracy of EIT measurements made with commercial systems relying on multicore cables.

  7. Evaluation of Light Detection and Ranging (LIDAR) for measuring river corridor topography

    USGS Publications Warehouse

    Bowen, Z.H.; Waltermire, R.G.

    2002-01-01

    LIDAR is relatively new in the commercial market for remote sensing of topography and it is difficult to find objective reporting on the accuracy of LIDAR measurements in an applied context. Accuracy specifications for LIDAR data in published evaluations range from 1 to 2 m root mean square error (RMSEx,y) and 15 to 20 cm RMSEz. Most of these estimates are based on measurements over relatively flat, homogeneous terrain. This study evaluated the accuracy of one LIDAR data set over a range of terrain types in a western river corridor. Elevation errors based on measurements over all terrain types were larger (RMSEz equals 43 cm) than values typically reported. This result is largely attributable to horizontal positioning limitations (1 to 2 m RMSEx,y) in areas with variable terrain and large topographic relief. Cross-sectional profiles indicated algorithms that were effective for removing vegetation in relatively flat terrain were less effective near the active channel where dense vegetation was found in a narrow band along a low terrace. LIDAR provides relatively accurate data at densities (50,000 to 100,000 points per km2) not feasible with other survey technologies. Other options for projects requiring higher accuracy include low-altitude aerial photography and intensive ground surveying.
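
    The vertical-accuracy metric quoted above (RMSEz) is simply the root mean square of the elevation differences between LIDAR points and independent ground-survey checkpoints. A minimal sketch with synthetic checkpoint elevations:

```python
# RMSEz and bias between LIDAR elevations and survey checkpoints (synthetic data).
import numpy as np

z_survey = np.array([101.20, 98.75, 103.40, 99.10, 100.05])   # checkpoint elevations [m]
z_lidar  = np.array([101.55, 98.40, 103.95, 98.85, 100.50])   # LIDAR at same locations [m]

errors = z_lidar - z_survey
rmse_z = np.sqrt(np.mean(errors ** 2))
print(f"RMSEz = {rmse_z * 100:.1f} cm, bias = {errors.mean() * 100:.1f} cm")
```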

  8. Highly accurate surface maps from profilometer measurements

    NASA Astrophysics Data System (ADS)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single line trace profilometer which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak to valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small effect, choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1?, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  9. The Jebsen Taylor Test of Hand Function: A Pilot Test-Retest Reliability Study in Typically Developing Children.

    PubMed

    Reedman, Sarah Elizabeth; Beagley, Simon; Sakzewski, Leanne; Boyd, Roslyn N

    2016-08-01

    The aim of this pilot study was to evaluate reproducibility of the Jebsen Taylor Test of Hand Function (JTTHF) in children. Eighty-seven typically developing children 5 to 10 years old were included from five Outside School Hours Care centers in the Greater Brisbane Region, Australia. Hand function was assessed on two occasions with a modified JTTHF, then reproducibility was assessed using the Intraclass Correlation Coefficient (ICC [3,1]) and the Standard Error of Measurement (SEM). Total scores for male and female children were not significantly different. Five-year-old children were significantly different to all other age groups and were excluded from further analysis. Results for 71 children, 6 to 10 years old, were analyzed (mean age 8.31 years (SD 1.32); 33 males). Test-retest reliability coefficients for total scores on the dominant and nondominant hands were ICC 0.74 (95% CI 0.61, 0.83) and ICC 0.72 (95% CI 0.59, 0.82), respectively. 'Writing' and 'Simulated Feeding' subtests demonstrated poor reproducibility. The Smallest Real Difference was 5.09 seconds for total score on the dominant hand. Findings indicate good test-retest reliability for the JTTHF total score to measure hand function in typically developing children aged 6 to 10 years.
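
    A hedged sketch of how the SEM and Smallest Real Difference (SRD) reported above are commonly derived from a baseline standard deviation and the ICC; conventions vary between papers, and the baseline SD used here is hypothetical.

```python
# SEM = SD * sqrt(1 - ICC); SRD = 1.96 * sqrt(2) * SEM (95% confidence convention).
import math

def sem(sd_baseline, icc):
    """Standard error of measurement from baseline SD and test-retest ICC."""
    return sd_baseline * math.sqrt(1.0 - icc)

def smallest_real_difference(sem_value, confidence_z=1.96):
    """Smallest change exceeding measurement error at ~95% confidence."""
    return confidence_z * math.sqrt(2.0) * sem_value

s = sem(sd_baseline=3.6, icc=0.74)          # hypothetical SD of total score [s]
print(f"SEM = {s:.2f} s, SRD = {smallest_real_difference(s):.2f} s")
```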

  10. EVENT-RELATED POTENTIAL STUDY OF ATTENTION REGULATION DURING ILLUSORY FIGURE CATEGORIZATION TASK IN ADHD, AUTISM SPECTRUM DISORDER, AND TYPICAL CHILDREN.

    PubMed

    Sokhadze, Estate M; Baruth, Joshua M; Sears, Lonnie; Sokhadze, Guela E; El-Baz, Ayman S; Williams, Emily; Klapheke, Robert; Casanova, Manuel F

    2012-01-01

    Autism spectrum disorders (ASD) and attention deficit/hyperactivity disorder (ADHD) are very common developmental disorders which share some similar symptoms of social, emotional, and attentional deficits. This study aims to help understand the differences and similarities of these deficits using analysis of dense-array event-related potentials (ERP) during an illusory figure recognition task. Although ADHD and ASD seem very distinct, they have been shown to share some similarities in their symptoms. Our hypothesis was that children with ASD would show less pronounced differences in ERP responses to target and non-target stimuli as compared to typical children, and to a lesser extent, ADHD. Participants were children with ASD (N=16), ADHD (N=16), and controls (N=16). EEG was collected using a 128 channel EEG system. The task involved the recognition of a specific illusory shape, in this case a square or triangle, created by three or four inducer disks. There were no between group differences in reaction time (RT) to target stimuli, but both ASD and ADHD committed more errors; specifically, the ASD group had a statistically higher commission error rate than controls. The ASD group exhibited post-error speeding rather than the corrective post-error RT slowing typical of controls. The ASD group also demonstrated an attenuated error-related negativity (ERN) as compared to ADHD and controls. The fronto-central P200, N200, and P300 were enhanced and less differentiated in response to target and non-target figures in the ASD group. The same ERP components were marked by more prolonged latencies in the ADHD group as compared to both ASD and typical controls. The findings are interpreted according to the "minicolumnar" hypothesis proposing the existence of neuropathological differences in ASD and ADHD, specifically minicolumnar number/width morphometry spectrum differences. In autism, a model of local hyperconnectivity and long-range hypoconnectivity explains many of the behavioral and cognitive deficits present in the condition, while the inverse arrangement of local hypoconnectivity and long-range hyperconnectivity in ADHD explains some deficits typical for this disorder. The current ERP study supports the proposed suggestion that some between group differences could be manifested in the frontal ERP indices of executive functions during performance on an illusory figure categorization task.

  11. A probabilistic approach to remote compositional analysis of planetary surfaces

    USGS Publications Warehouse

    Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.

    2017-01-01

    Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.
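
    The "sets of acceptable fits" idea can be illustrated with a schematic random-walk Metropolis sampler. The forward model below is a naive linear mixture of made-up end-member spectra, not the nonlinear Hapke model used in the paper, and the noise level and abundances are hypothetical; the point is only that the chain returns a spread of acceptable abundances rather than a single best fit.

```python
# Schematic Metropolis sampler over two free abundances (third closes the sum to one).
import numpy as np

rng = np.random.default_rng(42)
wavelengths = np.linspace(0.4, 2.5, 50)                      # microns
endmembers = np.vstack([                                      # hypothetical end-member spectra
    0.6 + 0.1 * np.sin(3 * wavelengths),
    0.4 + 0.2 * np.cos(2 * wavelengths),
    0.3 + 0.05 * wavelengths,
])

def forward(abundances):
    """Naive linear mixing stand-in for the (nonlinear) Hapke forward model."""
    return abundances @ endmembers

noise_sd = 0.01
observed = forward(np.array([0.5, 0.3, 0.2])) + rng.normal(0, noise_sd, wavelengths.size)

def log_like(f2):
    f = np.append(f2, 1.0 - f2.sum())                        # enforce unit sum
    if np.any(f < 0.0):
        return -np.inf
    resid = observed - forward(f)
    return -0.5 * np.sum((resid / noise_sd) ** 2)

chain, current = [], np.array([1 / 3, 1 / 3])
ll_current = log_like(current)
for _ in range(20000):                                        # random-walk Metropolis
    proposal = current + rng.normal(0, 0.02, 2)
    ll_prop = log_like(proposal)
    if np.log(rng.uniform()) < ll_prop - ll_current:
        current, ll_current = proposal, ll_prop
    chain.append(current.copy())

chain = np.array(chain[5000:])                                # drop burn-in
means = np.append(chain.mean(axis=0), 1.0 - chain.mean(axis=0).sum())
print("posterior mean abundances:", np.round(means, 3))
print("posterior spread (SD)    :", np.round(chain.std(axis=0), 3))
```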

  12. Development and Testing of a High-Precision Position and Attitude Measuring System for a Space Mechanism

    NASA Technical Reports Server (NTRS)

    Khanenya, Nikolay; Paciotti, Gabriel; Forzani, Eugenio; Blecha, Luc

    2016-01-01

    This paper describes a high-precision optical metrology system - a unique ground test equipment which was designed and implemented for simultaneous precise contactless measurements of 6 degrees-of-freedom (3 translational + 3 rotational) of a space mechanism end-effector [1] in a thermally controlled ISO 5 clean environment. The developed contactless method reconstructs both position and attitude of the specimen from three cross-sections measured by 2D distance sensors [2]. The cleanliness is preserved by the hermetic test chamber filled with high purity nitrogen. The specimen's temperature is controlled by the thermostat [7]. The developed method excludes errors caused by the thermal deformations and manufacturing inaccuracies of the test jig. Tests and simulations show that the measurement accuracy of an object absolute position is of 20 micron in in-plane measurement (XY) and about 50 micron out of plane (Z). The typical absolute attitude is determined with an accuracy better than 3 arcmin in rotation around X and Y and better than 10 arcmin in Z. The metrology system is able to determine relative position and movement with an accuracy one order of magnitude lower than the absolute accuracy. Typical relative displacement measurement accuracies are better than 1 micron in X and Y and about 2 micron in Z. Finally, the relative rotation can be measured with accuracy better than 20 arcsec in any direction.

  13. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    PubMed

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
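
    For readers unfamiliar with the mtanh fit mentioned above, the sketch below uses one common parameterization of the modified hyperbolic tangent pedestal profile and fits it to synthetic data with scipy. It omits the convolution with the instrument function and ELM synchronisation that the JET fitting tool performs, and all profile parameters are made up.

```python
# Common mtanh pedestal parameterization fitted to a synthetic edge profile.
import numpy as np
from scipy.optimize import curve_fit

def mtanh(x, slope):
    return ((1.0 + slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def pedestal(r, height, offset, r_ped, width, core_slope):
    return offset + 0.5 * height * (1.0 + mtanh((r_ped - r) / (width / 2.0), core_slope))

rng = np.random.default_rng(3)
r = np.linspace(3.6, 3.9, 120)                                  # major radius [m]
true_params = (3.0, 0.1, 3.82, 0.03, 0.05)                      # hypothetical values
te = pedestal(r, *true_params) + rng.normal(0, 0.08, r.size)    # noisy "profile"

popt, pcov = curve_fit(pedestal, r, te, p0=(2.5, 0.2, 3.8, 0.05, 0.0))
print("fitted pedestal width = %.3f m (true %.3f m)" % (popt[3], true_params[3]))
```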

  14. The effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1973-01-01

    The analysis of ion data from retarding potential analyzers (RPA's) is generally done under the planar approximation, which assumes that the grid transparency is constant with angle of incidence and that all ions reaching the plane of the collectors are collected. These approximations are not valid for situations in which the ion thermal velocity is comparable to the vehicle velocity, causing ions to enter the RPA with high average transverse velocity. To investigate these effects, the current-voltage curves for H+ at 4000 K were calculated, taking into account the finite collector size and the variation of grid transparency with angle. These curves are then analyzed under the planar approximation. The results show that only small errors in temperature and density are introduced for an RPA with typical dimensions; and that even when the density error is substantial for non-typical dimensions, the temperature error remains minimal.

  15. Emmetropisation and the aetiology of refractive errors

    PubMed Central

    Flitcroft, D I

    2014-01-01

    The distribution of human refractive errors displays features that are not commonly seen in other biological variables. Compared with the more typical Gaussian distribution, adult refraction within a population typically has a negative skew and increased kurtosis (ie is leptokurtotic). This distribution arises from two apparently conflicting tendencies, first, the existence of a mechanism to control eye growth during infancy so as to bring refraction towards emmetropia/low hyperopia (ie emmetropisation) and second, the tendency of many human populations to develop myopia during later childhood and into adulthood. The distribution of refraction therefore changes significantly with age. Analysis of the processes involved in shaping refractive development allows for the creation of a life course model of refractive development. Monte Carlo simulations based on such a model can recreate the variation of refractive distributions seen from birth to adulthood and the impact of increasing myopia prevalence on refractive error distributions in Asia. PMID:24406411
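
    A toy Monte Carlo in the spirit of the life-course model described above: an emmetropised baseline population plus a subpopulation that progresses into myopia produces a refraction distribution whose skewness and kurtosis can be inspected. All proportions and parameters are hypothetical, not values from the paper.

```python
# Mix an emmetropised baseline with a myopia-progressing subgroup and summarize.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(7)
n = 100_000
refraction = rng.normal(loc=0.75, scale=0.8, size=n)          # post-emmetropisation [D]

myopes = rng.uniform(size=n) < 0.35                            # fraction developing myopia
progression = rng.gamma(shape=2.0, scale=1.2, size=n)          # dioptres of myopic shift
refraction[myopes] -= progression[myopes]

print(f"mean {refraction.mean():+.2f} D, skew {skew(refraction):+.2f}, "
      f"excess kurtosis {kurtosis(refraction):+.2f}")
```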

  16. On the impact of a refined stochastic model for airborne LiDAR measurements

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig

    2016-09-01

    Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.

  17. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new methodology handles the problem with ease using simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross-section vs. frequency behavior and other subtle effects are also accurately predicted.

  18. A molecular topology approach to predicting pesticide pollution of groundwater

    USGS Publications Warehouse

    Worrall, Fred

    2001-01-01

    Various models have proposed methods for the discrimination of polluting and nonpolluting compounds on the basis of simple parameters, typically adsorption and degradation constants. However, such attempts are prone to site variability and measurement error to the extent that compounds cannot be reliably classified nor the chemistry of pollution extrapolated from them. Using observations of pesticide occurrence in U.S. groundwater it is possible to show that polluting from nonpolluting compounds can be distinguished purely on the basis of molecular topology. Topological parameters can be derived without measurement error or site-specific variability. A logistic regression model has been developed which explains 97% of the variation in the data, with 86% of the variation being explained by the rule that a compound will be found in groundwater if 6χp < 0.55, where 6χp is the sixth-order molecular path connectivity. One group of compounds cannot be classified by this rule and prediction requires reference to higher order connectivity parameters. The use of molecular approaches for understanding pollution at the molecular level and their application to agrochemical development and risk assessment is discussed.

  19. Assessment of the Derivative-Moment Transformation method for unsteady-load estimation

    NASA Astrophysics Data System (ADS)

    Mohebbian, Ali; Rival, David

    2011-11-01

    It is often difficult, if not impossible, to measure the aerodynamic or hydrodynamic forces on a moving body. For this reason, a classical control-volume technique is typically applied to extract the unsteady forces instead. However, measuring the acceleration term within the volume of interest using PIV can be limited by optical access, reflections as well as shadows. Therefore in this study an alternative approach, termed the Derivative-Moment Transformation (DMT) method, is introduced and tested on a synthetic data set produced using numerical simulations. The test case involves the unsteady loading of a flat plate in a two-dimensional, laminar periodic gust. The results suggest that the DMT method can accurately predict the acceleration term so long as appropriate spatial and temporal resolutions are maintained. The major deficiency was found to be the determination of pressure in the wake. The effect of control-volume size was investigated suggesting that smaller domains work best by minimizing the associated error with the pressure field. When increasing the control-volume size, the number of calculations necessary for the pressure-gradient integration increases, in turn substantially increasing the error propagation.

  20. Using Gaussian mixture models to detect and classify dolphin whistles and pulses.

    PubMed

    Peso Parada, Pablo; Cardenal-López, Antonio

    2014-06-01

    In recent years, a number of automatic detection systems for free-ranging cetaceans have been proposed that aim to detect not just surfaced, but also submerged, individuals. These systems are typically based on pattern-recognition techniques applied to underwater acoustic recordings. Using a Gaussian mixture model, a classification system was developed that detects sounds in recordings and classifies them as one of four types: background noise, whistles, pulses, and combined whistles and pulses. The classifier was tested using a database of underwater recordings made off the Spanish coast during 2011. Using cepstral-coefficient-based parameterization, a sound detection rate of 87.5% was achieved for a 23.6% classification error rate. To improve these results, two parameters computed using the multiple signal classification algorithm and an unpredictability measure were included in the classifier. These parameters, which helped to classify the segments containing whistles, increased the detection rate to 90.3% and reduced the classification error rate to 18.1%. Finally, the potential of the multiple signal classification algorithm and unpredictability measure for estimating whistle contours and classifying cetacean species was also explored, with promising results.
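
    A minimal sketch of the Gaussian-mixture classification idea: one GMM is fitted per class to cepstral-style feature vectors, and a segment is assigned to the class whose mixture yields the highest average log-likelihood. The feature vectors, class separations, and mixture sizes below are synthetic placeholders, not the paper's parameterization.

```python
# One GaussianMixture per class; classify a segment by maximum log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ["noise", "whistle", "pulse", "whistle+pulse"]

# Hypothetical 12-dimensional cepstral coefficients per class.
train = {c: rng.normal(loc=i, scale=1.0, size=(400, 12)) for i, c in enumerate(classes)}
models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(x) for c, x in train.items()}

def classify(segment_features):
    scores = {c: m.score(segment_features) for c, m in models.items()}
    return max(scores, key=scores.get)

test_segment = rng.normal(loc=1.0, scale=1.0, size=(50, 12))   # resembles "whistle"
print(classify(test_segment))
```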

  1. Inhibition and Updating, but Not Switching, Predict Developmental Dyslexia and Individual Variation in Reading Ability

    PubMed Central

    Doyle, Caoilainn; Smeaton, Alan F.; Roche, Richard A. P.; Boran, Lorraine

    2018-01-01

    To elucidate the core executive function profile (strengths and weaknesses in inhibition, updating, and switching) associated with dyslexia, this study explored executive function in 27 children with dyslexia and 29 age matched controls using sensitive z-mean measures of each ability and controlled for individual differences in processing speed. This study found that developmental dyslexia is associated with inhibition and updating, but not switching impairments, at the error z-mean composite level, whilst controlling for processing speed. Inhibition and updating (but not switching) error composites predicted both dyslexia likelihood and reading ability across the full range of variation from typical to atypical. The predictive relationships were such that those with poorer performance on inhibition and updating measures were significantly more likely to have a diagnosis of developmental dyslexia and also demonstrate poorer reading ability. These findings suggest that inhibition and updating abilities are associated with developmental dyslexia and predict reading ability. Future studies should explore executive function training as an intervention for children with dyslexia as core executive functions appear to be modifiable with training and may transfer to improved reading ability. PMID:29892245

  2. Eliminating US hospital medical errors.

    PubMed

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate significantly, to the six-sigma level. Additionally, designing as many redundancies as possible in the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will lead to an increase in higher fixed costs, especially in the shorter time frame. This paper focuses additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors that will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  3. A Gigabit-per-Second Ka-Band Demonstration Using a Reconfigurable FPGA Modulator

    NASA Technical Reports Server (NTRS)

    Lee, Dennis; Gray, Andrew A.; Kang, Edward C.; Tsou, Haiping; Lay, Norman E.; Fong, Wai; Fisher, Dave; Hoy, Scott

    2005-01-01

    Gigabit-per-second communications have been a desired target for future NASA Earth science missions, and for potential manned lunar missions. Frequency bandwidth at S-band and X-band is typically insufficient to support missions at these high data rates. In this paper, we present the results of a 1 Gbps 32-QAM end-to-end experiment at Ka-band using a reconfigurable Field Programmable Gate Array (FPGA) baseband modulator board. Bit error rate measurements of the received signal using a software receiver demonstrate the feasibility of using ultra-high data rates at Ka-band, although results indicate that error-correcting coding and/or modulator predistortion must also be implemented. The demonstration results also validate the low-cost, MOS-based reconfigurable modulator approach taken in developing the high-rate modulator, as opposed to more expensive ASIC or purely analog approaches.
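
    For orientation, a textbook-style approximation of uncoded, Gray-coded QAM bit error rate over an ideal AWGN channel can be sketched as below. This is not the paper's measured link performance (which includes amplifier and hardware effects), and 32-QAM is a cross constellation, so the square-QAM formula is only indicative.

    ```python
    # Rough textbook estimate of uncoded M-QAM BER in AWGN, for orientation only.
    import numpy as np
    from scipy.special import erfc

    def qam_ber_awgn(M: int, ebn0_db: float) -> float:
        """Approximate BER of Gray-coded M-QAM in AWGN (square-constellation formula)."""
        k = np.log2(M)
        ebn0 = 10 ** (ebn0_db / 10)
        q = 0.5 * erfc(np.sqrt(3 * k * ebn0 / (M - 1)) / np.sqrt(2))  # Q-function
        return (4 / k) * (1 - 1 / np.sqrt(M)) * q

    for ebn0_db in (10, 14, 18):
        print(ebn0_db, f"{qam_ber_awgn(32, ebn0_db):.2e}")
    ```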

  4. Inferring time derivatives including cell growth rates using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta

    2016-12-01

    Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
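
    The core idea, that the derivative of a Gaussian process is obtained by differentiating the kernel, can be sketched as follows. The fixed RBF kernel, hyperparameters, and toy logistic data are illustrative assumptions; the paper's method additionally infers hyperparameters and reports error estimates.

    ```python
    # Minimal sketch of derivative inference with a Gaussian process: the posterior
    # mean of f'(t*) follows by differentiating the covariance function.
    import numpy as np

    def rbf(a, b, sf=1.0, ell=1.0):
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

    def d_rbf(a, b, sf=1.0, ell=1.0):
        # derivative of the kernel with respect to its first argument
        d = a[:, None] - b[None, :]
        return -(d / ell**2) * rbf(a, b, sf, ell)

    def gp_derivative_mean(t, y, t_star, sf=1.0, ell=1.0, noise=0.1):
        K = rbf(t, t, sf, ell) + noise**2 * np.eye(len(t))
        alpha = np.linalg.solve(K, y)
        return d_rbf(t_star, t, sf, ell) @ alpha       # posterior mean of f'(t_star)

    # toy example: logistic-like growth curve with noise
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 50)
    y = 1.0 / (1.0 + np.exp(-(t - 5))) + 0.01 * rng.standard_normal(t.size)
    t_star = np.linspace(0, 10, 200)
    growth_rate = gp_derivative_mean(t, y, t_star, ell=1.5)
    print("max inferred growth rate:", growth_rate.max())   # close to the true 0.25
    ```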

  5. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
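
    A minimal sketch of the back-projection step is shown below: an extracted corner is mapped back onto the Z = 0 checkerboard plane with the current camera parameters so that the residual can be evaluated in 3D rather than in pixels. The intrinsics, pose, and point coordinates are illustrative placeholders, not the paper's calibration data.

    ```python
    # Back-project a pixel onto the world plane Z = 0 and measure the 3D residual.
    import numpy as np

    def back_project_to_plane(uv, K, R, t):
        """Intersect the viewing ray of pixel uv with the world plane Z = 0."""
        ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # ray in camera frame
        ray_world = R.T @ ray_cam                                   # rotate into world frame
        cam_center = -R.T @ t                                       # camera centre in world frame
        lam = -cam_center[2] / ray_world[2]                         # ray-plane intersection
        return cam_center + lam * ray_world                         # 3D point on Z = 0

    # Illustrative parameters: 1000 px focal length, board 0.5 m along the optical axis
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
    R = np.eye(3)
    t = np.array([0.0, 0.0, 0.5])
    X = back_project_to_plane((700.0, 500.0), K, R, t)
    X_ideal = np.array([0.0295, 0.0102, 0.0])    # ideal corner position from the pattern
    print("3D error (m):", np.linalg.norm(X - X_ideal))
    ```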

  6. Real-time absorption and scattering characterization of slab-shaped turbid samples obtained by a combination of angular and spatially resolved measurements.

    PubMed

    Dam, Jan S; Yavari, Nazila; Sørensen, Søren; Andersson-Engels, Stefan

    2005-07-10

    We present a fast and accurate method for real-time determination of the absorption coefficient, the scattering coefficient, and the anisotropy factor of thin turbid samples by using simple continuous-wave noncoherent light sources. The three optical properties are extracted from recordings of angularly resolved transmittance in addition to spatially resolved diffuse reflectance and transmittance. The applied multivariate calibration and prediction techniques are based on multiple polynomial regression in combination with a Newton-Raphson algorithm. The numerical test results based on Monte Carlo simulations showed mean prediction errors of approximately 0.5% for all three optical properties within ranges typical for biological media. Preliminary experimental results are also presented yielding errors of approximately 5%. Thus the presented methods show a substantial potential for simultaneous absorption and scattering characterization of turbid media.

  7. Modeling Error Distributions of Growth Curve Models through Bayesian Methods

    ERIC Educational Resources Information Center

    Zhang, Zhiyong

    2016-01-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…

  8. Children with ADHD Symptoms Are Less Susceptible to Gap-Filling Errors than Typically Developing Children

    ERIC Educational Resources Information Center

    Mirandola, C.; Paparella, G.; Re, A. M.; Ghetti, S.; Cornoldi, C.

    2012-01-01

    Enhanced semantic processing is associated with increased false recognition of items consistent with studied material, suggesting that children with poor semantic skills could produce fewer false memories. We examined whether memory errors differed in children with Attention Deficit/Hyperactivity Disorder (ADHD) and controls. Children viewed 18…

  9. Decreased Sensitivity to Long-Distance Dependencies in Children with a History of Specific Language Impairment: Electrophysiological Evidence

    PubMed Central

    Purdy, J. D.; Leonard, Laurence B.; Weber-Fox, Christine; Kaganovich, Natalya

    2015-01-01

    Purpose One possible source of tense and agreement limitations in children with SLI is a weakness in appreciating structural dependencies that occur in many sentences in the input. We tested this possibility in the present study. Method Children with a history of SLI (H-SLI; N = 12; M age 9;7) and typically developing same-age peers (TD; N = 12; M age 9;7) listened to and made grammaticality judgments about grammatical and ungrammatical sentences involving either a local agreement error (e.g., Every night they talks on the phone) or a long-distance finiteness error (e.g., He makes the quiet boy talks a little louder). Electrophysiological (ERP) and behavioral (accuracy) measures were obtained. Results Local agreement errors elicited the expected anterior negativity and P600 components in both groups of children. However, relative to the TD group, the P600 effect for the long-distance finiteness errors was delayed, reduced in amplitude, and shorter in duration for the H-SLI group. The children's grammaticality judgments were consistent with the ERP findings. Conclusions Children with H-SLI seem to be relatively insensitive to the finiteness constraints that matrix verbs place on subject-verb clauses that appear later in the sentence. PMID:24686983

  10. Triple collocation-based estimation of spatially correlated observation error covariance in remote sensing soil moisture data assimilation

    NASA Astrophysics Data System (ADS)

    Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang

    2018-01-01

    Spatially correlated errors are typically ignored in data assimilation, degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, was proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov: AMSR-E soil moisture was assimilated once with a diagonal R matrix computed using TC and once with a nondiagonal R matrix estimated by the proposed TC_Cov. The ensemble Kalman filter was used as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference (ubRMSD) metrics. These experiments confirmed that diagonal R assimilation results deteriorate when the model simulation is more accurate than the observation data. Furthermore, nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than diagonal R, demonstrating the effectiveness of TC_Cov for estimating a richly structured R in data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
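
    Classical triple collocation, the building block behind TC_Cov, estimates each product's error variance from the cross-covariances of three collocated datasets. The sketch below shows only this diagonal (variance) part under the usual independent-error and unit-scaling assumptions; the paper's extension to spatially correlated, nondiagonal R is not reproduced.

    ```python
    # Triple collocation error variances from cross-covariances of three estimates.
    import numpy as np

    def triple_collocation_error_var(x, y, z):
        """Error variances of x, y, z assuming independent, zero-mean errors."""
        C = np.cov(np.vstack([x, y, z]))
        var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
        var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
        var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
        return var_x, var_y, var_z

    # synthetic check: one "truth" soil moisture series observed three ways
    rng = np.random.default_rng(0)
    truth = 0.25 + 0.05 * rng.standard_normal(10_000)
    x = truth + 0.02 * rng.standard_normal(truth.size)   # e.g. satellite retrieval
    y = truth + 0.01 * rng.standard_normal(truth.size)   # e.g. model simulation
    z = truth + 0.03 * rng.standard_normal(truth.size)   # e.g. in situ network
    print([round(v, 5) for v in triple_collocation_error_var(x, y, z)])
    # expected roughly (0.02**2, 0.01**2, 0.03**2)
    ```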

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.

  12. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Bayard, David S.

    2013-01-01

    G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to run on any engineer's desktop computer.
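
    The central linear-covariance idea can be illustrated with a toy two-state example: the covariance itself is propagated through the linearized dynamics, so the error ellipse comes from a single pass instead of thousands of Monte Carlo trajectories. The model and noise values below are illustrative, not G-CAT's 120-state 6-DOF formulation.

    ```python
    # Toy linear covariance propagation: altitude and vertical velocity with
    # accelerometer process noise, propagated without measurement updates.
    import numpy as np

    dt = 1.0
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])              # constant-velocity transition
    accel_noise = 1e-3                      # accelerometer error (m/s^2), illustrative
    Q = accel_noise**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                   [dt**3 / 2, dt**2]])

    P = np.diag([10.0**2, 0.5**2])          # initial knowledge error (10 m, 0.5 m/s)
    for _ in range(100):                    # propagate 100 s
        P = F @ P @ F.T + Q

    sigmas = np.sqrt(np.diag(P))
    print(f"1-sigma altitude error after 100 s: {sigmas[0]:.1f} m")
    ```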

  13. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
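
    A schematic sketch of the local-basis idea follows: stored model-error samples (detailed minus approximate responses) nearest to the current parameters define an orthonormal basis, and the residual component lying in that basis is attributed to model error before the likelihood is evaluated. The nearest-neighbour lookup by parameter distance and the random placeholder data are assumptions; the MCMC coupling and dictionary growth are omitted.

    ```python
    # Project the model-error part out of a residual using a local KNN basis.
    import numpy as np

    def remove_model_error(residual, params, dict_params, dict_errors, k=5):
        """Remove the residual component spanned by the K nearest dictionary errors."""
        d = np.linalg.norm(dict_params - params, axis=1)           # distance in parameter space
        idx = np.argsort(d)[:k]                                    # K nearest neighbours
        B, _, _ = np.linalg.svd(dict_errors[idx].T, full_matrices=False)  # local error basis
        return residual - B @ (B.T @ residual)                     # residual orthogonal to basis

    # toy usage with random placeholder numbers
    rng = np.random.default_rng(1)
    dict_params = rng.standard_normal((50, 3))        # stored parameter sets
    dict_errors = rng.standard_normal((50, 20))       # stored (detailed - approximate) responses
    residual = rng.standard_normal(20)
    clean = remove_model_error(residual, rng.standard_normal(3), dict_params, dict_errors)
    print(np.linalg.norm(residual), np.linalg.norm(clean))
    ```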

  14. Error Checking and Graphical Representation of Multiple–Complete–Digest (MCD) Restriction-Fragment Maps

    PubMed Central

    Thayer, Edward C.; Olson, Maynard V.; Karp, Richard M.

    1999-01-01

    Genetic and physical maps display the relative positions of objects or markers occurring within a target DNA molecule. In constructing maps, the primary objective is to determine the ordering of these objects. A further objective is to assign a coordinate to each object, indicating its distance from a reference end of the target molecule. This paper describes a computational method and a body of software for assigning coordinates to map objects, given a solution or partial solution to the ordering problem. We describe our method in the context of multiple–complete–digest (MCD) mapping, but it should be applicable to a variety of other mapping problems. Because of errors in the data or insufficient clone coverage to uniquely identify the true ordering of the map objects, a partial ordering is typically the best one can hope for. Once a partial ordering has been established, one often seeks to overlay a metric along the map to assess the distances between the map objects. This problem often proves intractable because of data errors such as erroneous local length measurements (e.g., large clone lengths on low-resolution physical maps). We present a solution to the coordinate assignment problem for MCD restriction-fragment mapping, in which a coordinated set of single-enzyme restriction maps are simultaneously constructed. We show that the coordinate assignment problem can be expressed as the solution of a system of linear constraints. If the linear system is free of inconsistencies, it can be solved using the standard Bellman–Ford algorithm. In the more typical case where the system is inconsistent, our program perturbs it to find a new consistent system of linear constraints, close to those of the given inconsistent system, using a modified Bellman–Ford algorithm. Examples are provided of simple map inconsistencies and the methods by which our program detects candidate data errors and directs the user to potential suspect regions of the map. PMID:9927487
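
    The coordinate-assignment step can be sketched as a system of difference constraints solved by Bellman-Ford from a virtual source: a feasible shortest-path solution yields coordinates, and a negative cycle signals an inconsistency (a candidate data error). The fragment-spacing numbers below are placeholders, not real map data.

    ```python
    # Solve constraints of the form x[j] - x[i] <= w with Bellman-Ford.
    INF = float("inf")

    def solve_difference_constraints(n, constraints):
        """constraints: list of (i, j, w) meaning x[j] - x[i] <= w. Returns coords or None."""
        # virtual source (node n) with zero-weight edges to every object
        edges = list(constraints) + [(n, v, 0) for v in range(n)]
        dist = [INF] * (n + 1)
        dist[n] = 0
        for _ in range(n):                              # n relaxation rounds
            for i, j, w in edges:
                if dist[i] + w < dist[j]:
                    dist[j] = dist[i] + w
        for i, j, w in edges:                           # extra pass: negative cycle check
            if dist[i] + w < dist[j]:
                return None                             # inconsistent constraints (data error)
        return dist[:n]

    # three map objects; x1 is 5-7 units right of x0, x2 is 3-4 units right of x1
    constraints = [(0, 1, 7), (1, 0, -5), (1, 2, 4), (2, 1, -3)]
    print(solve_difference_constraints(3, constraints))   # [-8, -3, 0]: spacings 5 and 3, up to a shift
    ```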

  15. Wafer-level colinearity monitoring for TFH applications

    NASA Astrophysics Data System (ADS)

    Moore, Patrick; Newman, Gary; Abreau, Kelly J.

    2000-06-01

    Advances in thin film head (TFH) designs continue to outpace those in the IC industry. The transition to giant magneto resistive (GMR) designs is underway along with the push toward areal densities in the 20 Gbit/inch2 regime and beyond. This comes at a time when the popularity of the low-cost personal computer (PC) is extremely high, and PC prices are continuing to fall. Consequently, TFH manufacturers are forced to deal with pricing pressure in addition to technological demands. New methods of monitoring and improving yield are required along with advanced head designs. TFH manufacturing is a two-step process. The first is a wafer-level process consisting of manufacturing devices on substrates using processes similar to those in the IC industry. The second half is a slider-level process where wafers are diced into 'rowbars' containing many heads. Each rowbar is then lapped to obtain the desired performance from each head. Variation in the placement of specific layers of each device on the bar, known as a colinearity error, causes a change in device performance and directly impacts yield. The photolithography tool and process contribute to colinearity errors. These components include stepper lens distortion errors, stepper stage errors, reticle fabrication errors, and CD uniformity errors. Currently, colinearity is only very roughly estimated during wafer-level TFH production. An absolute metrology tool, such as a Nikon XY, could be used to quantify colinearity with improved accuracy, but this technique is impractical since TFH manufacturers typically do not have this type of equipment at the production site. More importantly, this measurement technique does not provide the rapid feedback needed in a high-volume production facility. Consequently, the wafer-fab must rely on resistivity-based measurements from slider-fab to quantify colinearity errors. The feedback of this data may require several weeks, making it useless as a process diagnostic. This study examines a method of quickly estimating colinearity at the wafer-level with a test reticle and metrology equipment routinely found in TFH facilities. Colinearity results are correlated to slider-fab measurements on production devices. Stepper contributions to colinearity are estimated, and compared across multiple steppers and stepper generations. Multiple techniques of integrating this diagnostic into production are investigated and discussed.

  16. Apparatus for measurement of acoustic wave propagation under uniaxial loading with application to measurement of third-order elastic constants of piezoelectric single crystals.

    PubMed

    Zhang, Haifeng; Kosinski, J A; Karim, Md Afzalul

    2013-05-01

    We describe an apparatus for the measurement of acoustic wave propagation under uniaxial loading featuring a special mechanism designed to assure a uniform mechanical load on a cube-shaped sample of piezoelectric material. We demonstrate the utility of the apparatus by determining the effects of stresses on acoustic wave speed, which forms a foundation for the final determination of the third-order elastic constants of langasite and langatate single crystals. The transit time method is used to determine changes in acoustic wave velocity as the loading is varied. In order to minimize error and improve the accuracy of the wave speed measurements, the cross correlation method is used to determine the small changes in the time of flight. Typical experimental results are presented and discussed.
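
    A minimal sketch of the cross-correlation step in the transit-time method: the lag of the correlation peak between a reference echo and a delayed echo gives the change in time of flight. The waveform, sample rate, and delay are synthetic placeholders.

    ```python
    # Estimate a small time-of-flight change from the cross-correlation peak lag.
    import numpy as np

    fs = 100e6                                    # 100 MHz sampling, illustrative
    t = np.arange(0, 20e-6, 1 / fs)
    pulse = np.exp(-((t - 5e-6) / 1e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)

    true_delay = 37 / fs                          # shift the reference by 37 samples
    delayed = np.interp(t - true_delay, t, pulse, left=0.0)

    xcorr = np.correlate(delayed, pulse, mode="full")
    lag = np.argmax(xcorr) - (len(pulse) - 1)     # lag (in samples) of the correlation peak
    print(f"estimated delay: {lag / fs * 1e9:.1f} ns (true {true_delay * 1e9:.1f} ns)")
    ```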

  17. Evaluate error correction ability of magnetorheological finishing by smoothing spectral function

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin

    2014-08-01

    Power Spectral Density (PSD) is well established in optics design and manufacturing as a characterization of mid-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. SSF is employed to study the correction ability of the MRF process at different spatial frequencies. The surface figures and PSD curves of a work-piece machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF at different spatial frequencies is expressed as a normalized numerical value.
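
    As a rough illustration of the PSD bookkeeping involved, the sketch below compares one-dimensional PSDs of a synthetic surface profile before and after processing and takes their ratio frequency by frequency. Treating SSF as this post/pre PSD ratio is an assumption made here for illustration only; the paper's exact definition is not reproduced.

    ```python
    # Compare pre- and post-processing 1D surface PSDs and form their ratio.
    import numpy as np

    def psd_1d(profile, dx):
        """One-sided power spectral density of a surface height profile."""
        n = len(profile)
        spectrum = np.fft.rfft(profile - profile.mean())
        freqs = np.fft.rfftfreq(n, dx)                  # spatial frequencies (1/mm)
        psd = (np.abs(spectrum) ** 2) * dx / n
        return freqs, psd

    dx = 0.1                                            # 0.1 mm sampling, illustrative
    x = np.arange(0, 100, dx)
    before = 50e-6 * np.sin(2 * np.pi * x / 20) + 5e-6 * np.sin(2 * np.pi * x / 0.8)
    after = 5e-6 * np.sin(2 * np.pi * x / 20) + 4e-6 * np.sin(2 * np.pi * x / 0.8)

    f, psd_before = psd_1d(before, dx)
    _, psd_after = psd_1d(after, dx)
    ratio = np.divide(psd_after, psd_before, out=np.ones_like(psd_after), where=psd_before > 0)
    # low-frequency figure error is strongly corrected; the mid-high frequency ripple is not
    print(ratio[np.argmin(np.abs(f - 1 / 20))], ratio[np.argmin(np.abs(f - 1 / 0.8))])
    ```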

  18. Registration of 2D to 3D joint images using phase-based mutual information

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two-dimensional to three-dimensional orthopaedic medical image data has important applications, particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computed tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best, consistently producing the lowest errors.
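
    The baseline intensity-based MI can be computed from a joint histogram as sketched below; the phase-based variant applies the same computation to local-phase images (e.g., derived from a complex wavelet transform) instead of raw intensities. The random images are placeholders.

    ```python
    # Mutual information between two images from their joint intensity histogram.
    import numpy as np

    def mutual_information(a, b, bins=32):
        """MI (in nats) between two equally sized images."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                    # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(2)
    fixed = rng.random((128, 128))
    moving = fixed + 0.05 * rng.random((128, 128))      # roughly "aligned" copy
    print(mutual_information(fixed, moving), mutual_information(fixed, rng.random((128, 128))))
    ```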

  19. Modeling and validation of spectral BRDF on material surface of space target

    NASA Astrophysics Data System (ADS)

    Hou, Qingyu; Zhi, Xiyang; Zhang, Huili; Zhang, Wei

    2014-11-01

    The modeling and validation methods for the spectral BRDF of space-target surface materials are presented. First, the microscopic characteristics of the space targets' surface materials were analyzed, and a fiber-optic spectrometer was used to measure the directional reflectivity of typical material surfaces. To determine whether the material surfaces of space targets are isotropic, atomic force microscopy was used to measure the surface structure and obtain a Gaussian distribution model of the micro-facet heights. A spectral BRDF model was then constructed based on the isotropic surface characteristics and the measured Gaussian micro-facet distribution; the model describes both smooth and rough surfaces and is therefore appropriate for space-target materials. Finally, a laboratory spectral BRDF measurement platform was set up, containing a tungsten halogen lamp illumination system, a fiber-optic spectrometer detection system, and a mechanical measurement system, with the entire measurement and data collection controlled automatically by computer. Yellow thermal-control material and a solar cell were measured, showing the relationship between the reflection angle and BRDF values at three wavelengths (380 nm, 550 nm, and 780 nm); the difference between the theoretical model values and the measured data was evaluated by the relative RMS error. Data analysis shows that the relative RMS error is less than 6%, which verifies the correctness of the spectral BRDF model.

  20. Knee rotation influences the femoral tunnel angle measurement after anterior cruciate ligament reconstruction: a 3-dimensional computed tomography model study

    PubMed Central

    Tang, Jing; Thorhauer, Eric; Marsh, Chelsea; Fu, Freddie H.

    2013-01-01

    Purpose Femoral tunnel angle (FTA) has been proposed as a metric for evaluating whether ACL reconstruction was performed anatomically. In clinic, radiographic images are typically acquired with an uncertain amount of internal/external knee rotation. The extent to which knee rotation will influence FTA measurement is unclear. Furthermore, differences in FTA measurement between the two common positions (0° and 45° knee flexion) have not been established. The purpose of this study was to investigate the influence of knee rotation on FTA measurement after ACL reconstruction. Methods Knee CT data from 16 subjects were segmented to produce 3D bone models. Central axes of tunnels were identified. The 0° and 45° flexion angles were simulated. Knee internal/external rotations were simulated in a range of ±20°. FTA was defined as the angle between the tunnel axis and femoral shaft axis, orthogonally projected into the coronal plane. Results Femoral tunnel angle was positively/negatively correlated with knee rotation angle at 0°/45° knee flexion. At 0° knee flexion, FTA for anterio-medial (AM) tunnels was significantly decreased at 20° of external knee rotation. At 45° knee flexion, more than 16° external or 19° internal rotation significantly altered FTA measurements for single-bundle tunnels; smaller rotations (±9° for AM, ±5° for PL) created significant errors in FTA measurements after double-bundle reconstruction. Conclusion Femoral tunnel angle measurements were correlated with knee rotation. Relatively small imaging malalignment introduced significant errors with knee flexed 45°. This study supports using the 0° flexion position for knee radiographs to reduce errors in FTA measurement due to knee internal/external rotation. Level of evidence Case–control study, Level III. PMID:23589127

  1. Identifying Changes of Complex Flood Dynamics with Recurrence Analysis

    NASA Astrophysics Data System (ADS)

    Wendi, D.; Merz, B.; Marwan, N.

    2016-12-01

    Temporal changes in flood hazard systems are difficult to detect and attribute because multiple drivers involve complex processes that are non-stationary and highly variable. These drivers, such as human-induced climate change, natural climate variability, implementation of flood defenses, river training, or land use change, can act on different space-time scales and influence or mask each other. Flood time series may show complex behavior that varies over a range of time scales and may cluster in time. Moreover, hydrological time series (e.g., discharge) are often subject to measurement errors, such as rating-curve errors, especially for extremes where observations are actually derived through extrapolation. This study focuses on the application of recurrence-based data analysis techniques (recurrence plots) for understanding and quantifying spatio-temporal changes in flood hazard in Germany. The recurrence plot is an effective tool for visualizing the dynamics of phase-space trajectories, constructed from a time series using an embedding dimension and a time delay, and is well suited to analyzing non-stationary and non-linear time series. The sensitivity of recurrence analysis to common measurement errors and noise will also be analyzed and evaluated against conventional methods. The emphasis will be on identifying characteristic recurrence properties that associate typical dynamics with certain flood events.
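
    A minimal sketch of building a recurrence plot from a univariate series: delay embedding followed by thresholding of pairwise distances. The embedding dimension, delay, and threshold below are illustrative choices, not values tuned to discharge records.

    ```python
    # Recurrence plot from a univariate series via delay embedding and thresholding.
    import numpy as np

    def recurrence_plot(x, dim=3, delay=2, eps=None):
        n = len(x) - (dim - 1) * delay
        emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])  # phase-space vectors
        dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        if eps is None:
            eps = 0.1 * dists.max()                       # simple threshold choice
        return (dists <= eps).astype(int)                 # 1 = recurrence

    rng = np.random.default_rng(5)
    t = np.linspace(0, 20 * np.pi, 500)
    series = np.sin(t) + 0.1 * rng.standard_normal(t.size)   # stand-in for a discharge series
    R = recurrence_plot(series)
    print(R.shape, R.mean())                                  # recurrence rate
    ```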

  2. Evaluation of glued-diaphragm fibre optic pressure sensors in a shock tube

    NASA Astrophysics Data System (ADS)

    Sharifian, S. Ahmad; Buttsworth, David R.

    2007-02-01

    Glued-diaphragm fibre optic pressure sensors that utilize standard telecommunications components which are based on Fabry-Perot interferometry are appealing in a number of respects. Principally, they have high spatial and temporal resolution and are low in cost. These features potentially make them well suited to operation in extreme environments produced in short-duration high-enthalpy wind tunnel facilities where spatial and temporal resolution are essential, but attrition rates for sensors are typically very high. The sensors we consider utilize a zirconia ferrule substrate and a thin copper foil which are bonded together using an adhesive. The sensors show a fast response and can measure fluctuations with a frequency up to 250 kHz. The sensors also have a high spatial resolution on the order of 0.1 mm. However, with the interrogation and calibration processes adopted in this work, apparent errors of up to 30% of the maximum pressure have been observed. Such errors are primarily caused by mechanical hysteresis and adhesive viscoelasticity. If a dynamic calibration is adopted, the maximum measurement error can be limited to about 10% of the maximum pressure. However, a better approach is to eliminate the adhesive from the construction process or design the diaphragm and substrate in a way that does not require the adhesive to carry a significant fraction of the mechanical loading.

  3. Quartz crystal resonator g sensitivity measurement methods and recent results

    NASA Astrophysics Data System (ADS)

    Driscoll, M. M.

    1990-09-01

    A technique for accurate measurements of quartz crystal resonator vibration sensitivity is described. The technique utilizes a crystal oscillator circuit in which a prescribed length of coaxial cable is used to connect the resonator to the oscillator sustaining stage. A method is provided for determination and removal of measurement errors normally introduced as a result of cable vibration. In addition to oscillator-type measurements, it is also possible to perform similar vibration sensitivity measurements using a synthesized signal generator with the resonator installed in a passive phase bridge. Test results are reported for 40 and 50 MHz, fifth overtone AT-cut, and third overtone SC-cut crystals. Acceleration sensitivity (gamma vector) values for the SC-cut resonators were typically four times smaller (5 x 10^-10/g) than for the AT-cut units. However, smaller unit-to-unit gamma vector magnitude variation was exhibited by the AT-cut resonators.

  4. Absolute Density Calibration Cell for Laser Induced Fluorescence Erosion Rate Measurements

    NASA Technical Reports Server (NTRS)

    Domonkos, Matthew T.; Stevens, Richard E.

    2001-01-01

    Flight qualification of ion thrusters typically requires testing on the order of 10,000 hours. Extensive knowledge of wear mechanisms and rates is necessary to establish design confidence prior to long duration tests. Consequently, real-time erosion rate measurements offer the potential both to reduce development costs and to enhance knowledge of the dependency of component wear on operating conditions. Several previous studies have used laser-induced fluorescence (LIF) to measure real-time, in situ erosion rates of ion thruster accelerator grids. Those studies provided only relative measurements of the erosion rate. In the present investigation, a molybdenum tube was resistively heated such that the evaporation rate yielded densities within the tube on the order of those expected from accelerator grid erosion. This work examines the suitability of the density cell as an absolute calibration source for LIF measurements, and the intrinsic error was evaluated.

  5. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ε^-(d-1) error correction cycles. Here ε << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.

  6. The development and validation of using inertial sensors to monitor postural change in resistance exercise.

    PubMed

    Gleadhill, Sam; Lee, James Bruce; James, Daniel

    2016-05-03

    This research presented and validated a method of assessing postural changes during resistance exercise using inertial sensors. A simple lifting task was broken down into a series of well-defined tasks, which could be examined and measured in a controlled environment. The purpose of this research was to determine whether timing measures obtained from inertial sensor accelerometer outputs are able to provide accurate, quantifiable information about resistance exercise movement patterns. The aim was to complete a timing measure validation of inertial sensor outputs. Eleven participants completed five repetitions of 15 different deadlift variations. Participants were monitored with inertial sensors and an infrared three-dimensional motion capture system. Validation was undertaken using a Will Hopkins Typical Error of the Estimate, with a Pearson's correlation and a Bland-Altman Limits of Agreement analysis. Statistical validation measured the timing agreement during deadlifts between inertial sensor outputs and the motion capture system. Timing validation results demonstrated a Pearson's correlation of 0.9997, with trivial standardised error (0.026) and standardised bias (0.002). Inertial sensors can now be used in practical settings with as much confidence as motion capture systems, for accelerometer timing measurements of resistance exercise. This research provides foundations for inertial sensors to be applied for qualitative activity recognition of resistance exercise and safe lifting practices.
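
    The two agreement statistics named above can be sketched for paired timing data as follows. The Bland-Altman limits of agreement use the standard mean ± 1.96 SD of the differences; the "typical error" shown is the common difference-based form (SD of differences divided by √2), whereas Hopkins' typical error of the estimate proper comes from a regression of one measure on the other and is not reproduced here. The numbers are synthetic.

    ```python
    # Bland-Altman limits of agreement and a difference-based typical error.
    import numpy as np

    def bland_altman_limits(a, b):
        """Mean bias and 95% limits of agreement between paired measurements."""
        diff = a - b
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        return bias, bias - loa, bias + loa

    def typical_error(a, b):
        """SD of the paired differences divided by sqrt(2)."""
        return (a - b).std(ddof=1) / np.sqrt(2)

    rng = np.random.default_rng(3)
    mocap = rng.uniform(0.8, 2.5, 55)                      # lift phase durations (s)
    sensor = mocap + rng.normal(0.002, 0.01, mocap.size)   # small bias + noise
    print(bland_altman_limits(sensor, mocap), typical_error(sensor, mocap))
    ```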

  7. A high accuracy magnetic heading system composed of fluxgate magnetometers and a microcomputer

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-Wu; Zhang, Zhao-Nian; Hung, James C.

    The authors present a magnetic heading system consisting of two fluxgate magnetometers and a single-chip microcomputer. The system, when compared to gyro compasses, is smaller in size, lighter in weight, simpler in construction, quicker in reaction time, free from drift, and more reliable. Using a microcomputer in the system, heading error due to compass deviation, sensor offsets, scale factor uncertainty, and sensor tilts can be compensated with the help of an error model. The laboratory test of a typical system showed that the accuracy of the system was improved from more than 8 deg error without error compensation to less than 0.3 deg error with compensation.
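
    A minimal sketch of the heading computation with a simple compensation model (per-axis offset and scale) for the two horizontal fluxgate outputs; tilt compensation and the full error model are omitted, the sign convention is assumed, and the coefficients are placeholders rather than fitted calibration values.

    ```python
    # Heading from two horizontal magnetometer axes after offset/scale compensation.
    import math

    def compensated_heading(bx, by, cal):
        """Heading in degrees (north = 0, east = 90; convention assumed)."""
        x = (bx - cal["offset_x"]) / cal["scale_x"]
        y = (by - cal["offset_y"]) / cal["scale_y"]
        return math.degrees(math.atan2(-y, x)) % 360.0

    cal = {"offset_x": 0.02, "scale_x": 1.01, "offset_y": -0.01, "scale_y": 0.99}
    print(compensated_heading(0.18, -0.12, cal))
    ```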

  8. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    Implicit–Explicit (IMEX) schemes are widely used for time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  9. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE PAGES

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    2017-02-05

    Implicit–Explicit (IMEX) schemes are widely used for time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  10. Array coding for large data memories

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.

    1982-01-01

    It is pointed out that an array code is a convenient method for storing large quantities of data. In a typical application, the array consists of N data words having M symbols in each word. The probability of undetected error is considered, taking into account the three symbol error probabilities of interest, and a formula for determining the probability of undetected error is derived. Attention is given to the possibility of reading data into the array using a digital communication system with symbol error probability p. Two different schemes are found to be of interest. The analysis of array coding shows that the probability of undetected error is very small even for relatively large arrays.
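
    To make the undetected-error idea concrete, the sketch below treats each of the N words as carrying a single parity symbol and computes the probability that a binary error pattern leaves every word with even parity (hence undetected), both from the standard single-parity-check weight-enumerator expression and by Monte Carlo. The single-parity-check structure and binary symbols are assumptions made for illustration; the paper's exact schemes and its three symbol error probabilities are not reproduced.

    ```python
    # Undetected-error probability for an array of independent single-parity-check words.
    import numpy as np

    def p_undetected_spc_array(n_words, word_len, p):
        """Analytic P(undetected error) for independent binary single-parity-check words."""
        even = (1 + (1 - 2 * p) ** word_len) / 2       # P(even-weight error, incl. zero) per word
        return even ** n_words - (1 - p) ** (word_len * n_words)

    def mc_estimate(n_words, word_len, p, trials=20_000, seed=0):
        rng = np.random.default_rng(seed)
        undetected = 0
        for _ in range(trials):
            errors = rng.random((n_words, word_len)) < p
            # undetected: at least one error, yet every word still has even parity
            if errors.any() and not (errors.sum(axis=1) % 2).any():
                undetected += 1
        return undetected / trials

    print(p_undetected_spc_array(64, 16, 1e-3))     # analytic (~3e-3)
    print(mc_estimate(64, 16, 1e-3))                # simulation check
    ```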

  11. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.

  12. Investigating temporal field sampling strategies for site-specific calibration of three soil moisture-neutron intensity parameterisation methods

    NASA Astrophysics Data System (ADS)

    Iwema, J.; Rosolem, R.; Baatz, R.; Wagener, T.; Bogena, H. R.

    2015-07-01

    The Cosmic-Ray Neutron Sensor (CRNS) can provide soil moisture information at scales relevant to hydrometeorological modelling applications. Site-specific calibration is needed to translate CRNS neutron intensities into sensor footprint average soil moisture contents. We investigated temporal sampling strategies for calibration of three CRNS parameterisations (modified N0, HMF, and COSMIC) by assessing the effects of the number of sampling days and of soil wetness conditions on the performance of the calibration results, using actual neutron intensity measurements for three sites with distinct climate and land use: a semi-arid site, a temperate grassland, and a temperate forest. When calibrated with 1 year of data, both COSMIC and the modified N0 method performed better than HMF. The performance of COSMIC was remarkably good at the semi-arid site in the USA, while N0mod performed best at the two temperate sites in Germany. The successful performance of COSMIC at all three sites can be attributed to the benefits of explicitly resolving individual soil layers (which is not accounted for in the other two parameterisations). To better calibrate these parameterisations, we recommend that in situ soil samples be collected on more than a single day. However, little improvement is observed for sampling on more than 6 days. At the semi-arid site, the N0mod method was calibrated better under site-specific average wetness conditions, whereas HMF and COSMIC were calibrated better under drier conditions. Average soil wetness conditions gave better calibration results at the two humid sites. The calibration results for the HMF method were better when calibrated with combinations of days with similar soil wetness conditions, as opposed to N0mod and COSMIC, which profited from using days with distinct wetness conditions. Errors in actual neutron intensities were translated into site-specific average soil moisture errors. At the semi-arid site, these errors were below the typical measurement uncertainties of in situ point-scale sensors and satellite remote sensing products. Nevertheless, at the two humid sites, the reduction in uncertainty with increasing sampling days only reached the typical errors associated with satellite remote sensing products. The outcomes of this study can be used by researchers as a CRNS calibration strategy guideline.
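
    For orientation, the sketch below fits the free parameter N0 of the standard Desilets-type calibration shape to a handful of sampling days. The shape coefficients are the commonly published values, the neutron counts and field-average moistures are placeholders, and the study's modified N0 method and the HMF and COSMIC parameterisations are not reproduced.

    ```python
    # Fit N0 in the standard neutron-count-to-soil-moisture calibration shape.
    import numpy as np
    from scipy.optimize import curve_fit

    A0, A1, A2 = 0.0808, 0.372, 0.115        # standard shape coefficients

    def n0_soil_moisture(N, N0, bulk_density=1.4):
        """Volumetric soil moisture (m3/m3) from corrected neutron counts N."""
        return (A0 / (N / N0 - A1) - A2) * bulk_density

    # calibration: pair neutron counts with field-average soil moisture on sampling days
    neutrons = np.array([2450.0, 2600.0, 2750.0, 2900.0])    # placeholder counts
    theta_obs = np.array([0.33, 0.27, 0.22, 0.18])           # placeholder field averages
    (N0_fit,), _ = curve_fit(lambda N, N0: n0_soil_moisture(N, N0), neutrons, theta_obs, p0=[3500.0])
    print(f"fitted N0 = {N0_fit:.0f}")
    ```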

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
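
    The sampling setup described above can be checked numerically as sketched below: random mixed states are drawn by partial-tracing Haar-random bipartite pure states, and the relative entropy of coherence C(rho) = S(diag(rho)) - S(rho) is averaged. The dimension and sample count are illustrative; the tight spread around the mean illustrates the concentration behaviour.

    ```python
    # Average relative entropy of coherence over random mixed states (induced measure).
    import numpy as np

    def random_mixed_state(d, d_env, rng):
        """Partial trace over a d_env-dimensional environment of a Haar-random pure state."""
        psi = rng.standard_normal((d, d_env)) + 1j * rng.standard_normal((d, d_env))
        psi /= np.linalg.norm(psi)
        return psi @ psi.conj().T                      # rho = Tr_env |psi><psi|

    def entropy(p):
        p = p[p > 1e-12]
        return float(-(p * np.log2(p)).sum())

    def rel_entropy_coherence(rho):
        return entropy(np.real(np.diag(rho))) - entropy(np.linalg.eigvalsh(rho))

    rng = np.random.default_rng(4)
    d = 8
    samples = [rel_entropy_coherence(random_mixed_state(d, d, rng)) for _ in range(2000)]
    print(np.mean(samples), np.std(samples))           # concentration around the mean
    ```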

  14. Accuracy and Resolution Analysis of a Direct Resistive Sensor Array to FPGA Interface

    PubMed Central

    Oballe-Peinado, Óscar; Vidal-Verdú, Fernando; Sánchez-Durán, José A.; Castellanos-Ramos, Julián; Hidalgo-López, José A.

    2016-01-01

    Resistive sensor arrays are formed by a large number of individual sensors which are distributed in different ways. This paper proposes a direct connection between an FPGA and a resistive array distributed in M rows and N columns, without the need of analog-to-digital converters to obtain resistance values in the sensor and where the conditioning circuit is reduced to the use of a capacitor in each of the columns of the matrix. The circuit allows parallel measurements of the N resistors which form each of the rows of the array, eliminating the resistive crosstalk which is typical of these circuits. This is achieved by an addressing technique which does not require external elements to the FPGA. Although the typical resistive crosstalk between resistors which are measured simultaneously is eliminated, other elements that have an impact on the measurement of discharge times appear in the proposed architecture and, therefore, affect the uncertainty in resistance value measurements; these elements need to be studied. Finally, the performance of different calibration techniques is assessed experimentally on a discrete resistor array, obtaining for a new model of calibration, a maximum relative error of 0.066% in a range of resistor values which correspond to a tactile sensor. PMID:26840321

  15. Accuracy and Resolution Analysis of a Direct Resistive Sensor Array to FPGA Interface.

    PubMed

    Oballe-Peinado, Óscar; Vidal-Verdú, Fernando; Sánchez-Durán, José A; Castellanos-Ramos, Julián; Hidalgo-López, José A

    2016-02-01

    Resistive sensor arrays are formed by a large number of individual sensors which are distributed in different ways. This paper proposes a direct connection between an FPGA and a resistive array distributed in M rows and N columns, without the need of analog-to-digital converters to obtain resistance values in the sensor and where the conditioning circuit is reduced to the use of a capacitor in each of the columns of the matrix. The circuit allows parallel measurements of the N resistors which form each of the rows of the array, eliminating the resistive crosstalk which is typical of these circuits. This is achieved by an addressing technique which does not require external elements to the FPGA. Although the typical resistive crosstalk between resistors which are measured simultaneously is eliminated, other elements that have an impact on the measurement of discharge times appear in the proposed architecture and, therefore, affect the uncertainty in resistance value measurements; these elements need to be studied. Finally, the performance of different calibration techniques is assessed experimentally on a discrete resistor array, obtaining for a new model of calibration, a maximum relative error of 0.066% in a range of resistor values which correspond to a tactile sensor.
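
    The basic relation behind the discharge-time measurement described in the two records above can be sketched as an ideal RC discharge from the supply voltage to the digital input threshold. The paper's circuit contains additional elements that perturb this ideal relation (hence its calibration models), and the component values below are placeholders.

    ```python
    # Ideal RC estimate of sensor resistance from a measured discharge time.
    import math

    def resistance_from_discharge(t_discharge, c_farads, v0=3.3, v_th=1.4):
        """Ideal RC relation: t = R*C*ln(V0/Vth)  ->  R = t / (C*ln(V0/Vth))."""
        return t_discharge / (c_farads * math.log(v0 / v_th))

    # example: 860 us measured with a 100 nF column capacitor (placeholder values)
    print(f"{resistance_from_discharge(860e-6, 100e-9):.0f} ohms")   # ~10 kOhm
    ```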

  16. Case Assignment in Typically Developing English-Speaking Children: A Paired Priming Study

    ERIC Educational Resources Information Center

    Wisman Weil, Lisa Marie

    2013-01-01

    This study utilized a paired priming paradigm to examine the influence of input features on case assignment in typically developing English-speaking children. The Input Ambiguity Hypothesis (Pelham, 2011) was experimentally tested to help explain why children produce subject pronoun case errors. Analyses of third singular "-s" marking on…

  17. The Effects and Side-Effects of Statistics Education: Psychology Students' (Mis-)Conceptions of Probability

    ERIC Educational Resources Information Center

    Morsanyi, Kinga; Primi, Caterina; Chiesi, Francesca; Handley, Simon

    2009-01-01

    In three studies we looked at two typical misconceptions of probability: the representativeness heuristic, and the equiprobability bias. The literature on statistics education predicts that some typical errors and biases (e.g., the equiprobability bias) increase with education, whereas others decrease. This is in contrast with reasoning theorists'…

  18. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence/absence in sites, or on species-specific distributions of model predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence/absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.

  19. Primer ID Validates Template Sampling Depth and Greatly Reduces the Error Rate of Next-Generation Sequencing of HIV-1 Genomic RNA Populations

    PubMed Central

    Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr

    2015-01-01

    ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1 due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
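
    The cutoff logic can be sketched with a Poisson error model: given the measured residual error rate (about 1 in 10,000 nucleotides) and the number of template consensus sequences covering a position, the smallest variant count whose tail probability falls below a chosen significance level is taken as the detection threshold. The sample sizes and significance level below are illustrative.

    ```python
    # Poisson-based cutoff for calling low-abundance variants above the error floor.
    from scipy.stats import poisson

    def minority_variant_cutoff(n_templates, error_rate=1e-4, alpha=0.001):
        """Smallest count k such that P(X >= k) < alpha for X ~ Poisson(n*error_rate)."""
        lam = n_templates * error_rate
        k = 1
        while poisson.sf(k - 1, lam) >= alpha:   # sf(k-1) = P(X >= k)
            k += 1
        return k

    for n in (1_000, 10_000, 50_000):
        print(n, minority_variant_cutoff(n))
    ```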

  20. Different Neural Patterns Are Associated with Trials Preceding Inhibitory Errors in Children with and without Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Spinelli, Simona; Joel, Suresh; Nelson, Tess E.; Vasa, Roma A.; Pekar, James J.; Mostofsky, Stewart H.

    2011-01-01

    Objective: Attention-deficit/hyperactivity disorder (ADHD) is associated with difficulty inhibiting impulsive, hyperactive, and off-task behavior. However, no studies have examined whether a distinct pattern of brain activity precedes inhibitory errors in typically developing (TD) children and children with ADHD. In healthy adults, increased…

  1. An Intuitive Graphical Approach to Understanding the Split-Plot Experiment

    ERIC Educational Resources Information Center

    Robinson, Timothy J.; Brenneman, William A.; Myers, William R.

    2009-01-01

    While split-plot designs have received considerable attention in the literature over the past decade, there seems to be a general lack of intuitive understanding of the error structure of these designs and the resulting statistical analysis. Typically, students learn the proper error terms for testing factors of a split-plot design via "expected…

  2. The Different Time Course of Phonotactic Constraint Learning in Children and Adults: Evidence from Speech Errors

    ERIC Educational Resources Information Center

    Smalle, Eleonore H. M.; Muylle, Merel; Szmalec, Arnaud; Duyck, Wouter

    2017-01-01

    Speech errors typically respect the speaker's implicit knowledge of language-wide phonotactics (e.g., /t/ cannot be a syllable onset in the English language). Previous work demonstrated that adults can learn novel experimentally induced phonotactic constraints by producing syllable strings in which the allowable position of a phoneme depends on…

  3. Benchmarking observational uncertainties for hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.

    2013-12-01

    There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, to understand its information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), error characteristics measured (e.g. standard error, confidence bounds) and error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale, flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.

  4. Genetic and environmental influences on female sexual orientation, childhood gender typicality and adult gender identity.

    PubMed

    Burri, Andrea; Cherkas, Lynn; Spector, Timothy; Rahman, Qazi

    2011-01-01

    Human sexual orientation is influenced by genetic and non-shared environmental factors as are two important psychological correlates--childhood gender typicality (CGT) and adult gender identity (AGI). However, researchers have been unable to resolve the genetic and non-genetic components that contribute to the covariation between these traits, particularly in women. Here we performed a multivariate genetic analysis in a large sample of British female twins (N = 4,426) who completed a questionnaire assessing sexual attraction, CGT and AGI. Univariate genetic models indicated modest genetic influences on sexual attraction (25%), AGI (11%) and CGT (31%). For the multivariate analyses, a common pathway model best fitted the data. This indicated that a single latent variable influenced by a genetic component and common non-shared environmental component explained the association between the three traits but there was substantial measurement error. These findings highlight common developmental factors affecting differences in sexual orientation.

  5. Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage.

    PubMed

    Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A

    2018-05-15

    Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC MSE ) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
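
    The shrinkage idea can be illustrated with a toy sketch (a simplification, not the authors' measurement error model; the variance components are assumed known rather than estimated): each subject-level connectivity estimate is pulled toward the group mean with a weight equal to the share of between-subject variance in the total variance.

      import numpy as np

      def eb_shrink(subject_fc, var_within, var_between):
          """Shrink subject-level FC estimates toward the group mean.

          subject_fc  : (n_subjects, n_connections) array of raw FC estimates
          var_within  : within-subject (noise) variance of the raw estimates
          var_between : between-subject variance of true FC
          """
          group_mean = subject_fc.mean(axis=0)
          weight = var_between / (var_between + var_within)   # shrinkage weight in [0, 1]
          return weight * subject_fc + (1.0 - weight) * group_mean

      # Toy example: noisy estimates for 5 subjects and 3 connections.
      rng = np.random.default_rng(0)
      truth = rng.normal(0.3, 0.1, size=(5, 3))
      raw = truth + rng.normal(0.0, 0.2, size=truth.shape)
      print(eb_shrink(raw, var_within=0.04, var_between=0.01))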

  6. Comparing joint kinematics and center of mass acceleration as feedback for control of standing balance by functional neuromuscular stimulation.

    PubMed

    Nataraj, Raviraj; Audu, Musa L; Triolo, Ronald J

    2012-05-06

    The purpose of this study was to determine the comparative effectiveness of feedback control systems for maintaining standing balance based on joint kinematics or total body center of mass (COM) acceleration, and assess their clinical practicality for standing neuroprostheses after spinal cord injury (SCI). In simulation, controller performance was measured according to the upper extremity effort required to stabilize a three-dimensional model of bipedal standing against a variety of postural disturbances. Three cases were investigated: proportional-derivative control based on joint kinematics alone, COM acceleration feedback alone, and combined joint kinematics and COM acceleration feedback. Additionally, pilot data was collected during external perturbations of an individual with SCI standing with functional neuromuscular stimulation (FNS), and the resulting joint kinematics and COM acceleration data was analyzed. Compared to the baseline case of maximal constant muscle excitations, the three control systems reduced the mean upper extremity loading by 51%, 43% and 56%, respectively against external force-pulse perturbations. Controller robustness was defined as the degradation in performance with increasing levels of input errors expected with clinical deployment of sensor-based feedback. At error levels typical for body-mounted inertial sensors, performance degradation due to sensor noise and placement were negligible. However, at typical tracking error levels, performance could degrade as much as 86% for joint kinematics feedback and 35% for COM acceleration feedback. Pilot data indicated that COM acceleration could be estimated with a few well-placed sensors and efficiently captures information related to movement synergies observed during perturbed bipedal standing following SCI. Overall, COM acceleration feedback may be a more feasible solution for control of standing with FNS given its superior robustness and small number of inputs required.

  7. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution known as genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500 m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of important levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement errors are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offers an interesting alternative for high-precision GPS positioning.
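
    As a rough sketch of this type of approach (a generic genetic algorithm of my own, not Ramillien's implementation; the population size, rates and demo geometry are placeholders rather than the tuned values quoted above), the cost being minimized is the sum of squared pseudo-range residuals over the receiver coordinates and clock bias.

      import numpy as np

      def residuals(x, sat_pos, pseudoranges):
          """Cost: sum of squared differences between modeled and observed pseudo-ranges.
          x = (X, Y, Z, clock bias expressed in metres)."""
          ranges = np.linalg.norm(sat_pos - x[:3], axis=1) + x[3]
          return np.sum((ranges - pseudoranges) ** 2)

      def ga_position(sat_pos, pseudoranges, pop=200, gens=300,
                      p_cross=0.7, p_mut=0.3, span=1e7, seed=0):
          """Minimize the pseudo-range cost with selection, blend crossover and mutation."""
          rng = np.random.default_rng(seed)
          population = rng.uniform(-span, span, size=(pop, 4))
          for _ in range(gens):
              cost = np.array([residuals(ind, sat_pos, pseudoranges) for ind in population])
              parents = population[np.argsort(cost)[: pop // 2]]   # selection: keep the best half
              children = parents.copy()
              for child in children:                               # blend crossover
                  if rng.random() < p_cross:
                      mate = parents[rng.integers(len(parents))]
                      w = rng.random()
                      child[:] = w * child + (1 - w) * mate
              mutate = rng.random(children.shape) < p_mut          # Gaussian mutation
              children[mutate] += rng.normal(0.0, span * 1e-4, size=mutate.sum())
              population = np.vstack([parents, children])
          return min(population, key=lambda ind: residuals(ind, sat_pos, pseudoranges))

      # Synthetic demo: four satellites, pseudo-ranges built from an assumed true state.
      true_state = np.array([1.2e6, -4.7e6, 4.1e6, 30.0])
      sats = np.array([[1.5e7, 1.0e7, 2.0e7], [-1.0e7, 2.0e7, 1.5e7],
                       [2.0e7, -5.0e6, 1.8e7], [5.0e6, 1.8e7, 1.2e7]])
      obs = np.linalg.norm(sats - true_state[:3], axis=1) + true_state[3]
      best = ga_position(sats, obs)
      print(np.round(best), residuals(best, sats, obs))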

  8. Comparing joint kinematics and center of mass acceleration as feedback for control of standing balance by functional neuromuscular stimulation

    PubMed Central

    2012-01-01

    Background The purpose of this study was to determine the comparative effectiveness of feedback control systems for maintaining standing balance based on joint kinematics or total body center of mass (COM) acceleration, and assess their clinical practicality for standing neuroprostheses after spinal cord injury (SCI). Methods In simulation, controller performance was measured according to the upper extremity effort required to stabilize a three-dimensional model of bipedal standing against a variety of postural disturbances. Three cases were investigated: proportional-derivative control based on joint kinematics alone, COM acceleration feedback alone, and combined joint kinematics and COM acceleration feedback. Additionally, pilot data was collected during external perturbations of an individual with SCI standing with functional neuromuscular stimulation (FNS), and the resulting joint kinematics and COM acceleration data was analyzed. Results Compared to the baseline case of maximal constant muscle excitations, the three control systems reduced the mean upper extremity loading by 51%, 43% and 56%, respectively against external force-pulse perturbations. Controller robustness was defined as the degradation in performance with increasing levels of input errors expected with clinical deployment of sensor-based feedback. At error levels typical for body-mounted inertial sensors, performance degradation due to sensor noise and placement were negligible. However, at typical tracking error levels, performance could degrade as much as 86% for joint kinematics feedback and 35% for COM acceleration feedback. Pilot data indicated that COM acceleration could be estimated with a few well-placed sensors and efficiently captures information related to movement synergies observed during perturbed bipedal standing following SCI. Conclusions Overall, COM acceleration feedback may be a more feasible solution for control of standing with FNS given its superior robustness and small number of inputs required. PMID:22559852

  9. Astrometric Calibration and Performance of the Dark Energy Camera

    DOE PAGES

    Bernstein, G. M.; Armstrong, R.; Plazas, A. A.; ...

    2017-05-30

    We characterize the ability of the Dark Energy Camera (DECam) to perform relative astrometry across its 500 Mpix, 3 deg² science field of view, and across 4 years of operation. This is done using internal comparisons of ~4 x 10⁷ measurements of high-S/N stellar images obtained in repeat visits to fields of moderate stellar density, with the telescope dithered to move the sources around the array. An empirical astrometric model includes terms for: optical distortions; stray electric fields in the CCD detectors; chromatic terms in the instrumental and atmospheric optics; shifts in CCD relative positions of up to ≈10 μm when the DECam temperature cycles; and low-order distortions to each exposure from changes in atmospheric refraction and telescope alignment. Errors in this astrometric model are dominated by stochastic variations with typical amplitudes of 10-30 mas (in a 30 s exposure) and 5-10 arcmin coherence length, plausibly attributed to Kolmogorov-spectrum atmospheric turbulence. The size of these atmospheric distortions is not closely related to the seeing. Given an astrometric reference catalog at density ≈0.7 arcmin⁻², e.g. from Gaia, the typical atmospheric distortions can be interpolated to ≈7 mas RMS accuracy (for 30 s exposures) with 1 arcmin coherence length for residual errors. Remaining detectable error contributors are 2-4 mas RMS from unmodelled stray electric fields in the devices, and another 2-4 mas RMS from focal plane shifts between camera thermal cycles. Thus the astrometric solution for a single DECam exposure is accurate to 3-6 mas (≈0.02 pixels, or ≈300 nm) on the focal plane, plus the stochastic atmospheric distortion.

  10. Internal consistency tests for evaluation of measurements of anthropogenic hydrocarbons in the troposphere

    NASA Astrophysics Data System (ADS)

    Parrish, D. D.; Trainer, M.; Young, V.; Goldan, P. D.; Kuster, W. C.; Jobson, B. T.; Fehsenfeld, F. C.; Lonneman, W. A.; Zika, R. D.; Farmer, C. T.; Riemer, D. D.; Rodgers, M. O.

    1998-09-01

    Measurements of tropospheric nonmethane hydrocarbons (NMHCs) made in continental North America should exhibit a common pattern determined by photochemical removal and dilution acting upon the typical North American urban emissions. We analyze 11 data sets collected in the United States in the context of this hypothesis, in most cases by analyzing the geometric mean and standard deviations of ratios of selected NMHCs. In the analysis we attribute deviations from the common pattern to plausible systematic and random experimental errors. In some cases the errors have been independently verified and the specific causes identified. Thus this common pattern provides a check for internal consistency in NMHC data sets. Specific tests are presented which should provide useful diagnostics for all data sets of anthropogenic NMHC measurements collected in the United States. Similar tests, based upon the perhaps different emission patterns of other regions, presumably could be developed. The specific tests include (1) a lower limit for ethane concentrations, (2) specific NMHCs that should be detected if any are, (3) the relatively constant mean ratios of the longer-lived NMHCs with similar atmospheric lifetimes, (4) the constant relative patterns of families of NMHCs, and (5) limits on the ambient variability of the NMHC ratios. Many experimental problems are identified in the literature and the Southern Oxidant Study data sets. The most important conclusion of this paper is that a rigorous field intercomparison of simultaneous measurements of ambient NMHCs by different techniques and researchers is of crucial importance to the field of atmospheric chemistry. The tests presented here are suggestive of errors but are not definitive; only a field intercomparison can resolve the uncertainties.
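
    One of the simplest of these consistency checks, the geometric mean and geometric standard deviation of the ratio of two long-lived NMHCs across samples, can be sketched as follows (the species pair and mixing ratios are placeholders, not data from the study).

      import numpy as np

      def geometric_ratio_stats(x, y):
          """Geometric mean and geometric standard deviation of the ratio x/y.

          Computed on log-transformed ratios, as is conventional for
          approximately lognormally distributed hydrocarbon ratios.
          """
          log_ratio = np.log(np.asarray(x) / np.asarray(y))
          return float(np.exp(log_ratio.mean())), float(np.exp(log_ratio.std(ddof=1)))

      # Placeholder mixing ratios (pptv) for two hydrocarbons with similar lifetimes.
      benzene = [120.0, 95.0, 150.0, 80.0, 110.0]
      acetylene = [240.0, 200.0, 310.0, 150.0, 230.0]
      print(geometric_ratio_stats(benzene, acetylene))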

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fetterly, K; Mathew, V

    Purpose: Transcatheter aortic valve replacement (TAVR) procedures provide a method to implant a prosthetic aortic valve via a minimally invasive, catheter-based procedure. TAVR procedures require use of interventional fluoroscopy c-arm projection angles which are aligned with the aortic valve plane to minimize prosthetic valve positioning error due to x-ray imaging parallax. The purpose of this work is to calculate the continuous range of interventional fluoroscopy c-arm projection angles which are aligned with the aortic valve plane from a single planar image of a valvuloplasty balloon inflated across the aortic valve. Methods: Computational methods to measure the 3D angular orientation of the aortic valve were developed. Required inputs include a planar x-ray image of a known valvuloplasty balloon inflated across the aortic valve and specifications of x-ray imaging geometry from the DICOM header of the image. A-priori knowledge of the species-specific typical range of aortic orientation is required to specify the sign of the angle of the long axis of the balloon with respect to the x-ray beam. The methods were validated ex-vivo and in a live pig. Results: Ex-vivo experiments demonstrated that the angular orientation of a stationary inflated valvuloplasty balloon can be measured with precision less than 1 degree. In-vivo pig experiments demonstrated that cardiac motion contributed to measurement variability, with precision less than 3 degrees. Error in specification of x-ray geometry directly influences measurement accuracy. Conclusion: This work demonstrates that the 3D angular orientation of the aortic valve can be calculated precisely from a planar image of a valvuloplasty balloon inflated across the aortic valve and known x-ray geometry. This method could be used to determine appropriate c-arm angular projections during TAVR procedures to minimize x-ray imaging parallax and thereby minimize prosthetic valve positioning errors.

  12. The content of lexical stimuli and self-reported physiological state modulate error-related negativity amplitude.

    PubMed

    Benau, Erik M; Moelter, Stephen T

    2016-09-01

    The Error-Related Negativity (ERN) and Correct-Response Negativity (CRN) are brief event-related potential (ERP) components, elicited after the commission of a response, that are associated with motivation, emotion, and affect. The Error Positivity (Pe) typically appears after the ERN, and corresponds to awareness of having committed an error. Although motivation has long been established as an important factor in the expression and morphology of the ERN, physiological state has rarely been explored as a variable in these investigations. In the present study, we investigated whether self-reported physiological state (SRPS; wakefulness, hunger, or thirst) corresponds with ERN amplitude and type of lexical stimuli. Participants completed an SRPS questionnaire and then completed a speeded Lexical Decision Task with words and pseudowords that were either food-related or neutral. Though similar in frequency and length, food-related stimuli elicited increased accuracy and faster errors, and generated a larger ERN and smaller CRN than neutral words. Self-reported thirst correlated with improved accuracy and smaller ERN and CRN amplitudes. The Pe and Pc (correct positivity) were not impacted by physiological state or by stimulus content. The results indicate that physiological state and manipulations of lexical content may serve as important avenues for future research. Applying more sensitive measures of physiological and motivational state (e.g., biomarkers for satiety) or direct manipulations of satiety may be a useful approach for future research into response monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Evaluation of Mycology Laboratory Proficiency Testing

    PubMed Central

    Reilly, Andrew A.; Salkin, Ira F.; McGinnis, Michael R.; Gromadzki, Sally; Pasarell, Lester; Kemna, Maggi; Higgins, Nancy; Salfinger, Max

    1999-01-01

    Changes over the last decade in overt proficiency testing (OPT) regulations have been ostensibly directed at improving laboratory performance on patient samples. However, the overt (unblinded) format of the tests and regulatory penalties associated with incorrect values allow and encourage laboratorians to take extra precautions with OPT analytes. As a result OPT may measure optimal laboratory performance instead of the intended target of typical performance attained during routine patient testing. This study addresses this issue by evaluating medical mycology OPT and comparing its fungal specimen identification error rates to those obtained in a covert (blinded) proficiency testing (CPT) program. Identifications from 188 laboratories participating in the New York State mycology OPT from 1982 to 1994 were compared with the identifications of the same fungi recovered from patient specimens in 1989 and 1994 as part of the routine procedures of 88 of these laboratories. The consistency in the identification of OPT specimens was sufficient to make accurate predictions of OPT error rates. However, while the error rates in OPT and CPT were similar for Candida albicans, significantly higher error rates were found in CPT for Candida tropicalis, Candida glabrata, and other common pathogenic fungi. These differences may, in part, be due to OPT’s use of ideal organism representatives cultured under optimum growth conditions. This difference, as well as the organism-dependent error rate differences, reflects the limitations of OPT as a means of assessing the quality of routine laboratory performance in medical mycology. PMID:10364601

  14. [Methodologic developmental principles of standardized surveys within the scope of social gerontologic studies].

    PubMed

    Bansemir, G

    1987-01-01

    The conception and evaluation of standardized oral or written questioning as quantifying research instruments are oriented by the basic premises of Marxist-Leninist epistemology and general scientific logic. In the present contribution the socio-gerontological research process is outlined in extracts. By referring to the intrinsic connection between some of its essential components--problem, formation of hypotheses, derivation of indicators/measurement, preliminary examination, evaluation--as well as to typical errors and (fictitious) examples of practical research, the contribution contrasts the seemingly natural, uncomplicated course of structured questioning with its qualitative methodological fundamentals and demands.

  15. Scattering, Adsorption, and Langmuir-Hinshelwood Desorption Models for Physisorptive and Chemisorptive Gas-Surface Systems

    DTIC Science & Technology

    2013-09-01

    ... respectively, and Φ_Qw is the reflected flux of Q at complete accommodation. Typical properties of Q are the tangential momentum mc_t and the normal momentum mc_n ... the tangential momentum accommodation coefficient is defined as σ_t ≡ (mc_t − mc′_t)/(mc_t − mc_w) = (c_t − c′_t)/c_t, where c′_t is the post-collisional tangential speed and c_w is the speed of the wall ... was measured within an experimental error of ±0.02, and the coverage θ noise level was about 0.01 ML. MKS simulations are compared with data in ...

  16. Error, rather than its probability, elicits specific electrocortical signatures: a combined EEG-immersive virtual reality study of action observation.

    PubMed

    Pezzetta, Rachele; Nicolardi, Valentina; Tidoni, Emmanuele; Aglioti, Salvatore Maria

    2018-06-06

    Detecting errors in one's own actions, and in the actions of others, is a crucial ability for adaptable and flexible behavior. Studies show that specific EEG signatures underpin the monitoring of observed erroneous actions (error-related negativity, error-positivity, mid-frontal theta oscillations). However, the majority of studies on action observation used sequences of trials where erroneous actions were less frequent than correct actions. Therefore, it was not possible to disentangle whether the activation of the performance monitoring system was due to an error - as a violation of the intended goal - or a surprise/novelty effect, associated with a rare and unexpected event. Combining EEG and immersive virtual reality (IVR-CAVE system), we recorded the neural signal of 25 young adults who observed in first-person perspective, simple reach-to-grasp actions performed by an avatar aiming for a glass. Importantly, the proportion of erroneous actions was higher than correct actions. Results showed that the observation of erroneous actions elicits the typical electro-cortical signatures of error monitoring and therefore the violation of the action goal is still perceived as a salient event. The observation of correct actions elicited stronger alpha suppression. This confirmed the role of the alpha frequency band in the general orienting response to novel and infrequent stimuli. Our data provides novel evidence that an observed goal error (the action slip) triggers the activity of the performance monitoring system even when erroneous actions, which are, typically, relevant events, occur more often than correct actions and thus are not salient because of their rarity.

  17. A statistical study of radio-source structure effects on astrometric very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.

    1989-01-01

    Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 x 10⁻¹⁵ s s⁻¹ (or approx. 0.002 mm s⁻¹) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.

  18. Solar concentrator panel and gore testing in the JPL 25-foot space simulator

    NASA Technical Reports Server (NTRS)

    Dennison, E. W.; Argoud, M. J.

    1981-01-01

    The optical imaging characteristics of parabolic solar concentrator panels (or gores) have been measured using the optical beam of the JPL 25-foot space simulator. The simulator optical beam has been characterized, and the virtual source position and size have been determined. These data were used to define the optical test geometry. The point source image size and focal length have been determined for several panels. A flux distribution of a typical solar concentrator has been estimated from these data. Aperture photographs of the panels were used to determine the magnitude and characteristics of the reflecting surface errors. This measurement technique has proven to be highly successful at determining the optical characteristics of solar concentrator panels.

  19. Fault Identification Based on Nlpca in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping; Zhang, Jinfang

    2012-07-01

    Faults are inevitable in any complex engineered system. The electric power system is essentially a typical nonlinear system and one of the most complex artificial systems in the world. In our research, based on real-time measurements from phasor measurement units and under the influence of white Gaussian noise (standard deviation 0.01, zero mean), we mainly used nonlinear principal component analysis (NLPCA) to solve the fault identification problem in complex electrical engineering. The simulation results show that the fault usually corresponds to the variable with the maximum absolute coefficient in the first principal component. This work has significant theoretical value and practical engineering significance.
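
    A linear PCA stand-in for the paper's NLPCA (my own simplification; the data and the injected fault are synthetic) shows the idea of flagging the measurement with the largest absolute loading in the first principal component.

      import numpy as np

      def flag_fault_variable(measurements):
          """Index of the variable with the largest absolute loading in the first
          principal component of the measurement matrix.

          measurements : (n_samples, n_variables) array of PMU readings.
          """
          centered = measurements - measurements.mean(axis=0)
          _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt are PCs
          return int(np.argmax(np.abs(vt[0])))

      # Toy example: 4 measurements with white Gaussian noise (sd = 0.01);
      # a fault-like drift is injected into variable 2.
      rng = np.random.default_rng(1)
      data = rng.normal(1.0, 0.01, size=(200, 4))
      data[100:, 2] += np.linspace(0.0, 0.5, 100)
      print(flag_fault_variable(data))   # expected: 2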

  20. W-Band Circularly Polarized TE11 Mode Transducer

    NASA Astrophysics Data System (ADS)

    Zhan, Mingzhou; He, Wangdong; Wang, Lei

    2018-06-01

    This paper presents a balanced sidewall exciting approach to realize the circularly polarized TE11 mode transducer. We used a voltage vector transfer matrix to establish the relationship between input and output vectors, and then analyzed amplitude and phase errors to estimate the isolation of the degenerate mode. A mode transducer with a sidewall exciter was designed based on the results. In the 88-100 GHz frequency range, the simulated axial ratio is less than 1.05 and the isolation of the linearly polarized TE11 mode is higher than 30 dBc. In back-to-back measurements, the return loss is generally greater than 20 dB with a typical insertion loss of 1.2 dB. Back-to-back transmission measurements are in excellent agreement with simulations.

  1. W-Band Circularly Polarized TE11 Mode Transducer

    NASA Astrophysics Data System (ADS)

    Zhan, Mingzhou; He, Wangdong; Wang, Lei

    2018-04-01

    This paper presents a balanced sidewall exciting approach to realize the circularly polarized TE11 mode transducer. We used a voltage vector transfer matrix to establish the relationship between input and output vectors, and then analyzed amplitude and phase errors to estimate the isolation of the degenerate mode. A mode transducer with a sidewall exciter was designed based on the results. In the 88-100 GHz frequency range, the simulated axial ratio is less than 1.05 and the isolation of the linearly polarized TE11 mode is higher than 30 dBc. In back-to-back measurements, the return loss is generally greater than 20 dB with a typical insertion loss of 1.2 dB. Back-to-back transmission measurements are in excellent agreement with simulations.

  2. Measuring the spin of black holes in binary systems using gravitational waves.

    PubMed

    Vitale, Salvatore; Lynch, Ryan; Veitch, John; Raymond, Vivien; Sturani, Riccardo

    2014-06-27

    Compact binary coalescences are the most promising sources of gravitational waves (GWs) for ground-based detectors. Binary systems containing one or two spinning black holes are particularly interesting due to spin-orbit (and eventual spin-spin) interactions and the opportunity of measuring spins directly through GW observations. In this Letter, we analyze simulated signals emitted by spinning binaries with several values of masses, spins, orientations, and signal-to-noise ratios, as detected by an advanced LIGO-Virgo network. We find that for moderate or high signal-to-noise ratio the spin magnitudes can be estimated with errors of a few percent (5%-30%) for neutron star-black hole (black hole-black hole) systems. Spins' tilt angle can be estimated with errors of 0.04 rad in the best cases, but typical values will be above 0.1 rad. Errors will be larger for signals barely above the threshold for detection. The difference in the azimuth angles of the spins, which may be used to check if spins are locked into resonant configurations, cannot be constrained. We observe that the best performances are obtained when the line of sight is perpendicular to the system's total angular momentum and that a sudden change of behavior occurs when a system is observed from angles such that the plane of the orbit can be seen both from above and below during the time the signal is in band. This study suggests that direct measurement of black hole spin by means of GWs can be as precise as what can be obtained from x-ray binaries.

  3. Evaluation of Natural Language Processing (NLP) Systems to Annotate Drug Product Labeling with MedDRA Terminology.

    PubMed

    Ly, Thomas; Pamer, Carol; Dang, Oanh; Brajovic, Sonja; Haider, Shahrukh; Botsis, Taxiarchis; Milward, David; Winter, Andrew; Lu, Susan; Ball, Robert

    2018-05-31

    The FDA Adverse Event Reporting System (FAERS) is a primary data source for identifying unlabeled adverse events (AEs) in a drug or biologic drug product's postmarketing phase. Many AE reports must be reviewed by drug safety experts to identify unlabeled AEs, even if the reported AEs are previously identified, labeled AEs. Integrating the labeling status of drug product AEs into FAERS could increase report triage and review efficiency. Medical Dictionary for Regulatory Activities (MedDRA) is the standard for coding AE terms in FAERS cases. However, drug manufacturers are not required to use MedDRA to describe AEs in product labels. We hypothesized that natural language processing (NLP) tools could assist in automating the extraction and MedDRA mapping of AE terms in drug product labels. We evaluated the performance of three NLP systems, (ETHER, I2E, MetaMap) for their ability to extract AE terms from drug labels and translate the terms to MedDRA Preferred Terms (PTs). Pharmacovigilance-based annotation guidelines for extracting AE terms from drug labels were developed for this study. We compared each system's output to MedDRA PT AE lists, manually mapped by FDA pharmacovigilance experts using the guidelines, for ten drug product labels known as the "gold standard AE list" (GSL) dataset. Strict time and configuration conditions were imposed in order to test each system's capabilities under conditions of no human intervention and minimal system configuration. Each NLP system's output was evaluated for precision, recall and F measure in comparison to the GSL. A qualitative error analysis (QEA) was conducted to categorize a random sample of each NLP system's false positive and false negative errors. A total of 417, 278, and 250 false positive errors occurred in the ETHER, I2E, and MetaMap outputs, respectively. A total of 100, 80, and 187 false negative errors occurred in ETHER, I2E, and MetaMap outputs, respectively. Precision ranged from 64% to 77%, recall from 64% to 83% and F measure from 67% to 79%. I2E had the highest precision (77%), recall (83%) and F measure (79%). ETHER had the lowest precision (64%). MetaMap had the lowest recall (64%). The QEA found that the most prevalent false positive errors were context errors such as "Context error/General term", "Context error/Instructions or monitoring parameters", "Context error/Medical history preexisting condition underlying condition risk factor or contraindication", and "Context error/AE manifestations or secondary complication". The most prevalent false negative errors were in the "Incomplete or missed extraction" error category. Missing AE terms were typically due to long terms, or terms containing non-contiguous words which do not correspond exactly to MedDRA synonyms. MedDRA mapping errors were a minority of errors for ETHER and I2E but were the most prevalent false positive errors for MetaMap. The results demonstrate that it may be feasible to use NLP tools to extract and map AE terms to MedDRA PTs. However, the NLP tools we tested would need to be modified or reconfigured to lower the error rates to support their use in a regulatory setting. Tools specific for extracting AE terms from drug labels and mapping the terms to MedDRA PTs may need to be developed to support pharmacovigilance. Conducting research using additional NLP systems on a larger, diverse GSL would also be informative. Copyright © 2018. Published by Elsevier Inc.
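
    The reported precision, recall and F measure are simple functions of the true positive, false positive and false negative counts; a minimal sketch (the example counts are hypothetical, not the per-system totals from the study):

      def prf(tp, fp, fn):
          """Precision, recall and F1 for extracted-and-mapped AE terms."""
          precision = tp / (tp + fp)
          recall = tp / (tp + fn)
          f1 = 2 * precision * recall / (precision + recall)
          return precision, recall, f1

      # Hypothetical counts for one system against a gold standard AE list.
      print(prf(tp=400, fp=120, fn=85))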

  4. Comparison between a typical and a simplified model for blast load-induced structural response

    NASA Astrophysics Data System (ADS)

    Abd-Elhamed, A.; Mahmoud, S.

    2017-02-01

    As explosive blasts continue to cause severe damage and casualties in both civil and military environments, there is a pressing need to understand the behavior of structural elements under such extremely short-duration dynamic loads. Due to the complexity of the typical blast pressure profile model, and in order to reduce modelling and computational effort, the simplified triangular model of the blast load profile is used to analyze structural response. This simplified model considers only the positive phase and ignores the suction phase that characterizes the typical profile. The closed-form solution of the equation of motion with the blast load as a forcing term, modelled with either the typical or the simplified profile, has been derived. The two approaches are compared using results from response simulations of a building structure under an applied blast load, and the error of the simplified model with respect to the typical one is computed. In general, both the simplified and the typical models can reproduce the dynamic blast-induced response of building structures. However, the simplified model shows remarkably different response behavior compared to the typical one, despite its simplicity and its use of only the positive phase to represent the explosive load. The prediction of dynamic system responses using the simplified model is therefore not satisfactory, owing to the larger errors compared with the responses obtained using the typical model.
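
    To make the comparison concrete, a minimal single-degree-of-freedom sketch (my own illustration, not the paper's closed-form derivation) numerically integrates m*x'' + c*x' + k*x = F(t) under the simplified triangular pulse, which decays linearly from a peak force to zero over the positive-phase duration and ignores the suction phase.

      import numpy as np

      def triangular_blast(t, peak_force, t_d):
          """Simplified blast load: linear decay from peak_force to 0 over t_d, no suction phase."""
          return np.where(t < t_d, peak_force * (1.0 - t / t_d), 0.0)

      def sdof_response(m, c, k, peak_force, t_d, t_end=0.5, dt=1e-4):
          """Explicit time integration of m*x'' + c*x' + k*x = F(t) from rest."""
          t = np.arange(0.0, t_end, dt)
          f = triangular_blast(t, peak_force, t_d)
          x, v, a = np.zeros_like(t), np.zeros_like(t), np.zeros_like(t)
          a[0] = (f[0] - c * v[0] - k * x[0]) / m
          for i in range(len(t) - 1):
              v_half = v[i] + 0.5 * dt * a[i]
              x[i + 1] = x[i] + dt * v_half
              a[i + 1] = (f[i + 1] - c * v_half - k * x[i + 1]) / m
              v[i + 1] = v_half + 0.5 * dt * a[i + 1]
          return t, x

      # Toy system: 1000 kg mass, 5% damping, 10 kN peak over a 20 ms positive phase.
      m, k = 1000.0, 4.0e5
      c = 2 * 0.05 * np.sqrt(k * m)
      t, x = sdof_response(m, c, k, peak_force=1.0e4, t_d=0.02)
      print(f"peak displacement: {x.max():.4f} m")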

  5. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^-(dn-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  6. The influence of outliers on results of wet deposition measurements as a function of measurement strategy

    NASA Astrophysics Data System (ADS)

    Slanina, J.; Möls, J. J.; Baard, J. H.

    The results of a wet deposition monitoring experiment, carried out with eight identical wet-only precipitation samplers operating on the basis of 24 h samples, have been used to investigate the accuracy and uncertainties of wet deposition measurements. The experiment was conducted near Lelystad, The Netherlands, over the period 1 March 1983-31 December 1985. By rearranging the data for one to eight samplers and sampling periods of 1 day to 1 month, both systematic and random errors were investigated as a function of measuring strategy. A Gaussian distribution of the results was observed. Outliers, detected by a Dixon test (α = 0.05), strongly influenced both the yearly averaged results and the standard deviation of this average as a function of the number of samplers and the length of the sampling period. Using one sampler, the systematic bias typically varies from 2 to 20% for bulk elements and from 10 to 500% for trace elements. Severe problems are encountered in the case of Zn, Cu, Cr, Ni and especially Cd. For the sensitive detection of trends, more than one sampler per measuring station is generally necessary, as the relative standard deviation of the yearly averaged wet deposition is typically 10-20% for one sampler. Using three identical samplers, trends of, e.g., 3% per year will generally be detected within 6 years.
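
    The outlier screen referred to here is the single-outlier Dixon Q test; a small sketch (critical values are the commonly tabulated α = 0.05 values, quoted approximately) flags the most extreme value among parallel samplers.

      # Approximate Dixon Q critical values at alpha = 0.05 for n = 3..10.
      Q_CRIT_05 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625,
                   7: 0.568, 8: 0.526, 9: 0.493, 10: 0.466}

      def dixon_outlier(values):
          """Return the suspected outlier if Q = gap/range exceeds the critical value, else None."""
          x = sorted(values)
          n = len(x)
          if n not in Q_CRIT_05 or x[-1] == x[0]:
              return None
          q_low = (x[1] - x[0]) / (x[-1] - x[0])     # test the smallest value
          q_high = (x[-1] - x[-2]) / (x[-1] - x[0])  # test the largest value
          q, candidate = max((q_low, x[0]), (q_high, x[-1]))
          return candidate if q > Q_CRIT_05[n] else None

      # Eight parallel samplers; one implausibly high daily deposition value.
      print(dixon_outlier([1.8, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 9.0]))   # -> 9.0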

  7. Leaf vein length per unit area is not intrinsically dependent on image magnification: avoiding measurement artifacts for accuracy and precision.

    PubMed

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-10-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. © 2014 American Society of Plant Biologists. All Rights Reserved.

  8. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow us to acquire images typically up to about 1 million frames s-1 for a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve these performances, an interesting one is the so-called in situ storage image sensor architecture where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and does not contain movable devices as occurs, for instance, in the rotating mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction) since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such kinds of cameras in full-field deformation measurement and identify the best operative conditions which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera first using uniform scenes and then grids under rigid movements. The grid method was used as full-field measurement optical technique here. From these tests, it has been possible to appropriately identify the camera behaviour and utilize this information to improve actual measurements.

  9. Consequences of kriging and land use regression for PM2.5 predictions in epidemiologic analyses: Insights into spatial variability using high-resolution satellite data

    PubMed Central

    Alexeeff, Stacey E.; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A.

    2016-01-01

    Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1km x 1km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R2 yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with greater than 0.9 out-of-sample R2 yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the standard errors. Land use regression models performed better in chronic effects simulations. These results can help researchers when interpreting health effect estimates in these types of studies. PMID:24896768
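
    A stripped-down version of this kind of simulation (purely illustrative; the exposure surface and health model below are synthetic, not the satellite-derived surface used in the study) generates a true exposure, a noisy spatial prediction of it, and compares the health-effect slope recovered from each.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 5000
      beta_true = 0.10                    # true health effect per unit exposure

      true_exposure = rng.gamma(shape=4.0, scale=3.0, size=n)          # "true" PM2.5
      # Predicted exposure: a smoothed version of the truth plus spatial model error.
      predicted = (0.7 * true_exposure + 0.3 * true_exposure.mean()
                   + rng.normal(0.0, 2.0, size=n))
      outcome = 5.0 + beta_true * true_exposure + rng.normal(0.0, 1.0, size=n)

      def ols_slope(x, y):
          x = x - x.mean()
          return float(np.dot(x, y - y.mean()) / np.dot(x, x))

      print("slope using true exposure:     ", round(ols_slope(true_exposure, outcome), 3))
      print("slope using predicted exposure:", round(ols_slope(predicted, outcome), 3))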

  10. Information management in DNA replication modeled by directional, stochastic chains with memory

    NASA Astrophysics Data System (ADS)

    Arias-Gonzalez, J. Ricardo

    2016-11-01

    Stochastic chains represent a key variety of phenomena in many branches of science within the context of information theory and thermodynamics. They are typically approached by a sequence of independent events or by a memoryless Markov process. Stochastic chains are of special significance to molecular biology, where genes are conveyed by linear polymers made up of molecular subunits and transferred from DNA to proteins by specialized molecular motors in the presence of errors. Here, we demonstrate that when memory is introduced, the statistics of the chain depends on the mechanism by which objects or symbols are assembled, even in the slow dynamics limit wherein friction can be neglected. To analyze these systems, we introduce a sequence-dependent partition function, investigate its properties, and compare it to the standard normalization defined by the statistical physics of ensembles. We then apply this theory to characterize the enzyme-mediated information transfer involved in DNA replication under real, non-equilibrium conditions, reproducing measured error rates and explaining the typical 100-fold increase in fidelity that is experimentally found when proofreading and editing take place. Our model further predicts that approximately 1 kT has to be consumed to raise fidelity by one order of magnitude. We anticipate that our results are necessary to interpret configurational order and information management in many molecular systems within biophysics, materials science, communication, and engineering.

  11. The link evaluation terminal for the advanced communications technology satellite experiments program

    NASA Technical Reports Server (NTRS)

    May, Brian D.

    1992-01-01

    The experimental NASA satellite, Advanced Communications Technology Satellite (ACTS), introduces new technology for high throughput 30 to 20 GHz satellite services. Contained in a single communication payload is both a regenerative TDMA system and multiple 800 MHz 'bent pipe' channels routed to spot beams by a switch matrix. While only one mode of operation is typical during any experiment, both modes can operate simultaneously with reduced capability due to sharing of the transponder. NASA-Lewis instituted a ground terminal development program in anticipation of the satellite launch to verify the performance of the switch matrix mode of operations. Specific functions are built into the ground terminal to evaluate rain fade compensation with uplink power control and to monitor satellite transponder performance with bit error rate measurements. These functions were the genesis of the ground terminal's name, Link Evaluation Terminal, often referred to as LET. Connectors are included in LET that allow independent experimenters to run unique modulation or network experiments through ACTS using only the RF transmit and receive portions of LET. Test data indicate that LET will be able to verify important parts of ACTS technology and provide independent experimenters with a useful ground terminal. Lab measurements of major subsystems integrated into LET are presented. Bit error rate is measured with LET in an internal loopback mode.

  12. Effect of Multiple Testing Adjustment in Differential Item Functioning Detection

    ERIC Educational Resources Information Center

    Kim, Jihye; Oshima, T. C.

    2013-01-01

    In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. As a test consists of multiple items, such multiple testing may increase the possibility of making a Type I error at least once. The goal of this study was to investigate how to control a Type I error rate and power using adjustment…
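
    For example, the kind of adjustment at issue can be sketched with Bonferroni and Benjamini-Hochberg corrections applied to per-item DIF p-values (the p-values below are hypothetical).

      import numpy as np

      def bonferroni(pvals, alpha=0.05):
          """Reject H0 for items whose p-value is below alpha divided by the number of items."""
          p = np.asarray(pvals)
          return p < alpha / len(p)

      def benjamini_hochberg(pvals, alpha=0.05):
          """Step-up FDR procedure: reject the k smallest p-values, where k is the
          largest index with p_(k) <= (k/m) * alpha."""
          p = np.asarray(pvals)
          m = len(p)
          order = np.argsort(p)
          passed = p[order] <= alpha * (np.arange(1, m + 1) / m)
          k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
          reject = np.zeros(m, dtype=bool)
          reject[order[:k]] = True
          return reject

      # Hypothetical per-item DIF p-values for a 10-item test.
      pvals = [0.001, 0.004, 0.012, 0.03, 0.04, 0.20, 0.35, 0.50, 0.72, 0.90]
      print(bonferroni(pvals))
      print(benjamini_hochberg(pvals))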

  13. High-frequency signal and noise estimates of CSR GRACE RL04

    NASA Astrophysics Data System (ADS)

    Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.

    2012-12-01

    A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins looked at here are near 1.0. Comparisons with the GLDAS hydrological model and high frequency GRACE series developed at other centers confirm CSR GRACE RL04's poor ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, due to the low power of the large-scale hydrological signal typical at those frequencies compared to the GRACE errors.

  14. On the apparent velocity of integrated sunlight. I - 1983-1985

    NASA Technical Reports Server (NTRS)

    Deming, Drake; Espenak, Fred; Jennings, Donald E.; Brault, James W.; Wagner, Jeremy

    1987-01-01

    Frequency measurements for the Delta V = 2 transitions of CO in the integrated light spectrum of the sun are presented. The nature and magnitude of systematic errors which typically arise in absolute velocity measurements of integrated sunlight are explored in some detail, and measurements believed accurate at the level of about 5 m/s or less are presented. It is found that the integrated light velocity varies by about 3 m/s or less over a one-day period. Over the long term, the data indicate an increasing blue-shift in these weak infrared lines amounting to 30 m/s from 1983 to 1985. The sense of the drift is consistent with a lessening in the magnetic inhibition of granular convection at solar minimum. Such an effect has implications for the spectroscopic detectability of planetary-mass companions to solar-type stars.

  15. Context matters: the structure of task goals affects accuracy in multiple-target visual search.

    PubMed

    Clark, Kait; Cain, Matthew S; Adcock, R Alison; Mitroff, Stephen R

    2014-05-01

    Career visual searchers such as radiologists and airport security screeners strive to conduct accurate visual searches, but despite extensive training, errors still occur. A key difference between searches in radiology and airport security is the structure of the search task: Radiologists typically scan a certain number of medical images (fixed objective), and airport security screeners typically search X-rays for a specified time period (fixed duration). Might these structural differences affect accuracy? We compared performance on a search task administered either under constraints that approximated radiology or airport security. Some displays contained more than one target because the presence of multiple targets is an established source of errors for career searchers, and accuracy for additional targets tends to be especially sensitive to contextual conditions. Results indicate that participants searching within the fixed objective framework produced more multiple-target search errors; thus, adopting a fixed duration framework could improve accuracy for career searchers. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2012-01-01

    The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
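
    A generic illustration of least-squares gradient reconstruction at a node (a linear fit for brevity, whereas the edge-based scheme discussed above uses an unweighted quadratic fit) is sketched below.

      import numpy as np

      def ls_gradient(node_xy, node_u, nbr_xy, nbr_u):
          """Unweighted least-squares gradient of u at a node from its edge neighbors.

          Solves min over g of sum_j ( u_j - u_0 - g . (x_j - x_0) )^2  (linear fit).
          """
          dX = np.asarray(nbr_xy) - np.asarray(node_xy)   # (n_neighbors, 2) offsets
          du = np.asarray(nbr_u) - node_u                 # (n_neighbors,) differences
          grad, *_ = np.linalg.lstsq(dX, du, rcond=None)
          return grad

      # Verification on a linear field u = 3x - 2y with irregular neighbor placement.
      nbrs = [(0.9, 0.1), (-0.3, 1.2), (-1.1, -0.2), (0.2, -0.8)]
      u = [3 * x - 2 * y for x, y in nbrs]
      print(ls_gradient((0.0, 0.0), 0.0, nbrs, u))   # approx. [ 3. -2.]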

  17. Two-dimensional simulation of eccentric photorefraction images for ametropes: factors influencing the measurement.

    PubMed

    Wu, Yifei; Thibos, Larry N; Candy, T Rowan

    2018-05-07

    Eccentric photorefraction and Purkinje image tracking are used to estimate refractive state and eye position simultaneously. Beyond vision screening, they provide insight into typical and atypical visual development. Systematic analysis of the effect of refractive error and spectacles on photorefraction data is needed to gauge the accuracy and precision of the technique. Simulation of two-dimensional, double-pass eccentric photorefraction was performed (Zemax). The inward pass included appropriate light sources, lenses and a single surface pupil plane eye model to create an extended retinal image that served as the source for the outward pass. Refractive state, as computed from the luminance gradient in the image of the pupil captured by the model's camera, was evaluated for a range of refractive errors (-15D to +15D), pupil sizes (3 mm to 7 mm) and two sets of higher-order monochromatic aberrations. Instrument calibration was simulated using -8D to +8D trial lenses at the spectacle plane for: (1) vertex distances from 3 mm to 23 mm, (2) uncorrected and corrected hyperopic refractive errors of +4D and +7D, and (3) uncorrected and corrected astigmatism of 4D at four different axes. Empirical calibration of a commercial photorefractor was also compared with a wavefront aberrometer for human eyes. The pupil luminance gradient varied linearly with refractive state for defocus less than approximately 4D (5 mm pupil). For larger errors, the gradient magnitude saturated and then reduced, leading to under-estimation of refractive state. Additional inaccuracy (up to 1D for 8D of defocus) resulted from spectacle magnification in the pupil image, which would reduce precision in situations where vertex distance is variable. The empirical calibration revealed a constant offset between the two clinical instruments. Computational modelling demonstrates the principles and limitations of photorefraction to help users avoid potential measurement errors. Factors that could cause clinically significant errors in photorefraction estimates include high refractive error, vertex distance and magnification effects of a spectacle lens, increased higher-order monochromatic aberrations, and changes in primary spherical aberration with accommodation. The impact of these errors increases with increasing defocus. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.

  18. Magnetization transfer proportion: a simplified measure of dose response for polymer gel dosimetry.

    PubMed

    Whitney, Heather M; Gochberg, Daniel F; Gore, John C

    2008-12-21

    The response to radiation of polymer gel dosimeters has most often been described by measuring the nuclear magnetic resonance transverse relaxation rate as a function of dose. This approach is highly dependent upon the choice of experimental parameters, such as the echo spacing time for Carr-Purcell-Meiboom-Gill-type pulse sequences, and is difficult to optimize in imaging applications where a range of doses are applied to a single gel, as is typical for practical uses of polymer gel dosimetry. Moreover, errors in computing dose can arise when there are substantial variations in the radiofrequency (B1) field or resonant frequency, as may occur for large samples. Here we consider the advantages of using magnetization transfer imaging as an alternative approach and propose the use of a simplified quantity, the magnetization transfer proportion (MTP), to assess doses. This measure can be estimated through two simple acquisitions and is more robust in the presence of some sources of system imperfections. It also has a dependence upon experimental parameters that is independent of dose, allowing simultaneous optimization at all dose levels. The MTP is shown to be less susceptible to B1 errors than are CPMG measurements of R2. The dose response can be optimized through appropriate choices of the power and offset frequency of the pulses used in magnetization transfer imaging.
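
    The exact two-acquisition definition of the magnetization transfer proportion is specific to this paper; as a loosely related illustration only, the sketch below computes the conventional magnetization transfer ratio from a pair of acquisitions with and without off-resonance saturation. The function name and voxel values are hypothetical, and the MTP itself may be defined differently.

      import numpy as np

      def mt_ratio(s_no_sat, s_sat):
          # Conventional magnetization transfer ratio: fraction of signal removed
          # by the off-resonance saturation pulse, (S_off - S_on) / S_off. Shown
          # only as an analogue of a two-acquisition dose measure; the paper's
          # MTP may be defined differently.
          s_off = np.asarray(s_no_sat, float)
          s_on = np.asarray(s_sat, float)
          return (s_off - s_on) / s_off

      # Synthetic voxel signals for three dose levels of an irradiated gel.
      s_off = np.array([1000.0, 980.0, 950.0])   # acquisition without MT saturation
      s_on = np.array([700.0, 650.0, 590.0])     # acquisition with MT saturation
      print(mt_ratio(s_off, s_on))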

  19. Multiple Flux Footprints, Flux Divergences and Boundary Layer Mixing Ratios: Studies of Ecosystem-Atmosphere CO2 Exchange Using the WLEF Tall Tower.

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Bakwin, P. S.; Yi, C.; Cook, B. D.; Wang, W.; Denning, A. S.; Teclaw, R.; Isebrands, J. G.

    2001-05-01

    Long-term, tower-based measurements using the eddy-covariance method have revealed a wealth of detail about the temporal dynamics of net ecosystem-atmosphere exchange (NEE) of CO2. The data also provide a measure of the annual net CO2 exchange. The area represented by these flux measurements, however, is limited, and doubts remain about possible systematic errors that may bias the annual net exchange measurements. Flux and mixing ratio measurements conducted at the WLEF tall tower as part of the Chequamegon Ecosystem-Atmosphere Study (ChEAS) allow for unique assessment of the uncertainties in NEE of CO2. The synergy between flux and mixing ratio observations shows the potential for comparing inverse and eddy-covariance methods of estimating NEE of CO2. Such comparisons may strengthen confidence in both results and begin to bridge the huge gap in spatial scales (at least 3 orders of magnitude) between continental or hemispheric scale inverse studies and kilometer-scale eddy covariance flux measurements. Data from WLEF and Willow Creek, another ChEAS tower, are used to estimate random and systematic errors in NEE of CO2. Random uncertainty in seasonal exchange rates and the annual integrated NEE, including both turbulent sampling errors and variability in environmental conditions, is small. Systematic errors are identified by examining changes in flux as a function of atmospheric stability and wind direction, and by comparing the multiple level flux measurements on the WLEF tower. Nighttime drainage is modest but evident. Systematic horizontal advection occurs during the morning turbulence transition. The potential total systematic error appears to be larger than random uncertainty, but still modest. The total systematic error, however, is difficult to assess. It appears that the WLEF region ecosystems were a small net sink of CO2 in 1997. It is clear that the summer uptake rate at WLEF is much smaller than that at most deciduous forest sites, including the nearby Willow Creek site. The WLEF tower also allows us to study the potential for monitoring continental CO2 mixing ratios from tower sites. Despite concerns about the proximity to ecosystem sources and sinks, it is clear that boundary layer CO2 mixing ratios can be monitored using typical surface layer towers. Seasonal and annual land-ocean mixing ratio gradients are readily detectable, providing the motivation for a flux-tower based mixing ratio observation network that could greatly improve the accuracy of inversion-based estimates of NEE of CO2, and enable inversions to be applied on smaller temporal and spatial scales. Results from the WLEF tower illustrate the degree to which local flux measurements represent interannual, seasonal and synoptic CO2 mixing ratio trends. This coherence between fluxes and mixing ratios serves to "regionalize" the eddy-covariance based local NEE observations.
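
    For readers unfamiliar with the eddy-covariance quantities discussed above, the sketch below shows the basic calculation: the turbulent flux is the covariance of vertical wind and CO2 fluctuations, and a tall-tower NEE estimate adds a column storage term. This is a minimal sketch with hypothetical variable names and synthetic numbers, not the ChEAS processing chain, which also involves coordinate rotation, density corrections, and gap filling.

      import numpy as np

      def eddy_flux(w, c):
          # Turbulent flux as the covariance of fluctuations: mean(w' * c').
          w = np.asarray(w, float)
          c = np.asarray(c, float)
          return np.mean((w - w.mean()) * (c - c.mean()))

      def nee_tall_tower(w, c, dcdt_profile, layer_thickness):
          # NEE estimate: turbulent flux at the top level plus the storage term,
          # i.e. the vertically integrated rate of change of CO2 below that level.
          storage = np.sum(np.asarray(dcdt_profile, float) *
                           np.asarray(layer_thickness, float))
          return eddy_flux(w, c) + storage

      # Synthetic half-hour of 10 Hz data and a three-layer storage profile.
      rng = np.random.default_rng(0)
      w = rng.normal(0.0, 0.3, 18000)                     # vertical wind (m/s)
      c = 400.0 + 0.5 * w + rng.normal(0.0, 0.2, 18000)   # CO2 (arbitrary mixing-ratio units)
      print(nee_tall_tower(w, c, [0.01, 0.005, 0.002], [30.0, 60.0, 300.0]))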

  20. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3-octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3-octave band levels inside the Space Shuttle PLB, where f_i is the 1/3-octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
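
    The band-dependent averaging times quoted above follow a simple power law in the band center frequency; the sketch below evaluates those fitted expressions. Only the constants stated in the abstract are used; the function and vehicle labels are illustrative.

      def optimum_averaging_time(fc_hz, vehicle="titan_iv_plf"):
          # Optimum linear averaging time (s) for maximum 1/3-octave band levels,
          # using the fitted constants quoted in the abstract:
          #   Titan IV PLF:      T_oi = 4.88 * f_i**-0.2
          #   Space Shuttle PLB: T_oi = 7.10 * f_i**-0.2
          coeff = {"titan_iv_plf": 4.88, "shuttle_plb": 7.10}[vehicle]
          return coeff * fc_hz ** -0.2

      # Example: a few 1/3-octave band center frequencies (Hz).
      for fc in (31.5, 125.0, 500.0, 2000.0):
          print(fc, round(optimum_averaging_time(fc), 2),
                round(optimum_averaging_time(fc, "shuttle_plb"), 2))

    Because the abstract notes that averaging times within plus or minus 50 percent of the optimum raise the total error by at most 25 percent, rounding these values in practice is harmless.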

  1. Effects of nitrate and water on the oxygen isotopic analysis of barium sulfate precipitated from water samples

    USGS Publications Warehouse

    Hannon, Janet E.; Böhlke, John Karl; Mroczkowski, Stanley J.

    2008-01-01

    BaSO4 precipitated from mixed salt solutions by common techniques for SO4 isotopic analysis may contain quantities of H2O and NO3 that introduce errors in O isotope measurements. Experiments with synthetic solutions indicate that δ18O values of CO produced by decomposition of precipitated BaSO4 in a carbon reactor may be either too low or too high, depending on the relative concentrations of SO4 and NO3 and the δ18O values of the H2O, NO3, and SO4. Typical δ18O errors are of the order of 0.5 to 1‰ in many sample types, and can be larger in samples containing atmospheric NO3, which can cause similar errors in δ17O and Δ17O. These errors can be reduced by (1) ion chromatographic separation of SO4 from NO3, (2) increasing the salinity of the solutions before precipitating BaSO4 to minimize incorporation of H2O, (3) heating BaSO4 under vacuum to remove H2O, (4) preparing isotopic reference materials as aqueous samples to mimic the conditions of the samples, and (5) adjusting measured δ18O values based on amounts and isotopic compositions of coexisting H2O and NO3. These procedures are demonstrated for SO4 isotopic reference materials, synthetic solutions with isotopically known reagents, atmospheric deposition from Shenandoah National Park, Virginia, USA, and sulfate salt deposits from the Atacama Desert, Chile, and Mojave Desert, California, USA. These results have implications for the calibration and use of O isotope data in studies of SO4 sources and reaction mechanisms.

  2. Robust colour calibration of an imaging system using a colour space transform and advanced regression modelling.

    PubMed

    Jackman, Patrick; Sun, Da-Wen; Elmasry, Gamal

    2012-08-01

    A new algorithm for the conversion of device dependent RGB colour data into device independent L*a*b* colour data without introducing noticeable error has been developed. By combining a linear colour space transform and advanced multiple regression methodologies it was possible to predict L*a*b* colour data with less than 2.2 colour units of error (CIE 1976). By transforming the red, green and blue colour components into new variables that better reflect the structure of the L*a*b* colour space, a low colour calibration error was immediately achieved (ΔE(CAL) = 14.1). Application of a range of regression models on the data further reduced the colour calibration error substantially (multilinear regression ΔE(CAL) = 5.4; response surface ΔE(CAL) = 2.9; PLSR ΔE(CAL) = 2.6; LASSO regression ΔE(CAL) = 2.1). Only the PLSR models deteriorated substantially under cross validation. The algorithm is adaptable and can be easily recalibrated to any working computer vision system. The algorithm was tested on a typical working laboratory computer vision system and delivered only a very marginal loss of colour information ΔE(CAL) = 2.35. Colour features derived on this system were able to safely discriminate between three classes of ham with 100% correct classification whereas colour features measured on a conventional colourimeter were not. Copyright © 2012 Elsevier Ltd. All rights reserved.
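
    A minimal sketch of the general approach described above: transform RGB into lightness/opponent variables that better mirror the structure of L*a*b*, fit a multilinear (polynomial) regression against reference L*a*b* values, and score the fit with the CIE 1976 colour difference. The transform, regression terms, and synthetic data here are illustrative assumptions, not the published algorithm.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic stand-in for colour-chart patches: camera RGB (0-1) and
      # "reference" L*a*b* values generated from a smooth nonlinear mapping.
      rgb = rng.uniform(0.0, 1.0, size=(60, 3))
      lab_ref = np.column_stack([
          100.0 * rgb.mean(axis=1) ** (1.0 / 2.2),   # pseudo L*
          120.0 * (rgb[:, 0] - rgb[:, 1]),           # pseudo a*
          120.0 * (rgb[:, 1] - rgb[:, 2]),           # pseudo b*
      ]) + rng.normal(0.0, 0.5, size=(60, 3))

      def features(rgb):
          # Lightness/opponent transform followed by quadratic regression terms.
          L = rgb.mean(axis=1, keepdims=True)
          a = rgb[:, [0]] - rgb[:, [1]]
          b = rgb[:, [1]] - rgb[:, [2]]
          x = np.hstack([L, a, b])
          cross = np.hstack([x[:, [0]] * x[:, [1]],
                             x[:, [0]] * x[:, [2]],
                             x[:, [1]] * x[:, [2]]])
          return np.hstack([np.ones((len(rgb), 1)), x, x ** 2, cross])

      X = features(rgb)
      coef, *_ = np.linalg.lstsq(X, lab_ref, rcond=None)    # fit L*, a*, b* jointly
      delta_e = np.linalg.norm(X @ coef - lab_ref, axis=1)  # CIE 1976 colour difference
      print("mean calibration error (dE):", round(delta_e.mean(), 2))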

  3. [Prediction of soil nutrients spatial distribution based on neural network model combined with geostatistics].

    PubMed

    Li, Qi-Quan; Wang, Chang-Quan; Zhang, Wen-Jiang; Yu, Yong; Li, Bing; Yang, Juan; Bai, Gen-Chuan; Cai, Yan

    2013-02-01

    In this study, a radial basis function neural network model combined with ordinary kriging (RBFNN_OK) was adopted to predict the spatial distribution of soil nutrients (organic matter and total N) in a typical hilly region of Sichuan Basin, Southwest China, and the performance of this method was compared with that of ordinary kriging (OK) and regression kriging (RK). All three methods produced similar soil nutrient maps. However, compared with those obtained by a multiple linear regression model, the correlation coefficients between the measured and predicted values of soil organic matter and total N obtained by the neural network model increased by 12.3% and 16.5%, respectively, suggesting that the neural network model could more accurately capture the complicated relationships between soil nutrients and quantitative environmental factors. Error analysis of the predicted values at 469 validation points indicated that the mean absolute error (MAE), mean relative error (MRE), and root mean squared error (RMSE) of RBFNN_OK were 6.9%, 7.4%, and 5.1% (for soil organic matter) and 4.9%, 6.1%, and 4.6% (for soil total N) smaller than those of OK (P<0.01), and 2.4%, 2.6%, and 1.8% (for soil organic matter) and 2.1%, 2.8%, and 2.2% (for soil total N) smaller than those of RK, respectively (P<0.05).
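
    The comparison above rests on three validation-point error metrics; the short sketch below shows how they are typically computed. The observed and predicted values are hypothetical placeholders.

      import numpy as np

      def validation_errors(observed, predicted):
          # Mean absolute error, mean relative error, and root mean squared error.
          obs = np.asarray(observed, float)
          pred = np.asarray(predicted, float)
          resid = pred - obs
          mae = np.mean(np.abs(resid))
          mre = np.mean(np.abs(resid) / np.abs(obs))
          rmse = np.sqrt(np.mean(resid ** 2))
          return mae, mre, rmse

      # Hypothetical soil organic matter values (g/kg) at a few validation points.
      obs = [18.2, 22.5, 15.1, 30.4]
      pred = [16.9, 24.0, 14.0, 27.8]
      print(validation_errors(obs, pred))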

  4. Uncertainty quantification in application of the enrichment meter principle for nondestructive assay of special nuclear material

    DOE PAGES

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    2015-09-05

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
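
    As context for the case study above, the enrichment meter principle assumes that the net count rate in the 186 keV region of an "infinitely thick" uranium item is proportional to its U-235 enrichment, so assay reduces to a calibration line plus error propagation. The sketch below illustrates that idea with synthetic standards and a deliberately crude uncertainty budget; it is not the ASTM procedure or the paper's GUM-based analysis.

      import numpy as np

      # Hypothetical calibration standards: declared enrichment (wt% 235U) versus
      # net 186 keV count rate (counts/s) for "infinitely thick" items.
      enrich_std = np.array([0.31, 0.71, 1.94, 2.96, 4.46])
      rate_std = np.array([12.4, 28.9, 79.1, 121.0, 182.3])

      # Enrichment meter calibration: rate ~ k * enrichment (line through the origin).
      k = np.sum(rate_std * enrich_std) / np.sum(enrich_std ** 2)

      # Assay of an unknown item from its measured net rate.
      rate_item, rate_item_sigma = 95.0, 1.5      # counting (random) uncertainty
      enrich_item = rate_item / k

      # Crude "error bar": combine counting error with calibration scatter; a full
      # GUM-style budget would add the systematic terms discussed in the paper.
      k_sigma = np.std(rate_std / enrich_std, ddof=1) / np.sqrt(enrich_std.size)
      rel_sigma = np.sqrt((rate_item_sigma / rate_item) ** 2 + (k_sigma / k) ** 2)
      print(round(enrich_item, 3), "+/-", round(enrich_item * rel_sigma, 3))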

  5. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
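
    The abstract does not spell out the modified least squares estimator, but the variance-ratio idea is familiar from Deming regression, where the same ratio of error variances governs a straight-line fit with errors in both variables. The sketch below shows that analogue only; it should not be read as the paper's method.

      import numpy as np

      def deming_fit(x, y, delta):
          # Straight-line fit with errors in both variables, where
          # delta = var(error in y) / var(error in x). Classical Deming regression,
          # shown as an analogue of the variance-ratio idea, not the paper's method.
          x = np.asarray(x, float)
          y = np.asarray(y, float)
          sxx = np.var(x, ddof=1)
          syy = np.var(y, ddof=1)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          slope = (syy - delta * sxx +
                   np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
          return y.mean() - slope * x.mean(), slope

      # Factor measured with error: ordinary least squares attenuates the slope,
      # while the variance-ratio fit recovers roughly (intercept, slope) = (1, 2).
      rng = np.random.default_rng(1)
      truth = np.linspace(0.0, 10.0, 40)
      x = truth + rng.normal(0.0, 0.5, truth.size)
      y = 2.0 * truth + 1.0 + rng.normal(0.0, 0.5, truth.size)
      print(deming_fit(x, y, delta=1.0))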

  6. Broadband distortion modeling in Lyman-α forest BAO fitting

    DOE PAGES

    Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; ...

    2015-11-23

    Recently, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. Here, we describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. In implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z = 2.3 are β_F = 1.39 +0.11/−0.10 (1σ), +0.24/−0.19 (2σ), +0.38/−0.28 (3σ) and b_F(1+β_F) = −0.374 +0.007/−0.007 (1σ), +0.013/−0.014 (2σ), +0.020/−0.022 (3σ). Our fitting software and the input files needed to reproduce our main results are publicly available.

  7. Broadband distortion modeling in Lyman-α forest BAO fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blomqvist, Michael; Kirkby, David; Margala, Daniel, E-mail: cblomqvi@uci.edu, E-mail: dkirkby@uci.edu, E-mail: dmargala@uci.edu

    2015-11-01

    In recent years, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≅ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. We describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. Implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z = 2.3 are β_F = 1.39 +0.11/−0.10 (1σ), +0.24/−0.19 (2σ), +0.38/−0.28 (3σ) and b_F(1+β_F) = −0.374 +0.007/−0.007 (1σ), +0.013/−0.014 (2σ), +0.020/−0.022 (3σ). Our fitting software and the input files needed to reproduce our main results are publicly available.

  8. ["Second victim" - error, crises and how to get out of it].

    PubMed

    von Laue, N; Schwappach, D; Hochreutener, M

    2012-06-01

    Medical errors do not only harm patients ("first victims"). Almost all health care professionals become a so-called "second victim" at some point in their career by being involved in a medical error. Studies show that involvement in an error can have a tremendous impact on health care workers, leading to burnout, depression and professional crisis. Moreover, persons involved in errors show a decline in job performance and therefore jeopardize patient safety. Blaming the person is one of the typical psychological reactions after an error happens, as attribution theory suggests. Self-esteem is stabilized if blame can be put on someone and a scapegoat picked out. But standing alone makes the emotional situation of the person involved even worse. A vicious circle can evolve, with tragic effects for the individual and negative implications for patient safety and the health care setting.

  9. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    PubMed

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITD) and inter-aural intensity differences (IID) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition, and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and lateralization performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significant negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Reduced change blindness suggests enhanced attention to detail in individuals with autism.

    PubMed

    Smith, Hayley; Milne, Elizabeth

    2009-03-01

    The phenomenon of change blindness illustrates that a limited number of items within the visual scene are attended to at any one time. It has been suggested that individuals with autism focus attention on less contextually relevant aspects of the visual scene, show superior perceptual discrimination and notice details which are often ignored by typical observers. In this study we investigated change blindness in autism by asking participants to detect continuity errors deliberately introduced into a short film. Whether the continuity errors involved central/marginal or social/non-social aspects of the visual scene was varied. Thirty adolescent participants, 15 with autistic spectrum disorder (ASD) and 15 typically developing (TD) controls participated. The participants with ASD detected significantly more errors than the TD participants. Both groups identified more errors involving central rather than marginal aspects of the scene, although this effect was larger in the TD participants. There was no difference in the number of social or non-social errors detected by either group of participants. In line with previous data suggesting an abnormally broad attentional spotlight and enhanced perceptual function in individuals with ASD, the results of this study suggest enhanced awareness of the visual scene in ASD. The results of this study could reflect superior top-down control of visual search in autism, enhanced perceptual function, or inefficient filtering of visual information in ASD.

  11. Simulation study of communication link for Pioneer Saturn/Uranus atmospheric entry probe. [signal acquisition by candidate modem for radio link

    NASA Technical Reports Server (NTRS)

    Hinrichs, C. A.

    1974-01-01

    A digital simulation is presented for a candidate modem in a modeled atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the radio link conditions for an outer planets atmospheric entry probe. The results indicate that the signal acquisition characteristics and the channel error rate are acceptable for the system requirements of the radio link. The simulation also outputs data for calculating other error statistics and a quantized symbol stream from which error correction decoding can be analyzed.

  12. Demand Controlled Economizer Cycles: A Direct Digital Control Scheme for Heating, Ventilating, and Air Conditioning Systems,

    DTIC Science & Technology

    1984-05-01

    Control ignored any error of 1/10th degree or less. This was done by setting the error term E and the integral sum PREINT to zero. If the absolute value of ... signs of two errors: jeq tdiff (if equal, jump); clr @preint (else zero integral sum); tdiff: mov @diff,rl (fetch absolute value of OAT-RAT); ci rl,25 (is ...). Includes a heating coil and thermostatic control to maintain the air in this path at an elevated temperature, typically around 80 degrees Fahrenheit (80 F).

  13. Wavefront Sensing Analysis of Grazing Incidence Optical Systems

    NASA Technical Reports Server (NTRS)

    Rohrbach, Scott; Saha, Timo

    2012-01-01

    Wavefront sensing is a process by which optical system errors are deduced from the aberrations in the image of an ideal source. The method has been used successfully in near-normal incidence, but not for grazing incidence systems. This innovation highlights the ability to examine out-of-focus images from grazing incidence telescopes (typically operating in the x-ray wavelengths, but integrated using optical wavelengths) and determine the lower-order deformations. This is important because as a metrology tool, this method would allow the integration of high angular resolution optics without the use of normal incidence interferometry, which requires direct access to the front surface of each mirror. Measuring the surface figure of mirror segments in a highly nested x-ray telescope mirror assembly is difficult due to the tight packing of elements and blockage of all but the innermost elements to normal incidence light. While this can be done on an individual basis in a metrology mount, once the element is installed and permanently bonded into the assembly, it is impossible to verify the figure of each element and ensure that the necessary imaging quality will be maintained. By examining on-axis images of an ideal point source, one can gauge the low-order figure errors of individual elements, even when integrated into an assembly. This technique is known as wavefront sensing (WFS). By shining collimated light down the optical axis of the telescope and looking at out-of-focus images, the blur due to low-order figure errors of individual elements can be seen, and the figure error necessary to produce that blur can be calculated. The method avoids the problem of requiring normal incidence access to the surface of each mirror segment. Mirror figure errors span a wide range of spatial frequencies, from the lowest-order bending to the highest order micro-roughness. While all of these can be measured in normal incidence, only the lowest-order contributors can be determined through this WFS technique.

  14. Influence of Tooth Spacing Error on Gears With and Without Profile Modifications

    NASA Technical Reports Server (NTRS)

    Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.

    2000-01-01

    A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modification considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.

  15. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
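
    To make the classical/Berkson distinction above concrete, the sketch below simulates both error types in a plain linear regression (not the paper's linear mixed model with autocorrelation): classical error attenuates the exposure effect toward zero, while pure Berkson error leaves the slope approximately unbiased. All names and variances are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      n, beta = 20000, 1.0

      def ols_slope(x, y):
          return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

      # Classical error: observed exposure = true exposure + noise.
      x_true = rng.normal(0.0, 1.0, n)
      y = beta * x_true + rng.normal(0.0, 1.0, n)
      x_classical = x_true + rng.normal(0.0, 1.0, n)

      # Berkson error: true exposure = assigned (e.g. fixed-site) value + noise.
      x_assigned = rng.normal(0.0, 1.0, n)
      x_true_b = x_assigned + rng.normal(0.0, 1.0, n)
      y_b = beta * x_true_b + rng.normal(0.0, 1.0, n)

      print(ols_slope(x_classical, y))   # ~0.5: attenuated by var(x)/(var(x)+var(u))
      print(ols_slope(x_assigned, y_b))  # ~1.0: Berkson error adds noise, not bias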

  16. Is Weak Oral Language Associated with Poor Spelling in School-Age Children with Specific Language Impairment, Dyslexia, or Both?

    PubMed Central

    McCarthy, Jillian H.; Hogan, Tiffany P.; Catts, Hugh W.

    2013-01-01

    The purpose of this study was to test the hypothesis that word reading accuracy, not oral language, is associated with spelling performance in school-age children. We compared fourth grade spelling accuracy in children with specific language impairment (SLI), dyslexia, or both (SLI/dyslexia) to their typically developing grade-matched peers. Results of the study revealed that children with SLI performed similarly to their typically developing peers on a single word spelling task. Alternatively, those with dyslexia and SLI/dyslexia evidenced poor spelling accuracy. Errors made by both those with dyslexia and SLI/dyslexia were characterized by numerous phonologic, orthographic, and semantic errors. Cumulative results support the hypothesis that word reading accuracy, not oral language, is associated with spelling performance in typically developing school-age children and their peers with SLI and dyslexia. Findings are provided as further support for the notion that SLI and dyslexia are distinct, yet co-morbid, developmental disorders. PMID:22876769

  17. Short-arc measurement and fitting based on the bidirectional prediction of observed data

    NASA Astrophysics Data System (ADS)

    Fei, Zhigen; Xu, Xiaojie; Georgiadis, Anthimos

    2016-02-01

    Measuring a short arc is a notoriously difficult problem. In this study, a bidirectional prediction method based on the Radial Basis Function Neural Network (RBFNN) is proposed for observed data distributed along a short arc, to increase the corresponding arc length and thus improve its fitting accuracy. Firstly, the rationality of regarding observed data as a time series is discussed in accordance with the definition of a time series. Secondly, the RBFNN is constructed to predict the observed data, where the interpolation method is used to enlarge the size of the training examples in order to improve the learning accuracy of the RBFNN’s parameters. Finally, in the numerical simulation section, we focus on simulating how the size of the training sample and the noise level influence the learning error and prediction error of the built RBFNN. Typically, the observed data coming from a 5° short arc are used to evaluate the performance of the Hyper method, known as the ‘unbiased fitting method of circle’, with different noise levels before and after prediction. A number of simulation experiments reveal that the fitting stability and accuracy of the Hyper method after prediction are far superior to those before prediction.
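
    To illustrate why a short arc is hard to fit, the sketch below applies a simple algebraic least-squares circle fit (the Kasa fit) to a noisy 5° arc; even tiny noise produces large centre and radius errors. This is a stand-in for, not an implementation of, the Hyper fit evaluated in the abstract, and the noise level is arbitrary.

      import numpy as np

      def kasa_circle_fit(x, y):
          # Algebraic least-squares circle fit: solve x^2 + y^2 = a*x + b*y + c,
          # giving centre (a/2, b/2) and radius sqrt(c + (a/2)^2 + (b/2)^2).
          A = np.column_stack([x, y, np.ones_like(x)])
          rhs = x ** 2 + y ** 2
          (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
          cx, cy = a / 2.0, b / 2.0
          return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

      # Noisy 5-degree arc of the unit circle centred at the origin.
      rng = np.random.default_rng(3)
      theta = np.deg2rad(np.linspace(0.0, 5.0, 50))
      x = np.cos(theta) + rng.normal(0.0, 1e-4, theta.size)
      y = np.sin(theta) + rng.normal(0.0, 1e-4, theta.size)
      print(kasa_circle_fit(x, y))   # centre/radius errors far exceed the 1e-4 noise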

  18. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed Central

    Wouda, Frank J.; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H.

    2016-01-01

    Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7°. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance for such disturbances. PMID:27983676
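
    A minimal sketch of the lazy-learning alternative described above: store training frames of sparse sensor features alongside full-body poses and, at query time, return the pose of the nearest stored frame. The feature layout, distance metric, and synthetic data are illustrative assumptions; real pipelines use quaternion-aware distances and richer pose representations.

      import numpy as np

      class NearestNeighborPoseMapper:
          # Lazy learning: keep the training data and defer all work to query time.
          def __init__(self, sparse_features, full_poses):
              self.X = np.asarray(sparse_features, float)   # (n_frames, n_features)
              self.Y = np.asarray(full_poses, float)        # (n_frames, n_pose_dims)

          def predict(self, query):
              # Return the stored full-body pose of the closest training frame.
              d = np.linalg.norm(self.X - np.asarray(query, float), axis=1)
              return self.Y[np.argmin(d)]

      # Synthetic stand-in: 5 sensors x 3 orientation values -> 23 joints x 3 values.
      rng = np.random.default_rng(4)
      X_train = rng.normal(size=(1000, 15))
      Y_train = rng.normal(size=(1000, 69))
      mapper = NearestNeighborPoseMapper(X_train, Y_train)
      print(mapper.predict(X_train[10]).shape)   # (69,), the pose stored for frame 10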

  19. Pirate Stealth or Inattentional Blindness? The Effects of Target Relevance and Sustained Attention on Security Monitoring for Experienced and Naïve Operators

    PubMed Central

    Näsholm, Erika; Rohlfing, Sarah; Sauer, James D.

    2014-01-01

    Closed Circuit Television (CCTV) operators are responsible for maintaining security in various applied settings. However, research has largely ignored human factors that may contribute to CCTV operator error. One important source of error is inattentional blindness – the failure to detect unexpected but clearly visible stimuli when attending to a scene. We compared inattentional blindness rates for experienced (84 infantry personnel) and naïve (87 civilians) operators in a CCTV monitoring task. The task-relevance of the unexpected stimulus and the length of the monitoring period were manipulated between participants. Inattentional blindness rates were measured using typical post-event questionnaires, and participants' real-time descriptions of the monitored event. Based on the post-event measure, 66% of the participants failed to detect salient, ongoing stimuli appearing in the spatial field of their attentional focus. The unexpected task-irrelevant stimulus was significantly more likely to go undetected (79%) than the unexpected task-relevant stimulus (55%). Prior task experience did not inoculate operators against inattentional blindness effects. Participants' real-time descriptions revealed similar patterns, ruling out inattentional amnesia accounts. PMID:24465932

  20. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed

    Wouda, Frank J; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H

    2016-12-15

    Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7°. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance for such disturbances.
