Simplified Approach Charts Improve Data Retrieval Performance
Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.
2016-01-01
The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009
The preclinical pharmacological profile of WAY-132983, a potent M1 preferring agonist.
Bartolomeo, A C; Morris, H; Buccafusco, J J; Kille, N; Rosenzweig-Lipson, S; Husbands, M G; Sabb, A L; Abou-Gharbia, M; Moyer, J A; Boast, C A
2000-02-01
Muscarinic M1 preferring agonists may improve cognitive deficits associated with Alzheimer's disease. Side effect assessment of the M1 preferring agonist WAY-132983 showed significant salivation (10 mg/kg i.p. or p.o.) and dose-dependent hypothermia after i.p. or p.o. administration. WAY-132983 significantly reduced scopolamine (0.3 mg/kg i.p.)-induced hyperswimming in mice. Cognitive assessment in rats used pretrained animals in a forced-choice, 1-h delayed nonmatch-to-sample radial arm maze task. WAY-132983 (0.3 mg/kg i.p.) significantly reduced scopolamine (0.3 mg/kg s.c.)-induced errors. Oral WAY-132983 attenuated scopolamine-induced errors; that is, errors produced after combining scopolamine and WAY-132983 (up to 3 mg/kg p.o.) were not significantly increased compared with those of vehicle-treated control animals, whereas errors after scopolamine alone were significantly higher than those of control animals. With the use of miniosmotic pumps, 0.03 mg/kg/day (s.c.) WAY-132983 significantly reduced AF64A (3 nmol/3 microliters/lateral ventricle)-induced errors. Verification of AF64A cholinotoxicity showed significantly lower choline acetyltransferase activity in the hippocampi of AF64A-treated animals, with no significant changes in the striatum or frontal cortex. Cognitive assessment in primates involved the use of pretrained aged animals in a visual delayed match-to-sample procedure. Oral WAY-132983 significantly increased the number of correct responses during short and long delay interval testing. These effects were also apparent 24 h after administration. WAY-132983 exhibited cognitive benefit at doses lower than those producing undesirable effects; therefore, WAY-132983 is a potential candidate for improving the cognitive status of patients with Alzheimer's disease.
Effects of learning climate and registered nurse staffing on medication errors.
Chang, Yunkyung; Mark, Barbara
2011-01-01
Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. Learning climate also moderated the relationship between nurse mix and medication errors: when learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should focus on the role of learning climate as related to the relationships between nurse mix and medication errors.
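The full random-effects Poisson models used in the study are beyond a short sketch, but the core fixed-effects Poisson regression they extend can be illustrated with a minimal iteratively reweighted least squares (IRLS) fit in plain NumPy. The data below are simulated, and the variable names are illustrative, not taken from the study:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson regression (log link) by iteratively reweighted
    least squares. X: (n, p) design matrix, y: (n,) count outcomes."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 1e-9)    # stable starting intercept
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # predicted mean counts
        W = mu                           # Poisson working weights
        z = X @ beta + (y - mu) / mu     # working response
        XtW = X.T * W                    # weighted least squares update
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Simulated unit-level data: intercept plus one error-producing condition
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([1.0, -0.5])
y = rng.poisson(np.exp(X @ true_beta))
est = poisson_irls(X, y)
```

With 500 simulated units, the recovered coefficients land close to the generating values; a unit-level random intercept, as in the study, would be added on top of this fixed-effects core.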
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
Effects of alcohol on pilot performance in simulated flight
NASA Technical Reports Server (NTRS)
Billings, C. E.; Demosthenes, T.; White, T. R.; O'Hara, D. B.
1991-01-01
Ethyl alcohol's known ability to produce reliable decrements in pilot performance was used in a study designed to evaluate objective methods for assessing pilot performance. Four air carrier pilot volunteers were studied during eight simulated flights in a B727 simulator. Total errors increased linearly and significantly with increasing blood alcohol. Planning and performance errors, procedural errors and failures of vigilance each increased significantly in one or more pilots and in the group as a whole.
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
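The adaptive error-reduction idea above (insert nodes where the boundary error is largest) can be sketched for a simpler 1-D interpolation analogue rather than the CVBEM itself; the function, tolerance, and node budget below are arbitrary choices for illustration:

```python
import numpy as np

def adaptive_refine(f, a, b, tol=1e-3, max_nodes=100):
    """Piecewise-linear approximation of f on [a, b]: repeatedly insert a
    node at the midpoint of the interval with the largest fit error."""
    nodes = [a, b]
    while len(nodes) < max_nodes:
        xs = np.array(sorted(nodes))
        mids = 0.5 * (xs[:-1] + xs[1:])
        # error of the linear interpolant at each interval midpoint
        errs = np.abs(f(mids) - 0.5 * (f(xs[:-1]) + f(xs[1:])))
        worst = int(np.argmax(errs))
        if errs[worst] < tol:
            break
        nodes.append(mids[worst])
    return np.array(sorted(nodes))

nodes = adaptive_refine(np.sin, 0.0, np.pi)
# Nodes cluster where curvature, and hence the local fit error, is largest.
```

As in the CVBEM algorithm, refinement effort is spent only where the approximation error is large, rather than uniformly along the boundary.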
NASA Technical Reports Server (NTRS)
Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.
2002-01-01
The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.
Improved Quality in Aerospace Testing Through the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.
2000-01-01
This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
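The effect of set-point ordering in the presence of systematic error can be shown with a toy simulation (the drift magnitude, slope, and number of set points below are invented for illustration): a monotone run order confounds an instrument drift with the independent variable, while a randomized order largely decouples them.

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.linspace(-1.0, 1.0, 20)   # independent-variable set points
drift = 0.05 * np.arange(20)          # systematic error growing with time
true_slope = 2.0

def fitted_slope(order):
    x = levels[order]
    y = true_slope * x + drift        # response measured in this run order
    return np.polyfit(x, y, 1)[0]

seq_slope = fitted_slope(np.arange(20))         # monotone set-point order
rand_slope = fitted_slope(rng.permutation(20))  # randomized order

seq_bias = abs(seq_slope - true_slope)
rand_bias = abs(rand_slope - true_slope)
```

In the monotone order the drift is perfectly correlated with the set points, so the fitted slope absorbs it entirely; randomization converts the same drift into noise that mostly averages out.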
Di Pietro, M; Schnider, A; Ptak, R
2011-10-01
Patients with peripheral dysgraphia due to impairment at the allographic level produce writing errors that affect the letter-form and are characterized by case confusions or the failure to write in a specific case or style (e.g., cursive). We studied the writing errors of a patient with pure peripheral dysgraphia who had entirely intact oral spelling but produced many well-formed letter errors in written spelling. The comparison of uppercase print and lowercase cursive spelling revealed an uncommon pattern: while most uppercase errors were case substitutions (e.g., A → a), almost all lowercase errors were letter substitutions (e.g., n → r). Analyses of the relationship between target letters and substitution errors showed that errors were influenced neither by consonant-vowel status nor by letter frequency, though word length affected error frequency in lowercase writing. Moreover, while graphomotor similarity did not predict the occurrence of either uppercase or lowercase errors, visuospatial similarity was a significant predictor of lowercase errors. These results suggest that lowercase representations of cursive letter-forms are based on a description of entire letters (visuospatial features) and, in contrast to what was previously found for uppercase letters, are not specified in terms of strokes (graphomotor features). Copyright © 2010 Elsevier Srl. All rights reserved.
Optimizer convergence and local minima errors and their clinical importance
NASA Astrophysics Data System (ADS)
Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.
2003-09-01
Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
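The optimizer convergence error caused by a non-zero stopping criterion can be illustrated on a convex quadratic, far simpler than a treatment-planning objective; the learning rate, tolerances, and matrix below are arbitrary illustrative choices:

```python
import numpy as np

def gradient_descent(grad, x0, lr, tol):
    """Descend until the gradient norm falls below tol (the stopping
    criterion); return the final iterate and the iteration count."""
    x, it = np.asarray(x0, float), 0
    while np.linalg.norm(grad(x)) > tol:
        x = x - lr * grad(x)
        it += 1
    return x, it

# Convex quadratic objective f(x) = 0.5 * x^T A x with minimum at 0
A = np.array([[3.0, 0.0], [0.0, 1.0]])
grad = lambda x: A @ x

x_tight, n_tight = gradient_descent(grad, [1.0, 1.0], lr=0.2, tol=1e-8)
x_loose, n_loose = gradient_descent(grad, [1.0, 1.0], lr=0.2, tol=1e-2)
# The loose criterion stops farther from the optimum but much sooner.
```

The gap between `x_loose` and the true minimum is exactly the convergence error the abstract describes; as the paper argues, when that gap is small relative to other error sources, the looser criterion buys a large speed-up at little cost.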
Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.
2014-01-01
A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. 
These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941
Impact of Tropospheric Aerosol Absorption on Ozone Retrieval from buv Measurements
NASA Technical Reports Server (NTRS)
Torres, O.; Bhartia, P. K.
1998-01-01
The impact of tropospheric aerosols on the retrieval of column ozone amounts using spaceborne measurements of backscattered ultraviolet radiation is examined. Using radiative transfer calculations, we show that uv-absorbing desert dust may introduce errors as large as 10% in ozone column amount, depending on the aerosol layer height and optical depth. Smaller errors are produced by carbonaceous aerosols that result from biomass burning. Though the error is produced by complex interactions between ozone absorption (both stratospheric and tropospheric), aerosol scattering, and aerosol absorption, a surprisingly simple correction procedure reduces the error to about 1% for a variety of aerosols and for a wide range of aerosol loading. Comparison of the corrected TOMS data with operational data indicates that though the zonal mean total ozone derived from TOMS is not significantly affected by these errors, localized effects in the tropics can be large enough to seriously affect studies of tropospheric ozone currently under way using the TOMS data.
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
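The Bayes sequential procedure compared above is not specified in the abstract, but the general idea of sequentially updating a proportion estimate as analyst labels arrive can be sketched with a standard Beta-Binomial update (the prior and the label sequence below are hypothetical):

```python
def bayes_sequential_proportion(labels, a=1.0, b=1.0):
    """Sequentially update a Beta(a, b) prior on a crop proportion as
    each analyst label (1 = target crop, 0 = other) arrives; return the
    posterior-mean estimate after each observation."""
    estimates = []
    for lab in labels:
        a += lab
        b += 1 - lab
        estimates.append(a / (a + b))   # posterior mean of the proportion
    return estimates

# Example: 10 labeled segments, 6 identified as the target crop
labels = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
est = bayes_sequential_proportion(labels)
```

Relative to a simple random-sampling estimate computed once at the end, the sequential posterior mean shrinks the estimate toward the prior, which is one mechanism by which such procedures can reduce bias and mean squared error for small samples.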
Bacon, Dave; Flammia, Steven T
2009-09-18
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
QSRA: a quality-value guided de novo short read assembler.
Bryant, Douglas W; Wong, Weng-Keen; Mockler, Todd C
2009-02-24
New rapid high-throughput sequencing technologies have sparked the creation of a new class of assembler. Since all high-throughput sequencing platforms incorporate errors in their output, short-read assemblers must be designed to account for this error while utilizing all available data. We have designed and implemented an assembler, Quality-value guided Short Read Assembler, created to take advantage of quality-value scores as a further method of dealing with error. Compared to previous published algorithms, our assembler shows significant improvements not only in speed but also in output quality. QSRA generally produced the highest genomic coverage, while being faster than VCAKE. QSRA is extremely competitive in its longest contig and N50/N80 contig lengths, producing results of similar quality to those of EDENA and VELVET. QSRA provides a step closer to the goal of de novo assembly of complex genomes, improving upon the original VCAKE algorithm by not only drastically reducing runtimes but also increasing the viability of the assembly algorithm through further error handling capabilities.
NASA Technical Reports Server (NTRS)
Gordon, Steven C.
1993-01-01
Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then will impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits are probably different. Also for this spacecraft tracking and control simulation problem, experimental design methods can be used to determine the most significant uncertainties. That is, these methods can determine the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
Tropical forecasting - Predictability perspective
NASA Technical Reports Server (NTRS)
Shukla, J.
1989-01-01
Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.
Kim, Myoungsoo
2010-04-01
The purpose of this study was to examine the impact of strategies to promote reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report, and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p<.001). There was no significant difference in some categories for organizational culture and intention to report. The study findings indicate that strategies that promote reporting of errors play an important role in producing positive attitudes to reporting errors and improving reporting behavior. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.
Simplification of the Kalman filter for meteorological data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1991-01-01
The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.
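The full Kalman filter whose covariance propagation the paper simplifies can be sketched for a scalar state; the model parameters and observations below are invented for illustration, not taken from the shallow-water experiments:

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One forecast/analysis cycle of a scalar Kalman filter.
    x, P: state estimate and its error variance; z: observation."""
    # Forecast step: the covariance propagation the paper approximates
    x_f = F * x
    P_f = F * P * F + Q
    # Analysis step: blend forecast and observation by their variances
    K = P_f * H / (H * P_f * H + R)     # Kalman gain
    x_a = x_f + K * (z - H * x_f)
    P_a = (1 - K * H) * P_f
    return x_a, P_a

# Assimilate a few noisy observations of a roughly constant true state
x, P = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, z)
```

In an atmospheric model the state and covariance are fields rather than scalars, and propagating `P_f` is the dominant cost; the paper's simplification replaces that full propagation with advection of the mass-error covariance plus geostrophically derived cross-covariances.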
Anticipatory synergy adjustments reflect individual performance of feedforward force control.
Togo, Shunta; Imamizu, Hiroshi
2016-10-06
We grasp and dexterously manipulate objects through multi-digit synergy. In the framework of the uncontrolled manifold (UCM) hypothesis, multi-digit synergy is defined as the coordinated control mechanism by which the fingers stabilize variables important for task success, e.g., total force. Previous studies reported anticipatory synergy adjustments (ASAs), a drop of the synergy index before a quick change of the total force. The present study compared ASA properties with individual performance of feedforward force control to investigate the relationship between them. Subjects performed a total finger force production task that consisted of a phase in which subjects tracked a target line with visual information and a phase in which subjects produced a total force pulse without visual information. We quantified multi-digit synergy through UCM analysis and observed significant ASAs before production of the total force pulse. The time of ASA initiation and the magnitude of the drop of the synergy index were significantly correlated with the error of the force pulse, but not with the tracking error. Almost all subjects showed a significant increase of the variance that affected the total force. Our study directly showed that ASA reflects individual performance of feedforward force control independently of target-tracking performance and suggests that the multi-digit synergy was weakened to adjust the multi-digit movements based on a prediction error so as to reduce the future error. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
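The UCM-based synergy index can be sketched for two fingers stabilizing their summed force: trial-to-trial variance is projected onto the direction that leaves total force unchanged (the UCM) and the orthogonal direction that changes it. The simulated data and the simplified normalization below are illustrative; published UCM analyses typically normalize per degree of freedom.

```python
import numpy as np

def synergy_index(forces):
    """UCM analysis for two fingers stabilizing total force.
    forces: (n_trials, 2). Variance along (1, -1)/sqrt(2) leaves the
    total unchanged (UCM); along (1, 1)/sqrt(2) it changes it (ORT)."""
    dev = forces - forces.mean(axis=0)
    ucm_dir = np.array([1.0, -1.0]) / np.sqrt(2)
    ort_dir = np.array([1.0, 1.0]) / np.sqrt(2)
    v_ucm = np.var(dev @ ucm_dir)
    v_ort = np.var(dev @ ort_dir)
    # Simplified normalized index: positive when variance is channeled
    # into the UCM, i.e., when the fingers co-vary to stabilize the sum
    return (v_ucm - v_ort) / (0.5 * (v_ucm + v_ort))

rng = np.random.default_rng(2)
# Simulated synergy: finger forces co-vary negatively, total stays stable
shared = rng.normal(size=200)
forces = np.column_stack([5 + shared, 5 - shared]) \
    + 0.1 * rng.normal(size=(200, 2))
dv = synergy_index(forces)
```

A drop in this index before the force pulse, while total-force-affecting variance rises, is the ASA signature described in the abstract.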
A multifaceted program for improving quality of care in intensive care units: IATROREF study.
Garrouste-Orgeas, Maite; Soufir, Lilia; Tabah, Alexis; Schwebel, Carole; Vesin, Aurelien; Adrie, Christophe; Thuong, Marie; Timsit, Jean Francois
2012-02-01
To test the effects of three multifaceted safety programs designed to decrease insulin administration errors, anticoagulant prescription and administration errors, and errors leading to accidental removal of endotracheal tubes and central venous catheters, respectively. Medical errors and adverse events are associated with increased mortality in intensive care patients, indicating an urgent need for prevention programs. Multicenter cluster-randomized study. One medical intensive care unit in a university hospital and two medical-surgical intensive care units in community hospitals belonging to the Outcomerea Study Group. Consecutive patients >18 yrs admitted from January 2007 to January 2008 to the intensive care units. We tested three multifaceted safety programs vs. standard care in random order, each over 2.5 months, after a 1.5-month observation period. Incidence rates of medical errors/1000 patient-days in the multifaceted safety program and standard-care groups were compared using adjusted hierarchical models. In 2117 patients with 15,014 patient-days, 8520 medical errors (567.5/1000 patient-days) were reported, including 1438 adverse events (16.9%, 95.8/1000 patient-days). The insulin multifaceted safety program significantly decreased errors during implementation (risk ratio 0.65; 95% confidence interval [CI] 0.52-0.82; p = .0003) and after implementation (risk ratio 0.51; 95% CI 0.35-0.73; p = .0004). A significant Hawthorne effect was found. The accidental tube/catheter removal multifaceted safety program decreased errors significantly during implementation (odds ratio [OR] 0.34; 95% CI 0.15-0.81; p = .01) and nonsignificantly after implementation (OR 1.65; 95% CI 0.78-3.48). The anticoagulation multifaceted safety program was not significantly effective (OR 0.64; 95% CI 0.26-1.59) but produced a significant Hawthorne effect. A multifaceted program was effective in preventing insulin errors and accidental tube/catheter removal.
Significant Hawthorne effects occurred, emphasizing the need for appropriately designed studies before definitively implementing strategies. clinicaltrials.gov Identifier: NCT00461461.
Winsauer, Peter J; Sutton, Jessie L
2014-02-01
This study examined whether chronic Δ(9)-THC during early adulthood would produce the same hormonally-dependent deficits in learning that are produced by chronic Δ(9)-THC during adolescence. To do this, either sham-operated (intact) or ovariectomized (OVX) female rats received daily saline or 5.6 mg/kg of Δ(9)-THC i.p. for 40 days during early adulthood. Following chronic administration, and a drug-free period to train both a learning and performance task, acute dose-effect curves for Δ(9)-THC (0.56-10 mg/kg) were established in each of the four groups (intact/saline, intact/THC, OVX/saline and OVX/THC). The dependent measures of responding under the learning and performance tasks were the overall response rate and the percentage of errors. Although the history of OVX and chronic Δ(9)-THC in early adulthood did not significantly affect non-drug or baseline behavior under the tasks, acute administration of Δ(9)-THC produced both rate-decreasing and error-increasing effects on learning and performance behavior, and these effects were dependent on their hormone condition. More specifically, both intact groups were more sensitive to the rate-decreasing and error-increasing effects of Δ(9)-THC than the OVX groups irrespective of chronic Δ(9)-THC administration, as there was no significant main effect of chronic treatment and no significant interaction between chronic treatment (saline or Δ(9)-THC) and the dose of Δ(9)-THC administered as an adult. Post mortem examination of 10 brain regions also indicated there were significant differences in agonist-stimulated GTPγS binding across brain regions, but no significant effects of chronic treatment and no significant interaction between the chronic treatment and cannabinoid signaling. 
Thus, acute Δ(9)-THC produced hormonally-dependent effects on learning and performance behavior, but a period of chronic administration during early adulthood did not alter these effects significantly, which is contrary to what we and others have shown for chronic administration during adolescence. Copyright © 2013 Elsevier Inc. All rights reserved.
Lobb, M L; Stern, J A
1986-08-01
Sequential patterns of eye and eyelid motion were identified in seven subjects performing a modified serial probe recognition task under drowsy conditions. Using simultaneous EOG and video recordings, eyelid motion was divided into components above, within, and below the pupil, and the durations in sequence were recorded. A serial probe recognition task was modified to allow for distinguishing decision errors from attention errors. Decision errors were found to be more frequent following a downward shift in gaze angle, during which the eyelid closing sequence was reduced from a five-element to a three-element sequence. The velocity of the eyelid moving over the pupil during decision errors was slow in the closing and fast in the reopening phase, while on decision-correct trials it was fast in closing and slower in reopening. Due to the high variability of eyelid motion under drowsy conditions, these findings were only marginally significant. When a five-element blink occurred, the velocity of the lid-over-pupil motion component of these endogenous eye blinks was significantly faster on decision-correct than on decision-error trials. Furthermore, the highly variable, long-duration closings associated with the decision response produced slow eye movements in the horizontal plane (SEM), which were more frequent and significantly longer in duration on decision-error versus decision-correct responses.
Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality
Gaeuman, David; Jacobson, Robert B.
2005-01-01
When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by mis-alignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass mis-alignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
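The compass mis-alignment mechanism quantified above lends itself to a short numerical sketch. This is an illustration under simplifying assumptions (2D geometry, with the compass error applied as a pure rotation of the instrument-relative velocity before the platform velocity is added back; the function name is ours, not from the report):

```python
import numpy as np

def mapped_velocity_error(v_water, v_platform, misalign_deg):
    """Magnitude of the water-velocity error produced when the
    instrument-relative velocity is rotated by a compass error
    before the platform velocity is added back (2D sketch)."""
    th = np.radians(misalign_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    v_rel = np.asarray(v_water, float) - np.asarray(v_platform, float)
    v_mapped = rot @ v_rel + np.asarray(v_platform, float)
    return float(np.linalg.norm(v_mapped - np.asarray(v_water, float)))

# Still water, platform at 1 m/s, 2-degree compass error:
err_1ms = mapped_velocity_error([0.0, 0.0], [1.0, 0.0], 2.0)
# Doubling the platform speed doubles the error:
err_2ms = mapped_velocity_error([0.0, 0.0], [2.0, 0.0], 2.0)
```

Because the error scales with the instrument-relative speed (2·|v_rel|·sin(θ/2), roughly 0.035 m/s per 1 m/s of relative speed at 2 degrees), capping absolute instrument velocity near 1 m/s, as the report recommends, directly bounds this error term.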
Observations of cloud liquid water path over oceans: Optical and microwave remote sensing methods
NASA Technical Reports Server (NTRS)
Lin, Bing; Rossow, William B.
1994-01-01
Published estimates of cloud liquid water path (LWP) from satellite-measured microwave radiation show little agreement, even about the relative magnitudes of LWP in the tropics and midlatitudes. To understand these differences and to obtain more reliable estimates, optical and microwave LWP retrieval methods are compared using the International Satellite Cloud Climatology Project (ISCCP) and Special Sensor Microwave/Imager (SSM/I) data. Errors in microwave LWP retrieval associated with uncertainties in surface, atmosphere, and cloud properties are assessed. Sea surface temperature may not produce great LWP errors, if accurate contemporaneous measurements are used in the retrieval. An uncertainty in estimated near-surface wind speed as high as 2 m/s produces an uncertainty in LWP of about 5 mg/sq cm. Cloud liquid water temperature has only a small effect on LWP retrievals (rms errors less than 2 mg/sq cm), if errors in the temperature are less than 5 C; however, such errors can produce spurious variations of LWP with latitude and season. Errors in atmospheric column water vapor (CWV) are strongly coupled with errors in LWP (for some retrieval methods), causing errors as large as 30 mg/sq cm. Because microwave radiation is much less sensitive to clouds with small LWP (less than 7 mg/sq cm) than visible wavelength radiation, the microwave results are very sensitive to the process used to separate clear and cloudy conditions. Different cloud detection sensitivities in different microwave retrieval methods bias estimated LWP values. Comparing ISCCP and SSM/I LWPs, we find that the two estimated values are consistent in global, zonal, and regional means for warm, nonprecipitating clouds, which have average LWP values of about 5 mg/sq cm and occur much more frequently than precipitating clouds. Ice water path (IWP) can be roughly estimated from the differences between ISCCP total water path and SSM/I LWP for cold, nonprecipitating clouds.
IWP in the winter hemisphere is about 3 times the LWP but only half the LWP in the summer hemisphere. Precipitating clouds contribute significantly to monthly, zonal mean LWP values determined from microwave, especially in the intertropical convergence zone (ITCZ), because they have almost 10 times the liquid water (cloud plus precipitation) of nonprecipitating clouds on average. There are significant differences among microwave LWP estimates associated with the treatment of precipitating clouds.
Report of the 1988 2-D Intercomparison Workshop, chapter 3
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar
1989-01-01
Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.
Data Mining on Numeric Error in Computerized Physician Order Entry System Prescriptions.
Wu, Xue; Wu, Changxu
2017-01-01
This study revealed the numeric error patterns related to dosage when doctors prescribed in a computerized physician order entry system. Error categories showed that the '6', '7', and '9' keys produced a higher incidence of errors in Numpad typing, while the '2', '3', and '0' keys produced a higher incidence of errors in main-keyboard digit-line typing. Errors categorized as omission and substitution were higher in prevalence than transposition and intrusion.
An improved procedure for the validation of satellite-based precipitation estimates
NASA Astrophysics Data System (ADS)
Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad
2015-09-01
The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not accurately capture the unique error characteristics in precipitation estimates needed to better inform both data producers and users. The proposed new validation procedure has two steps: 1) an error decomposition approach to separate the total retrieval error into three independent components: hit error, false precipitation and missed precipitation; and 2) the hit error is further analyzed based on a multiplicative error model. In the multiplicative error model, the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The proposed procedure is used to quantitatively evaluate the two recent versions (Versions 6 and 7) of TRMM's Multi-sensor Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) for seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that winter total bias is dominated by the missed precipitation over the west coastal areas and the Rocky Mountains, and the false precipitation over large areas in the Midwest. The summer total bias is largely coming from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall in the higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products.
Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, to varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows obvious improvements in reducing random errors in both winter and summer seasons, compared to its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on the results, we recommend the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure will work effectively for the higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
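The two-step procedure described above can be sketched compactly. The following is a schematic under assumed conventions (a rain/no-rain threshold chosen by the analyst, sub-threshold values treated as zero rain, and an ordinary least-squares fit in log space standing in for the paper's multiplicative error model):

```python
import numpy as np

def decompose(sat, ref, thresh=0.1):
    """Step 1: split the total satellite-minus-reference bias into hit
    bias, missed precipitation, and false precipitation. Assumes values
    at or below thresh count as zero rain (thresh is our choice)."""
    sat, ref = np.asarray(sat, float), np.asarray(ref, float)
    hit = (sat > thresh) & (ref > thresh)
    miss = (sat <= thresh) & (ref > thresh)
    fal = (sat > thresh) & (ref <= thresh)
    return {
        "hit_bias": float(np.sum(sat[hit] - ref[hit])),
        "missed": float(-np.sum(ref[miss])),  # rain the sensor failed to see
        "false": float(np.sum(sat[fal])),     # rain the sensor invented
    }

def fit_multiplicative(sat_hit, ref_hit):
    """Step 2: fit log(sat) = log(a) + b*log(ref) + eps on hits only;
    a and b capture systematic error, std(eps) the random error."""
    x, y = np.log(ref_hit), np.log(sat_hit)
    b, log_a = np.polyfit(x, y, 1)
    eps = y - (log_a + b * x)
    return float(np.exp(log_a)), float(b), float(eps.std())
```

When sub-threshold values are exactly zero, the three components sum to the total bias, sum(sat - ref), which is what makes the decomposition additive and useful for attributing seasonal biases as described above.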
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marous, L; Muryn, J; Liptak, C
2016-06-15
Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI{sub 100}. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI{sub 100}. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI{sub 100}. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI{sub 100} values from central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI{sub 100} were analyzed. Results: At 38.4-mm collimation, errors in effective beam width up to 5.0 mm showed negligible effects on simulated CTDI{sub 100} (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm{sup 3} resulted in small CTDI{sub 100} errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI{sub 100} is insensitive to errors in effective beam width and acrylic density. However, it is sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored.
This work was supported by a Faculty Research and Development Award from Cleveland State University.
Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D
2008-07-01
English speech acquisition by typically developing 3- to 4-year-old children from monolingual English backgrounds was compared to English speech acquisition by typically developing 3- to 4-year-old children from bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time for phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences in error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points was similar for all 3 groups, suggesting that all will reach an adult-like system in English with exposure and practice.
Acquiring Research-grade ALSM Data in the Commercial Marketplace
NASA Astrophysics Data System (ADS)
Haugerud, R. A.; Harding, D. J.; Latypov, D.; Martinez, D.; Routh, S.; Ziegler, J.
2003-12-01
The Puget Sound Lidar Consortium, working with TerraPoint, LLC, has procured a large volume of ALSM (topographic lidar) data for scientific research. Research-grade ALSM data can be characterized by their completeness, density, and accuracy. Complete data include, at a minimum, X, Y, Z, time, and classification (ground, vegetation, structure, blunder) for each laser reflection. Off-nadir angle and return number for multiple returns are also useful. We began with a pulse density of 1/sq m, and after limited experiments still find this density satisfactory in the dense second-growth forests of western Washington. Lower pulse densities would have produced unacceptably limited sampling in forested areas and aliased some topographic features. Higher pulse densities do not produce markedly better topographic models, in part because of limitations of reproducibility between the overlapping survey swaths used to achieve higher density. Our experience in a variety of forest types demonstrates that the fraction of pulses that produce ground returns varies with vegetation cover, laser beam divergence, laser power, and detector sensitivity, but we have not quantified this relationship. The most significant operational limits on vertical accuracy of ALSM appear to be instrument calibration and the accuracy with which returns are classified as ground or vegetation. TerraPoint has recently implemented in-situ calibration using overlapping swaths (Latypov and Zosse, 2002, see http://www.terrapoint.com/News_damirACSM_ASPRS2002.html). On the consumer side, we routinely perform a similar overlap analysis to produce maps of relative Z error between swaths; we find that in bare, low-slope regions the in-situ calibration has reduced this internal Z error to 6-10 cm RMSE. Comparison with independent ground control points commonly illuminates inconsistencies in how GPS heights have been reduced to orthometric heights.
Once these inconsistencies are resolved, it appears that the internal errors are the bulk of the error of the survey. The error maps suggest that with in-situ calibration, minor time-varying errors with a period of circa 1 sec are the largest remaining source of survey error. For forested terrain, limited ground penetration and errors in return classification can severely limit the accuracy of resulting topographic models. Initial work by Haugerud and Harding demonstrated the feasibility of fully-automatic return classification; however, TerraPoint has found that better results can be obtained more effectively with 3rd-party classification software that allows a mix of automated routines and human intervention. Our relationship has been evolving since early 2000. Important aspects of this relationship include close communication between data producer and consumer, a willingness to learn from each other, significant technical expertise and resources on the consumer side, and continued refinement of achievable, quantitative performance and accuracy specifications. Most recently we have instituted a slope-dependent Z accuracy specification that TerraPoint first developed as a heuristic for surveying mountainous terrain in Switzerland. We are now working on quantifying the internal consistency of topographic models in forested areas, using a variant of overlap analysis, and standards for the spatial distribution of internal errors.
Impact of lateral boundary conditions on regional analyses
NASA Astrophysics Data System (ADS)
Chikhar, Kamel; Gauthier, Pierre
2017-04-01
Regional and global climate models are usually validated by comparison to derived observations or reanalyses. Using a model in data assimilation allows a direct comparison to observations through the production of its own analyses, which may reveal systematic errors. In this study, regional analyses over North America are produced based on the fifth-generation Canadian Regional Climate Model (CRCM5) combined with the variational data assimilation system of the Meteorological Service of Canada (MSC). CRCM5 is driven at its boundaries by global analyses from ERA-Interim or produced with the global configuration of the CRCM5. Assimilation cycles for the months of January and July 2011 revealed systematic errors in winter through large values in the mean analysis increments. This bias is attributed to the coupling of the lateral boundary conditions of the regional model with the driving data, particularly over the northern boundary, where a rapidly changing large-scale circulation created significant cross-boundary flows. Increasing the time frequency of the lateral driving and applying large-scale spectral nudging significantly improved the circulation through the lateral boundaries, which translated into much better agreement with observations.
Residents' numeric inputting error in computerized physician order entry prescription.
Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong
2016-04-01
Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human computer interaction (HCI), produce different error rates and types, but have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting methods (numeric row in the main keyboard vs. numeric keypad) and urgency levels (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, posing a considerable risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad produced lower error rates in urgent situations and is therefore recommended. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals.
To improve the usability of CPOE, numeric keyboard design and error detection could benefit from spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals
NASA Astrophysics Data System (ADS)
Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat
2018-01-01
Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active and Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
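For reference, the covariance-based form of triple collocation that underlies such analyses can be sketched as follows (a minimal sketch assuming three collocated estimates on a common scale with mutually uncorrelated errors; the study's temporal-autocorrelation extension is not shown):

```python
import numpy as np

def triple_collocation(x, y, z):
    """Estimate the error variance of each of three collocated
    estimates of the same signal, assuming their errors are mutually
    uncorrelated and uncorrelated with the signal."""
    c = np.cov(np.vstack([x, y, z]))
    var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return var_x, var_y, var_z

# Synthetic check: one signal observed with three independent noise levels.
rng = np.random.default_rng(0)
truth = rng.normal(0.0, 1.0, 200_000)
x = truth + rng.normal(0.0, 0.1, truth.size)
y = truth + rng.normal(0.0, 0.2, truth.size)
z = truth + rng.normal(0.0, 0.3, truth.size)
ex, ey, ez = triple_collocation(x, y, z)
```

On the synthetic data the recovered variances land near the true 0.01, 0.04, and 0.09. Violations of the uncorrelated-error assumption bias these estimates, which is one reason error structure studies like this one matter.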
Predicted blood glucose from insulin administration based on values from miscoded glucose meters.
Raine, Charles H; Pardo, Scott; Parkes, Joan Lee
2008-07-01
The proper use of many types of self-monitored blood glucose (SMBG) meters requires calibration to match strip code. Studies have demonstrated the occurrence and impact on insulin dose of coding errors with SMBG meters. This paper reflects additional analyses performed with data from Raine et al. (JDST, 2:205-210, 2007). It attempts to relate potential insulin dose errors to possible adverse blood glucose outcomes when glucose meters are miscoded. Five sets of glucose meters were used. Two sets of meters were autocoded and therefore could not be miscoded, and three sets required manual coding. Two of each set of manually coded meters were deliberately miscoded, and one from each set was properly coded. Subjects (n = 116) had finger stick blood glucose obtained at fasting, as well as at 1 and 2 hours after a fixed meal (Boost®; Novartis Medical Nutrition U.S., Basel, Switzerland). Deviations of meter blood glucose results from the reference method (YSI) were used to predict insulin dose errors and resultant blood glucose outcomes based on these deviations. Using insulin sensitivity data, it was determined that, given an actual blood glucose of 150-400 mg/dl, an error greater than +40 mg/dl would be required to calculate an insulin dose sufficient to produce a blood glucose of less than 70 mg/dl. Conversely, an error less than or equal to -70 mg/dl would be required to derive an insulin dose insufficient to correct an elevated blood glucose to less than 180 mg/dl. For miscoded meters, the estimated probability of producing a blood glucose reduction to less than or equal to 70 mg/dl was 10.40%. The corresponding probabilities for autocoded and correctly coded manual meters were 2.52% (p < 0.0001) and 1.46% (p < 0.0001), respectively. Furthermore, the errors from miscoded meters were large enough to produce a calculated blood glucose outcome less than or equal to 50 mg/dl in 42 of 833 instances.
Autocoded meters produced zero (0) outcomes less than or equal to 50 mg/dl out of 279 instances, and correctly coded manual meters produced 1 of 416. Improperly coded blood glucose meters present the potential for insulin dose errors and resultant clinically significant hypoglycemia or hyperglycemia. Patients should be instructed and periodically reinstructed in the proper use of blood glucose meters, particularly for meters that require coding.
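The arithmetic behind the +40 and -70 mg/dl thresholds is easy to reproduce. Assuming a standard correction-dose formula, dose = (measured - target)/ISF, with an illustrative target of 110 mg/dl (our assumption, not a value stated in the study), the post-correction glucose works out to target minus the meter error, with the insulin sensitivity factor cancelling:

```python
def post_correction_glucose(actual_bg, meter_error, target=110.0, isf=50.0):
    """True glucose after a correction dose computed from a biased
    reading. target and isf are illustrative assumptions; note that
    isf cancels algebraically, leaving target - meter_error."""
    measured = actual_bg + meter_error      # what the miscoded meter shows
    dose = (measured - target) / isf        # units of insulin
    return actual_bg - dose * isf           # glucose lowered by dose * isf

# An error just above +40 mg/dl drives the outcome below 70 mg/dl:
low = post_correction_glucose(200.0, 41.0)
# An error of -70 mg/dl leaves the corrected glucose at about 180 mg/dl:
high = post_correction_glucose(250.0, -70.0)
```

Since the outcome is target - meter_error regardless of the starting glucose, a +40 mg/dl meter error from a 110 mg/dl target lands exactly on the 70 mg/dl hypoglycemia line, and a -70 mg/dl error lands on the 180 mg/dl hyperglycemia line, matching the thresholds reported above.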
Relative peripheral hyperopic defocus alters central refractive development in infant monkeys
Smith, Earl L.; Hung, Li-Fang; Huang, Juan
2009-01-01
Understanding the role of peripheral defocus on central refractive development is critical because refractive errors can vary significantly with eccentricity and peripheral refractions have been implicated in the genesis of central refractive errors in humans. Two rearing strategies were used to determine whether peripheral hyperopia alters central refractive development in rhesus monkeys. In intact eyes, lens-induced relative peripheral hyperopia produced central axial myopia. Moreover, eliminating the fovea by laser photoablation did not prevent compensating myopic changes in response to optically imposed hyperopia. These results show that peripheral refractive errors can have a substantial impact on central refractive development in primates. PMID:19632261
Modeling and characterization of multipath in global navigation satellite system ranging signals
NASA Astrophysics Data System (ADS)
Weiss, Jan Peter
The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the Earth in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced systematic errors affecting GPS measurements such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates.
The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
DOT National Transportation Integrated Search
2011-06-01
The unregulated hours and frequent night work characteristic of maintenance can produce significant levels of employee fatigue, with a resultant risk of maintenance error. Fatigue Risk Management Systems (FRMS) are widely used to manage fatigue a...
Publication bias was not a good reason to discourage trials with low power.
Borm, George F; den Heijer, Martin; Zielhuis, Gerhard A
2009-01-01
The objective was to investigate whether it is justified to discourage trials with less than 80% power. Trials with low power are unlikely to produce conclusive results, but their findings can be used by pooling them in a meta-analysis. However, such an analysis may be biased, because trials with low power are likely to have a nonsignificant result and are less likely to be published than trials with a statistically significant outcome. We simulated several series of studies with varying degrees of publication bias and then calculated the "real" one-sided type I error and the bias of meta-analyses with a "nominal" error rate (significance level) of 2.5%. In single trials, in which heterogeneity was set at zero, low, and high, the error rates were 2.3%, 4.7%, and 16.5%, respectively. In multiple trials with 80%-90% power and a publication rate of 90% when the results were nonsignificant, the error rates could be as high as 5.1%. When the power was 50% and the publication rate of nonsignificant results was 60%, the error rates did not exceed 5.3%, whereas the bias was at most 15% of the difference used in the power calculation. The impact of publication bias does not warrant the exclusion of trials with 50% power.
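The kind of simulation described can be sketched as follows (a schematic with assumed details: unit-variance outcomes so each trial yields a standard-normal z-statistic under the null, equal-sized trials pooled by averaging z-scores, a one-sided 2.5% pooled test, and significant trials always published):

```python
import numpy as np

def meta_type1_error(k=10, n_reps=20_000, pub_rate_ns=0.6, seed=0):
    """Empirical one-sided type I error of a pooled analysis when each
    nonsignificant trial is published with probability pub_rate_ns
    (significant trials are always published). Under H0 each trial's
    z-statistic is standard normal; pooling equal-size trials by
    summing z-scores and dividing by sqrt(count) is equivalent to
    fixed-effect inverse-variance pooling here."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n_reps, k))                   # per-trial z under H0
    published = (z > 1.96) | (rng.random((n_reps, k)) < pub_rate_ns)
    m = published.sum(axis=1)
    ok = m > 0                                         # need >= 1 published trial
    z_pooled = (z * published).sum(axis=1)[ok] / np.sqrt(m[ok])
    return float(np.mean(z_pooled > 1.96))
```

With full publication the empirical error sits at the nominal 2.5%; censoring nonsignificant trials inflates it, and the inflation stays modest at realistic publication rates, consistent with the 5.1%-5.3% error rates reported above.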
Gagnon, Bernadine; Miozzo, Michele
2017-01-01
Purpose This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044
[Analysis of intrusion errors in free recall].
Diesfeldt, H F A
2017-06-01
Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces rather than a primary deficit in inhibition the preferred account for intrusion errors in free recall.
DNA assembly with error correction on a droplet digital microfluidics platform.
Khilko, Yuliya; Weyman, Philip D; Glass, John I; Adams, Mark D; McNeil, Melanie A; Griffin, Peter B
2018-06-01
Custom synthesized DNA is in high demand for synthetic biology applications. However, current technologies to produce these sequences using assembly from DNA oligonucleotides are costly and labor-intensive. The automation and reduced sample volumes afforded by microfluidic technologies could significantly decrease materials and labor costs associated with DNA synthesis. The purpose of this study was to develop a gene assembly protocol utilizing a digital microfluidic device. Toward this goal, we adapted bench-scale oligonucleotide assembly methods followed by enzymatic error correction to the Mondrian™ digital microfluidic platform. We optimized Gibson assembly, polymerase chain reaction (PCR), and enzymatic error correction reactions in a single protocol to assemble 12 oligonucleotides into a 339-bp double-stranded DNA sequence encoding part of the human influenza virus hemagglutinin (HA) gene. The reactions were scaled down to 0.6-1.2 μL. Initial microfluidic assembly methods were successful and had an error frequency of approximately 4 errors/kb, with errors originating from the original oligonucleotide synthesis. Relative to conventional benchtop procedures, PCR optimization required additional amounts of MgCl2, Phusion polymerase, and PEG 8000 to achieve amplification of the assembly and error correction products. After one round of error correction, error frequency was reduced to an average of 1.8 errors/kb. We demonstrated that DNA assembly from oligonucleotides and error correction could be completely automated on a digital microfluidic (DMF) platform. The results demonstrate that enzymatic reactions in droplets show a strong dependence on surface interactions, and successful on-chip implementation required supplementation with surfactants, molecular crowding agents, and an excess of enzyme. Enzymatic error correction of assembled fragments improved sequence fidelity by 2-fold, which was a significant improvement but somewhat lower than expected compared to bench-top assays, suggesting an additional capacity for optimization.
An investigation of motion base cueing and G-seat cueing on pilot performance in a simulator
NASA Technical Reports Server (NTRS)
Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.
1983-01-01
The effect of G-seat cueing (GSC) and motion-base cueing (MBC) on performance of a pursuit-tracking task is studied using the visual motion simulator (VMS) at Langley Research Center. The G-seat, the six-degree-of-freedom synergistic platform motion system, the visual display, the cockpit hardware, and the F-16 aircraft mathematical model are characterized. Each of 8 active F-15 pilots performed the 2-min-43-sec task 10 times for each experimental mode: no cue, GSC, MBC, and GSC + MBC; the results were analyzed statistically in terms of the RMS values of vertical and lateral tracking error. It is shown that lateral error is significantly reduced by either GSC or MBC, and that the combination of cues produces a further, significant decrease. Vertical error is significantly decreased by GSC with or without MBC, whereas MBC effects vary for different pilots. The pattern of these findings is roughly duplicated in measurements of stick force applied for roll and pitch correction.
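The RMS error metric used in the analysis above is simply the square root of the mean squared tracking error over a run; a minimal helper (illustrative, not the study's analysis code):

```python
import math

def rms(samples):
    """Root-mean-square of a sequence of tracking-error samples."""
    return math.sqrt(sum(e * e for e in samples) / len(samples))
```

Comparing cueing conditions then reduces to comparing the RMS values of the vertical and lateral error traces for each run.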
Finkelstein's test: a descriptive error that can produce a false positive.
Elliott, B G
1992-08-01
Over the last three decades an error in performing Finkelstein's test has crept into the English literature in both text books and journals. This error can produce a false-positive, and if relied upon, a wrong diagnosis can be made, leading to inappropriate surgery.
Characteristics of Chinese-English bilingual dyslexia in right occipito-temporal lesion.
Ting, Simon Kang Seng; Chia, Pei Shi; Chan, Yiong Huak; Kwek, Kevin Jun Hong; Tan, Wilnard; Hameed, Shahul; Tan, Eng-King
2017-11-01
Current literature suggests that right hemisphere lesions produce predominant spatial-related dyslexic error in English speakers. However, little is known regarding such lesions in Chinese speakers. In this paper, we describe the dyslexic characteristics of a Chinese-English bilingual patient with a right posterior cortical lesion. He was found to have profound spatial-related errors during his English word reading, in both real and non-words. During Chinese word reading, there was significantly less error compared to English, probably due to the ideographic nature of the Chinese language. He was also found to commit phonological-like visual errors in English, characterized by error responses that were visually similar to the actual word. There was no significant difference in visual errors during English word reading compared with Chinese. In general, our patient's performance in both languages appears to be consistent with the current literature on right posterior hemisphere lesions. Additionally, his performance also likely suggests that the right posterior cortical region participates in the visual analysis of orthographical word representation, both in ideographical and alphabetic languages, at least from a bilingual perspective. Future studies should further examine the role of the right posterior region in initial visual analysis of both languages. Copyright © 2017 Elsevier Ltd. All rights reserved.
Quasi-static shape adjustment of a 15 meter diameter space antenna
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Herstrom, Catherine L.; Edighoffer, Harold H.
1987-01-01
A 15 meter diameter Hoop-Column antenna has been analyzed and tested to study shape adjustment of the reflector surface. The Hoop-Column antenna concept employs pretensioned cables and mesh to produce a paraboloidal reflector surface. Fabrication errors and thermal distortions may significantly reduce surface accuracy and consequently degrade electromagnetic performance. Thus, the ability to adjust the surface shape is desirable. The shape adjustment algorithm consisted of finite element and least squares error analyses to minimize the surface distortions. Experimental results verified the analysis. Application of the procedure resulted in a reduction of surface error by 38 percent. Quasi-static shape adjustment has the potential for on-orbit compensation for a variety of surface shape distortions.
Goldmann tonometry tear film error and partial correction with a shaped applanation surface.
McCafferty, Sean J; Enikov, Eniko T; Schwiegerling, Jim; Ashley, Sean M
2018-01-01
The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both an artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate simulated cornea tear film separation measurement differences between the GAT and CATS prisms. The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than the GAT prism (4.57±0.18 mmHg, p < 0.001). Tear film adhesion error was independent of applanation mire thickness (R2 = 0.09, p = 0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg; p < 0.001). Cadaver eye validation indicated the CATS prism's tear film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p = 0.002). Measured GAT tear film adhesion error is more than previously predicted. A CATS prism significantly reduced tear film adhesion error by approximately 41%. Fluorescein solution increases the tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.
Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise
2013-05-01
To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.
Adaptive Constructive Processes and the Future of Memory
ERIC Educational Resources Information Center
Schacter, Daniel L.
2012-01-01
Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…
Split torque transmission load sharing
NASA Technical Reports Server (NTRS)
Krantz, T. L.; Rashidi, M.; Kish, J. G.
1992-01-01
Split torque transmissions are attractive alternatives to conventional planetary designs for helicopter transmissions. The split torque designs can offer lighter weight and fewer parts but have not been used extensively for lack of experience, especially with obtaining proper load sharing. Two split torque designs that use different load sharing methods have been studied. Precise indexing and alignment of the geartrain to produce acceptable load sharing has been demonstrated. An elastomeric torque splitter that has large torsional compliance and damping produces even better load sharing while reducing dynamic transmission error and noise. However, the elastomeric torque splitter as now configured is not capable over the full range of operating conditions of a fielded system. A thrust balancing load sharing device was evaluated. Friction forces that oppose the motion of the balance mechanism are significant. A static analysis suggests increasing the helix angle of the input pinion of the thrust balancing design. Also, dynamic analysis of this design predicts good load sharing and significant torsional response to accumulative pitch errors of the gears.
Error image aware content restoration
NASA Astrophysics Data System (ADS)
Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee
2015-12-01
As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality control agents.
Analysis of DSN software anomalies
NASA Technical Reports Server (NTRS)
Galorath, D. D.; Hecht, H.; Hecht, M.; Reifer, D. J.
1981-01-01
A categorized database of software errors discovered during the various stages of development and operational use of the Deep Space Network DSN/Mark 3 System was developed. A study team identified several existing error classification schemes (taxonomies), prepared a detailed annotated bibliography of the error taxonomy literature, and produced a new classification scheme that was tuned to the DSN anomaly reporting system and encapsulated the work of others. Based upon the DSN/RCI error taxonomy, error data on approximately 1000 reported DSN/Mark 3 anomalies were analyzed, interpreted, and classified. Finally, the error data were summarized and histograms produced highlighting key tendencies.
Machine Learning for Discriminating Quantum Measurement Trajectories and Improving Readout.
Magesan, Easwar; Gambetta, Jay M; Córcoles, A D; Chow, Jerry M
2015-05-22
Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning (ML) algorithms and improve on them by investigating more sophisticated ML approaches. We find that nonlinear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity possible under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of systematic errors. We find large clusters in the data associated with T1 processes and show these are the main source of discrepancy between our experimental and ideal fidelities. These error diagnosis techniques help provide a path forward to improve qubit measurements.
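A minimal sketch of the clustering idea (plain 1-D k-means on scalar trajectory summaries; the study uses more sophisticated ML methods, and the data here are synthetic):

```python
import random

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Cluster scalar trajectory summaries (e.g. integrated readout
    signals) into k groups; outlier clusters can flag systematic
    errors such as mid-measurement T1 decay."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[nearest].append(v)
        # move each center to the mean of its group (keep it if empty)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return sorted(centers)
```

On well-separated readout distributions the two centers settle near the |0> and |1> signal levels, and points far from both centers are candidates for a separate error cluster.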
Sequential lineup laps and eyewitness accuracy.
Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A
2011-08-01
Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single lap versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.
Predicted Blood Glucose from Insulin Administration Based on Values from Miscoded Glucose Meters
Raine, Charles H.; Pardo, Scott; Parkes, Joan Lee
2008-01-01
Objectives The proper use of many types of self-monitored blood glucose (SMBG) meters requires calibration to match strip code. Studies have demonstrated the occurrence and impact on insulin dose of coding errors with SMBG meters. This paper reflects additional analyses performed with data from Raine et al. (JDST, 2:205–210, 2007). It attempts to relate potential insulin dose errors to possible adverse blood glucose outcomes when glucose meters are miscoded. Methods Five sets of glucose meters were used. Two sets of meters were autocoded and therefore could not be miscoded, and three sets required manual coding. Two of each set of manually coded meters were deliberately miscoded, and one from each set was properly coded. Subjects (n = 116) had finger stick blood glucose obtained at fasting, as well as at 1 and 2 hours after a fixed meal (Boost®; Novartis Medical Nutrition U.S., Basel, Switzerland). Deviations of meter blood glucose results from the reference method (YSI) were used to predict insulin dose errors and resultant blood glucose outcomes based on these deviations. Results Using insulin sensitivity data, it was determined that, given an actual blood glucose of 150–400 mg/dl, an error greater than +40 mg/dl would be required to calculate an insulin dose sufficient to produce a blood glucose of less than 70 mg/dl. Conversely, an error less than or equal to -70 mg/dl would be required to derive an insulin dose insufficient to correct an elevated blood glucose to less than 180 mg/dl. For miscoded meters, the estimated probability to produce a blood glucose reduction to less than or equal to 70 mg/dl was 10.40%. The corresponding probabilities for autocoded and correctly coded manual meters were 2.52% (p < 0.0001) and 1.46% (p < 0.0001), respectively. Furthermore, the errors from miscoded meters were large enough to produce a calculated blood glucose outcome less than or equal to 50 mg/dl in 42 of 833 instances. Autocoded meters produced zero such outcomes out of 279 instances, and correctly coded manual meters produced 1 of 416. Conclusions Improperly coded blood glucose meters present the potential for insulin dose errors and resultant clinically significant hypoglycemia or hyperglycemia. Patients should be instructed and periodically reinstructed in the proper use of blood glucose meters, particularly for meters that require coding. PMID:19885229
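To see how a meter bias propagates into a correction dose, a toy calculation (the target, insulin sensitivity factor, and dosing rule here are illustrative assumptions, not values from the paper, and not clinical guidance):

```python
def correction_dose(meter_bg, target=120.0, isf=50.0):
    """Insulin units computed from the meter reading, assuming a
    simple correction rule: (BG - target) / ISF, floored at zero.
    `isf` is the assumed mg/dl drop per unit of insulin."""
    return max(meter_bg - target, 0.0) / isf

def predicted_bg(true_bg, meter_bg, target=120.0, isf=50.0):
    """Blood glucose expected after dosing off a (possibly biased)
    meter reading while the true glucose is `true_bg`."""
    return true_bg - isf * correction_dose(meter_bg, target, isf)
```

Under these assumed parameters, a +60 mg/dl meter error at a true glucose of 200 mg/dl drives the predicted outcome to 60 mg/dl, i.e. below the 70 mg/dl hypoglycemia line, mirroring the mechanism the study quantifies.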
Roland, Michelle; Hull, M L; Howell, S M
2011-05-01
In a previous paper, we reported the virtual axis finder, which is a new method for finding the rotational axes of the knee. The virtual axis finder was validated through simulations that were subject to limitations. Hence, the objective of the present study was to perform a mechanical validation with two measurement modalities: 3D video-based motion analysis and marker-based roentgen stereophotogrammetric analysis (RSA). A two-rotational-axis mechanism was developed, which simulated internal-external (or longitudinal) and flexion-extension (FE) rotations. The actual axes of rotation were known with respect to motion analysis and RSA markers within ± 0.0006 deg and ± 0.036 mm and ± 0.0001 deg and ± 0.016 mm, respectively. The orientation and position root mean squared errors for identifying the longitudinal rotation (LR) and FE axes with video-based motion analysis (0.26 deg, 0.28 mm, 0.36 deg, and 0.25 mm, respectively) were smaller than with RSA (1.04 deg, 0.84 mm, 0.82 deg, and 0.32 mm, respectively). The random error or precision in the orientation and position was significantly better (p=0.01 and p=0.02, respectively) in identifying the LR axis with video-based motion analysis (0.23 deg and 0.24 mm) than with RSA (0.95 deg and 0.76 mm). There was no significant difference in the bias errors between measurement modalities. In comparing the mechanical validations to virtual validations, the virtual validations produced comparable errors to those of the mechanical validation. The only significant difference between the errors of the mechanical and virtual validations was the precision in the position of the LR axis while simulating video-based motion analysis (0.24 mm and 0.78 mm, p=0.019). These results indicate that video-based motion analysis with the equipment used in this study is the superior measurement modality for use with the virtual axis finder, but both measurement modalities produce satisfactory results. The lack of significant differences between validation techniques suggests that the virtual sensitivity analysis previously performed was appropriately modeled. Thus, the virtual axis finder can be applied with a thorough understanding of its errors in a variety of test conditions.
Type I and Type II error concerns in fMRI research: re-balancing the scale
Cunningham, William A.
2009-01-01
Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
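The joint intensity plus cluster-extent threshold can be sketched as follows (a 2-D toy with 4-connected clusters; real fMRI pipelines operate on 3-D statistical maps with proper neighborhood definitions):

```python
def cluster_threshold(pmap, p_crit=0.005, k_min=10):
    """Keep only voxels with p < p_crit that belong to a 4-connected
    cluster of at least k_min suprathreshold voxels."""
    rows, cols = len(pmap), len(pmap[0])
    supra = {(r, c) for r in range(rows) for c in range(cols)
             if pmap[r][c] < p_crit}
    kept, seen = set(), set()
    for start in supra:
        if start in seen:
            continue
        # flood-fill one connected cluster of suprathreshold voxels
        stack, cluster = [start], []
        seen.add(start)
        while stack:
            r, c = stack.pop()
            cluster.append((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in supra and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        if len(cluster) >= k_min:
            kept.update(cluster)
    return kept
```

A lone suprathreshold voxel (a likely Type I error) is discarded, while a spatially contiguous effect survives, which is how the joint threshold trades a lenient voxelwise p-value for control of isolated false alarms.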
Validity of BMI-Based Body Fat Equations in Men and Women: A 4-Compartment Model Comparison.
Nickerson, Brett S; Esco, Michael R; Bishop, Phillip A; Fedewa, Michael V; Snarr, Ronald L; Kliszczewicz, Brian M; Park, Kyung-Shin
2018-01-01
Nickerson, BS, Esco, MR, Bishop, PA, Fedewa, MV, Snarr, RL, Kliszczewicz, BM, and Park, K-S. Validity of BMI-based body fat equations in men and women: a 4-compartment model comparison. J Strength Cond Res 32(1): 121-129, 2018-The purpose of this study was to compare body mass index (BMI)-based body fat percentage (BF%) equations and skinfolds with a 4-compartment (4C) model in men and women. One hundred thirty adults (63 women and 67 men) volunteered to participate (age = 23 ± 5 years). BMI was calculated as weight (kg) divided by height (m) squared. BF% was predicted with the BMI-based equations of Jackson et al. (BMIJA), Deurenberg et al. (BMIDE), Gallagher et al. (BMIGA), Zanovec et al. (BMIZA), Womersley and Durnin (BMIWO), and from 7-site skinfolds using the generalized skinfold equation of Jackson et al. (SF7JP). The 4C model BF% was the criterion and derived from underwater weighing for body volume, dual-energy X-ray absorptiometry for bone mineral content, and bioimpedance spectroscopy for total body water. The constant error (CE) was not significantly different for BMIZA compared with the 4C model (p = 0.74, CE = -0.2%). However, BMIJA, BMIDE, BMIGA, and BMIWO produced significantly higher mean values than the 4C model (all p < 0.001, CEs = 1.8-3.2%), whereas SF7JP was significantly lower (p < 0.001, CE = -4.8%). The standard error of estimate ranged from 3.4 (SF7JP) to 6.4% (BMIJA) while the total error varied from 6.0 (SF7JP) to 7.3% (BMIJA). The 95% limits of agreement were the smallest for SF7JP (±7.2%) and widest for BMIJA (±13.5%). Although the BMI-based equations produced similar group mean values as the 4C model, SF7JP produced the smallest individual errors. Therefore, SF7JP is recommended over the BMI-based equations, but practitioners should consider the associated CE.
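For reference, the Deurenberg et al. adult equation, one widely cited member of the BMI-based family compared above, is commonly reported as BF% = 1.20*BMI + 0.23*age - 10.8*sex - 5.4 (sex = 1 for men, 0 for women); the exact variant used as BMIDE in the study may differ slightly:

```python
def bf_percent_deurenberg(bmi, age_years, is_male):
    """Adult body-fat % via the commonly reported Deurenberg et al.
    BMI-based equation; illustrative of the equation family compared
    in the study, not necessarily its exact BMIDE variant."""
    sex = 1.0 if is_male else 0.0
    return 1.20 * bmi + 0.23 * age_years - 10.8 * sex - 5.4
```

For a 30-year-old man with BMI 25, this yields about 20.7% body fat; the corresponding value for a woman is about 31.5%.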
ERIC Educational Resources Information Center
Fillingham, Joanne; Sage, Karen; Ralph, Matthew Lambon
2005-01-01
Background: Studies from the amnesia literature suggest that errorless learning can produce superior results to errorful learning. However, it was found in a previous investigation by the present authors that errorless and errorful therapy produced equivalent results for patients with aphasic word-finding difficulties. A study in the academic…
A method to estimate the effect of deformable image registration uncertainties on daily dose mapping
Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin
2012-01-01
Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
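The PCA sampling procedure described in the Methods can be sketched with an SVD (synthetic data; the assumption, following the paper's decorrelation step, is that mode coefficients are Gaussian and independent):

```python
import numpy as np

def sample_synthetic_error_maps(error_maps, n_samples=5, seed=0):
    """Decorrelate observed DVF error maps (one flattened map per row)
    into principal-component modes, then draw synthetic maps by
    sampling each mode's coefficient independently."""
    X = np.asarray(error_maps, dtype=float)
    mean = X.mean(axis=0)
    # rows of Vt are the principal-component modes of the error maps
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    coeff_std = s / np.sqrt(max(X.shape[0] - 1, 1))
    rng = np.random.default_rng(seed)
    coeffs = rng.normal(0.0, coeff_std, size=(n_samples, s.size))
    return mean + coeffs @ Vt
```

Because each mode carries a spatially coherent displacement pattern, the sampled maps preserve the spatial correlation structure of the observed errors, unlike voxelwise independent noise.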
Fleming, Kevin K; Bandy, Carole L; Kimble, Matthew O
2010-01-01
The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.
Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.
Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W
2017-06-22
Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.
2009-01-01
Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparing them to the validated test methods. Methods A total of 112 selected clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan performed true identification of all A. baumannii strains, while Vitek 2 failed to identify one strain, and Phoenix failed to identify two strains and misidentified two others. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance, with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher (0.3%) than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems.
We suggest that clinical laboratories using the MicroScan system for routine testing consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
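The very major/major/minor error categories used above follow the standard convention for comparing a test method against a reference: a very major error is false susceptibility (test S, reference R), a major error is false resistance (test R, reference S), and a minor error involves an intermediate result on exactly one side. A small sketch with invented calls, not the study's isolates:

```python
def error_rates(reference, test):
    """Tally discrepancy categories between a test method and the
    reference method. Inputs are per-isolate calls: 'S', 'I', or 'R'."""
    n = len(reference)
    very_major = sum(r == 'R' and t == 'S' for r, t in zip(reference, test))
    major = sum(r == 'S' and t == 'R' for r, t in zip(reference, test))
    # Minor error: exactly one of the two calls is intermediate.
    minor = sum((r == 'I') != (t == 'I') for r, t in zip(reference, test))
    return {'very_major_%': 100 * very_major / n,
            'major_%': 100 * major / n,
            'minor_%': 100 * minor / n}
```

Note that some guidelines divide very major errors by the number of resistant isolates rather than by the whole sample; the percentages in the abstract (e.g., 28 very major errors = 25% of 112) use the full sample, as above.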
FMLRC: Hybrid long read error correction using an FM-index.
Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D
2018-02-09
Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limits their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with an auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read-only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency relative to existing methods will help make better economic use of emerging long read sequencing technologies.
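The core idea, querying short-read k-mer frequencies to repair weakly supported stretches of a long read, can be caricatured without the multi-string BWT/FM-index. In this hedged sketch a plain dictionary of k-mer counts stands in for the index, only single-base substitutions of a k-mer's final base are attempted, and the k and count thresholds are arbitrary; FMLRC's actual multi-pass, variable-k algorithm is considerably more involved.

```python
from collections import Counter

def kmer_counts(short_reads, k):
    """Count every k-mer in the short reads (a stand-in for the
    FM-index frequency query)."""
    counts = Counter()
    for read in short_reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def correct_long_read(read, counts, k, min_count=2):
    """Left-to-right pass: where a k-mer is weakly supported by the
    short reads, substitute its last base for the best-supported
    alternative (if any alternative clears the threshold)."""
    read = list(read)
    for i in range(len(read) - k + 1):
        kmer = ''.join(read[i:i + k])
        if counts[kmer] >= min_count:
            continue  # this k-mer is trusted
        best = max('ACGT', key=lambda b: counts[kmer[:-1] + b])
        if counts[kmer[:-1] + best] >= min_count:
            read[i + k - 1] = best   # corrections propagate rightward
    return ''.join(read)
```

For example, with short reads supporting "ACGTACGT", a long read ending in an erroneous "A" is repaired because the k-mer "ACGA" is unsupported while "ACGT" is.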
Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.
NASA Astrophysics Data System (ADS)
Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.
2004-11-01
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, from which mean temperature time series are reconstructed. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade⁻¹ for 1960-97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade⁻¹ (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
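A minimal numeric sketch of the first-difference construction for one calendar month, using invented station data (the function name and array layout are illustrative assumptions): difference each station series year to year, average the differences across stations, then cumulatively sum to recover a large-scale anomaly series.

```python
import numpy as np

def first_difference_mean(station_series):
    """station_series: 2-D array (stations x years); NaN marks data
    dropped around a suspected discontinuity."""
    diffs = np.diff(station_series, axis=1)   # year-to-year differences
    mean_diff = np.nanmean(diffs, axis=0)     # large-scale mean difference
    # Rebuild an anomaly series (first year set to zero) by accumulation.
    return np.concatenate([[0.0], np.cumsum(mean_diff)])
```

A NaN in one station knocks out the differences on both sides of the gap for that station only; the surviving stations still contribute to the mean difference for those years, which is why the method's random error grows with the number of gaps and shrinks with the number of stations.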
Blind Braille readers mislocate tactile stimuli.
Sterr, Annette; Green, Lisa; Elbert, Thomas
2003-05-01
In a previous experiment, we observed that blind Braille readers produce errors when asked to identify on which finger of one hand a light tactile stimulus had occurred. With the present study, we aimed to specify the characteristics of this perceptual error in blind and sighted participants. The experiment confirmed that blind Braille readers mislocalised tactile stimuli more often than sighted controls, and that the localisation errors occurred significantly more often at the right reading hand than at the non-reading hand. Most importantly, we discovered that the reading fingers showed the smallest error frequency, but the highest rate of stimulus attribution. The dissociation of perceiving and locating tactile stimuli in the blind suggests altered tactile information processing. Neuroplasticity, changes in tactile attention mechanisms as well as the idea that blind persons may employ different strategies for tactile exploration and object localisation are discussed as possible explanations for the results obtained.
Motor skills under varied gravitoinertial force in parabolic flight
NASA Astrophysics Data System (ADS)
Ross, Helen E.
Parabolic flight produces brief alternating periods of high and low gravitoinertial force. Subjects were tested on various paper-and-pencil aiming and tapping tasks during both normal and varied gravity in flight. It was found that changes in g level caused directional errors in the z body axis (the gravity axis), the arm aiming too high under 0g and too low under 2g. The standard deviation also increased for both vertical and lateral movements in the mid-frontal plane. Both variable and directional errors were greater under 0g than 2g. In an unpaced reciprocal tapping task subjects tended to increase their error rate rather than their movement time, but showed a non-significant trend towards slower speeds under 0g for all movement orientations. Larger variable errors or slower speeds were probably due to the difficulty of re-organising a motor skill in an unfamiliar force environment, combined with anchorage difficulties under 0g.
Rousset, Sylvie; Fardet, Anthony; Lacomme, Philippe; Normand, Sylvie; Montaurier, Christophe; Boirie, Yves; Morio, Béatrice
2015-01-01
The objective of this study was to evaluate the validity of total energy expenditure (TEE) provided by Actiheart and Armband. Normal-weight adult volunteers wore both devices either for 17 hours in a calorimetric chamber (CC, n = 49) or for 10 days in free-living conditions (FLC) outside the laboratory (n = 41). The two devices and indirect calorimetry or doubly labelled water, respectively, were used to estimate TEE in the CC group and FLC group. In the CC, the relative value of TEE error was not significant (p > 0.05) for Actiheart but significantly different from zero for Armband, showing TEE underestimation (-4.9%, p < 0.0001). However, the mean absolute values of errors were significantly different between Actiheart and Armband: 8.6% and 6.7%, respectively (p = 0.05). Armband was more accurate for estimating TEE during sleeping, rest, recovery periods and sitting-standing. Actiheart provided better estimation during step and walking. In FLC, no significant error in relative value was detected. Nevertheless, Armband produced smaller errors in absolute value than Actiheart (8.6% vs. 12.8%). The distributions of differences were more scattered around the means, suggesting a higher inter-individual variability in TEE estimated by Actiheart than by Armband. Our results show that both monitors are appropriate for estimating TEE. Armband is more effective than Actiheart at the individual level for daily light-intensity activities.
The Accuracy of GBM GRB Localizations
NASA Astrophysics Data System (ADS)
Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.
2010-03-01
We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the flight software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on timescales of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
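A toy version of the fit described above: if each offset between a GBM location and its reference location is modelled as Gaussian with variance equal to the statistical variance plus a common systematic term in quadrature, a flat-prior grid posterior constrains that systematic term. This is a deliberate simplification; the real analysis works with angular offset distributions on the sphere, and all numbers here are invented.

```python
import numpy as np

def systematic_error_posterior(offsets, stat_errors, grid):
    """Grid posterior (flat prior) over a systematic error added in
    quadrature to per-burst statistical location errors."""
    offsets = np.asarray(offsets, float)
    stat_errors = np.asarray(stat_errors, float)
    log_post = np.empty(len(grid))
    for j, sys in enumerate(grid):
        var = stat_errors**2 + sys**2
        # Gaussian log-likelihood summed over bursts.
        log_post[j] = -0.5 * np.sum(offsets**2 / var + np.log(2 * np.pi * var))
    post = np.exp(log_post - log_post.max())  # normalise stably
    return post / post.sum()
```

With simulated offsets whose true scatter exceeds the quoted statistical errors, the posterior peaks near the systematic term needed to make up the difference.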
Dewey, Deborah; Cantell, Marja; Crawford, Susan G
2007-03-01
Motor and gestural skills of children with autism spectrum disorders (ASD), developmental coordination disorder (DCD), and/or attention deficit hyperactivity disorder (ADHD) were investigated. A total of 49 children with ASD, 46 children with DCD, 38 children with DCD+ADHD, 27 children with ADHD, and 78 typically developing control children participated. Motor skills were assessed with the Bruininks-Oseretsky Test of Motor Proficiency Short Form, and gestural skills were assessed using a test that required children to produce meaningful gestures to command and imitation. Children with ASD, DCD, and DCD+ADHD were significantly impaired on motor coordination skills; however, only children with ASD showed a generalized impairment in gestural performance. Examination of types of gestural errors revealed that children with ASD made significantly more incorrect action and orientation errors to command, and significantly more orientation and distortion errors to imitation than children with DCD, DCD+ADHD, ADHD, and typically developing control children. These findings suggest that gestural impairments displayed by the children with ASD were not solely attributable to deficits in motor coordination skills.
Seli, Paul; Cheyne, James Allan; Smilek, Daniel
2012-03-01
In two studies of a GO-NOGO task assessing sustained attention, we examined the effects of (1) altering speed-accuracy trade-offs through instructions (emphasizing both speed and accuracy or accuracy only) and (2) auditory alerts distributed throughout the task. Instructions emphasizing accuracy reduced errors and changed the distribution of GO trial RTs. Additionally, correlations between errors and increasing RTs produced a U-function; excessively fast and slow RTs accounted for much of the variance of errors. Contrary to previous reports, alerts increased errors and RT variability. The results suggest that (1) standard instructions for sustained attention tasks, emphasizing speed and accuracy equally, produce errors arising from attempts to conform to the misleading requirement for speed, which become conflated with attention-lapse produced errors and (2) auditory alerts have complex, and sometimes deleterious, effects on attention. We argue that instructions emphasizing accuracy provide a more precise assessment of attention lapses in sustained attention tasks. Copyright © 2011 Elsevier Inc. All rights reserved.
Shannon, Harlan E; Love, Patrick L
2007-02-01
Patients with epilepsy can have impaired cognitive abilities. Antiepileptic drugs (AEDs) may contribute to the cognitive deficits observed in patients with epilepsy, and have been shown to induce cognitive impairments in healthy individuals. However, there are few systematic data on the effects of AEDs on specific cognitive domains. We have previously demonstrated that a number of AEDs can impair working memory and attention. The purpose of the present study was to evaluate the effects of AEDs on learning as measured by a repeated acquisition of response sequences task in nonepileptic rats. The GABA-related AEDs phenobarbital and chlordiazepoxide significantly disrupted performance by shifting the learning curve to the right and increasing errors, whereas tiagabine and valproate did not. The sodium channel blockers carbamazepine and phenytoin suppressed responding at higher doses, whereas lamotrigine shifted the learning curve to the right and increased errors, and topiramate was without significant effect. Levetiracetam also shifted the learning curve to the right and increased errors. The disruptions produced by triazolam, chlordiazepoxide, lamotrigine, and levetiracetam were qualitatively similar to the effects of the muscarinic cholinergic receptor antagonist scopolamine. The present results indicate that AEDs can impair learning, but there are differences among AEDs in the magnitude of the disruption in nonepileptic rats, with drugs that enhance GABA receptor function and some that block sodium channels producing the most consistent impairment of learning.
Wicherts, Jelte M.; Bakker, Marjan; Molenaar, Dylan
2011-01-01
Background The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically. Methods and Findings We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance. Conclusions Our findings on the basis of psychological papers suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data archiving policies. PMID:22073203
Locatelli, R.; Bousquet, P.; Chevallier, F.; ...
2013-10-08
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In this framework, we show that transport model errors lead to a discrepancy of 27 Tg yr⁻¹ at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr⁻¹ in North America to 7 Tg yr⁻¹ in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models.
The large differences found between methane flux estimates inferred in these different configurations raise serious questions about the treatment of transport model errors in current inverse systems.
Nickerson, Brett S; Tinsley, Grant M
2018-03-21
The purpose of this study was to compare body fat estimates and fat-free mass (FFM) characteristics produced by multicompartment models when utilizing either dual-energy X-ray absorptiometry (DXA) or single-frequency bioelectrical impedance analysis (SF-BIA) for bone mineral content (BMC) in a sample of physically active adults. Body fat percentage (BF%) was estimated with 5-compartment (5C), 4-compartment (4C), 3-compartment (3C), and 2-compartment (2C) models, and DXA. The 5C-Wang model with DXA for BMC (5C-WangDXA) was the criterion. 5C-Wang using SF-BIA for BMC (5C-WangBIA), 4C-WangDXA (DXA for BMC), 4C-WangBIA (BIA for BMC), and 3C-Siri all produced values similar to 5C-WangDXA (r > 0.99; total error [TE] < 0.83%; standard error of estimate < 0.67%; 95% limits of agreement [LOAs] < ±1.35%). The 2C models (2C-Pace, 2C-Siri, and 2C-Brozek) and DXA each produced similar standard errors of estimate and 95% LOAs (2.13%-3.12% and ±4.15%-6.14%, respectively). Furthermore, 3C-LohmanDXA (underwater weighing for body volume and DXA for BMC) and 3C-LohmanBIA (underwater weighing for body volume and SF-BIA for BMC) produced the largest 95% LOAs (±5.94%-8.63%). The FFM characteristics (i.e., FFM density, water/FFM, mineral/FFM, and protein/FFM) for 5C-WangDXA and 5C-WangBIA were each compared with the "reference body" cadavers of Brozek et al. 5C-WangBIA FFM density differed significantly from the "reference body" in women (1.103 ± 0.007 g/cm³; p < 0.001), but no differences were observed for 5C-WangDXA or either 5C model in men. Moreover, water/FFM and mineral/FFM were significantly lower in men and women when comparing 5C-WangDXA and 5C-WangBIA with the "reference body," whereas protein/FFM was significantly higher (all p ≤ 0.001). 3C-LohmanBIA and 3C-LohmanDXA produced error similar to 2C models and DXA and are therefore not recommended as multicompartment models.
Although more advanced multicompartment models (e.g., 4C-Wang and 5C-Wang) can utilize BIA-derived BMC with minimal impact on body fat estimates, the increased accuracy of these models over 3C-Siri is minimal. Copyright © 2018 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
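For context on the 2C and 3C models named above, the commonly cited Siri (1961) equations convert body density Db (g/cm³), and for the 3C form also the water fraction of body mass, into percent body fat. These constants are the standard published forms; the Wang 4C/5C and Lohman equations add bone mineral terms not reproduced here.

```python
def siri_2c(db):
    """Siri two-compartment: %BF from body density (g/cm^3)."""
    return (4.95 / db - 4.50) * 100.0

def siri_3c(db, water_fraction):
    """Siri three-compartment: %BF from body density and total body
    water expressed as a fraction of body mass."""
    return (2.118 / db - 0.78 * water_fraction - 1.354) * 100.0
```

The 3C form corrects the 2C density assumption for an individual's measured hydration, which is why 3C-Siri tracked the 5C criterion so closely in the comparison above.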
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1996-01-01
Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.
Scheduling periodic jobs using imprecise results
NASA Technical Reports Server (NTRS)
Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay
1987-01-01
One approach to avoid timing faults in hard real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.
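For type N jobs, where the performance measure is the average error (here taken as a task's unexecuted optional processing time), the flavor of the problem can be sketched as follows. The greedy rule used, serving larger optional parts first, is an illustrative stand-in and not one of the paper's three heuristics.

```python
def allocate(tasks, capacity):
    """tasks: list of (mandatory, optional) processing times;
    capacity: total processor time available in the frame.
    Returns the average error (unexecuted optional time per task)."""
    mandatory = sum(m for m, _ in tasks)
    assert mandatory <= capacity, "mandatory parts alone are infeasible"
    slack = capacity - mandatory
    errors = [o for _, o in tasks]   # initially no optional part runs
    # Give leftover time to larger optional parts first (toy greedy rule).
    order = sorted(range(len(tasks)), key=lambda i: -errors[i])
    for i in order:
        give = min(errors[i], slack)
        errors[i] -= give
        slack -= give
    return sum(errors) / len(errors)
```

The key property of the imprecise-computation model shows up in the assertion: only the mandatory parts must fit, so a valid schedule can assign a task less time than full completion requires.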
Wildlife management by habitat units: A preliminary plan of action
NASA Technical Reports Server (NTRS)
Frentress, C. D.; Frye, R. G.
1975-01-01
Procedures for yielding vegetation type maps were developed using LANDSAT data and a computer assisted classification analysis (LARSYS) to assist in managing populations of wildlife species by defined area units. Ground cover in Travis County, Texas was classified on two occasions using a modified version of the unsupervised approach to classification. The first classification produced a total of 17 classes. Examination revealed that further grouping was justified. A second analysis produced 10 classes which were displayed on printouts which were later color-coded. The final classification was 82 percent accurate. While the classification map appeared to satisfactorily depict the existing vegetation, two classes were determined to contain significant error. The major sources of error could have been eliminated by stratifying cluster sites more closely among previously mapped soil associations that are identified with particular plant associations and by precisely defining class nomenclature using established criteria early in the analysis.
ReQON: a Bioconductor package for recalibrating quality scores from next-generation sequencing data
2012-01-01
Background Next-generation sequencing technologies have become important tools for genome-wide studies. However, the quality scores that are assigned to each base have been shown to be inaccurate. If the quality scores are used in downstream analyses, these inaccuracies can have a significant impact on the results. Results Here we present ReQON, a tool that recalibrates the base quality scores from an input BAM file of aligned sequencing data using logistic regression. ReQON also generates diagnostic plots showing the effectiveness of the recalibration. We show that ReQON produces quality scores that are both more accurate, in the sense that they more closely correspond to the probability of a sequencing error, and do a better job of discriminating between sequencing errors and non-errors than the original quality scores. We also compare ReQON to other available recalibration tools and show that ReQON is less biased and performs favorably in terms of quality score accuracy. Conclusion ReQON is an open source software package, written in R and available through Bioconductor, for recalibrating base quality scores for next-generation sequencing data. ReQON produces a new BAM file with more accurate quality scores, which can improve the results of downstream analysis, and produces several diagnostic plots showing the effectiveness of the recalibration. PMID:22946927
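ReQON fits a logistic regression to aligned training data; the per-quality-bin empirical recalibration below is a deliberately simpler stand-in that conveys the same principle: estimate the observed error probability for bases reported at each quality value, then re-express it on the Phred scale (Q = −10·log₁₀ p). The function name and the smoothing choice are my own, not ReQON's.

```python
import math

def recalibrate(reported_q, is_error):
    """Map each reported quality value to an empirical Phred score.

    reported_q: reported quality per base; is_error: 1 if that base
    mismatched the reference (presumed a sequencing error), else 0.
    """
    table = {}
    for q in sorted(set(reported_q)):
        errs = [e for qi, e in zip(reported_q, is_error) if qi == q]
        p = (sum(errs) + 1) / (len(errs) + 2)   # add-one (Laplace) smoothing
        table[q] = -10.0 * math.log10(p)
    return table
```

For instance, bases reported at Q20 (a claimed 1% error rate) that actually mismatch the reference 10% of the time recalibrate to roughly Q10, which is the sense in which recalibrated scores "more closely correspond to the probability of a sequencing error."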
Assessing the Impact of Analytical Error on Perceived Disease Severity.
Kroll, Martin H; Garber, Carl C; Bi, Caixia; Suffin, Stephen C
2015-10-01
The perception of the severity of disease from laboratory results assumes that the results are free of analytical error; however, analytical error creates a spread of results into a band and thus a range of perceived disease severity. To assess the impact of analytical errors by calculating the change in perceived disease severity, represented by the hazard ratio, using non-high-density lipoprotein (nonHDL) cholesterol as an example. We transformed nonHDL values into ranges using the assumed total allowable errors for total cholesterol (9%) and high-density lipoprotein cholesterol (13%). Using a previously determined relationship between the hazard ratio and nonHDL, we calculated a range of hazard ratios for specified nonHDL concentrations affected by analytical error. Analytical error, within allowable limits, created a band of values of nonHDL, with a width spanning 30 to 70 mg/dL (0.78-1.81 mmol/L), depending on the cholesterol and high-density lipoprotein cholesterol concentrations. Hazard ratios ranged from 1.0 to 2.9, a 16% to 50% error. Increased bias widens this range and decreased bias narrows it. Error-transformed results produce a spread of values that straddle the various cutoffs for nonHDL. The range of the hazard ratio obscures the meaning of results, because the spreads of ratios at different cutoffs overlap. The magnitude of the perceived hazard ratio error exceeds that for the allowable analytical error, and significantly impacts the perceived cardiovascular disease risk. Evaluating the error in the perceived severity (e.g., hazard ratio) provides a new way to assess the impact of analytical error.
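The band construction is simple arithmetic: nonHDL = TC − HDL, and the allowable errors (9% for total cholesterol and 13% for HDL cholesterol, both from the abstract) push the two components in opposite directions at the band edges. The hazard-ratio relation below is a hypothetical log-linear stand-in, since the fitted curve the authors used is not reproduced here.

```python
import math

def nonhdl_band(tc, hdl, tc_err=0.09, hdl_err=0.13):
    """Widest nonHDL interval consistent with allowable analytical error
    (units mg/dL): errors in TC and HDL pushed in opposite directions."""
    low = tc * (1 - tc_err) - hdl * (1 + hdl_err)
    high = tc * (1 + tc_err) - hdl * (1 - hdl_err)
    return low, high

def hazard_ratio(nonhdl, ref=130.0, slope=0.01):
    """Hypothetical log-linear HR vs. nonHDL relation (illustrative only)."""
    return math.exp(slope * (nonhdl - ref))
```

For TC = 200 and HDL = 50 mg/dL (nominal nonHDL = 150), the band runs from 125.5 to 174.5 mg/dL, a 49 mg/dL spread consistent with the 30-70 mg/dL widths quoted above, and it straddles typical nonHDL cutoffs on both sides.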
Spatial range of illusory effects in Müller-Lyer figures.
Predebon, J
2001-11-01
The spatial range of the illusory effects in Müller-Lyer (M-L) figures was examined in three experiments. Experiments 1 and 2 assessed the pattern of bisection errors along the shaft of the standard or double-angle (experiment 1) and the single-angle (experiment 2) M-L figures: Subjects bisected the shaft and the resulting two half-segments of the shaft to produce apparently equal quarters, and then each of the quarters to produce eight equal-appearing segments. The bisection judgments of each segment were referenced to the segment's physical midpoints. The expansion or wings-out and the contraction or wings-in figures yielded similar patterns of bisection errors. For the standard M-L figures, there were significant errors in bisecting each half, and each end-quarter, but not the two central quarters of the shaft. For the single-angle M-L figures, there were significant errors in bisecting the length of the shaft, the half-segment, and the quarter, of the shaft adjacent to the vertex but not the second quarter from the vertex nor in dividing the half of the shaft at the open end of the figure into four equal intervals. Experiment 3 assessed the apparent length of the half-segment of the shaft at the open end of the single-angle figures. Length judgments were unaffected by the vertex at the opposite end of the shaft. Taken together, the results indicate that the length distortions in both the standard and single-angle M-L figures are not uniformly distributed along the shaft but rather are confined mainly to the quarters adjacent to the vertices. The present findings imply that theories of the M-L illusion which assume uniform expansion or contraction of the shafts are incomplete.
NASA Astrophysics Data System (ADS)
Evangelisti, Luca; Pate, Brooks
2017-06-01
A study of the minimally exciting topic of agreement between calculated and measured rotational constants of molecules was performed on a set of large molecules with 16-18 heavy atoms (carbon and oxygen). The molecules are: nootkatone (C_{15}H_{22}O), cedrol (C_{15}H_{26}O), ambroxide (C_{16}H_{28}O), sclareolide (C_{16}H_{22}O_{2}), and dihydroartemisinic acid (C_{15}H_{24}O_{2}). For this set of molecules we obtained 13C-substitution structures for six molecules (this includes two conformers of nootkatone). A comparison of theoretical structures and experimental substitution structures was performed in the spirit of the recent work of Grimme and Steinmetz.[1] Our analysis focused on the center-of-mass distance of the carbon atoms in the molecules. Four different computational methods were studied: standard DFT (B3LYP), dispersion-corrected DFT (B3LYP-D3BJ), hybrid DFT with dispersion correction (B2PLYP-D3), and MP2. A significant difference among these theories is how they handle the medium-range electron correlation that produces dispersion forces. For larger molecules, these dispersion forces produce an overall contraction of the molecule around the center of mass. DFT treats this effect poorly and produces structures that are too expanded. MP2 calculations overestimate the correction and produce structures that are too compact. Both dispersion-corrected DFT methods produce structures in excellent agreement with experiment. The analysis shows that the difference in computational methods can be described by a linear error in the center-of-mass distance. This makes it possible to correct poorer-performing calculations with a single scale factor. We also reexamine the issue of the "Costain error" in substitution structures and show that it is significantly larger in these systems than in the smaller molecules used by Costain to establish the error limits.
[1] Stefan Grimme and Marc Steinmetz, "Effects of London dispersion correction in density functional theory on structures of organic molecules in the gas phase", Phys. Chem. Chem. Phys. 15, 16031-16042 (2013).
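The single scale-factor correction described above amounts to a one-parameter least-squares fit of computed to experimental distances. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def best_scale_factor(calculated, measured):
    """Least-squares scale factor s minimizing sum((measured - s*calculated)^2),
    e.g. for correcting computed carbon center-of-mass distances against
    experimental substitution-structure values."""
    num = sum(c * m for c, m in zip(calculated, measured))
    den = sum(c * c for c in calculated)
    return num / den

# A method that systematically expands every distance by 2% yields a
# recovered factor of ~1/1.02, which then rescales its whole structure.
```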
The role of visual spatial attention in adult developmental dyslexia.
Collis, Nathan L; Kohnen, Saskia; Kinoshita, Sachiko
2013-01-01
The present study investigated the nature of visual spatial attention deficits in adults with developmental dyslexia, using a partial report task with five-letter, digit, and symbol strings. Participants responded by a manual key press to one of nine alternatives, which included other characters in the string, allowing an assessment of position errors as well as intrusion errors. The results showed that the dyslexic adults performed significantly worse than age-matched controls with letter and digit strings but not with symbol strings. Both groups produced W-shaped serial position functions with letter and digit strings. The dyslexics' deficits with letter string stimuli were limited to position errors, specifically at the string-interior positions 2 and 4. These errors correlated with letter transposition reading errors (e.g., reading slat as "salt"), but not with the Rapid Automatized Naming (RAN) task. Overall, these results suggest that the dyslexic adults have a visual spatial attention deficit; however, the deficit does not reflect a reduced span in visual-spatial attention, but a deficit in processing a string of letters in parallel, probably due to difficulty in the coding of letter position.
Verbal suppression and strategy use: a role for the right lateral prefrontal cortex?
Robinson, Gail A; Cipolotti, Lisa; Walker, David G; Biggs, Vivien; Bozzali, Marco; Shallice, Tim
2015-04-01
Verbal initiation, suppression and strategy generation/use are cognitive processes widely held to be supported by the frontal cortex. The Hayling Test was designed to tap these cognitive processes within the same sentence completion task. There are few studies specifically investigating the neural correlates of the Hayling Test but it has been primarily used to detect frontal lobe damage. This study investigates the components of the Hayling Test in a large sample of patients with unselected focal frontal (n = 60) and posterior (n = 30) lesions. Patients and controls (n = 40) matched for education, age and sex were administered the Hayling Test as well as background cognitive tests. The standard Hayling Test clinical measures (initiation response time, suppression response time, suppression errors and overall score), composite errors scores and strategy-based responses were calculated. Lesions were analysed by classical frontal/posterior subdivisions as well as a finer-grained frontal localization method and a specific contrast method that is somewhat analogous to voxel-based lesion mapping methods. Thus, patients with right lateral, left lateral and superior medial lesions were compared to controls and patients with right lateral lesions were compared to all other patients. The results show that all four standard Hayling Test clinical measures are sensitive to frontal lobe damage although only the suppression error and overall scores were specific to the frontal region. Although all frontal patients produced blatant suppression errors, a specific right lateral frontal effect was revealed for producing errors that were subtly wrong. In addition, frontal patients overall produced fewer correct responses indicative of developing an appropriate strategy but only the right lateral group showed a significant deficit. This problem in strategy attainment and implementation could explain, at least in part, the suppression error impairment. 
Contrary to previous studies there was no specific frontal effect for verbal initiation. Overall, our results support a role for the right lateral frontal region in verbal suppression and, for the first time, in strategy generation/use.
ERIC Educational Resources Information Center
Loukusa, Soile; Leinonen, Eeva; Jussila, Katja; Mattila, Marja-Leena; Ryder, Nuala; Ebeling, Hanna; Moilanen, Irma
2007-01-01
This study examined irrelevant/incorrect answers produced by children with Asperger syndrome or high-functioning autism (7-9-year-olds and 10-12-year-olds) and normally developing children (7-9-year-olds). The errors produced were divided into three types: in Type 1, the child answered the original question incorrectly, in Type 2, the child gave a…
Brain signaling and behavioral responses induced by exposure to (56)Fe-particle radiation
NASA Technical Reports Server (NTRS)
Denisova, N. A.; Shukitt-Hale, B.; Rabin, B. M.; Joseph, J. A.
2002-01-01
Previous experiments have demonstrated that exposure to 56Fe-particle irradiation (1.5 Gy, 1 GeV) produced aging-like accelerations in neuronal and behavioral deficits. Astronauts on long-term space flights will be exposed to similar heavy-particle radiations that might have similar deleterious effects on neuronal signaling and cognitive behavior. Therefore, the present study evaluated whether radiation-induced spatial learning and memory behavioral deficits are associated with region-specific brain signaling deficits by measuring signaling molecules previously found to be essential for behavior [pre-synaptic vesicle proteins, synaptobrevin and synaptophysin, and protein kinases, calcium-dependent PRKCs (also known as PKCs) and PRKA (PRKA RIIbeta)]. The results demonstrated a significant radiation-induced increase in reference memory errors. The increases in reference memory errors were significantly negatively correlated with striatal synaptobrevin and frontal cortical synaptophysin expression. Both synaptophysin and synaptobrevin are synaptic vesicle proteins that are important in cognition. Striatal PRKA, a memory signaling molecule, was also significantly negatively correlated with reference memory errors. Overall, our findings suggest that radiation-induced pre-synaptic facilitation may contribute to the previously reported radiation-induced decrease in striatal dopamine release and to the disruption of central dopaminergic system integrity and dopamine-mediated behavior.
Parvin, Darius E; McDougle, Samuel D; Taylor, Jordan A; Ivry, Richard B
2018-05-09
Failures to obtain reward can occur from errors in action selection or action execution. Recently, we observed marked differences in choice behavior when the failure to obtain a reward was attributed to errors in action execution compared with errors in action selection (McDougle et al., 2016). Specifically, participants appeared to solve this credit assignment problem by discounting outcomes in which the absence of reward was attributed to errors in action execution. Building on recent evidence indicating relatively direct communication between the cerebellum and basal ganglia, we hypothesized that cerebellar-dependent sensory prediction errors (SPEs), a signal indicating execution failure, could attenuate value updating within a basal ganglia-dependent reinforcement learning system. Here we compared the SPE hypothesis to an alternative, "top-down" hypothesis in which changes in choice behavior reflect participants' sense of agency. In two experiments with male and female human participants, we manipulated the strength of SPEs, along with the participants' sense of agency in the second experiment. The results showed that, whereas the strength of SPE had no effect on choice behavior, participants were much more likely to discount the absence of rewards under conditions in which they believed the reward outcome depended on their ability to produce accurate movements. These results provide strong evidence that SPEs do not directly influence reinforcement learning. Instead, a participant's sense of agency appears to play a significant role in modulating choice behavior when unexpected outcomes can arise from errors in action execution. SIGNIFICANCE STATEMENT When learning from the outcome of actions, the brain faces a credit assignment problem: Failures of reward can be attributed to poor choice selection or poor action execution. Here, we test a specific hypothesis that execution errors are implicitly signaled by cerebellar-based sensory prediction errors. 
We evaluate this hypothesis and compare it with a more "top-down" hypothesis in which the modulation of choice behavior from execution errors reflects participants' sense of agency. We find that sensory prediction errors have no significant effect on reinforcement learning. Instead, instructions influencing participants' belief of causal outcomes appear to be the main factor influencing their choice behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyeyemi, Victor B.; Keith, John A.; Pavone, Michele
2012-01-11
Density functional theory (DFT) is often used to determine the electronic and geometric structures of molecules. While studying alkynyl radicals, we discovered that DFT exchange-correlation (XC) functionals containing less than ~22% Hartree–Fock (HF) exchange led to qualitatively different structures than those predicted from ab initio HF and post-HF calculations or DFT XCs containing 25% or more HF exchange. We attribute this discrepancy to rehybridization at the radical center due to electron delocalization across the triple bonds of the alkynyl groups, which itself is an artifact of self-interaction and delocalization errors. Inclusion of sufficient exact exchange reduces these errors and suppresses this erroneous delocalization; we find that a threshold amount is needed for accurate structure determinations. Finally, below this threshold, significant errors in predicted alkyne thermochemistry emerge as a consequence.
NASA Astrophysics Data System (ADS)
Roberts, William R.; Gould, Christopher J.; Smith, Adlai H.; Rebitz, Ken
2000-08-01
Several ideas have recently been presented that attempt to measure and predict lens aberrations for new low-k1 imaging systems. Abbreviated sets of Zernike coefficients have been produced and used to predict Across Chip Linewidth Variation. Wavefront aberrations can now be used empirically in commercially available lithography simulators to predict pattern distortion and placement errors. Measurement and determination of Zernike coefficients has been a significant effort of many. However, the use of these data has generally been limited to matching lenses or picking best-fit lens pairs. We will use wavefront aberration data collected using the Litel InspecStep in-situ interferometer as input data for Prolith/3D to model and predict pattern placement errors and intrafield overlay variation. Experimental data will be collected and compared to the simulated predictions.
The Sources of Error in Spanish Writing.
ERIC Educational Resources Information Center
Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.
1999-01-01
Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds the diverse forms of only 20 Spanish words produces 36% of the spelling errors in Spanish; and substitution is the most frequent type of error. (RS)
NASA Technical Reports Server (NTRS)
Vasilkov, Alexander; Joiner, Joanna; Spurr, Robert; Bhartia, Pawan K.; Levelt, Pieternel; Stephens, Graeme
2009-01-01
In this paper we examine differences between cloud pressures retrieved from the Ozone Monitoring Instrument (OMI) using the ultraviolet rotational Raman scattering (RRS) algorithm and those from the thermal infrared (IR) Aqua/MODIS. Several cloud data sets are currently being used in OMI trace gas retrieval algorithms, including climatologies based on IR measurements and simultaneous cloud parameters derived from OMI. From a validation perspective, it is important to understand the OMI retrieved cloud parameters and how they differ from those derived from the IR. To this end, we perform radiative transfer calculations to simulate the effects of different geophysical conditions on the OMI RRS cloud pressure retrievals. We also quantify errors related to the use of the Mixed Lambert-Equivalent Reflectivity (MLER) concept as currently implemented in the OMI algorithms. Using properties from the CloudSat radar and MODIS, we show that radiative transfer calculations support the following: (1) The MLER model is adequate for single-layer, optically thick, geometrically thin clouds, but can produce significant errors in estimated cloud pressure for optically thin clouds. (2) In a two-layer cloud, the RRS algorithm may retrieve a cloud pressure that is either between the two cloud decks or even beneath the top of the lower cloud deck because of scattering between the cloud layers; the retrieved pressure depends upon the viewing geometry and the optical depth of the upper cloud deck. (3) Absorbing aerosol in and above a cloud can produce significant errors in the retrieved cloud pressure. (4) The retrieved RRS effective pressure for a deep convective cloud will be significantly higher than the physical cloud-top pressure derived with thermal IR.
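Under the MLER concept, the scene is modeled as a weighted mix of a clear-sky Lambertian surface and an opaque Lambertian cloud, so the effective cloud fraction follows from the measured reflectivity. A sketch of that inversion, with the commonly used cloud reflectivity of 0.8 taken as an assumption:

```python
def effective_cloud_fraction(r_measured, r_clear, r_cloud=0.8):
    """Invert the MLER mixing model
    R_measured = (1 - f) * r_clear + f * r_cloud
    for the effective cloud fraction f, clipped to [0, 1]."""
    f = (r_measured - r_clear) / (r_cloud - r_clear)
    return min(max(f, 0.0), 1.0)
```

Errors of the kind quantified in the abstract then appear as biases in f (and in the cloud pressure inferred from it) whenever the real scene violates the single-layer opaque-cloud assumption.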
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano
2017-09-01
The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone of two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined over the three phases of the AQMEII activity, aimed at building up a diagnostic methodology for model evaluation, is pursued here, and novel diagnostic methods are proposed. In addition to evaluating the base case
simulation in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing the detection of the timescale and the fields that the two models are most sensitive to. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ~1.5 days account for 70-85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20 % of the quadratic total error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone observations in summer in both Europe and North America); (iv) the CMAQ ozone error has a weak/negligible dependence on the errors in NO2, while the error in NO2 significantly impacts the ozone error produced by Chimere; and (v) the response of the models to variations of anthropogenic emissions and boundary conditions shows pronounced spatial heterogeneity, while the seasonal variability of the response is found to be less marked.
Only during the winter season does the zeroing of boundary values for North America produce a spatially uniform deterioration of the model accuracy across the majority of the continent.
Significance of acceleration period in a dynamic strength testing study.
Chen, W L; Su, F C; Chou, Y L
1994-06-01
The acceleration period that occurs during isokinetic tests may provide valuable information regarding neuromuscular readiness to produce maximal contraction. The purpose of this study was to collect normative data on acceleration time during isokinetic knee testing, to calculate the acceleration work (Wacc), and to determine the errors (ERexp, ERwork, ERpower) due to ignoring Wacc in explosiveness, total work, and average power measurements. Seven male and 13 female subjects performed the test using the Cybex 325 system and an electronic stroboscope at 10 testing speeds (30-300 degrees/sec). A three-way ANOVA was used to assess the effects of gender, direction, and speed on acceleration time, Wacc, and errors. The results indicated that acceleration time was significantly affected by speed and direction; Wacc and ERexp by speed, direction, and gender; and ERwork and ERpower by speed and gender. The errors increased when testing the female subjects, during the knee flexion test, or when speed increased. To increase validity in clinical testing, it is important to consider the acceleration phase effect, especially in higher velocity isokinetic testing or for weaker muscle groups.
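The acceleration work Wacc is, in essence, an integral of torque over angular displacement during the acceleration phase, and the measurement error from ignoring it is its share of the total. The sketch below uses a trapezoidal integral and illustrative definitions (not Cybex's internal ones):

```python
import math

def acceleration_work(torque_nm, angle_deg):
    """Trapezoidal integral of torque (N*m) over angular position (deg),
    giving the work (J) done during the acceleration phase."""
    rad = [math.radians(a) for a in angle_deg]
    return sum((torque_nm[i] + torque_nm[i + 1]) / 2 * (rad[i + 1] - rad[i])
               for i in range(len(torque_nm) - 1))

def work_error_pct(w_acc, w_total):
    """Percent error introduced by omitting Wacc from a total-work
    measurement (illustrative definition)."""
    return 100.0 * w_acc / w_total
```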
Hemispheric Asymmetries in the Activation and Monitoring of Memory Errors
ERIC Educational Resources Information Center
Giammattei, Jeannette; Arndt, Jason
2012-01-01
Previous research on the lateralization of memory errors suggests that the right hemisphere's tendency to produce more memory errors than the left hemisphere reflects hemispheric differences in semantic activation. However, all prior research that has examined the lateralization of memory errors has used self-paced recognition judgments. Because…
Phonological and Motor Errors in Individuals with Acquired Sound Production Impairment
ERIC Educational Resources Information Center
Buchwald, Adam; Miozzo, Michele
2012-01-01
Purpose: This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. Method: Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy…
Error-Eliciting Problems: Fostering Understanding and Thinking
ERIC Educational Resources Information Center
Lim, Kien H.
2014-01-01
Student errors are springboards for analyzing, reasoning, and justifying. The mathematics education community recognizes the value of student errors, noting that "mistakes are seen not as dead ends but rather as potential avenues for learning." To induce specific errors and help students learn, choose tasks that might produce mistakes.…
ERIC Educational Resources Information Center
Halpern, Orly; Tobin, Yishai
2008-01-01
"Non-vocalization" (N-V) is a newly described phonological error process in hearing impaired speakers. In N-V the hearing impaired person actually articulates the phoneme but without producing a voice. The result is an error process looking as if it is produced but sounding as if it is omitted. N-V was discovered by video recording the speech of…
Using Doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W [Albuquerque, NM; Jordan, Jay D [Albuquerque, NM; Kim, Theodore J [Albuquerque, NM
2012-07-03
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Predictive momentum management for a space station: measurement and computation requirements
NASA Technical Reports Server (NTRS)
Adams, John Carl
1986-01-01
An analysis is made of the effects of errors and uncertainties in predicting disturbance torques on the peak momentum buildup on a space station. Models of the disturbance torques acting on a space station in low Earth orbit are presented to estimate how accurately they can be predicted. An analysis of the torque and momentum buildup about the pitch axis of the Dual Keel space station configuration is formulated, and a derivation of the Average Torque Equilibrium Attitude (ATEA) is presented for the cases of no MRMS (Mobile Remote Manipulation System) motion, Y vehicle axis MRMS motion, and Z vehicle axis MRMS motion. Results showed the peak momentum buildup to be approximately 20,000 N-m-s and to be relatively insensitive to errors in the predicting torque models; the momentum for Z axis motion of the MRMS was found to vary significantly with model errors, but not to exceed a value of approximately 15,000 N-m-s for the Y axis MRMS motion with a 1-deg attitude hold error. Minimum peak disturbance momentum was found not to occur at the ATEA angle, but at a slightly smaller angle. However, this minimum peak momentum attitude was found to produce significant disturbance momentum at the end of the prediction time interval.
The impact of rotator cuff tendinopathy on proprioception, measuring force sensation.
Maenhout, Annelies G; Palmans, Tanneke; De Muynck, Martine; De Wilde, Lieven F; Cools, Ann M
2012-08-01
The impact of rotator cuff tendinopathy and related impingement on proprioception is not well understood. Numerous quantitative and qualitative changes in shoulder muscles have been shown in patients with rotator cuff tendinopathy. These findings suggest that control of force might be affected. This investigation evaluates force sensation, a submodality of proprioception, in patients with rotator cuff tendinopathy. Thirty-six patients with rotator cuff tendinopathy and 30 matched healthy subjects performed force reproduction tests in isometric external and internal rotation to investigate how accurately they could reproduce a fixed target (50% MVC). Relative error, constant error, and force steadiness were calculated to evaluate, respectively, the magnitude of error made during the test, the direction of this error (overshoot or undershoot), and fluctuations of produced forces. Patients significantly overshot the target (mean, 6.04% of target) while healthy subjects underestimated it (mean, -5.76% of target). Relative error and force steadiness were similar in patients with rotator cuff tendinopathy and healthy subjects. Force reproduction tests, as executed in this study, were found to be highly reliable (ICC 0.849 and 0.909). Errors were significantly larger during external rotation tests, compared to internal rotation. Patients overestimate the target during force reproduction tests. This should be taken into account in the rehabilitation of patients with rotator cuff tendinopathy; however, precision of force sensation and steadiness of force exertion remain unaltered. This might indicate that control of muscle force is preserved.
Reduced cost and improved figure of sapphire optical components
NASA Astrophysics Data System (ADS)
Walters, Mark; Bartlett, Kevin; Brophy, Matthew R.; DeGroote Nelson, Jessica; Medicus, Kate
2015-10-01
Sapphire presents many challenges to optical manufacturers due to its high hardness and anisotropic properties. Long lead times and high prices are the typical result of such challenges. The cost of even a simple 'grind and shine' process can be prohibitive. The high precision surfaces required by optical sensor applications further exacerbate the challenge of processing sapphire thereby increasing cost further. Optimax has demonstrated a production process for such windows that delivers over 50% time reduction as compared to traditional manufacturing processes for sapphire, while producing windows with less than 1/5 wave rms figure error. Optimax's sapphire production process achieves significant improvement in cost by implementation of a controlled grinding process to present the best possible surface to the polishing equipment. Following the grinding process is a polishing process taking advantage of chemical interactions between slurry and substrate to deliver excellent removal rates and surface finish. Through experiments, the mechanics of the polishing process were also optimized to produce excellent optical figure. In addition to reducing the cost of producing large sapphire sensor windows, the grinding and polishing technology Optimax has developed aids in producing spherical sapphire components to better figure quality. Through specially developed polishing slurries, the peak-to-valley figure error of spherical sapphire parts is reduced by over 80%.
Greenland, Sander; Gustafson, Paul
2006-07-01
Researchers sometimes argue that their exposure-measurement errors are independent of other errors and are nondifferential with respect to disease, resulting in estimation bias toward the null. Among well-known problems with such arguments are that independence and nondifferentiality are harder to satisfy than ordinarily appreciated (e.g., because of correlation of errors in questionnaire items, and because of uncontrolled covariate effects on error rates); small violations of independence or nondifferentiality may lead to bias away from the null; and, if exposure is polytomous, the bias produced by independent nondifferential error is not always toward the null. The authors add to this list by showing that, in a 2 x 2 table (for which independent nondifferential error produces bias toward the null), accounting for independent nondifferential error does not reduce the p value even though it increases the point estimate. Thus, such accounting should not increase certainty that an association is present.
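For a 2 × 2 table, the standard back-correction for independent nondifferential exposure misclassification applies the same sensitivity and specificity to the case row and the control row. A sketch with illustrative numbers, showing the point estimate moving away from the null after correction (the abstract's point is that the p value does not shrink accordingly):

```python
def corrected_exposed(exposed_obs, row_total, se, sp):
    """Back-correct an observed exposed count for misclassification with
    sensitivity se and specificity sp (valid when se + sp > 1)."""
    return (exposed_obs - (1 - sp) * row_total) / (se + sp - 1)

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a exposed cases, b unexposed cases,
    c exposed controls, d unexposed controls."""
    return (a * d) / (b * c)

# With se = 0.8, sp = 0.9, an observed table (45, 55 / 27.5, 72.5)
# back-corrects to (50, 50 / 25, 75): the OR rises from ~2.16 to 3.0.
```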
Shadmehr, Reza; Ohminami, Shinya; Tsutsumi, Ryosuke; Shirota, Yuichiro; Shimizu, Takahiro; Tanaka, Nobuyuki; Terao, Yasuo; Tsuji, Shoji; Ugawa, Yoshikazu; Uchimura, Motoaki; Inoue, Masato; Kitazawa, Shigeru
2015-01-01
Cerebellar damage can profoundly impair human motor adaptation. For example, if reaching movements are perturbed abruptly, cerebellar damage impairs the ability to learn from the perturbation-induced errors. Interestingly, if the perturbation is imposed gradually over many trials, people with cerebellar damage may exhibit improved adaptation. However, this result is controversial, since the differential effects of gradual vs. abrupt protocols have not been observed in all studies. To examine this question, we recruited patients with pure cerebellar ataxia due to cerebellar cortical atrophy (n = 13) and asked them to reach to a target while viewing the scene through wedge prisms. The prisms were computer controlled, making it possible to impose the full perturbation abruptly in one trial, or build up the perturbation gradually over many trials. To control visual feedback, we employed shutter glasses that removed visual feedback during the reach, allowing us to measure trial-by-trial learning from error (termed error-sensitivity), and trial-by-trial decay of motor memory (termed forgetting). We found that the patients benefited significantly from the gradual protocol, improving their performance with respect to the abrupt protocol by exhibiting smaller errors during the exposure block, and producing larger aftereffects during the postexposure block. Trial-by-trial analysis suggested that this improvement was due to increased error-sensitivity in the gradual protocol. Therefore, cerebellar patients exhibited an improved ability to learn from error if they experienced those errors gradually. This improvement coincided with increased error-sensitivity and was present in both groups of subjects, suggesting that control of error-sensitivity may be spared despite cerebellar damage. PMID:26311179
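The trial-by-trial quantities in this analysis, error-sensitivity and forgetting, are commonly framed as a single-state linear model of adaptation. The sketch below is a generic illustration with made-up parameter values and perturbation schedules, not the authors' fitted model:

```python
def simulate_adaptation(perturbations, retention, error_sensitivity):
    """Single-state model of trial-by-trial motor adaptation:
        x[n+1] = retention * x[n] + error_sensitivity * e[n]
    where e[n] = p[n] - x[n] is the error experienced on trial n.
    'retention' encodes trial-by-trial forgetting (1 = no forgetting);
    'error_sensitivity' is the fraction of each error learned from."""
    x, errors = 0.0, []
    for p in perturbations:
        e = p - x
        errors.append(e)
        x = retention * x + error_sensitivity * e
    return x, errors

full = 30.0  # e.g. degrees of prism shift (illustrative)
abrupt = [full] * 100
gradual = [full * min(1.0, n / 80.0) for n in range(100)]

x_abrupt, e_abrupt = simulate_adaptation(abrupt, 0.99, 0.1)
x_gradual, e_gradual = simulate_adaptation(gradual, 0.99, 0.1)
# a gradual schedule keeps the per-trial errors small throughout exposure
```

Fitting retention and error-sensitivity separately to each protocol is what allows the paper's conclusion (higher error-sensitivity under gradual perturbations) to be separated from differences in forgetting.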
At least some errors are randomly generated (Freud was wrong)
NASA Technical Reports Server (NTRS)
Sellen, A. J.; Senders, J. W.
1986-01-01
An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three-digit numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness-of-fit tests for a Poisson distribution of the number of errors per 50-trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task-driven factors producing exogenous errors. Some errors, at least, are the result of constant-probability generating mechanisms, with error rates idiosyncratically determined for each subject.
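The constant-probability hypothesis is straightforward to simulate: independent Bernoulli trials with a fixed per-trial error probability yield approximately Poisson block counts and geometric (the discrete analogue of exponential) inter-error intervals. A minimal sketch with illustrative parameters, not the study's data:

```python
import random

random.seed(42)  # reproducible illustration

def simulate_errors(n_trials=1000, p_error=0.05):
    """Constant-probability error generator: each trial independently
    produces an error with probability p_error."""
    return [random.random() < p_error for _ in range(n_trials)]

trials = simulate_errors()

# errors per 50-trial block: approximately Poisson(50 * p_error),
# so the mean and variance of the block counts should be similar
blocks = [sum(trials[i:i + 50]) for i in range(0, len(trials), 50)]
mean = sum(blocks) / len(blocks)
var = sum((b - mean) ** 2 for b in blocks) / len(blocks)

# intervals between successive errors: approximately geometric,
# the discrete analogue of the exponential distribution
hits = [i for i, err in enumerate(trials) if err]
gaps = [b - a for a, b in zip(hits, hits[1:])]
```

Comparing empirical block counts and gap lengths against these reference distributions, as the goodness-of-fit tests in the study do, is what distinguishes a random generator from task-driven error sources.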
Cohen, Michael X
2015-09-01
The purpose of this paper is to compare the effects of different spatial transformations applied to the same scalp-recorded EEG data. The spatial transformations applied are two referencing schemes (average and linked earlobes), the surface Laplacian, and beamforming (a distributed source localization procedure). EEG data were collected during a speeded reaction time task that provided a comparison of activity between error vs. correct responses. Analyses focused on time-frequency power, frequency band-specific inter-electrode connectivity, and within-subject cross-trial correlations between EEG activity and reaction time. Time-frequency power analyses showed similar patterns of midfrontal delta-theta power for errors compared to correct responses across all spatial transformations. Beamforming additionally revealed error-related anterior and lateral prefrontal beta-band activity. Within-subject brain-behavior correlations showed similar patterns of results across the spatial transformations, with the correlations being the weakest after beamforming. The most striking difference among the spatial transformations was seen in connectivity analyses: linked earlobe reference produced weak inter-site connectivity that was attributable to volume conduction (zero phase lag), while the average reference and Laplacian produced more interpretable connectivity results. Beamforming did not reveal any significant condition modulations of connectivity. Overall, these analyses show that some findings are robust to spatial transformations, while other findings, particularly those involving cross-trial analyses or connectivity, are more sensitive and may depend on the use of appropriate spatial transformations. Copyright © 2014 Elsevier B.V. All rights reserved.
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical and Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
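The error categories here follow the standard convention for antimicrobial susceptibility testing: a "very major" error is a false-susceptible result, a "major" error a false-resistant result, and a "minor" error a disagreement involving an intermediate call by one method only. A small sketch of that classification (category definitions per common CLSI usage, not code from the study):

```python
def categorical_error(reference, test):
    """Classify agreement between a reference method (e.g. broth
    microdilution) and a test method, using S/I/R category calls."""
    if reference == test:
        return "agreement"
    if reference == "R" and test == "S":
        return "very major"   # false susceptible: most dangerous clinically
    if reference == "S" and test == "R":
        return "major"        # false resistant
    return "minor"            # disagreement involving an intermediate call

# toy panel of (reference, test) calls, not the study's isolates
calls = [("R", "R"), ("R", "S"), ("R", "I"), ("S", "S")]
kinds = [categorical_error(r, t) for r, t in calls]
```

Error rates such as the 23.9% very major rate reported for Vitek 2 are then simply the count of each category divided by the number of isolates tested.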
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2012-01-01
Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137
Effects of Contextual Sight-Singing and Aural Skills Training on Error-Detection Abilities.
ERIC Educational Resources Information Center
Sheldon, Deborah A.
1998-01-01
Examines the effects of contextual sight-singing and ear training on pitch and rhythm error detection abilities among undergraduate instrumental music education majors. Shows that additional training produced better error detection, particularly with rhythm errors and in one-part examples. Maintains that differences attributable to texture were…
Human leader and robot follower team: correcting leader's position from follower's heading
NASA Astrophysics Data System (ADS)
Borenstein, Johann; Thomas, David; Sights, Brandon; Ojeda, Lauro; Bankole, Peter; Fellars, Donald
2010-04-01
In multi-agent scenarios, there can be a disparity in the quality of position estimation among the various agents. Here, we consider the case of two agents - a leader and a follower - following the same path, in which the follower has a significantly better estimate of position and heading. This may be applicable to many situations, such as a robotic "mule" following a soldier. Another example is that of a convoy, in which only one vehicle (not necessarily the leading one) is instrumented with precision navigation instruments while all other vehicles use lower-precision instruments. We present an algorithm, called Follower-derived Heading Correction (FDHC), which substantially improves estimates of the leader's heading and, subsequently, position. Specifically, FDHC produces a very accurate estimate of heading errors caused by slow-changing errors (e.g., those caused by drift in gyros) of the leader's navigation system and corrects those errors.
An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR
NASA Astrophysics Data System (ADS)
Yu, Guanying; Liu, Xufeng; Liu, Songlin
2016-10-01
The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. The ripple and error fields induced by the RAFM steel in the WCCB blanket were evaluated by static magnetic analysis in the ANSYS code. The blanket produces a significant additional magnetic field, which leads to an increased ripple field: the maximum ripple along the separatrix reaches 0.53%, exceeding the acceptable design value of 0.5%. Moreover, when one blanket module is removed for heating purposes, the resulting error field is calculated to be well outside the requirement. Supported by the National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004)
Evaluation of a visual layering methodology for colour coding control room displays.
Van Laar, Darren; Deshe, Ofer
2002-07-01
Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays produced using three different coding methods. The monochrome coding method displayed the information in black and white only; the maximally discriminable method used colours chosen for their high perceptual discriminability; the visual layers method used colours developed from psychological and cartographic principles to group information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly by presentation order and for the method x order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.
Willadsen, Elisabeth; Boers, Maria; Schöps, Antje; Kisling-Møller, Mia; Nielsen, Joan Bogh; Jørgensen, Line Dahl; Andersen, Mikael; Bolund, Stig; Andersen, Helene Søgaard
2018-01-01
Differing results regarding articulation skills in young children with cleft palate (CP) have been reported and often interpreted as a consequence of different surgical protocols. To assess the influence of different timing of hard palate closure in a two-stage procedure on articulation skills in 3-year-olds born with unilateral cleft lip and palate (UCLP). Secondary aims were to compare results with peers without CP, and to investigate if there are gender differences in articulation skills. Furthermore, burden of treatment was to be estimated in terms of secondary surgery, hearing and speech therapy. A randomized controlled trial (RCT). Early hard palate closure (EHPC) at 12 months versus late hard palate closure (LHPC) at 36 months in a two-stage procedure was tested in a cohort of 126 Danish-speaking children born with non-syndromic UCLP. All participants had the lip and soft palate closed around 4 months of age. Audio and video recordings of a naming test were available from 113 children (32 girls and 81 boys) and were transcribed phonetically. Recordings were obtained prior to hard palate closure in the LHPC group. The main outcome measures were percentage consonants correct adjusted (PCC-A) and consonant errors from blinded assessments. Results from 36 Danish-speaking children without CP obtained previously by Willadsen in 2012 were used for comparison. Children with EHPC produced significantly more target consonants correctly (83%) than children with LHPC (48%; p < .001). In addition, children with LHPC produced significantly more active cleft speech characteristics than children with EHPC (p < .001). Boys achieved significantly lower PCC-A scores than girls (p = .04) and produced significantly more consonant errors than girls (p = .02). No significant differences were found between groups regarding burden of treatment. The control group performed significantly better than the EHPC and LHPC groups on all compared variables. 
© 2017 Royal College of Speech and Language Therapists.
No Substitute for Going to the Field: Correcting Lidar DEMs in Salt Marshes
NASA Astrophysics Data System (ADS)
Renken, K.; Morris, J. T.; Lynch, J.; Bayley, H.; Neil, A.; Rasmussen, S.; Tyrrell, M.; Tanis, M.
2016-12-01
Models that forecast the response of salt marshes to current and future trends in sea level rise are increasingly used to guide management of these vulnerable ecosystems. Lidar-derived DEMs serve as the foundation for modeling landform change. However, caution is advised when using these DEMs as the starting point for models of salt marsh evolution. While broad vegetation class (i.e., young forest, old forest, grasslands, desert, etc.) has proven to be a significant predictor of vertical displacement error in terrestrial environments, differentiating error among different species or community types within the same ecosystem has received less attention. Salt marshes are dominated by monocultures of grass species and thus are an ideal environment in which to examine the within-species effect on lidar DEM error. We analyzed error of lidar DEMs using elevations from real-time kinematic (RTK) surveys in salt marshes in multiple national parks and wildlife refuge areas from the mouth of the Chesapeake Bay to Massachusetts. Error of the lidar DEMs was sometimes large, on the order of 0.25 m, and varied significantly between sites because vegetation cover varies seasonally and lidar data were not always collected in the same season for each park. Vegetation cover and composition were used to explain differences between RTK elevations and lidar DEMs. This research underscores the importance of collecting RTK elevation data and vegetation cover data coincident with lidar data to produce correction factors specific to individual salt marsh sites.
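A correction factor of the kind described is typically the mean difference between DEM and RTK elevations, computed separately per vegetation class and subtracted from the DEM. A minimal sketch with hypothetical species names and elevations (not the surveyed data):

```python
def class_bias(rtk_points):
    """Mean lidar-DEM error per vegetation class from RTK checkpoints.
    Each point is (veg_class, rtk_elev_m, dem_elev_m); the per-class
    mean of (dem - rtk) is the correction to subtract from the DEM."""
    sums, counts = {}, {}
    for veg, rtk, dem in rtk_points:
        sums[veg] = sums.get(veg, 0.0) + (dem - rtk)
        counts[veg] = counts.get(veg, 0) + 1
    return {veg: sums[veg] / counts[veg] for veg in sums}

# toy checkpoints: dense marsh grass biases the lidar surface upward
pts = [("S. alterniflora", 0.42, 0.61),
       ("S. alterniflora", 0.35, 0.58),
       ("S. patens", 0.50, 0.57)]
bias = class_bias(pts)  # per-class vertical bias in meters
```

Because the bias depends on canopy state, the abstract's point stands: the RTK and vegetation data must be collected coincident with the lidar acquisition, or the correction factors will reflect a different season's canopy.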
Improving Global Net Surface Heat Flux with Ocean Reanalysis
NASA Astrophysics Data System (ADS)
Carton, J.; Chepurin, G. A.; Chen, L.; Grodsky, S.
2017-12-01
This project addresses the current level of uncertainty in surface heat flux estimates. Time-mean surface heat flux estimates provided by atmospheric reanalyses differ by 10-30 W/m2. They are generally unbalanced globally, and have been shown by ocean simulation studies to be incompatible with ocean temperature and velocity measurements. Here a method is presented (1) to identify the spatial and temporal structure of the underlying errors and (2) to reduce them by exploiting hydrographic observations and the analysis increments produced by an ocean reanalysis using sequential data assimilation. The method is applied to fluxes computed from daily state variables obtained from three widely used reanalyses: MERRA2, ERA-Interim, and JRA-55, during an eight-year period, 2007-2014. For each of these, seasonal heat flux errors/corrections are obtained. In a second set of experiments the heat fluxes are corrected and the ocean reanalysis experiments are repeated. This second round of experiments shows that the time-mean error in the corrected fluxes is reduced to within ±5 W/m2 over the interior subtropical and midlatitude oceans, with the most significant changes occurring over the Southern Ocean. The global heat flux imbalance of each reanalysis is reduced to within a few W/m2 with this single correction. Encouragingly, the corrected forms of the three sets of fluxes are also shown to converge. In the final discussion we present experiments beginning with a modified form of the ERA-Interim reanalysis, produced by the DRAKKAR program, in which state variables have been individually corrected based on independent measurements. Finally, we discuss the separation of flux error from model error.
High accuracy switched-current circuits using an improved dynamic mirror
NASA Technical Reports Server (NTRS)
Zweigle, G.; Fiez, T.
1991-01-01
The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well-established switched-capacitor circuit technique. High-speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error, including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.
Effects of data selection on the assimilation of AIRS data
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Brin, E.; Treadon, R.; Derber, J.; VanDelst, P.; DeSilva, A.; Marshall, J. Le; Poli, P.; Atlas, R.; Cruz, C.;
2006-01-01
The Atmospheric InfraRed Sounder (AIRS), flying aboard NASA's Earth Observing System (EOS) Aqua satellite with the Advanced Microwave Sounding Unit-A (AMSU-A), has been providing data for use in numerical weather prediction (NWP) and data assimilation systems (DAS) for over three years. The full AIRS data set is currently not transmitted in near-real-time (NRT) to the NWP centers. Instead, data sets with reduced spatial and spectral information are produced and made available in NRT. In this paper, we evaluate the use of different channel selections and error specifications. We achieved significant positive impact from the Aqua AIRS/AMSU-A combination in both hemispheres during our experimental time period of January 2003. The best results were obtained using a set of 156 channels that did not include any in the 6.7 micron water vapor band. The latter have a large influence on both temperature and humidity analyses. If observation and background errors are not properly specified, the partitioning of temperature and humidity information from these channels will not be correct, and this can lead to a degradation in forecast skill. We found that changing the specified channel errors had a significant effect on the amount of data that entered into the analysis as a result of quality control thresholds that are related to the errors. However, changing the channel errors within a relatively small window did not significantly impact forecast skill with the 156-channel set. We also examined the effects of different types of spatial data reduction on assimilated data sets and NWP forecast skill. Whether we picked the center or the warmest AIRS pixel in a 3x3 array affected the amount of data ingested by the analysis but had a negligible impact on the forecast skill.
Detecting ‘Wrong Blood in Tube’ Errors: Evaluation of a Bayesian Network Approach
Doctor, Jason N.; Strylewicz, Greg
2010-01-01
Objective In an effort to address the problem of laboratory errors, we develop and evaluate a method to detect mismatched specimens from nationally collected blood laboratory data in two experiments. Methods In Experiments 1 and 2, using blood labs from the National Health and Nutrition Examination Survey (NHANES) and values derived from the Diabetes Prevention Program (DPP), respectively, a proportion of glucose and HbA1c specimens were randomly mismatched. A Bayesian network that encoded probabilistic relationships among analytes was used to predict mismatches. In Experiment 1 the performance of the network was compared against existing error detection software. In Experiment 2 the network was compared against 11 human experts recruited from the American Academy of Clinical Chemists. Results were compared via area under the receiver-operating characteristic curves (AUCs) and with agreement statistics. Results In Experiment 1 the network was most predictive of mismatches that produced clinically significant discrepancies between true and mismatched scores (AUC of 0.87 (±0.04) for HbA1c and 0.83 (±0.02) for glucose), performed well in identifying errors among those self-reporting diabetes (N = 329) (AUC = 0.79 (±0.02)), and performed significantly better than the established approach it was tested against (in all cases p < 0.05). In Experiment 2 it performed better than (and in no case worse than) 7 of the 11 human experts. Average percent agreement between experts and the Bayesian network was 0.79, and kappa (κ) was 0.59. Conclusions A Bayesian network can accurately identify mismatched specimens. The algorithm is best at identifying mismatches that result in a clinically significant magnitude of error. PMID:20566275
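The AUC figures reported here can be computed directly from classifier scores via the Mann-Whitney formulation: the AUC is the probability that a randomly chosen positive (mismatched) specimen receives a higher score than a randomly chosen negative (correctly matched) one. A generic sketch with toy scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher, counting ties as half."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# toy mismatch probabilities from a hypothetical detector
value = auc([0.9, 0.8, 0.4], [0.3, 0.2, 0.5])  # 8 of 9 pairs ordered correctly
```

This pairwise definition also makes the comparison with human experts natural: each expert's binary calls trace a single point on the same ROC axes.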
Coarticulatory evidence in stuttered disfluencies
NASA Astrophysics Data System (ADS)
Arbisi-Kelm, Timothy
2005-09-01
While the disfluencies produced in stuttered speech surface at a significantly higher rate than those found in normal speech, it is less clear from the previous stuttering literature how exactly these disfluency patterns might differ in kind [Wingate (1988)]. One tendency found in normal speech is for disfluencies to remove acoustic evidence of coarticulation patterns [Shriberg (1999)]. This appears attributable to lexical search errors which prevent a speaker from accessing a word's phonological form; that is, coarticulation between words will fail to occur when segmental material from the following word is not retrieved. Since stuttering is a disorder which displays evidence of phonological but not lexical impairment, it was predicted that stuttered disfluencies would differ from normal errors in that the former would reveal acoustic evidence of word transitions. Eight speakers (four stutterers and four control subjects) participated in a narrative-production task, spontaneously describing a picture book. Preliminary results suggest that while both stutterers and controls produced similar rates of disfluencies occurring without coarticulatory evidence, only the stutterers regularly produced disfluencies reflecting this transitional evidence. These results support the argument that disfluencies proper to stuttering result from a phonological deficit, while normal disfluencies are generally lexically based.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with a cost per error avoided of £79 (US$131). We aimed to estimate the cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER has a 59% probability of being cost-effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
A spectrally tunable solid-state source for radiometric, photometric, and colorimetric applications
NASA Astrophysics Data System (ADS)
Fryc, Irena; Brown, Steven W.; Eppeldauer, George P.; Ohno, Yoshihiro
2004-10-01
A spectrally tunable light source using a large number of LEDs and an integrating sphere has been designed and is being developed at NIST. The source is designed to be capable of producing spectral distributions mimicking various light sources in the visible region by feedback control of the individual LEDs. The output spectral irradiance or radiance of the source will be calibrated by a reference instrument, and the source will be used as a spectroradiometric as well as a photometric and colorimetric standard. The use of the tunable source mimicking the spectra of display colors, for example, rather than a traditional incandescent standard lamp for calibration of colorimeters, can significantly reduce the spectral mismatch errors of a colorimeter measuring displays. A series of simulations has been conducted to predict the performance of the designed tunable source when used for calibration of colorimeters. The results indicate that the errors can be reduced by an order of magnitude compared with those obtained when the colorimeters are calibrated against Illuminant A. Stray light errors of a spectroradiometer can also be effectively reduced by using the tunable source to produce a blackbody spectrum at a higher temperature (e.g., 9000 K). The source can also approximate various CIE daylight illuminants and common lamp spectral distributions for other photometric and colorimetric applications.
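Matching a target spectrum by controlling many LEDs is, at its core, a constrained least-squares problem: find nonnegative drive weights so that the weighted sum of the LED spectra approximates the target. The sketch below is a simplified illustration; the Gaussian LED models, 2856 K blackbody target, and projected-gradient solver are all assumptions for demonstration, not NIST's actual source data or control method:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)  # wavelength grid, nm

def led_spectrum(peak_nm, fwhm_nm=30.0):
    """Idealized Gaussian model of a single LED's spectral output."""
    sigma = fwhm_nm / 2.3548
    return np.exp(-0.5 * ((wl - peak_nm) / sigma) ** 2)

# bank of 15 LEDs with peaks spaced across the visible region
bank = np.stack([led_spectrum(p) for p in range(410, 700, 20)], axis=1)

# target: 2856 K blackbody (approximates CIE Illuminant A), normalized
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
lam = wl * 1e-9
target = (1.0 / lam**5) / (np.exp(h * c / (lam * k * 2856.0)) - 1.0)
target = target / target.max()

# nonnegative least-squares fit by projected gradient descent
w = np.zeros(bank.shape[1])
for _ in range(20000):
    grad = bank.T @ (bank @ w - target)
    w = np.clip(w - 0.02 * grad, 0.0, None)

rms_error = float(np.sqrt(np.mean((bank @ w - target) ** 2)))
```

In the real instrument the loop is closed with spectroradiometer feedback rather than a model, but the same nonnegativity constraint applies: LEDs can only add light.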
NASA Astrophysics Data System (ADS)
Rausch, Kameron; Houchin, Scott; Cardema, Jason; Moy, Gabriel; Haas, Evan; De Luccia, Frank J.
2013-12-01
Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) reflective bands are currently calibrated via weekly updates to look-up tables (LUTs) utilized by operational ground processing in the Joint Polar Satellite System Interface Data Processing Segment (IDPS). The parameters in these LUTs must be predicted 2 weeks ahead and cannot adequately track the dynamically varying response characteristics of the instrument. As a result, spurious "predict-ahead" calibration errors of the order of 0.1% or greater are routinely introduced into the calibrated reflectances and radiances produced by IDPS in sensor data records (SDRs). Spurious calibration errors of this magnitude adversely impact the quality of downstream environmental data records (EDRs) derived from VIIRS SDRs, such as Ocean Color/Chlorophyll, and cause increased striping and band-to-band radiometric calibration uncertainty in SDR products. A novel algorithm that fully automates reflective band calibration has been developed for implementation in IDPS in late 2013. Automating the reflective solar band (RSB) calibration is extremely challenging and represents a significant advancement over the manner in which RSB calibration has traditionally been performed in heritage instruments such as the Moderate Resolution Imaging Spectroradiometer. The automated algorithm applies calibration data almost immediately after their acquisition by the instrument from views of space and onboard calibration sources, thereby eliminating the predict-ahead errors associated with the current offline calibration process. This new algorithm, when implemented, will significantly improve the quality of VIIRS reflective band SDRs and consequently the quality of EDRs produced from these SDRs.
Impact of human error on lumber yield in rough mills
Urs Buehlmann; R. Edward Thomas; R. Edward Thomas
2002-01-01
Rough sawn, kiln-dried lumber contains characteristics such as knots and bark pockets that are considered by most people to be defects. When using boards to produce furniture components, these defects are removed to produce clear, defect-free parts. Currently, human operators identify and locate the unusable board areas containing defects. Errors in determining a...
Article Errors in the English Writing of Saudi EFL Preparatory Year Students
ERIC Educational Resources Information Center
Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.
2017-01-01
This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…
Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas
ERIC Educational Resources Information Center
Herzberg, Tina
2010-01-01
In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…
Retinal Image Quality During Accommodation
López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.
2013-01-01
Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximizing retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effects of accommodative errors on visual acuity are mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum.
A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386
Retinal image quality during accommodation.
López-Gil, Norberto; Martin, Jesson; Liu, Tao; Bradley, Arthur; Díaz-Muñoz, David; Thibos, Larry N
2013-07-01
We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximizing retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effects of accommodative errors on visual acuity are mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum.
A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
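The through-focus analysis described above can be sketched numerically: induce a known defocus error in a circular pupil, sweep the power of a virtual correcting lens, and pick the power that maximizes the Strehl ratio of the resulting point-spread function. This toy model (552 nm light, a 2 mm pupil radius, pure defocus, and the plain Strehl ratio rather than the visual Strehl ratio used in the study) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def strehl(defocus_diopters, pupil_radius_m=0.002, wavelength_m=552e-9, n=256):
    """Strehl ratio of a circular pupil carrying pure defocus.

    Wavefront error at normalized pupil radius rho for a defocus of
    D diopters: W(rho) = D * a^2 * rho^2 / 2, with a = pupil radius (m).
    """
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    rho2 = X**2 + Y**2
    pupil = rho2 <= 1.0
    opd = defocus_diopters * pupil_radius_m**2 * rho2 / 2.0   # metres
    field = pupil * np.exp(2j * np.pi * opd / wavelength_m)
    psf_peak = np.abs(np.fft.fft2(field)).max() ** 2
    ideal_peak = np.abs(np.fft.fft2(pupil.astype(float))).max() ** 2
    return psf_peak / ideal_peak

def best_correction(eye_defocus, powers):
    """Power of the virtual correcting lens that maximizes Strehl ratio."""
    scores = [strehl(eye_defocus + p) for p in powers]
    return powers[int(np.argmax(scores))]

powers = np.arange(-1.0, 1.01, 0.1)   # candidate lens powers, diopters
print(best_correction(0.5, powers))   # expect a power near -0.5 D
```

The refractive state recovered by the sweep is the negative of the induced defocus, i.e. the lens power that restores a flat wavefront.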
Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration
NASA Technical Reports Server (NTRS)
Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.
1996-01-01
An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
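The gap described above can be reproduced with a toy Monte Carlo: when two inputs of a data-reduction equation share a correlated precision error, propagating the two precision indices independently overstates the scatter the computed result actually exhibits, while including the covariance term recovers it. The ratio below is a stand-in data-reduction equation, not the actual venturi discharge-coefficient calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000
common = rng.normal(0.0, 1.0, N)                # shared (correlated) precision error
p1 = 100.0 + common + rng.normal(0, 0.1, N)     # two "measurements" sharing it
p2 = 80.0 + common + rng.normal(0, 0.1, N)
r = p1 / p2                                     # stand-in data-reduction equation

s1, s2 = p1.std(ddof=1), p2.std(ddof=1)
cov = np.cov(p1, p2)[0, 1]
dr1, dr2 = 1.0 / p2.mean(), -p1.mean() / p2.mean() ** 2   # partials of r

# standards-style propagation ignoring the correlated term:
s_uncorr = np.hypot(dr1 * s1, dr2 * s2)
# propagation including the covariance term:
s_corr = np.sqrt((dr1 * s1) ** 2 + (dr2 * s2) ** 2 + 2 * dr1 * dr2 * cov)
s_direct = r.std(ddof=1)                        # scatter of the result itself

print(s_uncorr / s_direct)   # several times too large here
print(s_corr / s_direct)     # close to 1
```

Because the correlated errors largely cancel in the ratio, ignoring the covariance term inflates the precision uncertainty estimate, mirroring the order-of-magnitude discrepancy reported in the abstract.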
Impairments in prehension produced by early postnatal sensory motor cortex activity blockade.
Martin, J H; Donarummo, L; Hacking, A
2000-02-01
This study examined the effects of blocking neural activity in sensory motor cortex during early postnatal development on prehension. We infused muscimol, either unilaterally or bilaterally, into the sensory motor cortex of cats to block activity continuously between postnatal weeks 3-7. After stopping infusion, we trained animals to reach and grasp a cube of meat and tested behavior thereafter. Animals that had not received muscimol infusion (unilateral saline infusion; age-matched) reached for the meat accurately with small end-point errors. They grasped the meat using coordinated digit flexion followed by forearm supination on 82.7% of trials. Performance using either limb did not differ significantly. In animals receiving unilateral muscimol infusion, reaching and grasping using the limb ipsilateral to the infusion were similar to controls. The limb contralateral to infusion showed significant increases in systematic and variable reaching end-point errors, often requiring subsequent corrective movements to contact the meat. Grasping occurred on only 14.8% of trials, replaced on most trials by raking without distal movements. Compensatory adjustments in reach length and angle, to maintain end-point accuracy as movements were started from a more lateral position, were less effective using the contralateral limb than ipsilateral limb. With bilateral inactivations, the form of reaching and grasping impairments was identical to that produced by unilateral inactivation, but the magnitude of the reaching impairments was less. We discuss these results in terms of the differential effects of unilateral and bilateral inactivation on corticospinal tract development. We also investigated the degree to which these prehension impairments after unilateral blockade reflect control by each hemisphere. 
In animals that had received unilateral blockade between postnatal weeks (PWs) 3 and 7, we silenced on-going activity (after PW 11) during task performance using continuous muscimol infusion. We inactivated the right (previously active) and then the left (previously silenced) sensory motor cortex. Inactivation of the ipsilateral (right) sensory motor cortex produced a further increase in systematic error and less frequent normal grasping. Reinactivation of the contralateral (left) cortex produced larger increases in reaching and grasping impairments than those produced by ipsilateral inactivation. This suggests that the impaired limb receives bilateral sensory motor cortex control but that control by the contralateral (initially silenced) cortex predominates. Our data are consistent with the hypothesis that the normal development of skilled motor behavior requires activity in sensory motor cortex during early postnatal life.
Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W
2015-08-01
This article proposes quantitative analysis tools and digital phantoms to quantify the intrinsic errors of deformable image registration (DIR) systems and to establish quality assurance (QA) procedures for clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set R and a test image set T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref deforms T into a new image R'. The data set (R', T, and DVFref) forms a realistic truth set and therefore can be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis (calculating and delineating differences between DVFs), two methods were used: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA procedure was evaluated using the head and neck case. © The Author(s) 2014.
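The two analysis tools can be sketched with synthetic deformation vector fields: a per-voxel error-magnitude map for slice-wise color display, and a cumulative error histogram within a structure mask. Array shapes and the structure mask below are illustrative assumptions, not the study's data.

```python
import numpy as np

def dvf_error_magnitude(dvf_ref, dvf_test):
    """Per-voxel magnitude of the difference between two DVFs.

    dvf_*: arrays of shape (nz, ny, nx, 3) holding displacement vectors.
    """
    return np.linalg.norm(dvf_ref - dvf_test, axis=-1)

def cumulative_error_histogram(err_mag, mask, bins=50):
    """Cumulative probability of deformation error within a structure.

    Returns (edges, cdf): fraction of masked voxels with error <= edge.
    """
    vals = err_mag[mask]
    hist, edges = np.histogram(vals, bins=bins, range=(0, vals.max()))
    cdf = np.cumsum(hist) / vals.size
    return edges[1:], cdf

# synthetic example: the test DVF differs from the reference by a 2 mm x-shift
rng = np.random.default_rng(1)
ref = rng.normal(0, 1, (8, 16, 16, 3))
test = ref.copy()
test[..., 0] += 2.0
err = dvf_error_magnitude(ref, test)
organ = np.zeros((8, 16, 16), bool)
organ[2:6, 4:12, 4:12] = True                   # hypothetical structure mask
edges, cdf = cumulative_error_histogram(err, organ)
print(err.mean())   # every voxel differs by exactly 2 mm
```

The error-magnitude map is what would be color-mapped on each slice; the (edges, cdf) pair is the per-structure cumulative probability function described in the abstract.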
Uncorrected and corrected refractive error experiences of Nepalese adults: a qualitative study.
Kandel, Himal; Khadka, Jyoti; Shrestha, Mohan Krishna; Sharma, Sadhana; Neupane Kandel, Sandhya; Dhungana, Purushottam; Pradhan, Kishore; Nepal, Bhagavat P; Thapa, Suman; Pesudovs, Konrad
2018-04-01
The aim of this study was to explore the impact of corrected and uncorrected refractive error (URE) on Nepalese people's quality of life (QoL), and to compare QoL status between refractive error subgroups. Participants were recruited from Tilganga Institute of Ophthalmology and Dhulikhel Hospital, Nepal. Semi-structured in-depth interviews were conducted with 101 people with refractive error. Thematic analysis was used, with matrices produced to compare the occurrence of themes and categories across participants. Themes were identified using an inductive approach. Seven major themes emerged that determined refractive error-specific QoL: activity limitation, inconvenience, health concerns, psycho-social impact, economic impact, general and ocular comfort symptoms, and visual symptoms. Activity limitation, economic impact, and symptoms were the most important themes for participants with URE, whereas the inconvenience associated with wearing glasses was the most important issue for glasses wearers. Similarly, the possibility of side effects or complications was the major concern for participants wearing contact lenses. In general, refractive surgery addressed the socio-emotional impact of wearing glasses or contact lenses. However, surgery participants had concerns such as the possibility of having to wear glasses again due to relapse of refractive error. The impact of refractive error on people's QoL is multifaceted. The significance of the identified themes varies by refractive error subgroup. Refractive correction may not always address the QoL impact of URE and often adds unique QoL issues. These findings also provide content for developing an item bank for quantitatively measuring refractive error-specific QoL in a developing-country setting.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
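The underdetermined setup can be illustrated with a toy Monte Carlo: three unknown health-parameter deltas observed through only two sensors, with each two-parameter "tuner" subset scored by the mean-squared error it leaves in the full parameter vector. This brute-force subset search is a simplification of the paper's iterative search over general tuner vectors; the sensitivity matrix and noise levels are arbitrary assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_par, n_meas = 3, 2                        # more parameters than sensors
C = rng.normal(size=(n_meas, n_par))        # assumed sensor sensitivity matrix
R = 0.05 * np.eye(n_meas)                   # measurement noise covariance

def mse_for_subset(sub, trials=100, steps=150):
    """Mean-squared error over all parameters when only `sub` are tuned."""
    Cs = C[:, sub]
    err2 = 0.0
    for _ in range(trials):
        truth = rng.normal(0, 1, n_par)     # constant parameter deltas
        x, P = np.zeros(len(sub)), np.eye(len(sub))
        for _ in range(steps):              # standard Kalman filter updates
            y = C @ truth + rng.multivariate_normal(np.zeros(n_meas), R)
            S = Cs @ P @ Cs.T + R
            K = P @ Cs.T @ np.linalg.inv(S)
            x = x + K @ (y - Cs @ x)
            P = (np.eye(len(sub)) - K @ Cs) @ P
        est = np.zeros(n_par)
        est[list(sub)] = x                  # untuned parameters held at zero
        err2 += np.sum((est - truth) ** 2)
    return err2 / (trials * n_par)

scores = {sub: mse_for_subset(sub) for sub in combinations(range(n_par), 2)}
best = min(scores, key=scores.get)
print(best, scores[best])
```

The surviving error in the untuned parameter (and the bias it induces in the tuned ones) is exactly the bias/variance trade-off the paper's tuner selection routine minimizes analytically rather than by simulation.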
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2015-03-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
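The indirect strategy's error map combines component-model uncertainties via standard error propagation. A minimal sketch, assuming SOC stock = concentration × bulk density × depth with independent component errors (the actual component models and their error structure are not specified in the abstract):

```python
import numpy as np

def soc_stock(conc, bd, depth):
    """SOC stock (kg m^-2) from concentration (kg C per kg soil),
    bulk density (kg m^-3) and layer depth (m)."""
    return conc * bd * depth

def soc_stock_error(conc, bd, depth, s_conc, s_bd, s_depth):
    """Standard error propagation for a product of independent factors:
    (s_y / y)^2 = sum of the squared relative errors of the factors."""
    y = soc_stock(conc, bd, depth)
    rel = np.sqrt((s_conc / conc) ** 2 + (s_bd / bd) ** 2
                  + (s_depth / depth) ** 2)
    return y * rel

# hypothetical per-pixel maps (2x2 toy grids)
conc = np.array([[0.02, 0.03], [0.025, 0.015]])   # kg C per kg soil
bd = np.full((2, 2), 1400.0)                      # kg m^-3
depth = 0.3                                       # topsoil depth, m
stock = soc_stock(conc, bd, depth)
err = soc_stock_error(conc, bd, depth, 0.002, 100.0, 0.02)
print(stock[0, 0])   # 0.02 * 1400 * 0.3 ≈ 8.4 kg m^-2
```

Summing topsoil and subsoil stocks would add their variances in quadrature the same way, which is how spatially combined component errors accumulate in the indirect approach.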
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2014-11-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
Adaptive control system for pulsed megawatt klystrons
Bolie, Victor W.
1992-01-01
The invention provides an arrangement for reducing waveform errors such as errors in phase or amplitude in output pulses produced by pulsed power output devices such as klystrons by generating an error voltage representing the extent of error still present in the trailing edge of the previous output pulse, using the error voltage to provide a stored control voltage, and applying the stored control voltage to the pulsed power output device to limit the extent of error in the leading edge of the next output pulse.
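The scheme amounts to pulse-to-pulse iterative learning: the error measured on one pulse updates a stored control voltage that pre-corrects the next pulse. A scalar sketch with an assumed repeatable disturbance and loop gain (the patent's actual circuit values are not given):

```python
# Pulse-to-pulse (iterative learning) correction sketch: the error sampled
# on pulse n updates a stored control value applied to pulse n+1.
target = 0.0          # desired leading-edge phase error
disturbance = 5.0     # repeatable pulse-to-pulse phase offset (assumed)
gain = 0.6            # loop gain < 1 for stable convergence
stored_control = 0.0

errors = []
for pulse in range(20):
    output = disturbance + stored_control   # phase of this pulse's edge
    error = output - target
    errors.append(error)
    stored_control -= gain * error          # update held for the next pulse
# error decays geometrically: e_{n+1} = (1 - gain) * e_n
print(errors[0], errors[-1])
```

With a repeatable disturbance the residual error shrinks by the factor (1 - gain) each pulse, which is why the stored correction derived from one trailing edge limits the error in the next leading edge.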
Determining relative error bounds for the CVBEM
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.
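The bounding idea can be sketched as sampling the relative error along each boundary element, taking the maximum per element, and summing the maxima into a total bound. The boundary function and its approximation below are hypothetical stand-ins, not a CVBEM solution:

```python
import numpy as np

def element_error_bounds(exact, approx, element_slices):
    """Max relative error on each boundary element plus the total bound.

    exact, approx: function values sampled along the whole boundary.
    element_slices: one slice per boundary element into those samples.
    """
    per_element = [np.max(np.abs(approx[s] - exact[s]) / np.abs(exact[s]))
                   for s in element_slices]
    return per_element, sum(per_element)

t = np.linspace(0.0, 1.0, 400, endpoint=False)   # boundary parameter
exact = 2.0 + np.cos(2 * np.pi * t)              # hypothetical boundary values
approx = exact + 0.01 * np.sin(6 * np.pi * t)    # modeled approximation
elems = [slice(i, i + 100) for i in range(0, 400, 100)]  # 4 elements
per_elem, total = element_error_bounds(exact, approx, elems)
print(per_elem, total)
```

Refining the discretization (more nodal points, smaller elements) shrinks the per-element maxima, which is how the bound serves as a convergence test and as a guide for where to add nodes.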
ERIC Educational Resources Information Center
Sun, Wei; And Others
1992-01-01
Identifies types and distributions of errors in text produced by optical character recognition (OCR) and proposes a process using machine learning techniques to recognize and correct errors in OCR texts. Results of experiments indicating that this strategy can reduce human interaction required for error correction are reported. (25 references)…
Multiple levels of bilingual language control: evidence from language intrusions in reading aloud.
Gollan, Tamar H; Schotter, Elizabeth R; Gomez, Joanne; Murillo, Mayra; Rayner, Keith
2014-02-01
Bilinguals rarely produce words in an unintended language. However, we induced such intrusion errors (e.g., saying el instead of he) in 32 Spanish-English bilinguals who read aloud single-language (English or Spanish) and mixed-language (haphazard mix of English and Spanish) paragraphs with English or Spanish word order. These bilinguals produced language intrusions almost exclusively in mixed-language paragraphs, and most often when attempting to produce dominant-language targets (accent-only errors also exhibited reversed language-dominance effects). Most intrusion errors occurred for function words, especially when they were not from the language that determined the word order in the paragraph. Eye movements showed that fixating a word in the nontarget language increased intrusion errors only for function words. Together, these results imply multiple mechanisms of language control, including (a) inhibition of the dominant language at both lexical and sublexical processing levels, (b) special retrieval mechanisms for function words in mixed-language utterances, and (c) attentional monitoring of the target word for its match with the intended language.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.
ERIC Educational Resources Information Center
Deutsch, Avital; Dank, Maya
2011-01-01
A common characteristic of subject-predicate agreement errors (usually termed attraction errors) in complex noun phrases is an asymmetrical pattern of error distribution, depending on the inflectional state of the nouns comprising the complex noun phrase. That is, attraction is most likely to occur when the head noun is the morphologically…
Kurzweil Reading Machine: A Partial Evaluation of Its Optical Character Recognition Error Rate.
ERIC Educational Resources Information Center
Goodrich, Gregory L.; And Others
1979-01-01
A study designed to assess the ability of the Kurzweil reading machine (a speech reading device for the visually handicapped) to read three different type styles produced by five different means indicated that the machines tested had different error rates depending upon the means of producing the copy and upon the type style used. (Author/CL)
Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Tian, Yudong
2011-01-01
Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMaP). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance for global assimilation of satellite-based precipitation data. Insights gained from these results, and how they could help with GPM, will be highlighted.
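The decomposition separates the total bias into a hit bias (both products rain), missed precipitation (the reference rains but the estimate is silent, contributing negatively) and false precipitation (the estimate rains over a dry reference), and the three components sum back to the total. A sketch over paired estimate/reference arrays (thresholds and units are illustrative assumptions):

```python
import numpy as np

def decompose_bias(est, ref):
    """Split the total bias (est - ref) into hit, missed, false components.

    hit:    both est > 0 and ref > 0 (bias among detected events)
    missed: ref > 0 but est == 0    (contributes negatively)
    false:  est > 0 but ref == 0    (contributes positively)
    """
    hit = np.sum((est - ref)[(est > 0) & (ref > 0)])
    missed = -np.sum(ref[(est == 0) & (ref > 0)])
    false = np.sum(est[(est > 0) & (ref == 0)])
    return hit, missed, false

est = np.array([2.0, 0.0, 1.5, 0.0, 3.0])   # satellite estimate (mm/h)
ref = np.array([1.0, 2.0, 1.5, 0.0, 0.0])   # reference (mm/h)
hit, missed, false_p = decompose_bias(est, ref)
total = np.sum(est - ref)
print(hit, missed, false_p, total)   # components sum to the total bias
```

Because missed and false precipitation enter with opposite signs, they can cancel in an aggregate bias even when both are large, which is the cancellation problem the decomposition is designed to expose.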
Localized landslide risk assessment with multi pass L band DInSAR analysis
NASA Astrophysics Data System (ADS)
Yun, HyeWon; Rack Kim, Jung; Lin, Shih-Yuan; Choi, YunSoo
2014-05-01
In terms of data availability and error correction, landslide forecasting by Differential Interferometric SAR (DInSAR) analysis is not an easy task. In particular, landslides caused by anthropogenic construction activities frequently occur on localized cut slopes in mountainous areas. In such circumstances it is difficult to attain sufficient accuracy, because external factors introduce error components into the electromagnetic wave propagation. For instance, local climate characteristics such as the orographic effect and proximity to water sources can produce significant anomalies in the water vapor distribution and consequently introduce error components into the InSAR phase measurements. Moreover, the high-altitude parts of a target area cause stratified tropospheric delay errors in the DInSAR measurements. The other obstacle to DInSAR observation over potential landslide sites is the vegetation canopy, which decorrelates the InSAR phase. Thus, rather than C-band sensors such as ENVISAT, ERS and RADARSAT, DInSAR analysis with the L-band ALOS PALSAR is preferable. Together with the introduction of L-band DInSAR analysis, improved DInSAR techniques are needed to cope with all of the above obstacles. We therefore employed two approaches: StaMPS/MTI (Stanford Method for Persistent Scatterers/Multi-Temporal InSAR; Hooper et al., 2007), which extracts reliable deformation values through time series analysis, and two-pass DInSAR with error-term compensation based on external weather information. Since water vapor observation from a spaceborne radiometer was not feasible here owing to the temporal gap, quantities from the Weather Research and Forecasting (WRF) model at 1 km spatial resolution were used to address the atmospheric phase error in the two-pass DInSAR analysis. 
We also observed that the base-DEM offset, combined with the time-dependent perpendicular baselines of the InSAR time series, produces a significant error even in advanced time series techniques such as StaMPS/MTI. We compensated for this algorithmically, together with the use of a high-resolution LIDAR DEM. The target area of this study is the eastern part of the Korean peninsula, where landslides caused by geomorphic factors such as steep topography and localized torrential downpours are a critical issue. The surface deformations from the error-corrected two-pass DInSAR and from StaMPS/MTI are cross-compared and validated against landslide triggering factors such as vegetation, slope and geological properties. The study will be extended to future SAR sensors by incorporating dynamic analysis of topography to implement a practical landslide forecasting scheme.
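A common first-order treatment of the stratified tropospheric delay mentioned above is to regress the unwrapped interferometric phase against terrain elevation and remove the fitted linear component. The sketch below uses synthetic data; the linear phase-elevation model is an assumption, distinct from the WRF-based correction used in the study:

```python
import numpy as np

def remove_stratified_delay(phase, elevation):
    """Fit phase = a*elevation + b over the scene and subtract it.

    Returns the corrected phase and the fitted elevation-dependent ramp.
    """
    a, b = np.polyfit(elevation.ravel(), phase.ravel(), 1)
    ramp = a * elevation + b
    return phase - ramp, ramp

rng = np.random.default_rng(3)
elev = rng.uniform(0.0, 1500.0, (64, 64))                    # synthetic DEM (m)
deform = 0.5 * np.exp(-((np.arange(64) - 32) ** 2) / 200.0)  # local signal (rad)
phase = 0.002 * elev + deform[None, :] + rng.normal(0, 0.05, (64, 64))
corrected, ramp = remove_stratified_delay(phase, elev)
# after correction, the phase no longer trends with elevation
print(np.corrcoef(corrected.ravel(), elev.ravel())[0, 1])
```

The obvious caveat is that this also removes any real deformation that happens to correlate with elevation, which is one reason a weather-model-based correction such as the WRF approach in the study can be preferable.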
Text familiarity, word frequency, and sentential constraints in error detection.
Pilotti, Maura; Chodorow, Martin; Schauss, Frances
2009-12-01
The present study examines whether the frequency of an error-bearing word and its predictability, arising from sentential constraints and text familiarity, either independently or jointly, would impair error detection by making proofreading driven by top-down processes. Prior to a proofreading task, participants were asked to read, copy, memorize, or paraphrase sentences, half of which contained errors. These tasks represented a continuum of progressively more demanding and time-consuming activities, which were thought to lead to comparable increases in text familiarity and thus predictability. Proofreading times were unaffected by whether the sentences had been encountered earlier. Proofreading was slower and less accurate for high-frequency words and for highly constrained sentences. Prior memorization produced divergent effects on accuracy depending on sentential constraints. The latter finding suggested that a substantial level of predictability, such as that produced by memorizing highly constrained sentences, can increase the probability of overlooking errors.
Predictability Experiments With the Navy Operational Global Atmospheric Prediction System
NASA Astrophysics Data System (ADS)
Reynolds, C. A.; Gelaro, R.; Rosmond, T. E.
2003-12-01
There are several areas of research in numerical weather prediction and atmospheric predictability, such as targeted observations and ensemble perturbation generation, where it is desirable to combine information about the uncertainty of the initial state with information about potential rapid perturbation growth. Singular vectors (SVs) provide a framework to accomplish this task in a mathematically rigorous and computationally feasible manner. In this study, SVs are calculated using the tangent and adjoint models of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The analysis error variance information produced by the NRL Atmospheric Variational Data Assimilation System is used as the initial-time SV norm. These VAR SVs are compared to SVs for which total energy is both the initial and final time norms (TE SVs). The incorporation of analysis error variance information has a significant impact on the structure and location of the SVs. This in turn has a significant impact on targeted observing applications. The utility and implications of such experiments in assessing the analysis error variance estimates will be explored. Computing support has been provided by the Department of Defense High Performance Computing Center at the Naval Oceanographic Office Major Shared Resource Center at Stennis, Mississippi.
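The initial-norm change can be made concrete in a low-dimensional sketch: with a tangent-linear propagator M, a total-energy final norm (identity) and an analysis-error covariance A as the initial norm, the leading singular vector maximizes ||Mx||^2 / (x^T A^{-1} x) and is obtained from the SVD of M A^{1/2}. The matrices below are random stand-ins, not NOGAPS operators:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
M = rng.normal(size=(n, n))              # stand-in tangent-linear propagator
B = rng.normal(size=(n, n))
A = B @ B.T + 0.1 * np.eye(n)            # stand-in analysis error covariance (SPD)

# Total-energy SVs: maximize ||Mx||^2 / ||x||^2
u_te, s_te, vt_te = np.linalg.svd(M)
sv_te = vt_te[0]

# Analysis-error-norm SVs: maximize ||Mx||^2 / (x^T A^{-1} x)
w, V = np.linalg.eigh(A)
A_half = V @ np.diag(np.sqrt(w)) @ V.T   # symmetric square root of A
u, s, vt = np.linalg.svd(M @ A_half)
sv_var = A_half @ vt[0]                  # leading VAR SV at initial time

# the achieved growth ratio of the VAR SV equals its leading singular value^2
ratio = (np.linalg.norm(M @ sv_var) ** 2
         / (sv_var @ np.linalg.solve(A, sv_var)))
print(np.isclose(ratio, s[0] ** 2))
```

Because the VAR SVs weight initial perturbations by how uncertain the analysis actually is, their structure and location can differ markedly from TE SVs, which is the effect the abstract reports for targeted observing.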
Shannon, Harlan E; Love, Patrick L
2005-12-01
Patients with epilepsy can have impaired cognitive abilities. Antiepileptic drugs (AEDs) may contribute to the cognitive deficits observed in patients with epilepsy, and have been shown to induce cognitive impairments in healthy individuals. However, there are few systematic data on the effects of AEDs on specific cognitive domains. We have previously evaluated a number of AEDs with respect to their effects on working memory. The purpose of the present study was to evaluate the effects of AEDs on attention as measured by five-choice serial reaction time behavior in nonepileptic rats. The GABA-related AEDs triazolam, phenobarbital, and chlordiazepoxide significantly disrupted performance by increasing errors of omission, whereas tiagabine, valproate, and gabapentin did not. The sodium channel blocker carbamazepine increased errors of omission at relatively high doses, whereas the sodium channel blockers phenytoin, topiramate, and lamotrigine were without significant effect. Levetiracetam had no effect on attention. The disruptions produced by triazolam, phenobarbital, chlordiazepoxide, and carbamazepine were similar in magnitude to the effects of the muscarinic cholinergic receptor antagonist scopolamine. The present results indicate that AEDs can disrupt attention, but there are differences among AEDs in the magnitude of the disruption in nonepileptic rats, with drugs that enhance GABA receptor function producing the most consistent disruption of attention.
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used: a versatile multi-function laser interferometer serves as the Observer, measuring the machine's error functions. A systematic error map of the machine's workspace is produced from these measurements, and the error map in turn drives the error correction strategy. The article proposes a new method of forming this correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.
Bier, Nathalie; Van Der Linden, Martial; Gagnon, Lise; Desrosiers, Johanne; Adam, Stephane; Louveaux, Stephanie; Saint-Mleux, Julie
2008-06-01
This study compared the efficacy of five learning methods in the acquisition of face-name associations in early dementia of Alzheimer type (AD). The contribution of error production and implicit memory to the efficacy of each method was also examined. Fifteen participants with early AD and 15 matched controls were exposed to five learning methods: spaced retrieval, vanishing cues, errorless, and two trial-and-error methods, one with explicit and one with implicit memory task instructions. Under each method, participants had to learn a list of five face-name associations, followed by free recall, cued recall and recognition. Delayed recall was also assessed. For AD, results showed that all methods were efficient but there were no significant differences between them. The number of errors produced during the learning phases varied between the five methods but did not influence learning. There were no significant differences between implicit and explicit memory task instructions on test performances. For the control group, there were no differences between the five methods. Finally, no significant correlations were found between the performance of the AD participants in free recall and their cognitive profile, but generally, the best performers had better remaining episodic memory. Also, case study analyses showed that spaced retrieval was the method for which the greatest number of participants (four) obtained results as good as the controls. This study suggests that the five methods are effective for new learning of face-name associations in AD. It appears that early AD patients can learn, even in the context of error production and explicit memory conditions.
NASA Technical Reports Server (NTRS)
Berg, Wesley; Avery, Susan K.
1995-01-01
Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the special sensor microwave/imager (SSM/I) for the period from July 1987 through December 1990. These monthly estimates are calibrated using data from a network of Pacific atoll rain gauges in order to account for systematic biases and are then compared with several visible and infrared satellite-based rainfall estimation techniques for the purpose of evaluating the performance of the microwave-based estimates. Although several key differences among the various techniques are observed, the general features of the monthly rainfall time series agree very well. Finally, the significant error sources contributing to uncertainties in the monthly estimates are examined and an estimate of the total error is produced. The sampling error characteristics are investigated using data from two SSM/I sensors and a detailed analysis of the characteristics of the diurnal cycle of rainfall over the oceans and its contribution to sampling errors in the monthly SSM/I estimates is made using geosynchronous satellite data. Based on the analysis of the sampling and other error sources the total error was estimated to be of the order of 30 to 50% of the monthly rainfall for estimates averaged over 2.5 deg x 2.5 deg latitude/longitude boxes, with a contribution due to diurnal variability of the order of 10%.
Parallel computers - Estimate errors caused by imprecise data
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne
1991-01-01
A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. The ideal solution would be a software device that makes the computer capable of computing the errors of arbitrary programs. The software engineering aspect of the problem is to describe such a device in software terms and then to provide the user with precise numbers accompanied by error estimates. The feasibility of a program capable of computing both a quantity and its error estimate over the range of possible measurement errors is demonstrated.
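As an illustrative sketch (not the paper's proposed software device), the idea of a program that returns both a computed quantity and an error estimate over the range of measurement errors can be approximated with first-order error propagation via numerical derivatives. The function and tolerances below are hypothetical examples.

```python
def error_estimate(f, xs, dxs, h=1e-6):
    """First-order estimate of the error of f(xs) caused by measurement
    errors dxs, using numerical partial derivatives:
    err ~ sum_i |df/dx_i| * dx_i."""
    y = f(xs)
    err = 0.0
    for i, dx in enumerate(dxs):
        bumped = list(xs)
        bumped[i] += h              # bump one input to estimate its partial derivative
        err += abs((f(bumped) - y) / h) * dx
    return y, err

# Hypothetical example: rectangle area from sides 2.0 +/- 0.1 and 3.0 +/- 0.2.
area, aerr = error_estimate(lambda v: v[0] * v[1], [2.0, 3.0], [0.1, 0.2])
```

For the product, the estimate reproduces the familiar propagation rule: 3.0 x 0.1 + 2.0 x 0.2 = 0.7.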
Error Sensitivity to Environmental Noise in Quantum Circuits for Chemical State Preparation.
Sawaya, Nicolas P D; Smelyanskiy, Mikhail; McClean, Jarrod R; Aspuru-Guzik, Alán
2016-07-12
Calculating molecular energies is likely to be one of the first useful applications to achieve quantum supremacy, performing faster on a quantum computer than on a classical one. However, if future quantum devices are to produce accurate calculations, errors due to environmental noise and algorithmic approximations need to be characterized and reduced. In this study, we use the high performance qHiPSTER software to investigate the effects of environmental noise on the preparation of quantum chemistry states. We simulated eighteen 16-qubit quantum circuits under environmental noise, each corresponding to a unitary coupled cluster state preparation of a different molecule or molecular configuration. Additionally, we analyze the nature of simple gate errors in noise-free circuits of up to 40 qubits. We find that, in most cases, the Jordan-Wigner (JW) encoding produces smaller errors under a noisy environment as compared to the Bravyi-Kitaev (BK) encoding. For the JW encoding, pure dephasing noise is shown to produce substantially smaller errors than pure relaxation noise of the same magnitude. We report error trends in both molecular energy and electron particle number within a unitary coupled cluster state preparation scheme, against changes in nuclear charge, bond length, number of electrons, noise types, and noise magnitude. These trends may prove to be useful in making algorithmic and hardware-related choices for quantum simulation of molecular energies.
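For readers unfamiliar with the encodings being compared, a minimal two-mode sketch of the Jordan-Wigner mapping (independent of qHiPSTER, which is not shown here) represents each fermionic annihilation operator as a Pauli Z string followed by a lowering operator, and one can check the canonical anticommutation relations directly.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^- lowering operator |0><1|

# Jordan-Wigner encoding of two fermionic modes on two qubits:
# a_0 acts locally; a_1 carries a Z string on the preceding qubit.
a0 = np.kron(sm, I2)
a1 = np.kron(Z, sm)

def anti(A, B):
    """Anticommutator {A, B}."""
    return A @ B + B @ A
```

The Z string is what makes distinct modes anticommute, at the price of nonlocal operators; the BK encoding trades this off differently.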
To Err Is Human; To Structurally Prime from Errors Is Also Human
ERIC Educational Resources Information Center
Slevc, L. Robert; Ferreira, Victor S.
2013-01-01
Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…
Explaining Errors in Children's Questions
ERIC Educational Resources Information Center
Rowland, Caroline F.
2007-01-01
The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
Factor Rotation and Standard Errors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.
2015-01-01
In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…
Grammatical Errors Produced by English Majors: The Translation Task
ERIC Educational Resources Information Center
Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad
2011-01-01
This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…
Optical linear algebra processors: noise and error-source modeling.
Casasent, D; Ghosh, A
1985-06-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
Optical linear algebra processors - Noise and error-source modeling
NASA Technical Reports Server (NTRS)
Casasent, D.; Ghosh, A.
1985-01-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
Speech errors of amnesic H.M.: unlike everyday slips-of-the-tongue.
MacKay, Donald G; James, Lori E; Hadley, Christopher B; Fogler, Kethera A
2011-03-01
Three language production studies indicate that amnesic H.M. produces speech errors unlike everyday slips-of-the-tongue. Study 1 was a naturalistic task: H.M. and six controls closely matched for age, education, background and IQ described what makes captioned cartoons funny. Nine judges rated the descriptions blind to speaker identity and gave reliably more negative ratings for coherence, vagueness, comprehensibility, grammaticality, and adequacy of humor-description for H.M. than the controls. Study 2 examined "major errors", a novel type of speech error that is uncorrected and reduces the coherence, grammaticality, accuracy and/or comprehensibility of an utterance. The results indicated that H.M. produced seven types of major errors reliably more often than controls: substitutions, omissions, additions, transpositions, reading errors, free associations, and accuracy errors. These results contradict recent claims that H.M. retains unconscious or implicit language abilities and produces spoken discourse that is "sophisticated," "intact" and "without major errors." Study 3 examined whether three classical types of errors (omissions, additions, and substitutions of words and phrases) differed for H.M. versus controls in basic nature and relative frequency by error type. The results indicated that omissions, and especially multi-word omissions, were relatively more common for H.M. than the controls; and substitutions violated the syntactic class regularity (whereby, e.g., nouns substitute with nouns but not verbs) relatively more often for H.M. than the controls. These results suggest that H.M.'s medial temporal lobe damage impaired his ability to rapidly form new connections between units in the cortex, a process necessary to form complete and coherent internal representations for novel sentence-level plans. 
In short, different brain mechanisms underlie H.M.'s major errors (which reflect incomplete and incoherent sentence-level plans) versus everyday slips-of-the-tongue (which reflect errors in activating pre-planned units in fully intact sentence-level plans). Implications of the results of Studies 1-3 are discussed for systems theory, binding theory and relational memory theories.
A constrained-gradient method to control divergence errors in numerical MHD
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2016-10-01
In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence-free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
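The CG scheme itself requires the reconstruction machinery of the hydro code, but the basic projection idea it builds on can be sketched in a few lines: solve ∇²φ = ∇·B and subtract ∇φ, leaving a divergence-free field. The periodic spectral version below is a hedged illustration of that projection step, not the GIZMO implementation.

```python
import numpy as np

def spectral_div(Bx, By, kx, ky):
    """Divergence via spectral derivatives on a periodic grid."""
    return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(Bx) + 1j * ky * np.fft.fft2(By)))

n = 64
k = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers on [0, 2*pi)
kx, ky = np.meshgrid(k, k, indexing="ij")

x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
Bx = np.sin(X) + np.cos(Y)                   # a toy field with nonzero divergence
By = np.cos(X) * np.sin(Y)

# Projection step: solve laplacian(phi) = div(B), then subtract grad(phi).
div_hat = 1j * kx * np.fft.fft2(Bx) + 1j * ky * np.fft.fft2(By)
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                               # avoid 0/0; the mean mode has zero divergence
phi_hat = -div_hat / k2                      # since -(k^2) phi_hat = div_hat
Bx_c = Bx - np.real(np.fft.ifft2(1j * kx * phi_hat))
By_c = By - np.real(np.fft.ifft2(1j * ky * phi_hat))

div_before = spectral_div(Bx, By, kx, ky)
div_after = spectral_div(Bx_c, By_c, kx, ky)
```

In this idealized periodic setting the projection removes the divergence to machine precision; the difficulty addressed by CT and CG schemes is doing something comparable within a general, non-spectral discretization.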
Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bautista-Gomez, Leonardo; Cappello, Franck
Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% of coverage against data corruption.
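A hedged sketch of the general idea of application-level detection from dataset behavior (not the authors' detectors): for smooth simulation data, each point is well predicted by its neighbors, so a silently corrupted value shows up as an abnormally large prediction residual. The dataset and threshold below are illustrative assumptions.

```python
import numpy as np

def detect_corruption(data, tol=5.0):
    """Flag points whose deviation from a linear prediction based on their
    neighbors greatly exceeds the typical point-to-point change."""
    pred = (data[:-2] + data[2:]) / 2.0            # predict each interior point
    resid = np.abs(data[1:-1] - pred)
    scale = np.median(np.abs(np.diff(data))) + 1e-12
    return np.where(resid > tol * scale)[0] + 1    # indices into `data`

t = np.linspace(0.0, 1.0, 200)
field = np.sin(2.0 * np.pi * t)                    # a smooth "application dataset"
field[77] += 0.5                                   # inject a silent corruption
hits = detect_corruption(field)
```

The corrupted point (and, with this simple predictor, its immediate neighbors) stands out; a production detector would combine several such signals transparently, as the paper describes.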
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
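The anti-conservatism of the t-as-z approach is easy to reproduce by simulation: under the null, thresholding small-sample t statistics at the normal critical value rejects more than 5% of the time. A minimal one-sample sketch (not the paper's lme4 simulations):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 8, 20000

# Simulate one-sample t statistics under the null hypothesis (true mean 0).
x = rng.standard_normal((reps, n))
t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))

# Treating t as z uses the normal critical value 1.96 instead of the
# t critical value for df = 7 (about 2.365), so it rejects too often.
reject_z = np.mean(np.abs(t) > 1.96)
reject_t = np.mean(np.abs(t) > 2.365)
```

With the correct reference distribution the empirical Type 1 error rate is close to .05; with the normal approximation it is noticeably inflated, which is the small-sample effect the Kenward-Roger and Satterthwaite approximations are designed to fix in the mixed-model setting.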
A post-processing algorithm for time domain pitch trackers
NASA Astrophysics Data System (ADS)
Specker, P.
1983-01-01
This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
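A hedged sketch of the second-pass idea: detect incorrect pitch values as outliers against the distribution of values in a sliding window. Here a window median stands in for the full distribution test, and outliers are simply replaced rather than reconstructed from the waveform as in the third pass; all values are hypothetical.

```python
import statistics

def remove_pitch_outliers(pitch, win=5, tol=0.25):
    """Replace pitch values that deviate from a sliding-window median
    by more than a fractional tolerance (e.g. octave errors)."""
    cleaned = []
    for i, p in enumerate(pitch):
        lo = max(0, i - win // 2)
        med = statistics.median(pitch[lo:lo + win])
        cleaned.append(med if abs(p - med) > tol * med else p)
    return cleaned

# Hypothetical pitch track (Hz) with octave-style errors at 240 and 60.
track = [120, 121, 240, 122, 119, 121, 60, 120]
clean = remove_pitch_outliers(track)
```

Octave doubling and halving errors, the classic failure mode of time-domain trackers, are exactly the values such a window test catches.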
Kim, Eunji; Ivanov, Ivan; Hua, Jianping; Lampe, Johanna W; Hullar, Meredith Aj; Chapkin, Robert S; Dougherty, Edward R
2017-01-01
Ranking feature sets for phenotype classification based on gene expression is a challenging issue in cancer bioinformatics. When the number of samples is small, all feature selection algorithms are known to be unreliable, producing significant error, and error estimators suffer from different degrees of imprecision. The problem is compounded by the fact that the accuracy of classification depends on the manner in which the phenomena are transformed into data by the measurement technology. Because next-generation sequencing technologies amount to a nonlinear transformation of the actual gene or RNA concentrations, they can potentially produce less discriminative data relative to the actual gene expression levels. In this study, we compare the performance of ranking feature sets derived from a model of RNA-Seq data with that of a multivariate normal model of gene concentrations using 3 measures: (1) ranking power, (2) length of extensions, and (3) Bayes features. This model-based study examines the effectiveness of reporting lists of small feature sets using RNA-Seq data and the effects of different model parameters and error estimators. The results demonstrate that the general trends of the parameter effects on the ranking power of the underlying gene concentrations are preserved in the RNA-Seq data, whereas the power of finding a good feature set becomes weaker when gene concentrations are transformed by the sequencing machine.
Quantifying and correcting motion artifacts in MRI
NASA Astrophysics Data System (ADS)
Bones, Philip J.; Maclaren, Julian R.; Millane, Rick P.; Watts, Richard
2006-08-01
Patient motion during magnetic resonance imaging (MRI) can produce significant artifacts in a reconstructed image. Since measurements are made in the spatial frequency domain ('k-space'), rigid-body translational motion results in phase errors in the data samples while rotation causes location errors. A method is presented to detect and correct these errors via a modified sampling strategy, thereby achieving more accurate image reconstruction. The strategy involves sampling vertical and horizontal strips alternately in k-space and employs phase correlation within the overlapping segments to estimate translational motion. An extension, also based on correlation, is employed to estimate rotational motion. Results from simulations with computer-generated phantoms suggest that the algorithm is robust up to realistic noise levels. The work is being extended to physical phantoms. Provided that a reference image is available and the object is of limited extent, it is shown that a measure related to the amount of energy outside the support can be used to objectively compare the severity of motion-induced artifacts.
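The phase-correlation step for translational motion can be sketched directly: the inverse FFT of the normalized cross-power spectrum of two images peaks at their relative shift. This is a generic illustration of the principle used within the overlapping segments, not the modified k-space sampling strategy itself.

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) translation of image `a` relative to `b`
    by phase correlation: the inverse FFT of the normalized cross-power
    spectrum peaks at the shift."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12           # keep phase information only
    peak = np.unravel_index(np.argmax(np.fft.ifft2(cross).real), a.shape)
    # Map peak coordinates back to signed shifts.
    return tuple(int(s) if s <= n // 2 else int(s) - n for s, n in zip(peak, a.shape))

img = np.zeros((32, 32))
img[8:14, 10:18] = 1.0                       # a toy "image"
shifted = np.roll(img, (3, -2), axis=(0, 1)) # simulate rigid-body translation
dy, dx = phase_correlate(shifted, img)
```

Because rigid translation only changes the phase of the k-space data, the normalized spectrum is (ideally) a pure phase ramp whose inverse transform is a delta at the shift; this is what makes the method robust to realistic noise levels.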
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we show that model averaging methods such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to business and economic empirical data sets for Malaysia: the Gross Domestic Product (GDP) growth rate, the Consumer Price Index (CPI), and the Average Lending Rate (ALR).
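The general motivation for combining forecasts (though not the paper's specific forecast weight averaging procedure) can be illustrated in a few lines: averaging two unbiased forecasts with independent errors of equal variance roughly halves the mean squared forecast error relative to either forecast alone.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.standard_normal(500)             # the series to be forecast

# Two unbiased forecasts with independent, equal-variance errors.
f1 = truth + rng.standard_normal(500)
f2 = truth + rng.standard_normal(500)

def mse(f):
    """Mean squared forecast error against the realized series."""
    return float(np.mean((f - truth) ** 2))

mse_avg = mse((f1 + f2) / 2.0)               # equal-weight combined forecast
```

Averaging the errors divides their variance by two when they are independent; combination schemes such as the paper's differ in how the weights themselves are chosen and combined.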
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
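The burst and gap statistics in the first objective amount to run-length statistics over a per-byte error-flag sequence; a minimal sketch with a made-up flag sequence:

```python
def burst_gap_stats(flags):
    """Run-length statistics over a per-byte error-flag sequence:
    1 = byte in error, 0 = good byte. Returns (burst lengths, gap lengths)."""
    bursts, gaps = [], []
    run_val, run_len = flags[0], 1
    for f in flags[1:]:
        if f == run_val:
            run_len += 1
        else:
            (bursts if run_val else gaps).append(run_len)
            run_val, run_len = f, 1
    (bursts if run_val else gaps).append(run_len)   # close the final run
    return bursts, gaps

# Hypothetical read-channel error flags: bursts of lengths 3 and 1,
# separated by good-data gaps of lengths 2, 4 and 1.
flags = [0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0]
bursts, gaps = burst_gap_stats(flags)
```

Histograms of such burst and gap lengths are exactly what an interleaved code's decoder performance model consumes.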
Flavour and identification threshold detection overview of Slovak adepts for certified testing.
Vietoris, VladimIr; Barborova, Petra; Jancovicova, Jana; Eliasova, Lucia; Karvaj, Marian
2016-07-01
During the certification process for sensory assessors by the Slovak certification body, we obtained results on basic taste thresholds and lifestyle habits. 500 adults with a food industry background were screened during the experiment. For analysis of basic and non-basic tastes, we used the standardized procedure of ISO 8586-1:1993. In the flavour test experiment, the 26-35 y.o. group produced the lowest error ratio (1.438) and the 56+ y.o. group the highest (2.0). The average error value was 1.510 for women, compared with 1.477 for men. People with allergies had an average error ratio of 1.437, compared with 1.511 for people without allergies. Non-smokers produced fewer errors (1.484) than smokers (1.576). A further flavour threshold identification test detected differences among age groups (values increase with age). The error rate for metallic taste was similar for men (24%) and women (22%), whereas men made more errors than women in salty taste (19% vs. 10%). The analysis also detected some differences between the allergic/non-allergic and smoker/non-smoker groups.
Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro
2011-01-01
During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of these final errors triggers error-correction processes that eventually result in sensori-motor compensation opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, exposure to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649
Improving vertebra segmentation through joint vertebra-rib atlases
NASA Astrophysics Data System (ADS)
Wang, Yinong; Yao, Jianhua; Roth, Holger R.; Burns, Joseph E.; Summers, Ronald M.
2016-03-01
Accurate spine segmentation allows for improved identification and quantitative characterization of abnormalities of the vertebra, such as vertebral fractures. However, in existing automated vertebra segmentation methods on computed tomography (CT) images, leakage into nearby bones such as ribs occurs due to the close proximity of these visibly intense structures in a 3D CT volume. To reduce this error, we propose the use of joint vertebra-rib atlases to improve the segmentation of vertebrae via multi-atlas joint label fusion. Segmentation was performed and evaluated on CTs containing 106 thoracic and lumbar vertebrae from 10 pathological and traumatic spine patients on an individual vertebra level basis. Vertebra atlases produced errors where the segmentation leaked into the ribs. The use of joint vertebra-rib atlases produced a statistically significant increase in the Dice coefficient from 92.5 +/- 3.1% to 93.8 +/- 2.1% for the left and right transverse processes and a decrease in the mean and max surface distance from 0.75 +/- 0.60mm and 8.63 +/- 4.44mm to 0.30 +/- 0.27mm and 3.65 +/- 2.87mm, respectively.
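The Dice coefficient used for evaluation compares two binary segmentation masks; a minimal sketch on toy 2D masks (not the vertebra CT data):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = int(a.sum()) + int(b.sum())
    return 2.0 * int(np.logical_and(a, b).sum()) / denom if denom else 1.0

# Toy masks: a 4x4 "segmentation" overlapping a vertically shifted 4x4 "reference".
seg = np.zeros((8, 8), dtype=int); seg[2:6, 2:6] = 1
ref = np.zeros((8, 8), dtype=int); ref[3:7, 2:6] = 1
score = dice(seg, ref)
```

Here 12 of 16 pixels overlap in each mask, giving 2 x 12 / 32 = 0.75; the paper reports the same measure per vertebra on 3D CT volumes.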
Demonstration to characterize watershed runoff potential by microwave techniques
NASA Technical Reports Server (NTRS)
Blanchard, B. J.
1977-01-01
Characteristics such as storage capacity of the soil, volume of storage in vegetative matter, and volume of storage available in local depressions are expressed in empirical watershed runoff equations as one or more coefficients. Conventional techniques for estimating coefficients representing the spatial distribution of these characteristics over a watershed drainage area are subjective and produce significant errors. Characteristics of the watershed surface are described as a single coefficient called the curve number.
NASA Technical Reports Server (NTRS)
Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.
1991-01-01
The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.
Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.
1993-01-01
This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non- safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.
Collaborative recall of details of an emotional film.
Wessel, Ineke; Zandstra, Anna Roos E; Hengeveld, Hester M E; Moulds, Michelle L
2015-01-01
Collaborative inhibition refers to the phenomenon that when several people work together to produce a single memory report, they typically produce fewer items than when the unique items in the individual reports of the same number of participants are combined (i.e., nominal recall). Yet, apart from this negative effect, collaboration may be beneficial in that group members remove errors from a collaborative report. Collaborative inhibition studies on memory for emotional stimuli are scarce. Therefore, the present study examined both collaborative inhibition and collaborative error reduction in the recall of the details of emotional material in a laboratory setting. Female undergraduates (n = 111) viewed a film clip of a fatal accident and subsequently engaged in either collaborative (n = 57) or individual recall (n = 54) in groups of three. The results show that, across several detail categories, collaborating groups recalled fewer details than nominal groups. However, overall, nominal recall produced more errors than collaborative recall. The present results extend earlier findings on both collaborative inhibition and error reduction to the recall of affectively laden material. These findings may have implications for the applied fields of forensic and clinical psychology.
NASA Astrophysics Data System (ADS)
Simmons, B. E.
1981-08-01
This report derives equations predicting satellite ephemeris error as a function of measurement errors of space-surveillance sensors. These equations lend themselves to rapid computation with modest computer resources. They are applicable over prediction times such that measurement errors, rather than uncertainties of atmospheric drag and of Earth shape, dominate in producing ephemeris error. This report describes the specialization of these equations underlying the ANSER computer program, SEEM (Satellite Ephemeris Error Model). The intent is that this report be of utility to users of SEEM for interpretive purposes, and to computer programmers who may need a mathematical point of departure for limited generalization of SEEM.
Probabilistic Analysis of Pattern Formation in Monotonic Self-Assembly
Moore, Tyler G.; Garzon, Max H.; Deaton, Russell J.
2015-01-01
Inspired by biological systems, self-assembly aims to construct complex structures. It functions through piece-wise, local interactions among component parts and has the potential to produce novel materials and devices at the nanoscale. Algorithmic self-assembly models the product of self-assembly as the output of some computational process, and attempts to control the process of assembly algorithmically. Though providing fundamental insights, these computational models have yet to fully account for the randomness that is inherent in experimental realizations, which tend to be based on trial and error methods. In order to develop a method of analysis that addresses experimental parameters, such as error and yield, this work focuses on the capability of assembly systems to produce a pre-determined set of target patterns, either accurately or perhaps only approximately. Self-assembly systems that assemble patterns that are similar to the targets in a significant percentage are “strong” assemblers. In addition, assemblers should predominantly produce target patterns, with a small percentage of errors or junk. These definitions approximate notions of yield and purity in chemistry and manufacturing. By combining these definitions, a criterion for efficient assembly is developed that can be used to compare the ability of different assembly systems to produce a given target set. Efficiency is a composite measure of the accuracy and purity of an assembler. Typical examples in algorithmic assembly are assessed in the context of these metrics. In addition to validating the method, they also provide some insight that might be used to guide experimentation. 
Finally, some general results are established that, for efficient assembly, imply that every target pattern is guaranteed to be assembled with a minimum common positive probability, regardless of its size, and that a trichotomy exists to characterize the global behavior of typical efficient, monotonic self-assembly systems in the literature. PMID:26421616
Fine-resolution imaging of solar features using Phase-Diverse Speckle
NASA Technical Reports Server (NTRS)
Paxman, Richard G.
1995-01-01
Phase-diverse speckle (PDS) is a novel imaging technique intended to overcome the degrading effects of atmospheric turbulence on fine-resolution imaging. As its name suggests, PDS is a blend of phase-diversity and speckle-imaging concepts. PDS reconstructions on solar data were validated by simulation, by demonstrating internal consistency of PDS estimates, and by comparing PDS reconstructions with those produced from well accepted speckle-imaging processing. Several sources of error in data collected with the Swedish Vacuum Solar Telescope (SVST) were simulated: CCD noise, quantization error, image misalignment, and defocus error, as well as atmospheric turbulence model error. The simulations demonstrate that fine-resolution information can be reliably recovered out to at least 70% of the diffraction limit without significant introduction of image artifacts. Additional confidence in the SVST restoration is obtained by comparing its spatial power spectrum with previously-published power spectra derived from both space-based images and earth-based images corrected with traditional speckle-imaging techniques; the shape of the spectrum is found to match well the previous measurements. In addition, the imagery is found to be consistent with, but slightly sharper than, imagery reconstructed with accepted speckle-imaging techniques.
A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G
2015-02-01
Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.
Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Cappello, Franck
Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor always guarantees compression errors within the user-specified error bounds. Most importantly, our optimization improves the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
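The XOR-leading-zero idea in this abstract can be illustrated with a small sketch (my own illustration, not the authors' code): two IEEE-754 doubles that are numerically close share a long prefix of identical bits, and the length of that shared prefix is what the shifting-offset optimization tries to maximize.

```python
import struct

def leading_zero_bits(a: float, b: float) -> int:
    """Number of identical leading bits shared by two IEEE-754 doubles.

    XOR the raw 64-bit patterns; the leading zeros of the XOR give the
    shared prefix, which determines how few bits the second value needs.
    """
    ia = struct.unpack("<Q", struct.pack("<d", a))[0]
    ib = struct.unpack("<Q", struct.pack("<d", b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

# Close values share a long prefix; a sign flip destroys it entirely.
print(leading_zero_bits(1.0000001, 1.0000002))  # large shared prefix
print(leading_zero_bits(1.0, -1.0))             # sign bit differs: 0
```

In a compressor of the kind described above, maximizing this count across consecutive unpredictable points lets their XOR residuals be stored in fewer bits.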
Meurier, C E
2000-07-01
Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.
Influence of Pedometer Position on Pedometer Accuracy at Various Walking Speeds: A Comparative Study
Lovis, Christian
2016-01-01
Background Demographic growth in conjunction with the rise of chronic diseases is increasing the pressure on health care systems in most OECD countries. Physical activity is known to be an essential factor in improving or maintaining good health. Walking is especially recommended, as it is an activity that can easily be performed by most people without constraints. Pedometers have been extensively used as an incentive to motivate people to become more active. However, a recognized problem with these devices is their diminishing accuracy associated with decreased walking speed. The arrival on the consumer market of new devices, worn indifferently either at the waist, wrist, or as a necklace, gives rise to new questions regarding their accuracy at these different positions. Objective Our objective was to assess the performance of 4 pedometers (iHealth activity monitor, Withings Pulse O2, Misfit Shine, and Garmin vívofit) and compare their accuracy according to their position worn, and at various walking speeds. Methods We conducted this study in a controlled environment with 21 healthy adults required to walk 100 m at 3 different paces (0.4 m/s, 0.6 m/s, and 0.8 m/s) regulated by means of a string attached between their legs at the level of their ankles and a metronome ticking the cadence. To obtain baseline values, we asked the participants to walk 200 m at their own pace. Results A decrease of accuracy was positively correlated with reduced speed for all pedometers (12% mean error at self-selected pace, 27% mean error at 0.8 m/s, 52% mean error at 0.6 m/s, and 76% mean error at 0.4 m/s). 
Although the position of the pedometer on the person did not significantly influence its accuracy, some interesting tendencies can be highlighted in 2 settings: (1) positioning the pedometer at the waist at a speed greater than 0.8 m/s or as a necklace at preferred speed tended to produce lower mean errors than at the wrist position; and (2) at a slow speed (0.4 m/s), pedometers worn at the wrist tended to produce a lower mean error than in the other positions. Conclusions At all positions, all tested pedometers generated significant errors at slow speeds and therefore cannot be used reliably to evaluate the amount of physical activity for people walking slower than 0.6 m/s (2.16 km/h, or 1.24 mph). At slow speeds, the better accuracy observed with pedometers worn at the wrist could constitute a valuable line of inquiry for the future development of devices adapted to elderly people. PMID:27713114
Tokuda, Yasuharu; Kishida, Naoki; Konishi, Ryota; Koizumi, Shunzo
2011-03-01
Cognitive errors in the course of clinical decision-making are prevalent in many cases of medical injury. We used information on the verdicts' judgments from closed claims files to determine the important cognitive factors associated with cases of medical injury. Data were collected from claims closed between 2001 and 2005 at district courts in Tokyo and Osaka, Japan. In each case, we recorded all the contributory cognitive, systemic, and patient-related factors judged in the verdicts to be causally related to the medical injury. We also analyzed the association between cognitive factors and cases involving paid compensation using a multivariable logistic regression model. Among 274 cases (mean age 49 years; 45% women), there were 122 (45%) deaths and 67 (24%) major injuries (incomplete recovery within a year). In 103 cases (38%), the verdicts ordered hospitals to pay compensation (median, 8,000,000 Japanese yen). An error in judgment (199/274, 73%) and failure of vigilance (177/274, 65%) were the most prevalent causative cognitive factors, and error in judgment was also significantly associated with paid compensation (odds ratio, 1.9; 95% confidence interval [CI], 1.0-3.4). Systemic causative factors, including poor teamwork (11/274, 4%) and technology failure (5/274, 2%), were less common. The closed claims analysis based on the verdicts' judgments showed that cognitive errors were common in cases of medical injury, with an error in judgment being most prevalent and closely associated with compensation payment. Reduction of this type of error is required to produce safer healthcare. 2010 Society of Hospital Medicine.
Porter, Teresita M.; Golding, G. Brian
2012-01-01
Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: 1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); 2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and 3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross-validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC runs significantly faster than the other tested methods. All methods performed poorly with the shortest 50–100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short-read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys. PMID:22558215
Effect of bird maneuver on frequency-domain helicopter EM response
Fitterman, D.V.; Yin, C.
2004-01-01
Bird maneuver, the rotation of the coil-carrying instrument pod used for frequency-domain helicopter electromagnetic surveys, changes the nominal geometric relationship between the bird-coil system and the ground. These changes affect electromagnetic coupling and can introduce errors in helicopter electromagnetic (HEM) data. We analyze these effects for a layered half-space for three coil configurations: vertical coaxial, vertical coplanar, and horizontal coplanar. Maneuver effect is shown to have two components: one that is purely geometric and another that is inductive in nature. The geometric component is significantly larger. A correction procedure is developed using an iterative approach built on standard HEM inversion routines. The maneuver effect correction reduces inversion misfit error and produces laterally smoother cross sections than obtained from uncorrected data. © 2004 Society of Exploration Geophysicists. All rights reserved.
Waring, George O
2009-10-01
To describe recent technological additions to the NIDEK CXIII and Quest excimer lasers. A summary article with data from previously published studies outlining the benefits of newer technology. The addition of a 1-kHz infrared eye tracker decreased the spread of laser spot placement from a mean of 228.79 microm without a tracker to 38.47 microm with the eye tracker. The addition of real-time torsion error correction produced a statistically significantly lower cylinder dispersion, mean manifest refractive cylinder, and error of angle postoperatively in eyes that underwent LASIK. The incorporation of an ultrahigh-speed eye tracker and active cyclotorsion correction surpasses the minimal technology criteria required for accurate wavefront-based ablations. Copyright 2009, SLACK Incorporated.
Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery
NASA Astrophysics Data System (ADS)
Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin
2018-04-01
ZY-3, launched in 2012, is China's first civilian high-resolution stereo mapping satellite. This paper analyzed the positioning errors of ZY-3 satellite imagery and applied different correction models to improve geo-positioning accuracy, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results revealed that systematic errors exist in the ZY-3 attitude observations and that positioning accuracy can be improved after attitude correction with the aid of ground control. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced more consistent improvements than the linear correction model when only limited ground control points are available for a single scene.
ERIC Educational Resources Information Center
Stokes, Stephanie F.; Lau, Jessica Tse-Kay; Ciocca, Valter
2002-01-01
This study examined the interaction of ambient frequency and feature complexity in the diphthong errors produced by 13 Cantonese-speaking children with phonological disorders. Perceptual analysis of 611 diphthongs identified those most frequently and least frequently in error. Suggested treatment guidelines include consideration of three factors:…
Linzer, Mark; Poplau, Sara; Brown, Roger; Grossman, Ellie; Varkey, Anita; Yale, Steven; Williams, Eric S; Hicks, Lanis; Wallock, Jill; Kohnhorst, Diane; Barbouche, Michael
2017-01-01
While primary care work conditions are associated with adverse clinician outcomes, little is known about the effect of work condition interventions on quality or safety. A cluster randomized controlled trial of 34 clinics in the upper Midwest and New York City. Primary care clinicians and their diabetic and hypertensive patients. Quality improvement projects to improve communication between providers, workflow design, and chronic disease management. Intervention clinics received brief summaries of their clinician and patient outcome data at baseline. We measured work conditions and clinician and patient outcomes both at baseline and 6-12 months post-intervention. Multilevel regression analyses assessed the impact of work condition changes on outcomes. Subgroup analyses assessed impact by intervention category. There were no significant differences in error reduction (19 % vs. 11 %, OR of improvement 1.84, 95 % CI 0.70, 4.82, p = 0.21) or quality of care improvement (19 % improved vs. 44 %, OR 0.62, 95 % CI 0.58, 1.21, p = 0.42) between intervention and control clinics. The conceptual model linking work conditions, provider outcomes, and error reduction showed significant relationships between work conditions and provider outcomes (p ≤ 0.001) and a trend toward a reduced error rate in providers with lower burnout (OR 1.44, 95 % CI 0.94, 2.23, p = 0.09). Few quality metrics, short time span, fewer clinicians recruited than anticipated. Work-life interventions improving clinician satisfaction and well-being do not necessarily reduce errors or improve quality. Longer, more focused interventions may be needed to produce meaningful improvements in patient care. ClinicalTrials.gov # NCT02542995.
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
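The profile-comparison step can be sketched as follows. This is a toy illustration under my own assumptions (brute-force cross-correlation of two hypothetical range profiles), not the patented correction method itself:

```python
def estimate_range_shift(profile_a, profile_b):
    """Estimate the integer sample lag that best aligns profile_b to profile_a.

    Comparing range profiles across the slow-time dimension this way
    exposes uncompensated motion as a shift in the dominant response.
    """
    n = len(profile_a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        # Cross-correlation value at this lag, with implicit zero padding.
        val = sum(profile_b[i + lag] * profile_a[i]
                  for i in range(n) if 0 <= i + lag < n)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag
```

Once the shift (and hence the motion error) is estimated, the frequency and phase corrections described in the abstract would be applied to the uncompressed input data before range and azimuth compression.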
Peripheral Vision Can Influence Eye Growth and Refractive Development in Infant Monkeys
Smith, Earl L.; Kee, Chea-su; Ramamirtham, Ramkumar; Qiao-Grider, Ying; Hung, Li-Fang
2006-01-01
PURPOSE Given the prominence of central vision in humans, it has been assumed that visual signals from the fovea dominate emmetropization. The purpose of this study was to examine the impact of peripheral vision on emmetropization. METHODS Bilateral, peripheral form deprivation was produced in 12 infant monkeys by rearing them with diffusers that had either 4- or 8-mm apertures centered on the pupils of each eye, to allow 24° or 37° of unrestricted central vision, respectively. At the end of the lens-rearing period, an argon laser was used to ablate the fovea in one eye of each of seven monkeys. Subsequently, all the animals were allowed unrestricted vision. Refractive error and axial dimensions were measured along the pupillary axis by retinoscopy and A-scan ultrasonography, respectively. Control data were obtained from 21 normal monkeys and 3 infants reared with binocular plano lenses. RESULTS Nine of the 12 treated monkeys had refractive errors that fell outside the 10th- and 90th-percentile limits for the age-matched control subjects, and the average refractive error for the treated animals was more variable and significantly less hyperopic/more myopic (+0.03 ± 2.39 D vs. +2.39 ± 0.92 D). The refractive changes were symmetric in the two eyes of a given animal and axial in nature. After lens removal, all the treated monkeys recovered from the induced refractive errors. No interocular differences in the recovery process were observed in the animals with monocular foveal lesions. CONCLUSIONS On the one hand, the peripheral retina can contribute to emmetropizing responses and to ametropias produced by an abnormal visual experience. On the other hand, unrestricted central vision is not sufficient to ensure normal refractive development, and the fovea is not essential for emmetropizing responses. PMID:16249469
Trends in the production of scientific data analysis resources.
Hennessey, Jason; Georgescu, Constantin; Wren, Jonathan D
2014-01-01
As the amount of scientific data grows, peer-reviewed Scientific Data Analysis Resources (SDARs) such as published software programs, databases and web servers have had a strong impact on the productivity of scientific research. SDARs are typically linked to via an Internet URL, and URLs have been shown to decay in a time-dependent fashion. What is less clear is whether or not SDAR-producing group size or prior experience in SDAR production correlates with SDAR persistence, or whether certain institutions or regions account for a disproportionate number of peer-reviewed resources. We first quantified the current availability of over 26,000 unique URLs published in MEDLINE abstracts/titles over the past 20 years, then extracted authorship, institutional and ZIP code data. We estimated which URLs were SDARs by using keyword proximity analysis. We identified 23,820 non-archival URLs produced between 1996 and 2013, out of which 11,977 were classified as SDARs. Production of SDARs as measured with the Gini coefficient is more widely distributed among institutions (.62) and ZIP codes (.65) than scientific research in general, which tends to be disproportionately clustered within elite institutions (.91) and ZIPs (.96). An estimated one percent of institutions produced 68% of published research, whereas the top 1% only accounted for 16% of SDARs. Some labs produced many SDARs (maximum detected = 64), but 74% of SDAR-producing authors have only published one SDAR. Interestingly, decayed SDARs have significantly fewer average authors (4.33 ± 3.06) than available SDARs (4.88 ± 3.59) (p < 8.32 × 10⁻⁴). Approximately 3.4% of URLs, as published, contain errors in their entry/format, including DOIs and links to clinical trials registry numbers. SDAR production is less dependent upon institutional location and resources, and SDAR online persistence does not seem to be a function of infrastructure or expertise.
Yet, SDAR team size correlates positively with SDAR accessibility, suggesting a possible sociological factor involved. While a detectable URL entry error rate of 3.4% is relatively low, it raises the question of whether or not this is a general error rate that extends to additional published entities.
Skill assessment of Korea operational oceanographic system (KOOS)
NASA Astrophysics Data System (ADS)
Kim, J.; Park, K.
2016-02-01
For ocean forecasting in Korea, the Korea operational oceanographic system (KOOS) has been developed and pre-operated since 2009 by the Korea institute of ocean science and technology (KIOST), funded by the Korean government. KOOS provides real-time information and forecasts of marine environmental conditions in order to support all kinds of activities at sea. Furthermore, a more significant purpose of KOOS is to support responses to maritime problems and accidents such as oil spills, red tides, shipwrecks, extraordinary waves, coastal inundation, and so on. Accordingly, it is essential to evaluate prediction accuracy and to work to improve it. The forecast accuracy should meet or exceed target benchmarks before its products are approved for release to the public. In this paper, we quantify forecast errors using skill assessment techniques in order to judge KOOS performance. Skill assessment statistics include measures of error and correlation such as root-mean-square error (RMSE), mean bias (MB), correlation coefficient (R), and index of agreement (IOA), as well as the frequency with which errors lie within specified limits, termed the central frequency (CF). KOOS provides 72-hour daily forecast data such as air pressure, wind, water elevation, currents, waves, water temperature, and salinity produced by the meteorological and hydrodynamic numerical models WRF, ROMS, MOM5, WAM, WW3, and MOHID. The skill assessment was performed by comparing model results with in-situ observation data (Figure 1) for the period from 1 July 2010 to 31 March 2015 (Table 1), and model errors were quantified with skill scores and CF determined by acceptance criteria depending on the predicted variable (Table 2).
Moreover, we conducted a quantitative evaluation of the spatio-temporal pattern correlation between numerical models and observational data such as sea surface temperature (SST) and sea surface currents, produced by satellite ocean sensors and high-frequency (HF) radar, respectively. These quantified errors allow an objective assessment of KOOS performance and can reveal different aspects of model deficiency. Based on these results, various model components are being tested and developed in order to improve forecast accuracy.
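The skill statistics named in this abstract (RMSE, MB, R, IOA, CF) are standard and straightforward to compute. A minimal sketch, assuming paired lists of model forecasts and observations and a user-chosen acceptance limit for CF (the function name and threshold are illustrative, not KOOS code):

```python
import math

def skill_scores(model, obs, limit):
    """RMSE, mean bias, Pearson R, Willmott's index of agreement,
    and central frequency (share of errors within +/- limit)."""
    n = len(model)
    errors = [m - o for m, o in zip(model, obs)]
    mb = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    cf = sum(1 for e in errors if abs(e) <= limit) / n
    mean_m, mean_o = sum(model) / n, sum(obs) / n
    # Pearson correlation coefficient R.
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs))
    var_m = sum((m - mean_m) ** 2 for m in model)
    var_o = sum((o - mean_o) ** 2 for o in obs)
    r = cov / math.sqrt(var_m * var_o)
    # Willmott's IOA: 1 minus squared error over potential error.
    denom = sum((abs(m - mean_o) + abs(o - mean_o)) ** 2
                for m, o in zip(model, obs))
    ioa = 1.0 - sum(e * e for e in errors) / denom
    return {"RMSE": rmse, "MB": mb, "R": r, "IOA": ioa, "CF": cf}
```

A forecast variable would then pass when, for example, its CF meets a target fraction within the variable-specific limit; the actual acceptance criteria are those set per variable in Table 2, not the placeholders here.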
Computer modeling of Earthshine contamination on the VIIRS solar diffuser
NASA Astrophysics Data System (ADS)
Mills, Stephen P.; Agravante, Hiroshi; Hauss, Bruce; Klein, James E.; Weiss, Stephanie C.
2005-10-01
The Visible/Infrared Imager Radiometer Suite (VIIRS), built by Raytheon Santa Barbara Remote Sensing (SBRS), will be one of the primary earth-observing remote-sensing instruments on the National Polar-Orbiting Operational Environmental Satellite System (NPOESS). It will also be installed on the NPOESS Preparatory Project (NPP). These satellite systems fly in near-circular, sun-synchronous low-earth orbits at altitudes of approximately 830 km. VIIRS has 15 bands designed to measure reflectance at wavelengths between 412 nm and 2250 nm, and an additional 7 bands measuring primarily emissive radiance between 3700 nm and 11450 nm. The calibration source for the reflective bands is a solar diffuser (SD) that is illuminated once per orbit as the satellite passes from the dark side to the light side of the earth near the poles. Sunlight enters VIIRS through an opening in the front of the instrument. An attenuation screen covers the opening, but other than this screen there are no optical elements between the SD and the sun. The BRDF of the SD and the transmittance of the attenuation screen are measured pre-flight, so with knowledge of the angles of incidence, the radiance of the sun can be computed and used as a reference to produce calibrated reflectances and radiances. Unfortunately, the opening also allows a significant amount of reflected earthshine to illuminate part of the SD, and this component introduces radiometric error into the calibration process, referred to as earthshine contamination (ESC). The VIIRS radiometric error budget allocated a 0.3% error based on modeling of the ESC done by SBRS during the design phase. This model assumes that the earth has a Lambertian BRDF with a maximum top-of-atmosphere albedo of 1.
The Moderate Resolution Imaging Spectroradiometer (MODIS) has an SD with a design similar to that of VIIRS, and in 2003 the MODIS Science Team reported to Northrop Grumman Space Technology (NGST), the prime contractor for NPOESS, their suspicion that ESC was causing higher than expected radiometric error, and asked whether VIIRS might have a similar problem. The NPOESS Models and Simulation (M&S) team considered whether the Lambertian BRDF assumption would cause an underestimate of the ESC error. In particular, snow, ice, and water show very large BRDFs for forward-scattered, near-grazing angles of incidence; in common parlance this is called glare. The observed earth geometry during the period when the SD is illuminated by the sun produces just such strongly forward-scattered glare, and the SD acquisition occurs in the polar regions, where snow, ice, and water are most prevalent. Using models in their Environmental Products Verification and Remote Sensing Testbed (EVEREST), the M&S team meticulously traced the light rays from the attenuation screen to each detector and combined this with models of the satellite orbit and solar geometry, together with radiative transfer models that include the effect of the BRDF of various surfaces. This modeling showed radiometric errors of up to 4.5% over water and 1.5% over snow or ice; clouds produce errors of up to 0.8%. The likelihood of these high errors occurring has not been determined. Because of this analysis, various remedial options are now being considered.
Training of Working Memory Impacts Neural Processing of Vocal Pitch Regulation
Li, Weifeng; Guo, Zhiqiang; Jones, Jeffery A.; Huang, Xiyan; Chen, Xi; Liu, Peng; Chen, Shaozhen; Liu, Hanjun
2015-01-01
Working memory training can improve the performance of tasks that were not trained. Whether auditory-motor integration for voice control can benefit from working memory training, however, remains unclear. The present event-related potential (ERP) study examined the impact of working memory training on the auditory-motor processing of vocal pitch. Trained participants underwent adaptive working memory training using a digit span backwards paradigm, while control participants did not receive any training. Before and after training, both trained and control participants were exposed to frequency-altered auditory feedback while producing vocalizations. After training, trained participants exhibited significantly decreased N1 amplitudes and increased P2 amplitudes in response to pitch errors in voice auditory feedback. In addition, there was a significant positive correlation between the degree of improvement in working memory capacity and the post-pre difference in P2 amplitudes. Training-related changes in the vocal compensation, however, were not observed. There was no systematic change in either vocal or cortical responses for control participants. These findings provide evidence that working memory training impacts the cortical processing of feedback errors in vocal pitch regulation. This enhanced cortical processing may be the result of increased neural efficiency in the detection of pitch errors between the intended and actual feedback. PMID:26553373
Taylor, Jasmine B; Visser, Troy A W; Fueggle, Simone N; Bellgrove, Mark A; Fox, Allison M
2018-04-01
Previous studies have postulated that the error-related negativity (ERN) may reflect individual differences in impulsivity; however, none have used a longitudinal framework or evaluated impulsivity as a multidimensional construct. The current study evaluated whether ERN amplitude, measured in childhood and adolescence, is predictive of impulsiveness during adolescence. Seventy-five children participated in this study, initially at ages 7-9 years and again at 12-18 years. The interval between testing sessions ranged from 5 to 9 years. The ERN was extracted in response to behavioural errors produced during a modified visual flanker task at both time points (i.e., childhood and adolescence). Participants also completed the Barratt Impulsiveness Scale - a measure that considers impulsiveness to comprise three core sub-traits - during adolescence. At adolescence, the ERN amplitude was significantly larger than during childhood. Additionally, ERN amplitude during adolescence significantly predicted motor impulsiveness at that time point, after controlling for age, gender, and the number of trials included in the ERN. In contrast, ERN amplitude during childhood did not uniquely predict impulsiveness during adolescence. These findings provide preliminary evidence that ERN amplitude is an electrophysiological marker of self-reported motor impulsiveness (i.e., acting without thinking) during adolescence. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Saccadic performance in questionnaire-identified schizotypes over time.
Gooding, Diane C; Shea, Heather B; Matts, Christie W
2005-02-28
In the present study, 121 young adults (mean age=19 years), hypothesized to be at varying levels of risk for psychosis on the basis of their psychometric profiles, were administered saccadic (antisaccade and refixation) tasks at two separate assessments. At Time 1, individuals posited to be at heightened risk for the later development of schizophrenia-spectrum disorders (i.e., those individuals with elevated Social Anhedonia Scale [SAS] scores) produced significantly more antisaccade task errors than the controls. Despite apparent improvement in antisaccade task performance from initial testing to the follow-up (mean test-retest interval=59 months) across all groups, the Social Anhedonia (SocAnh) group continued to produce significantly more errors than the control group. The antisaccade task performance of the control group showed good temporal stability (Pearson's r=0.70, ICC=0.52), and the SocAnh group's performance showed excellent temporal stability (Pearson's r=0.85, ICC=0.83). The results of this investigation are twofold: first, antisaccade task performance is temporally stable, even in psychometrically identified schizotypes over long test-retest intervals; and second, Social Anhedonia Scale scores as well as Time 1 antisaccade task accuracy accounted for much of the variability in Time 2 antisaccade task performance. These findings add to the growing body of literature suggesting that antisaccade task deficits may serve as an endophenotypic marker of a schizophrenia diathesis.
Wavelength modulation diode laser absorption spectroscopy for high-pressure gas sensing
NASA Astrophysics Data System (ADS)
Sun, K.; Chao, X.; Sur, R.; Jeffries, J. B.; Hanson, R. K.
2013-03-01
A general model for 1f-normalized wavelength modulation absorption spectroscopy with nf detection (i.e., WMS-nf) is presented that considers the performance of injection-current-tuned diode lasers and the reflective interference produced by other optical components on the line-of-sight (LOS) transmission intensity. This model explores the optimization of sensitive detection of optical absorption by species with structured spectra at elevated pressures. Predictions have been validated by comparison with measurements of the 1f-normalized WMS-nf (for n = 2-6) lineshape of the R(11) transition in the first overtone band of CO near 2.3 μm at four different pressures ranging from 5 to 20 atm, all at room temperature. The CO mole fractions measured by 1f-normalized WMS-2f, 3f, and 4f techniques agree with calibrated mixtures within 2.0%. At conditions where absorption features are significantly broadened and large modulation depths are required, uncertainties in the WMS background signals due to reflective interference in the optical path can produce significant error in gas mole fraction measurements by 1f-normalized WMS-2f. However, such potential errors can be greatly reduced by using the higher harmonics, i.e., 1f-normalized WMS-nf with n > 2. In addition, less interference from pressure-broadened neighboring transitions has been observed for WMS with higher harmonics than for WMS-2f.
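The harmonic-detection idea behind WMS-nf can be illustrated with a minimal numerical sketch: modulate the laser frequency sinusoidally across an absorption line and project the transmitted intensity onto cos(nθ). This is an illustration only; it assumes a hypothetical Lorentzian line and modulation depth, and it omits the laser intensity modulation and reflective-interference terms that the paper's full model treats.

```python
# Minimal sketch of nf-harmonic (lock-in style) extraction for WMS.
# Hypothetical Lorentzian line; intensity modulation and etalon
# interference are deliberately omitted.
import math

def lorentzian_absorbance(nu, nu0=0.0, hwhm=1.0, peak=0.05):
    """Absorbance of a Lorentzian line (arbitrary frequency units)."""
    return peak * hwhm ** 2 / ((nu - nu0) ** 2 + hwhm ** 2)

def wms_harmonics(nu_center, mod_depth, n_max=4, n_samples=4096):
    """Project the Beer-Lambert transmission onto cos(n*theta) over one
    modulation cycle, giving the X-components of the WMS-nf signals."""
    thetas = [2.0 * math.pi * k / n_samples for k in range(n_samples)]
    trans = [math.exp(-lorentzian_absorbance(nu_center + mod_depth * math.cos(t)))
             for t in thetas]  # transmitted intensity along the scan
    return {n: 2.0 * sum(tr * math.cos(n * t)
                         for tr, t in zip(trans, thetas)) / n_samples
            for n in range(1, n_max + 1)}

# laser centered on the line; modulation depth ~2.2 half-widths
h = wms_harmonics(nu_center=0.0, mod_depth=2.2)
```

At line center, symmetry cancels the odd harmonics, so the even harmonics (2f and above) carry the absorption information there; in practice the 1f signal used for normalization comes mainly from the laser's intensity modulation, which this sketch omits.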
Zeligman, Liran; Zivotofsky, Ari Z.
2017-01-01
The pro- and anti-saccade task (PAT) is a widely used tool in the study of overt and covert attention, with a promising potential role in neurocognitive and psychiatric assessment. However, specific PAT protocols can vary significantly between labs, potentially resulting in large variations in findings across studies. In light of recent calls for standardization of the PAT, the current study's objective was to systematically evaluate the effects of block vs. interleaved administration, a fundamental design consideration, on PAT measures in a within-subject design. Additionally, this study evaluated whether measures of a Posner-type cueing paradigm parallel measures of the PAT paradigm. As hypothesized, results indicate that PAT performance is highly susceptible to administration mode. Interleaved administration resulted in larger error rates not only for anti-saccades (blocks: M = 22%; interleaved: M = 42%) but also for pro-saccades (blocks: M = 5%; interleaved: M = 12%). This difference between block and interleaved administration was significantly larger for anti-saccades than for pro-saccades and cannot be attributed to a speed/accuracy tradeoff. Interleaved administration produced larger pro/anti-saccade differences in error rates, while block administration produced larger latency differences. These results call into question the purely reflexive nature of pro-saccades. The results are discussed and compared with previous studies that included within-subject data from block and interleaved trials. PMID:28222173
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, trial-and-error methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFT and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, by using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration.
We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts of NEP, C storage, ET, and runoff, thereby identifying a highly important source of DGVM uncertainty.
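The bounded parameter search described above can be illustrated with a generic simulated-annealing sketch. The objective function here is a toy quadratic standing in for "1 − map accuracy"; BIOMAP, its PFT parameters, and its spin-up are not reproduced, and the linear cooling schedule and step size are arbitrary choices.

```python
# Generic simulated annealing over a bounded parameter box.
# The objective below is a toy stand-in, not a DGVM skill score.
import math
import random

def simulated_annealing(objective, bounds, n_steps=5000, t0=1.0, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]   # random start in the box
    fx = objective(x)
    best, fbest = list(x), fx
    for step in range(n_steps):
        t = t0 * (1.0 - step / n_steps)              # linear cooling schedule
        # propose a bounded Gaussian perturbation of one parameter
        i = rng.randrange(len(x))
        lo, hi = bounds[i]
        cand = list(x)
        cand[i] = min(hi, max(lo, x[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
        fc = objective(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-9)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# toy objective with its minimum at (0.3, 0.7) inside the unit box
bounds = [(0.0, 1.0), (0.0, 1.0)]
best, fbest = simulated_annealing(
    lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2, bounds)
```

The acceptance of occasional uphill moves at nonzero temperature is what lets the search escape local minima in a complex, non-linear solution space.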
NASA Astrophysics Data System (ADS)
Byun, Do-Seong; Hart, Deirdre E.
2017-04-01
Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, to simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if they are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise from procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps is followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios in which tidal ellipse parameter interpolation errors can arise, and of a procedure to avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for, and avoid, tidal ellipse interpolation and phase errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable.
We also recommend employing tidal ellipse parameter calculation methods that avoid the use of Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used conversion can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
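The wrap-around problem and the Cartesian-component fix can be shown with a minimal sketch (illustrative angles, not real M2 ellipse data): naive linear interpolation between phases of 359° and 1° yields 180°, an artificial jump, while interpolating the unit-vector components and converting back recovers the smooth transition near 0°.

```python
# Angle interpolation across the 359/0 degree "boundary":
# direct averaging fails; Cartesian-component averaging does not.
import math

def interp_angle_naive(a_deg, b_deg, w=0.5):
    """Direct linear interpolation of angles in degrees (wrong near 0/360)."""
    return (1.0 - w) * a_deg + w * b_deg

def interp_angle_cartesian(a_deg, b_deg, w=0.5):
    """Interpolate the unit-vector components, then recover the angle
    (returned in (-180, 180])."""
    ax, ay = math.cos(math.radians(a_deg)), math.sin(math.radians(a_deg))
    bx, by = math.cos(math.radians(b_deg)), math.sin(math.radians(b_deg))
    x = (1.0 - w) * ax + w * bx
    y = (1.0 - w) * ay + w * by
    return math.degrees(math.atan2(y, x))

naive = interp_angle_naive(359.0, 1.0)       # artificial jump to 180
fixed = interp_angle_cartesian(359.0, 1.0)   # smooth transition near 0
```

The same component-wise treatment applies to ellipse inclinations, with the caveat noted above about the semi-major axis convention.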
Rhodes, Nathaniel J.; Richardson, Chad L.; Heraty, Ryan; Liu, Jiajun; Malczynski, Michael; Qi, Chao
2014-01-01
While a lack of concordance between gold standard MIC determinations and Vitek 2 is known, the magnitude of the discrepancy and its impact on treatment decisions for extended-spectrum-β-lactamase (ESBL)-producing Escherichia coli are not known. Clinical isolates of ESBL-producing E. coli were collected from blood, tissue, and body fluid samples from January 2003 to July 2009. Resistance genotypes were identified by PCR. Primary analyses evaluated the discordance between Vitek 2 and gold standard methods using cefepime susceptibility breakpoint cutoff values of 8, 4, and 2 μg/ml. The discrepancies in MICs between the methods were classified per convention as very major, major, and minor errors. Sensitivity, specificity, and positive and negative predictive values for susceptibility classifications were calculated. A total of 304 isolates were identified; 59% (179) of the isolates carried blaCTX-M, 47% (143) carried blaTEM, and 4% (12) carried blaSHV. At a breakpoint MIC of 8 μg/ml, Vitek 2 produced a categorical agreement of 66.8% and exhibited very major, major, and minor error rates of 23% (20/87 isolates), 5.1% (8/157 isolates), and 24% (73/304), respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 8 μg/ml were 94.9%, 61.2%, 72.3%, and 91.8%, respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 2 μg/ml were 83.8%, 65.3%, 41%, and 93.3%, respectively. Vitek 2 results in unacceptably high error rates for cefepime compared to those of agar dilution for ESBL-producing E. coli. Clinicians should be wary of making treatment decisions on the basis of Vitek 2 susceptibility results for ESBL-producing E. coli. PMID:24752253
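The breakpoint-8 accuracy statistics quoted above can be cross-checked from a reconstructed 2×2 table, treating "susceptible by the gold standard" as the positive class. The counts below are back-calculated from the abstract's percentages (157 susceptible and 147 non-susceptible isolates) and should be read as approximate.

```python
# Reconstructed 2x2 table (approximate, back-calculated from the
# reported percentages): rows = gold standard, columns = Vitek 2 call
# at the 8 ug/ml breakpoint.
tp, fn = 149, 8    # gold-standard susceptible: called S / called non-S
fp, tn = 57, 90    # gold-standard non-susceptible: called S / called non-S

sensitivity = tp / (tp + fn)   # fraction of susceptible isolates called S
specificity = tn / (tn + fp)   # fraction of non-susceptible isolates called non-S
ppv = tp / (tp + fp)           # reliability of an "S" call
npv = tn / (tn + fn)           # reliability of a "non-S" call
```

These counts reproduce the reported 94.9%, 61.2%, 72.3%, and 91.8% to three figures, a useful sanity check on the reconstruction.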
NASA Astrophysics Data System (ADS)
Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu
2008-10-01
Aiming at the error sources in rolling-wheel diameter measurement, namely circular grating dividing error, rolling-wheel eccentricity, and surface shape errors, this paper presents a correction method based on the rolling wheel. A composite error model incorporating all of the above influence factors is derived and used to correct the angle measurement error caused by non-circularity of the rolling wheel. Software simulation and experiment show that the composite error correction method can improve the accuracy of rolling-wheel diameter measurement. The method has broad application prospects where measurement accuracy better than 5 μm/m is required.
Task motivation influences alpha suppression following errors.
Compton, Rebecca J; Bissey, Bryn; Worby-Selim, Sharoda
2014-07-01
The goal of the present research is to examine the influence of motivation on a novel error-related neural marker, error-related alpha suppression (ERAS). Participants completed an attentionally demanding flanker task under conditions that emphasized either speed or accuracy or under conditions that manipulated the monetary value of errors. Conditions in which errors had greater motivational value produced greater ERAS, that is, greater alpha suppression following errors compared to correct trials. A second study found that a manipulation of task difficulty did not affect ERAS. Together, the results confirm that ERAS is both a robust phenomenon and one that is sensitive to motivational factors. Copyright © 2014 Society for Psychophysiological Research.
Linearizing feedforward/feedback attitude control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1991-01-01
An approach to attitude control theory is introduced in which a linear form is postulated for the closed-loop rotational error dynamics, and the exact control law required to realize it is then derived. The nonminimal (four-component) quaternion form is used to represent attitude because it is globally nonsingular, but the minimal (three-component) quaternion form is used for attitude error because it has no nonlinear constraints to prevent the rotational error dynamics from being linearized; the definition of the attitude error is based on quaternion algebra. This approach produces an attitude control law that linearizes the closed-loop rotational error dynamics exactly, without any attitude singularities, even if the control errors become large.
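The quaternion algebra behind the three-component attitude error can be sketched as follows: the error quaternion is the product of the conjugate of the reference attitude with the actual attitude, and its vector part serves as the minimal error representation, vanishing exactly when the two attitudes coincide. This is a generic illustration of the algebra, not the paper's control law, and the example rotation is hypothetical.

```python
# Quaternion attitude error: vector part of conj(q_ref) * q.
# Quaternions are (w, x, y, z) with Hamilton products.
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def attitude_error(q_ref, q):
    """Three-component attitude error: zero iff the attitudes coincide."""
    return quat_mul(quat_conj(q_ref), q)[1:]

# hypothetical example: a 5-degree rotation about z relative to the reference
half = math.radians(5.0) / 2.0
q_ref = (1.0, 0.0, 0.0, 0.0)
q = (math.cos(half), 0.0, 0.0, math.sin(half))
err = attitude_error(q_ref, q)
```

For small rotations the vector part is approximately half the rotation vector, which is what makes it a convenient unconstrained error coordinate.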
Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John
2018-03-01
To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.
© 2017 by the American Institute of Ultrasound in Medicine.
Calibration of a stack of NaI scintillators at the Berkeley Bevalac
NASA Technical Reports Server (NTRS)
Schindler, S. M.; Buffington, A.; Lau, K.; Rasmussen, I. L.
1983-01-01
An analysis of the carbon and argon data reveals that essentially all of the charge-changing fragmentation reactions within the stack can be identified and removed by imposing the simple criteria relating the observed energy deposition profiles to the expected Bragg curve depositions. It is noted that these criteria are even capable of identifying approximately one-third of the expected neutron-stripping interactions, which in these cases have anomalous deposition profiles. The contribution of mass error from uncertainty in delta E has an upper limit of 0.25 percent for Mn; this produces an associated mass error for the experiment of about 0.14 amu. It is believed that this uncertainty will change little with changing gamma. Residual errors in the mapping produce even smaller mass errors for lighter isotopes, whereas photoelectron fluctuations and delta-ray effects are approximately the same independent of the charge and energy deposition.
Fairfield, Beth; Mammarella, Nicola; Di Domenico, Alberto; D'Aurora, Marco; Stuppia, Liborio; Gatta, Valentina
2017-08-30
False memories are common memory distortions in everyday life and seem to increase with affectively connoted complex information. In line with recent studies showing a significant interaction between the noradrenergic system and emotional memory, we investigated whether healthy volunteer carriers of the deletion variant of the ADRA2B gene that codes for the α2b-adrenergic receptor are more prone to false memories than non-carriers. In this study, we collected genotype data from 212 healthy female volunteers; 91 ADRA2B carriers and 121 non-carriers. To assess gene effects on false memories for affective information, factorial mixed model analysis of variances (ANOVAs) were conducted with genotype as the between-subjects factor and type of memory error as the within-subjects factor. We found that although carriers and non-carriers made comparable numbers of false memory errors, they showed differences in the direction of valence biases, especially for inferential causal errors. Specifically, carriers produced fewer causal false memory errors for scripts with a negative outcome, whereas non-carriers showed a more general emotional effect and made fewer causal errors with both positive and negative outcomes. These findings suggest that putatively higher levels of noradrenaline in deletion carriers may enhance short-term consolidation of negative information and lead to fewer memory distortions when facing negative events. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Sherwood, David E.
2010-01-01
According to closed-loop accounts of motor control, movement errors are detected by comparing sensory feedback to an acquired reference state. Differences between the reference state and the movement-produced feedback result in an error signal that serves as a basis for a correction. The main question addressed in the current study was how…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen space, where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal-error continuous terrain mesh. Our algorithm has been implemented and operates at real-time rates.
Local and global evaluation for remote sensing image segmentation
NASA Astrophysics Data System (ADS)
Su, Tengfei; Zhang, Shengwei
2017-08-01
In object-based image analysis, producing an accurate segmentation is usually an important issue that needs to be solved before image classification or target recognition, and the study of segmentation evaluation methods is key to solving it. Almost all existing evaluation strategies focus only on global performance assessment. However, these methods are ineffective when two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that quantifies segmentation incorrectness both locally and globally. Region-overlapping metrics are used to quantify each reference geo-object's over- and under-segmentation error. These quantified error values are used to produce segmentation error maps, which effectively delineate local segmentation error patterns. The error values for all of the reference geo-objects are aggregated by area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high-resolution images showed that the global evaluation part of the proposed approach was almost as effective as two other global evaluation methods, and the local part was a useful complement for comparing different segmentation results.
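The local-to-global aggregation idea can be sketched with set-based region overlap. The per-object measures below, based on each reference object's maximally overlapping segment, are a common simple choice rather than necessarily the paper's exact metrics, and the toy objects are hypothetical.

```python
# Per-object over/under-segmentation errors, aggregated into
# area-weighted global indices. Objects and segments are pixel-id sets.
def object_errors(ref_obj, segments):
    """Over/under-segmentation error of one reference object, based on
    its maximally overlapping segment (both in [0, 1], 0 = perfect)."""
    seg = max(segments, key=lambda s: len(ref_obj & s))
    overlap = len(ref_obj & seg)
    under = 1.0 - overlap / len(ref_obj)   # part of the object missed
    over = 1.0 - overlap / len(seg)        # part of the segment spilling out
    return over, under

def global_errors(ref_objects, segments):
    """Area-weighted aggregation over all reference objects."""
    total = sum(len(o) for o in ref_objects)
    over = sum(len(o) * object_errors(o, segments)[0]
               for o in ref_objects) / total
    under = sum(len(o) * object_errors(o, segments)[1]
                for o in ref_objects) / total
    return over, under

# toy example: one boundary placed two pixels off
ref = [set(range(0, 10)), set(range(10, 20))]
segs = [set(range(0, 8)), set(range(8, 20))]
g_over, g_under = global_errors(ref, segs)
```

The per-object values are what an error map would display; the area-weighted sums play the role of the global indicators.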
Evaluating mixed samples as a source of error in non-invasive genetic studies using microsatellites
Roon, David A.; Thomas, M.E.; Kendall, K.C.; Waits, L.P.
2005-01-01
The use of noninvasive genetic sampling (NGS) for surveying wild populations is increasing rapidly. Currently, only a limited number of studies have evaluated potential biases associated with NGS. This paper evaluates the potential errors associated with analysing mixed samples drawn from multiple animals. Most NGS studies assume that mixed samples will be identified and removed during the genotyping process. We evaluated this assumption by creating 128 mixed samples of extracted DNA from brown bear (Ursus arctos) hair samples. These mixed samples were genotyped and screened for errors at six microsatellite loci according to protocols consistent with those used in other NGS studies. Five mixed samples produced acceptable genotypes after the first screening. However, all mixed samples produced multiple alleles at one or more loci, amplified as only one of the source samples, or yielded inconsistent electropherograms by the final stage of the error-checking process. These processes could potentially reduce the number of individuals observed in NGS studies, but errors should be conservative within demographic estimates. Researchers should be aware of the potential for mixed samples and carefully design gel analysis criteria and error checking protocols to detect mixed samples.
Reducing representativeness and sampling errors in radio occultation-radiosonde comparisons
NASA Astrophysics Data System (ADS)
Gilpin, Shay; Rieckh, Therese; Anthes, Richard
2018-05-01
Radio occultation (RO) and radiosonde (RS) comparisons provide a means of analyzing errors associated with both observational systems. Since RO and RS observations are not taken at the exact same time or location, temporal and spatial sampling errors resulting from atmospheric variability can be significant and inhibit error analysis of the observational systems. In addition, the vertical resolutions of RO and RS profiles vary, and vertical representativeness errors may also affect the comparison. In RO-RS comparisons, RO observations are co-located with RS profiles within a fixed time window and distance, i.e., within 3-6 h and circles of radii ranging between 100 and 500 km. In this study, we first show that vertical filtering of RO and RS profiles to a common vertical resolution reduces representativeness errors. We then test two methods of reducing horizontal sampling errors during RO-RS comparisons: restricting co-location pairs to within ellipses oriented along the direction of wind flow rather than circles, and applying a spatial-temporal sampling correction based on model data. Using data from 2011 to 2014, we compare RO and RS differences at four GCOS Reference Upper-Air Network (GRUAN) RS stations in different climatic locations, in which co-location pairs were constrained to a large circle (~666 km radius), a small circle (~300 km radius), and an ellipse parallel to the wind direction (~666 km semi-major axis, ~133 km semi-minor axis). We also apply a spatial-temporal sampling correction using European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) gridded data. Restricting co-locations to within the ellipse reduces root mean square (RMS) refractivity, temperature, and water vapor pressure differences relative to RMS differences within the large circle, and produces differences that are comparable to or less than the RMS differences within circles of similar area.
Applying the sampling correction shows the most significant reduction in RMS differences, such that RMS differences are nearly identical to the sampling correction regardless of the geometric constraints. We conclude that implementing the spatial-temporal sampling correction using a reliable model will most effectively reduce sampling errors during RO-RS comparisons; however, if a reliable model is not available, restricting spatial comparisons to within an ellipse parallel to the wind flow will reduce sampling errors caused by horizontal atmospheric variability.
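The wind-aligned ellipse constraint can be sketched with a flat-earth (local tangent plane) test: rotate the RO-RS offset into a frame aligned with the wind and apply the standard ellipse inequality. The 666 km × 133 km semi-axes follow the study's stated geometry; the local-plane approximation and the wind-angle convention are simplifications of this sketch.

```python
# Wind-aligned ellipse co-location test on a local tangent plane.
# Distances in km; wind direction in math convention (0 = toward east).
import math

def inside_wind_ellipse(dx_km, dy_km, wind_dir_deg, a_km=666.0, b_km=133.0):
    """True if the offset (dx east, dy north) from the RS station lies
    inside an ellipse whose semi-major axis is parallel to the wind."""
    phi = math.radians(wind_dir_deg)
    # rotate the offset into the ellipse frame (u along wind, v across)
    u = dx_km * math.cos(phi) + dy_km * math.sin(phi)
    v = -dx_km * math.sin(phi) + dy_km * math.cos(phi)
    return (u / a_km) ** 2 + (v / b_km) ** 2 <= 1.0

# with a westerly wind: 400 km downwind qualifies, 400 km crosswind does not
downwind = inside_wind_ellipse(400.0, 0.0, wind_dir_deg=0.0)
crosswind = inside_wind_ellipse(0.0, 400.0, wind_dir_deg=0.0)
```

The elongation along the wind reflects the idea that air sampled downstream within the time window resembles the air over the station, while the same distance across the flow does not.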
de Freitas, Carolina P.; Cabot, Florence; Manns, Fabrice; Culbertson, William; Yoo, Sonia H.; Parel, Jean-Marie
2015-01-01
Purpose. To assess if a change in refractive index of the anterior chamber during femtosecond laser-assisted cataract surgery can affect the laser beam focus position. Methods. The index of refraction and chromatic dispersion of six ophthalmic viscoelastic devices (OVDs) was measured with an Abbe refractometer. Using the Gullstrand eye model, the index values were used to predict the error in the depth of a femtosecond laser cut when the anterior chamber is filled with OVD. Two sources of error produced by the change in refractive index were evaluated: the error in anterior capsule position measured with optical coherence tomography biometry and the shift in femtosecond laser beam focus depth. Results. The refractive indices of the OVDs measured ranged from 1.335 to 1.341 in the visible light (at 587 nm). The error in depth measurement of the refilled anterior chamber ranged from −5 to +7 μm. The OVD produced a shift of the femtosecond laser focus ranging from −1 to +6 μm. Replacement of the aqueous humor with OVDs with the densest compound produced a predicted error in cut depth of 13 μm anterior to the expected cut. Conclusions. Our calculations show that the change in refractive index due to anterior chamber refilling does not sufficiently shift the laser beam focus position to cause the incomplete capsulotomies reported during femtosecond laser–assisted cataract surgery. PMID:25626971
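A paraxial back-of-the-envelope version of the two error sources can be written down directly: an OCT ranging error when the chamber medium's index differs from the value assumed in the depth conversion, and the shift of a focus at depth d when the medium index changes. These simplified formulas, the 3 mm depth, and the example indices are illustrative; the paper's estimates use the Gullstrand eye model and the measured OVD dispersions.

```python
# Paraxial, first-order estimates of index-change depth errors.
# Illustrative only: not the Gullstrand-model calculation of the paper.
def oct_depth_error_um(depth_um, n_true, n_assumed):
    """OCT measures optical path n_true*d and converts to depth with an
    assumed index, mis-placing the surface by ~d*(n_true/n_assumed - 1)."""
    return depth_um * (n_true / n_assumed - 1.0)

def focus_shift_um(depth_um, n_new, n_old):
    """Paraxial shift of a focus targeted at depth d when the chamber
    medium index changes from n_old to n_new: ~d*(n_new/n_old - 1)."""
    return depth_um * (n_new / n_old - 1.0)

# ~3 mm anterior chamber; aqueous n = 1.336 replaced by an OVD with n = 1.341
d_oct = oct_depth_error_um(3000.0, n_true=1.341, n_assumed=1.336)
d_focus = focus_shift_um(3000.0, n_new=1.341, n_old=1.336)
```

Both terms come out on the order of 10 μm for these example values, consistent in magnitude with the paper's ~13 μm worst-case estimate and far too small to explain an incomplete capsulotomy.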
A greedy algorithm for species selection in dimension reduction of combustion chemistry
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.
2010-09-01
Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
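The greedy constrained-species selection can be sketched generically: at each step, evaluate the error for every remaining candidate added to the current set and keep the best. The "importance"-based error below is a toy additive stand-in with hypothetical numbers, for which greedy selection happens to be exactly optimal; the real RCCE dimension reduction error is non-additive, which is why the paper's greedy sets are only near-optimal.

```python
# Greedy forward selection: grow the constrained set one species at a
# time, always adding the candidate that minimizes the error function.
def greedy_select(candidates, n_select, error_of):
    """Return n_select candidates chosen greedily to minimize error_of."""
    selected = []
    for _ in range(n_select):
        remaining = [c for c in candidates if c not in selected]
        best = min(remaining, key=lambda c: error_of(selected + [c]))
        selected.append(best)
    return selected

# toy error: total "importance" left unconstrained (hypothetical numbers)
importance = {'CH4': 5.0, 'O2': 4.0, 'CO2': 3.0, 'H2O': 2.5,
              'CO': 2.0, 'H2': 1.0}
err = lambda subset: sum(importance.values()) - sum(importance[s] for s in subset)
chosen = greedy_select(list(importance), 4, err)
```

With k candidates and n selections this costs O(k·n) error evaluations, versus the combinatorial cost of checking every subset, which is what makes the direct determination of the optimal set infeasible.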
Statistical error in simulations of Poisson processes: Example of diffusion in solids
NASA Astrophysics Data System (ADS)
Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.
2016-08-01
Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid for simulations of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
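The generic statistical property underlying such error estimates, that the relative error of a Poisson-distributed event count with mean N scales as 1/sqrt(N), can be checked numerically. This is a general illustration, not the paper's specific conductivity-error expression.

```python
# Numerical check of the 1/sqrt(N) scaling of the relative statistical
# error for Poisson-distributed event counts (a generic property of
# Poisson processes; the paper derives a specific expression for
# ion conductivity).
import math
import random

def relative_error(mean_events, trials, seed=1):
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm for Poisson sampling.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    counts = [poisson(mean_events) for _ in range(trials)]
    mu = sum(counts) / trials
    var = sum((c - mu) ** 2 for c in counts) / (trials - 1)
    return math.sqrt(var) / mu  # relative standard deviation

# For a mean of N events, the relative error should be near 1/sqrt(N).
print(relative_error(100.0, 2000))  # roughly 0.1
```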
Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Chiaming; Lin, Tungyou; Caflisch, Russel
2008-04-20
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between the two methods are presented.
Rosenfield, Mark; Hue, Jennifer E; Huang, Rae R; Bababekova, Yuliya
2012-03-01
Computer vision syndrome (CVS) is a complex of eye and vision problems related to computer use which has been reported in up to 90% of computer users. Ocular symptoms may include asthenopia, accommodative and vergence difficulties and dry eye. Previous studies have reported that uncorrected astigmatism may have a significant impact on symptoms of CVS. However, its effect on task performance is unclear. This study recorded symptoms after a 10 min period of reading from a computer monitor either through the habitual distance refractive correction or with a supplementary -1.00 or -2.00D oblique cylinder added over these lenses in 12 young, visually-normal subjects. Additionally, the distance correction condition was repeated to assess the repeatability of the symptom questionnaire. Subjects' reading speed and accuracy were monitored during the course of the 10 min trial. There was no significant difference in reading rate or the number of errors between the three astigmatic conditions. However, a significant change in symptoms was reported with the median total symptom scores for the 0, 1 and 2D astigmatic conditions being 2.0, 6.5 and 40.0, respectively (p < 0.0001). Further, the repeatability coefficient of the total symptom score following the repeated zero astigmatism condition was ± 13.46. The presence of induced astigmatism produced a significant increase in post-task symptoms but did not affect reading rate or the number of reading errors. The correction of small astigmatic refractive errors may be important in optimizing patient comfort during computer operation. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect the damage caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and that the error is then detected a random amount of time after its occurrence. As a remedy for this problem, a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between the occurrence of an error and the moment of its detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
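The latent-error idea can be illustrated with a small Monte Carlo sketch: a result is unreliable when an error appears before the result is produced but is detected only afterwards. The exponential rates and run time below are illustrative assumptions, not parameters from the model.

```python
# Monte Carlo sketch of error latency: a fault induces an error, which
# is detected only after a random latency. If a result is emitted while
# the error is still latent, that result is unreliable. All rates here
# are illustrative, not taken from the paper.
import random

def unreliable_fraction(run_time, error_rate, mean_latency,
                        trials=100_000, seed=7):
    rng = random.Random(seed)
    unreliable = 0
    for _ in range(trials):
        t_error = rng.expovariate(error_rate)          # when the error appears
        latency = rng.expovariate(1.0 / mean_latency)  # detection delay
        # Unreliable if the error occurred before the result was produced
        # and was not detected until after the result was produced.
        if t_error < run_time and t_error + latency > run_time:
            unreliable += 1
    return unreliable / trials

# Longer error latency -> higher chance of emitting an unreliable result.
print(unreliable_fraction(10.0, 0.05, 1.0))
print(unreliable_fraction(10.0, 0.05, 5.0))
```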
A System for Controlling the Oxygen Content of a Gas Produced by Combustion
NASA Technical Reports Server (NTRS)
Singh, J. J.; Davis, W. T.; Puster, R. L. (Inventor)
1984-01-01
A mixture of air, CH4 and O2 is burned in a combustion chamber to produce a product gas in the test section. The O2 content of the product gas is compared with the O2 content of reference air in an O2 sensor. If there is a difference, an error signal is produced at the output of a control circuit which, by means of a solenoid valve, regulates the flow of O2 into the combustion chamber to drive the error signal to zero. The product gas in the test section then has the same oxygen content as air.
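The control loop described above amounts to integral feedback that drives the sensed oxygen difference to zero. A minimal sketch, assuming a toy first-order plant and an illustrative gain (neither taken from the patent):

```python
# Minimal feedback sketch: an integral controller adjusts the O2 flow
# until the sensed O2 difference (the error signal) goes to zero.
# The plant model and gain are illustrative, not from the patent.

def regulate_o2(target_o2, gain=0.5, steps=200):
    flow = 0.0      # O2 flow commanded via the solenoid valve
    measured = 0.0  # O2 content reported by the sensor
    for _ in range(steps):
        measured = 0.8 * measured + 0.2 * flow  # toy first-order plant
        error = target_o2 - measured            # error signal from sensor
        flow += gain * error                    # nudge valve toward zero error
    return measured

final = regulate_o2(20.9)  # target: oxygen content of reference air (%)
print(abs(final - 20.9) < 0.01)  # True once the loop has settled
```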
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-08-17
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O
2016-11-01
Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. The aim was to evaluate the frequency and nature of non-clinical transcription errors using VR dictation software. A retrospective audit of 378 finalised radiology reports was performed. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72%) reports contained ≥1 errors, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant', and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' was the most common sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence compared with 0.030 for plain film. Longer reports had a higher error rate, with reports of >25 sentences containing an average of 1.23 errors per report compared with 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of errors with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.
Statistically Self-Consistent and Accurate Errors for SuperDARN Data
NASA Astrophysics Data System (ADS)
Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.
2018-01-01
The Super Dual Auroral Radar Network (SuperDARN)-fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. SuperDARN ACFs fitted by the FPFM do not use ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance. Additionally, an overcautious lag filtering criterion is used that sometimes discards data containing useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes, the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and the fitted-parameter errors produced by the FPFM are compared with those of the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen
2003-01-01
The Neural Flight Control System (NFCS) was developed to address the need for control systems that can be produced and tested at lower cost, easily adapted to prototype vehicles, and able to accommodate damaged control surfaces or changes to aircraft stability and control characteristics resulting from failures or accidents. NFCS utilizes a neural network-based flight control algorithm which automatically compensates for a broad spectrum of unanticipated damage or failures of an aircraft in flight. Pilot stick and rudder pedal inputs are fed into a reference model which produces pitch, roll and yaw rate commands. The reference model frequencies and gains can be set to provide handling quality characteristics suitable for the aircraft of interest. The rate commands are used in conjunction with estimates of the aircraft's stability and control (S&C) derivatives by a simplified Dynamic Inverse controller to produce virtual elevator, aileron and rudder commands. These virtual surface deflection commands are optimally distributed across the aircraft's available control surfaces using linear programming theory. Sensor data is compared with the reference model rate commands to produce an error signal. A Proportional/Integral (PI) error controller "winds up" on the error signal and adds an augmented command to the reference model output with the effect of zeroing the error signal. In order to provide more consistent handling qualities for the pilot, neural networks learn the behavior of the error controller and add in the augmented command before the integrator winds up. In the case of damage sufficient to affect the handling qualities of the aircraft, an Adaptive Critic is utilized to reduce the reference model frequencies and gains to stay within a flyable envelope of the aircraft.
NASA Technical Reports Server (NTRS)
Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.
1996-01-01
Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier wave phase measurements due to certain multipath-producing objects. One is a rigorous computational electromagnetics technique called the Geometric Theory of Diffraction (GTD); the other is a simple ray tracing method. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code compared better to the measured results than those from the ray tracing technique. The agreement was good, demonstrating that the phase errors due to multipath can be modeled and characterized using the GTD technique, and characterized to a lesser fidelity using the DECAT technique. However, some discrepancies were observed. Most of the discrepancies occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers located near the GPS antennas can produce phase shifts greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.
Improved astigmatic focus error detection method
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.
1992-01-01
All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.
Theta EEG dynamics of the error-related negativity.
Trujillo, Logan T; Allen, John J B
2007-03-01
The error-related negativity (ERN) is a response-locked event-related potential (ERP) occurring 80-100 ms following response errors. This report contrasts three views of the genesis of the ERN, testing the classic view that time-locked phasic bursts give rise to the ERN against the view that the ERN arises from a pure phase-resetting of ongoing theta (4-7 Hz) EEG activity and the view that the ERN is generated, at least in part, by a phase-resetting and amplitude enhancement of ongoing theta EEG activity. Time-domain ERP analyses were augmented with time-frequency investigations of phase-locked and non-phase-locked spectral power, and inter-trial phase coherence (ITPC) computed from individual EEG trials, examining time courses and scalp topographies. Simulations based on the assumptions of the classic, pure phase-resetting, and phase-resetting plus enhancement views, using parameters from each subject's empirical data, were used to contrast the time-frequency findings that could be expected if one or more of these hypotheses adequately modeled the data. Error responses produced larger amplitude activity than correct responses in time-domain ERPs immediately following responses, as expected. Time-frequency analyses revealed that significant error-related post-response increases in total spectral power (phase- and non-phase-locked), phase-locked power, and ITPC were primarily restricted to the theta range, with this effect located over midfrontocentral sites, with a temporal distribution from approximately 150-200 ms prior to the button press and persisting up to 400 ms post-button press. The increase in non-phase-locked power (total power minus phase-locked power) was larger than that in phase-locked power, indicating that the bulk of the theta event-related dynamics were not phase-locked to the response.
Results of the simulations revealed a good fit for data simulated according to the phase-locking with amplitude enhancement perspective, and a poor fit for data simulated according to the classic view and the pure phase-resetting view. Error responses produce not only phase-locked increases in theta EEG activity, but also increases in non-phase-locked theta, both of which share a similar topography. The findings are thus consistent with the notion advanced by Luu et al. [Luu P, Tucker DM, Makeig S. Frontal midline theta and the error-related negativity; neurophysiological mechanisms of action regulation. Clin Neurophysiol 2004;115:1821-35] that the ERN emerges, at least in part, from a phase-resetting and phase-locking of ongoing theta-band activity, in the context of a general increase in theta power following errors.
Raud Westberg, Liisi; Höglund Santamarta, Lena; Karlsson, Jenny; Nyberg, Jill; Neovius, Erik; Lohmander, Anette
2017-10-25
The aim of this study was to describe speech at 1, 1;6 and 3 years of age in children born with unilateral cleft lip and palate (UCLP) and to relate the findings to operation method and the amount of early intervention received. A prospective trial of children born with UCLP operated with a one-stage (OS) palatal repair at 12 months or a two-stage (TS) repair with soft palate closure at 3-4 months and hard palate closure at 12 months was undertaken (Scandcleft). At 1 and 1;6 years, the place and manner of articulation and the number of different consonants produced in babbling were reported in 33 children. At 3 years of age, percentage of consonants correct adjusted for age (PCC-A) and cleft speech errors were assessed in 26 of the 33 children. Early intervention was not provided as part of the trial but according to clinical routine, and was extracted from patient records. At age 3, the mean PCC-A was 68% and 46% of the children produced articulation errors, with no significant difference between the two groups. At 1 year there was a significantly higher occurrence of oral stops and anterior place consonants in the TS group. There were significant correlations between consonant production at 1 and 3 years of age, but not with the amount of early intervention received. The TS method was beneficial for consonant production at age 1, but this benefit was not seen at 1;6 or 3 years. Behaviourally based early intervention still needs to be evaluated.
Verifying speculative multithreading in an application
Felton, Mitchell D
2014-12-09
Verifying speculative multithreading in an application executing in a computing system, including: executing one or more test instructions serially, thereby producing a serial result, including ensuring that all data dependencies among the test instructions are satisfied; executing the test instructions speculatively in a plurality of threads, thereby producing a speculative result; and determining whether a speculative multithreading error exists, including: comparing the serial result to the speculative result and, if the serial result does not match the speculative result, determining that a speculative multithreading error exists.
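The serial-versus-speculative comparison can be sketched with ordinary threads: run the same work serially and in parallel, then compare the two results, flagging an error on any mismatch. The summing workload below is an invented example, not the patented mechanism.

```python
# Sketch of the verification idea: run identical work serially and in
# parallel threads, then compare results; a mismatch would signal a
# speculative multithreading error. The workload is illustrative.
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    return sum(x * x for x in chunk)

def serial_result(chunks):
    # Serial reference run: dependencies trivially satisfied in order.
    return sum(work(c) for c in chunks)

def speculative_result(chunks, workers=4):
    # Parallel run: chunks processed concurrently across threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work, chunks))

chunks = [range(i, i + 1000) for i in range(0, 4000, 1000)]
serial = serial_result(chunks)
parallel = speculative_result(chunks)
# If the results differ, flag a speculative multithreading error.
print("error detected" if serial != parallel else "results match")
```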
Verifying speculative multithreading in an application
Felton, Mitchell D
2014-11-18
Verifying speculative multithreading in an application executing in a computing system, including: executing one or more test instructions serially, thereby producing a serial result, including ensuring that all data dependencies among the test instructions are satisfied; executing the test instructions speculatively in a plurality of threads, thereby producing a speculative result; and determining whether a speculative multithreading error exists, including: comparing the serial result to the speculative result and, if the serial result does not match the speculative result, determining that a speculative multithreading error exists.
The US Navy Coastal Surge and Inundation Prediction System (CSIPS): Making Forecasts Easier
2013-02-14
[Slide residue: tables of peak water level percent error at the LAWMA (Amerada Pass), Freshwater Canal Locks, Calcasieu Pass, and Sabine Pass gauges, covering the drag coefficient (CD) formulation comparison, baseline simulation results, and wave sensitivity studies.]
A median filter approach for correcting errors in a vector field
NASA Technical Reports Server (NTRS)
Schultz, H.
1985-01-01
Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
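The detect-and-replace step can be sketched on a 1-D vector field, assuming a simple threshold test against the neighborhood median; the wind values and threshold below are invented for illustration.

```python
# Sketch of median-filter error correction: compare each vector
# component with the median of its neighborhood; if the deviation
# exceeds a threshold, substitute the median. A 1-D field of (u, v)
# wind vectors is used here for brevity.
from statistics import median

def median_correct(field, threshold):
    """field: list of (u, v) tuples; returns a corrected copy."""
    out = list(field)
    for i in range(1, len(field) - 1):
        for c in range(2):  # u and v components
            neigh = [field[i - 1][c], field[i][c], field[i + 1][c]]
            m = median(neigh)
            if abs(field[i][c] - m) > threshold:
                # Replace only the offending component with the median.
                out[i] = tuple(m if k == c else out[i][k] for k in range(2))
    return out

wind = [(5.0, 1.0), (5.1, 1.1), (50.0, 1.0), (5.2, 0.9), (5.0, 1.0)]
print(median_correct(wind, threshold=2.0))
# The spurious 50.0 u-component is replaced by its neighborhood median.
```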
Huang, Juan; Hung, Li-Fang; Smith, Earl L.
2012-01-01
This study aimed to investigate the changes in ocular shape and relative peripheral refraction during recovery from myopia produced by form deprivation (FD) and hyperopic defocus. FD was imposed in 6 monkeys by securing a diffuser lens over one eye; hyperopic defocus was produced in another 6 monkeys by fitting one eye with a -3 D spectacle lens. When unrestricted vision was re-established, the treated eyes recovered from the vision-induced central and peripheral refractive errors. The recovery of peripheral refractive errors was associated with corresponding changes in the shape of the posterior globe. The results suggest that vision can actively regulate ocular shape and the development of central and peripheral refraction in infant primates. PMID:23026012
Bepko, Robert J; Moore, John R; Coleman, John R
2009-01-01
This article reports an intervention to improve the quality and safety of hospital patient care by introducing pharmacy robotics into the medication distribution process. Medication safety is vitally important. The integration of pharmacy robotics with computerized practitioner order entry and bedside medication bar coding produces a significant reduction in medication errors. A safe medication process, from initial ordering to bedside administration, provides enormous benefits to patients, to health care providers, and to the organization as well.
Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries.
McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E; Madhavan, Subha
2012-06-01
Quality control and harmonization of data is a vital and challenging undertaking for any successful data coordination center and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy.
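The two rule families described above, standard checks on allowable values and crosschecks of related elements, can be sketched as a small validator; the field names and rules below are invented for illustration, not taken from the registries' schema.

```python
# Sketch of rule-based validation: per-field checks on allowable values,
# plus crosschecks between related fields for logical consistency.
# Field names and rules are hypothetical.

def validate(record):
    errors = []
    # Standard checks on allowable values.
    if record.get("sex") not in {"M", "F"}:
        errors.append("sex: value not allowed")
    if not (0 <= record.get("age_at_diagnosis", -1) <= 120):
        errors.append("age_at_diagnosis: out of range")
    # Crosscheck of related elements for logical consistency.
    if record.get("age_at_death") is not None:
        if record["age_at_death"] < record.get("age_at_diagnosis", 0):
            errors.append("age_at_death precedes age_at_diagnosis")
    return errors

good = {"sex": "F", "age_at_diagnosis": 52, "age_at_death": None}
bad = {"sex": "X", "age_at_diagnosis": 52, "age_at_death": 40}
print(validate(good))  # []
print(validate(bad))   # one value-check error plus one crosscheck error
```

In a data coordination setting, the list of error strings per record is what would be reported back to the submitting center for correction.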
Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries
McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E
2012-01-01
Quality control and harmonization of data is a vital and challenging undertaking for any successful data coordination center and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy. PMID:22323393
Linguistic Knowledge and Reasoning for Error Diagnosis and Feedback Generation.
ERIC Educational Resources Information Center
Delmonte, Rodolfo
2003-01-01
Presents four sets of natural language processing-based exercises for which error correction and feedback are produced by means of a rich database in which linguistic information is encoded either at the lexical or the grammatical level. (Author/VWL)
NASA Astrophysics Data System (ADS)
Vanhaelewyn, Gauthier; Duchatelet, Pierre; Vigouroux, Corinne; Dils, Bart; Kumps, Nicolas; Hermans, Christian; Demoulin, Philippe; Mahieu, Emmanuel; Sussmann, Ralf; de Mazière, Martine
2010-05-01
The Fourier transform infrared (FTIR) remote measurements of atmospheric constituents at the observatories at Saint-Denis (20.90°S, 55.48°E, 50 m a.s.l., Île de la Réunion) and Jungfraujoch (46.55°N, 7.98°E, 3580 m a.s.l., Switzerland) are affiliated with the Network for the Detection of Atmospheric Composition Change (NDACC). The European NDACC FTIR data for CH4 were improved and homogenized among the stations in the EU project HYMN. One important application of these data is their use for the validation of satellite products, such as SCIAMACHY or IASI CH4 columns. It is therefore very important that the errors and uncertainties associated with the ground-based FTIR CH4 data are well characterized. In this poster we present a comparison of errors on retrieved vertical concentration profiles of CH4 between Saint-Denis and Jungfraujoch. At both stations, we have used the same retrieval algorithm, namely SFIT2 v3.92, developed jointly at the NASA Langley Research Center, the National Center for Atmospheric Research (NCAR) and the National Institute of Water and Atmosphere Research (NIWA) at Lauder, New Zealand, together with error evaluation tools developed at the Belgian Institute for Space Aeronomy (BIRA-IASB). The error components investigated in this study are: smoothing, noise, temperature, instrumental line shape (ILS) (in particular the modulation amplitude and phase), spectroscopy (in particular the pressure broadening and intensity), interfering species, and solar zenith angle (SZA) error. We will determine whether the characteristics of the sites in terms of altitude, geographic location and atmospheric conditions produce significant differences in the error budgets for the retrieved CH4 vertical profiles.
NASA Astrophysics Data System (ADS)
Kim, J. G.; Liu, H.
2007-10-01
Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it was not well recognized among researchers that small differences in extinction coefficients could cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate errors of haemoglobin derivatives caused by the variation of haemoglobin extinction coefficients. To prove our error analysis, we performed experiments using liquid-tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. The gas intervention of pure oxygen was given in the solution to examine the oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation has shown that even a small variation (0.01 cm-1 mM-1) in extinction coefficients can produce appreciable relative errors in quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We have also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to utilize corresponding animal's haemoglobin extinction coefficients for the animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements.
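The sensitivity being described follows from inverting the Beer-Lambert equations at two wavelengths: concentration changes are recovered by solving a 2x2 linear system in the extinction coefficients, so a small perturbation of one coefficient shifts the recovered concentrations. A minimal sketch with illustrative (not tabulated) coefficient values:

```python
# Sketch of extinction-coefficient sensitivity: concentration changes
# are obtained by inverting Beer-Lambert equations at two wavelengths,
# so a 0.01 cm^-1 mM^-1 shift in one coefficient perturbs the result.
# All coefficient and concentration values here are illustrative.

def solve_concentrations(dA, eps):
    """Solve [dA1, dA2] = eps @ [dHbO2, dHb] for a 2x2 eps matrix."""
    (a, b), (c, d) = eps
    det = a * d - b * c
    dHbO2 = (d * dA[0] - b * dA[1]) / det
    dHb = (a * dA[1] - c * dA[0]) / det
    return dHbO2, dHb

true_conc = (0.010, -0.005)          # mM changes in [HbO2], [Hb]
eps = [(0.30, 1.10), (0.45, 0.85)]   # rows: wavelengths; cols: HbO2, Hb
dA = [eps[i][0] * true_conc[0] + eps[i][1] * true_conc[1] for i in range(2)]

# Re-invert with one coefficient shifted by 0.01 cm^-1 mM^-1.
eps_bad = [(0.31, 1.10), (0.45, 0.85)]
est = solve_concentrations(dA, eps_bad)
rel_err = abs(est[0] - true_conc[0]) / abs(true_conc[0])
print(f"relative error in d[HbO2]: {rel_err:.1%}")
```

The worse conditioned the extinction-coefficient matrix (the closer its rows are to proportional), the larger the amplification of a small coefficient error into the recovered concentrations.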
Jorratt, Pascal; Delano, Paul H; Delgado, Carolina; Dagnino-Subiabre, Alexies; Terreros, Gonzalo
2017-01-01
The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through olivocochlear (OC) neurons. Medial OC neurons make cholinergic synapses with outer hair cells (OHCs) through nicotinic receptors constituted by α9 and α10 subunits. One of the physiological functions of the α9 nicotinic receptor subunit (α9-nAChR) is the suppression of auditory distractors during selective attention to visual stimuli. In a recent study we demonstrated that the behavioral performance of α9 nicotinic receptor knock-out (KO) mice is altered during selective attention to visual stimuli with auditory distractors, since they made fewer correct responses and more omissions than wild type (WT) mice. As the inhibition of behavioral responses to irrelevant stimuli is an important mechanism of selective attention processes, behavioral errors are relevant measures that can reflect altered inhibitory control. Errors produced during a cued attention task can be classified as premature, target and perseverative errors. Perseverative responses can be considered an inability to inhibit the repetition of an action already planned, while premature responses can be considered an index of the ability to wait or withhold an action. Here, we studied premature, target and perseverative errors during a visual attention task with auditory distractors in WT and KO mice. We found that α9-KO mice make fewer perseverative errors, with longer latencies, than WT mice in the presence of auditory distractors. In addition, although we found no significant difference in the number of target errors between genotypes, KO mice made more short-latency target errors than WT mice during the presentation of auditory distractors. The fewer perseverative errors made by α9-KO mice could be explained by a reduced motivation for reward and increased impulsivity during decision making with auditory distraction.
Improving the color fidelity of cameras for advanced television systems
NASA Astrophysics Data System (ADS)
Kollarits, Richard V.; Gibbon, David C.
1992-08-01
In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five-band camera is 2.5 times smaller than that obtained from the three-band camera. Visual comparison of color matches on a characterized color monitor indicates that the five-band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five-band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal-handling complexity outside the camera. Likewise, it is possible to construct a five-band camera using only three sensors, as in conventional cameras. The principal drawback of the five-band camera is a reduction in effective camera sensitivity of about 3/4 of an f-stop.
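The color-matrix optimization described above can be sketched as an ordinary least-squares fit of a 3x3 correction matrix. Everything below (the sensor responses, the target values, and the "true" matrix) is made-up stand-in data, not the paper's camera measurements:

```python
import numpy as np

# Hypothetical sketch: derive a 3x3 color-correction matrix by least squares,
# analogous to the optimization the paper applies to both cameras.
rng = np.random.default_rng(0)
true_M = np.array([[0.9, 0.1, 0.0],
                   [0.05, 0.85, 0.1],
                   [0.0, 0.15, 0.85]])          # invented ground truth
sensor = rng.uniform(0.0, 1.0, size=(50, 3))    # N x 3 band responses
target = sensor @ true_M.T + rng.normal(0.0, 0.01, size=(50, 3))

# Solve min || sensor @ M.T - target ||_F for the 3x3 matrix M.
M, residuals, rank, _ = np.linalg.lstsq(sensor, target, rcond=None)
M = M.T

rms_error = float(np.sqrt(np.mean((sensor @ M.T - target) ** 2)))
print(rms_error)
```

The same fit applies whether the camera has three or five bands; with five, `sensor` would be N x 5 and `M` would be 3 x 5, still producing three display channels.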
Hill, B.R.; DeCarlo, E.H.; Fuller, C.C.; Wong, M.F.
1998-01-01
Reliable estimates of sediment-budget errors are important for interpreting sediment-budget results. Sediment-budget errors are commonly considered equal to sediment-budget imbalances, which may underestimate actual sediment-budget errors if they include compensating positive and negative errors. We modified the sediment 'fingerprinting' approach to qualitatively evaluate compensating errors in an annual (1991) fine (<63 μm) sediment budget for the North Halawa Valley, a mountainous, forested drainage basin on the island of Oahu, Hawaii, during construction of a major highway. We measured concentrations of aeolian quartz and 137Cs in sediment sources and fluvial sediments, and combined concentrations of these aerosols with the sediment budget to construct aerosol budgets. Aerosol concentrations were independent of the sediment budget, hence aerosol budgets were less likely than sediment budgets to include compensating errors. Differences between sediment-budget and aerosol-budget imbalances therefore provide a measure of compensating errors in the sediment budget. The sediment-budget imbalance equalled 25% of the fluvial fine-sediment load. Aerosol-budget imbalances were equal to 19% of the fluvial 137Cs load and 34% of the fluvial quartz load. The reasonably close agreement between sediment- and aerosol-budget imbalances indicates that compensating errors in the sediment budget were not large and that the sediment-budget imbalance is a reliable measure of sediment-budget error. We attribute at least one-third of the 1991 fluvial fine-sediment load to highway construction. Continued monitoring indicated that highway construction produced 90% of the fluvial fine-sediment load during 1992. Erosion of channel margins and attrition of coarse particles provided most of the fine sediment produced by natural processes. Hillslope processes contributed relatively minor amounts of sediment.
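The imbalance percentages quoted above reduce to simple arithmetic: the difference between summed source inputs and the measured fluvial load, expressed relative to that load. A minimal sketch with hypothetical tonnages (not the study's measurements):

```python
# Illustrative arithmetic only: expressing a budget imbalance as a percentage
# of the fluvial load, as done for both the sediment and aerosol budgets.
def imbalance_pct(sources_sum, fluvial_load):
    """Imbalance (summed source inputs minus load) as a percent of the load."""
    return 100.0 * (sources_sum - fluvial_load) / fluvial_load

# e.g. hypothetical source inputs totalling 125 t against a 100 t fluvial load:
print(imbalance_pct(125.0, 100.0))  # -> 25.0
```

Because positive and negative source errors can cancel inside `sources_sum`, a small imbalance alone does not prove a small error, which is why the independent aerosol budgets are needed as a cross-check.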
Rotational wind indicator enhances control of rotated displays
NASA Technical Reports Server (NTRS)
Cunningham, H. A.; Pavel, Misha
1991-01-01
Rotation by 108 deg of the spatial mapping between a visual display and a manual input device produces large spatial errors in a discrete aiming task. These errors are not easily corrected by voluntary mental effort, but the central nervous system does adapt gradually to the new mapping. Bernotat (1970) showed that adding true hand position to a 90 deg rotated display improved performance of a compensatory tracking task, but tracking error rose again upon removal of the explicit cue. This suggests that the explicit error signal did not induce changes in the neural mapping, but rather allowed the operator to reduce tracking error using a higher mental strategy. In this report, we describe an explicit visual display enhancement applied to a 108 deg rotated discrete aiming task. A 'wind indicator' corresponding to the effect of the mapping rotation is displayed on the operator-controlled cursor. The human operator is instructed to oppose the virtual force represented by the indicator, as one would do if flying an airplane in a crosswind. This enhancement reduces spatial aiming error in the first 10 minutes of practice by an average of 70 percent when compared to a no-enhancement control condition. Moreover, it produces an adaptation aftereffect, which is evidence of learning by neural adaptation rather than by mental strategy. Finally, aiming error does not rise upon removal of the explicit cue.
Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.
Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D
2018-04-07
We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. This requires both accurate delineation of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of the relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in a homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in the assumed transducer separation produced the largest SOS errors among the miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in the specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
Stability of simulated flight path control at +3 Gz in a human centrifuge.
Guardiera, Simon; Dalecki, Marc; Bock, Otmar
2010-04-01
Earlier studies have shown that naïve subjects and experienced jet pilots produce exaggerated manual forces when exposed to increased acceleration (+Gz). This study was designed to evaluate whether this exaggeration affects the stability of simulated flight path control. We evaluated naïve subjects' performance in a flight simulator which either remained stationary (+1 Gz) or rotated to induce an acceleration in accordance with the simulated flight path, with a mean acceleration of about +3 Gz. In either case, subjects were requested to produce a series of altitude changes in pursuit of a visual target airplane. The resulting flight paths were analyzed to determine the largest oscillation after an altitude change (Oscillation) and the mean deviation between subject and target flight path (Tracking Error). Flight stability after an altitude change was degraded in +3 Gz compared to +1 Gz, as evidenced by larger Oscillations (+11%) and increased Tracking Errors (+80%). These deficits correlated significantly with subjects' +3 Gz deficits in a manual-force production task. We conclude that force exaggeration in +3 Gz may impair flight stability during simulated jet maneuvers in naïve subjects, most likely as a consequence of vestibular stimulation.
NASA Technical Reports Server (NTRS)
Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)
2002-01-01
The Maximum Likelihood (ML) statistical theory required to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value both to existing data sets and to those to be produced by future astrophysics missions consisting of two or more detectors: it allows instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectral information when the multiple data sets are used in concert, as compared with the statistical errors when the data sets are considered separately, as well as the reduction of any biases resulting from poor statistics in one or more of the individual data sets that may be achieved when the data sets are combined.
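For the special case of Gaussian errors, the core benefit (smaller standard deviations from combining data sets) can be illustrated with an inverse-variance-weighted joint-ML estimate. This is a toy sketch with simulated data, not the paper's full multi-instrument formalism:

```python
import numpy as np

# Two hypothetical instruments measure the same spectral parameter with
# different noise levels; maximizing the joint likelihood weights each
# dataset by its precision, shrinking the combined standard error below
# that of either dataset alone. All numbers are simulated.
rng = np.random.default_rng(1)
true_value = 2.5                           # e.g. a spectral index
d1 = rng.normal(true_value, 0.30, 400)     # noisier instrument
d2 = rng.normal(true_value, 0.10, 400)     # more precise instrument

def ml_estimate(*datasets):
    # For Gaussian errors the joint-ML estimate is the inverse-variance
    # weighted mean of the per-dataset means.
    means = np.array([d.mean() for d in datasets])
    var_of_mean = np.array([d.var(ddof=1) / len(d) for d in datasets])
    w = 1.0 / var_of_mean
    est = float(np.sum(w * means) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return est, se

est, se_joint = ml_estimate(d1, d2)
_, se_alone = ml_estimate(d2)
print(est, se_joint < se_alone)
```

Even the noisier instrument contributes information, so the joint standard error is always below that of the best single data set.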
Consideration of species community composition in statistical ...
Diseases are increasing in marine ecosystems, and these increases have been attributed to a number of environmental factors including climate change, pollution, and overfishing. However, many studies pool disease prevalence into taxonomic groups, disregarding host species composition when comparing sites or assessing environmental impacts on patterns of disease presence. We used simulated data under a known environmental effect to assess the ability of standard statistical methods (binomial and linear regression, ANOVA) to detect a significant environmental effect on pooled disease prevalence with varying species abundance distributions and relative susceptibilities to disease. When one species was more susceptible to a disease and both species only partially overlapped in their distributions, models tended to produce a greater number of false positives (Type I error). Differences in disease risk between regions or along an environmental gradient tended to be underestimated, or even in the wrong direction, when highly susceptible taxa had reduced abundances in impacted sites, a situation likely to be common in nature. Including relative abundance as an additional variable in regressions improved model accuracy, but tended to be conservative, producing more false negatives (Type II error) when species abundance was strongly correlated with the environmental effect. Investigators should be cautious of underlying assumptions of species similarity in susceptibility.
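The pooling artifact described above can be reproduced with a toy calculation. The species counts and per-species susceptibilities below are invented for illustration; the point is that two sites with no true environmental difference show very different pooled prevalence purely because their species composition differs:

```python
# Toy illustration (made-up numbers): two species with different disease
# susceptibility, two sites with NO true environmental effect.
p_susceptible, p_resistant = 0.30, 0.05   # per-species disease probability

# Site A: mostly the susceptible species; Site B: mostly the resistant one.
site_a = {"susceptible": 180, "resistant": 20}
site_b = {"susceptible": 20, "resistant": 180}

def pooled_prevalence(site):
    # Expected disease cases divided by total hosts, ignoring species identity.
    expected_cases = (site["susceptible"] * p_susceptible
                      + site["resistant"] * p_resistant)
    return expected_cases / (site["susceptible"] + site["resistant"])

prev_a = pooled_prevalence(site_a)
prev_b = pooled_prevalence(site_b)
print(prev_a, prev_b)  # same environment, very different pooled prevalence
```

A regression on these pooled values would flag a spurious "site effect" (a Type I error) unless species composition enters the model.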
Potential and Limitations of an Improved Method to Produce Dynamometric Wheels
García de Jalón, Javier
2018-01-01
A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques developed with the aim of producing low cost dynamometric wheels. While the original method required stress measurement in many rim radial lines and the fulfillment of some rigid conditions of symmetry, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes the symmetry constraints. This can be done without compromising the estimation error level. The reduction in the number of measuring radial lines increases the ripple of the demodulated signals due to non-eliminated higher order harmonics. Therefore, it is necessary to adapt the calibration procedure to this new scenario. A new calibration procedure that takes into account the angular position of the wheel is described in full. This new methodology is tested on a standard commercial five-spoke car wheel. The results are qualitatively compared with those derived from the application of the former methodology, leading to the conclusion that the new method is both simpler and more robust, thanks to the reduction in the number of measuring points, while the contact-force estimation error remains at an acceptable level. PMID:29439427
Effect of cephalometer misalignment on calculations of facial asymmetry.
Lee, Ki-Heon; Hwang, Hyeon-Shik; Curry, Sean; Boyd, Robert L; Norris, Kevin; Baumrind, Sheldon
2007-07-01
In this study, we evaluated errors introduced into the interpretation of facial asymmetry on posteroanterior (PA) cephalograms due to malpositioning of the x-ray emitter focal spot. We tested the hypothesis that horizontal displacements of the emitter from its ideal position would produce systematic displacements of skull landmarks that could be fully accounted for by the rules of projective geometry alone. A representative dry skull with 22 metal markers was used to generate a series of PA images from different emitter positions by using a fully calibrated stereo cephalometer. Empirical measurements of the resulting cephalograms were compared with mathematical predictions based solely on geometric rules. The empirical measurements matched the mathematical predictions within the limits of measurement error (mean = 0.23 mm), thus supporting the hypothesis. Based upon this finding, we generated a completely symmetrical mathematical skull and computed the expected errors for focal spot displacements of several different magnitudes. Misalignment of the x-ray emitter focal spot introduces systematic errors into the interpretation of facial asymmetry on PA cephalograms. For misalignments of less than 20 mm, the effect is small in individual cases. However, misalignments as small as 10 mm can introduce spurious statistical findings of significant asymmetry when mean values for large groups of PA images are evaluated.
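The projective-geometry rule being tested can be worked through numerically: a horizontal emitter shift of d displaces a landmark's image by -d * z / (D - z), where D is the emitter-to-film distance and z the landmark's height above the film. The distances below are illustrative, not the study's calibration values:

```python
# Hypothetical worked example of landmark displacement under emitter shift.
def image_x(emitter_x, landmark_x, landmark_z, emitter_dist):
    # Intersection of the ray (emitter -> landmark) with the film plane z = 0.
    return emitter_x + (landmark_x - emitter_x) * emitter_dist / (
        emitter_dist - landmark_z)

D = 1500.0   # emitter-to-film distance, mm (illustrative)
z = 150.0    # landmark height above the film, mm (illustrative)

# Shift the emitter 10 mm horizontally and see how a midline landmark moves:
shift = image_x(10.0, 0.0, z, D) - image_x(0.0, 0.0, z, D)
predicted = -10.0 * z / (D - z)
print(shift, predicted)
```

Because the displacement depends on z, landmarks at different depths shift by different amounts, which is exactly what mimics anatomical asymmetry on the projected image.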
Why do we miss rare targets? Exploring the boundaries of the low prevalence effect
Rich, Anina N.; Kunar, Melina A.; Van Wert, Michael J.; Hidalgo-Sotelo, Barbara; Horowitz, Todd S.; Wolfe, Jeremy M.
2011-01-01
Observers tend to miss a disproportionate number of targets in visual search tasks with rare targets. This ‘prevalence effect’ may have practical significance since many screening tasks (e.g., airport security, medical screening) are low prevalence searches. It may also shed light on the rules used to terminate search when a target is not found. Here, we use perceptually simple stimuli to explore the sources of this effect. Experiment 1 shows a prevalence effect in inefficient spatial configuration search. Experiment 2 demonstrates this effect occurs even in a highly efficient feature search. However, the two prevalence effects differ. In spatial configuration search, misses seem to result from ending the search prematurely, while in feature search, they seem due to response errors. In Experiment 3, a minimum delay before response eliminated the prevalence effect for feature but not spatial configuration search. In Experiment 4, a target was present on each trial in either two (2AFC) or four (4AFC) orientations. With only two response alternatives, low prevalence produced elevated errors. Providing four response alternatives eliminated this effect. Low target prevalence puts searchers under pressure that tends to increase miss errors. We conclude that the specific source of those errors depends on the nature of the search. PMID:19146299
Accelerated Compressed Sensing Based CT Image Reconstruction
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200
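As a generic illustration of why compressed sensing can reconstruct from a limited number of projections (this is not a sketch of the paper's pseudopolar algorithm), a sparse signal can be recovered from fewer measurements than unknowns by iterative soft thresholding (ISTA). All sizes and data below are invented:

```python
import numpy as np

# Generic CS toy: recover a 5-sparse signal of length 200 from only 80
# random linear measurements via ISTA (proximal gradient for the lasso).
rng = np.random.default_rng(2)
n, m, k = 200, 80, 5                     # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                            # noiseless measurements

def ista(A, y, lam=0.01, iters=3000):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - y)                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0)  # soft threshold
    return x

x_hat = ista(A, y)
print(np.max(np.abs(x_hat - x_true)))
```

In CT the role of `A` is played by the (re)binned projection operator, and the sparsity is imposed in a transform domain rather than directly on the image.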
Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Florita, A.; Hodge, B. M.; Milligan, M.
2012-08-01
The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts are made among the frequency distribution alternatives, and recommendations are put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher-resolution operational studies are possible. This research feeds into a larger body of work on renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
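One simple goodness-of-fit metric of the kind described can be sketched as follows. The heavy-tailed Laplace sample stands in for real forecast errors, and all numbers are illustrative, not drawn from the WWSIS database:

```python
import numpy as np
from math import erf, sqrt

# Sketch: quantify how well a normal distribution fits a sample of forecast
# errors by the RMS distance between the empirical CDF and the fitted
# normal CDF. Forecast errors are typically heavier-tailed than normal.
rng = np.random.default_rng(7)
laplace_errors = rng.laplace(0.0, 0.05, size=5000)  # heavy-tailed sample
normal_errors = rng.normal(0.0, 0.05, size=5000)    # control sample

def normal_fit_rmse(x):
    x = np.sort(x)
    emp = (np.arange(1, len(x) + 1) - 0.5) / len(x)  # empirical CDF
    mu, sigma = x.mean(), x.std(ddof=1)
    # Normal CDF via the error function (avoids a SciPy dependency).
    model = np.array([0.5 * (1 + erf((v - mu) / (sigma * sqrt(2)))) for v in x])
    return float(np.sqrt(np.mean((emp - model) ** 2)))

r_lap = normal_fit_rmse(laplace_errors)
r_norm = normal_fit_rmse(normal_errors)
print(r_lap, r_norm)  # the heavy-tailed sample fits a normal noticeably worse
```

The same comparison can be run against any candidate family (Laplace, hyperbolic, etc.) to pick the distribution that best matches a site's errors.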
The effects of training on errors of perceived direction in perspective displays
NASA Technical Reports Server (NTRS)
Tharp, Gregory K.; Ellis, Stephen R.
1990-01-01
An experiment was conducted to determine the effects of training on the characteristic direction errors that are observed when subjects estimate exocentric directions on perspective displays. Changes in five subjects' perceptual errors were measured during a training procedure designed to eliminate the error. The training was provided by displaying to each subject both the sign and the direction of his judgment error. The feedback provided by the error display was found to decrease but not eliminate the error. A lookup table model of the source of the error was developed in which the judgment errors were attributed to overestimates of both the pitch and the yaw of the viewing direction used to produce the perspective projection. The model predicts the quantitative characteristics of the data somewhat better than previous models did. A mechanism is proposed for the observed learning, and further tests of the model are suggested.
Comparing different models of the development of verb inflection in early child Spanish.
Aguado-Orea, Javier; Pine, Julian M
2015-01-01
How children acquire knowledge of verb inflection is a long-standing question in language acquisition research. In the present study, we test the predictions of some current constructivist and generativist accounts of the development of verb inflection by focusing on data from two Spanish-speaking children between the ages of 2;0 and 2;6. The constructivist claim that children's early knowledge of verb inflection is only partially productive is tested by comparing the average number of different inflections per verb in matched samples of child and adult speech. The generativist claim that children's early use of verb inflection is essentially error-free is tested by investigating the rate at which the children made subject-verb agreement errors in different parts of the present tense paradigm. Our results show: 1) that, although even adults' use of verb inflection in Spanish tends to look somewhat lexically restricted, both children's use of verb inflection was significantly less flexible than that of their caregivers, and 2) that, although the rate at which the two children produced subject-verb agreement errors in their speech was very low, this overall error rate hid a consistent pattern of error in which error rates were substantially higher in low frequency than in high frequency contexts, and substantially higher for low frequency than for high frequency verbs. These results undermine the claim that children's use of verb inflection is fully productive from the earliest observable stages, and are consistent with the constructivist claim that knowledge of verb inflection develops only gradually.
Australian children with cleft palate achieve age-appropriate speech by 5 years of age.
Chacon, Antonia; Parkin, Melissa; Broome, Kate; Purcell, Alison
2017-12-01
Children with cleft palate demonstrate atypical speech sound development, which can influence their intelligibility, literacy and learning. There is limited documentation regarding how speech sound errors change over time in cleft palate speech and the effect that these errors have upon mono- versus polysyllabic word production. The objective of this study was to examine the phonetic and phonological speech skills of children with cleft palate at ages 3 and 5. A cross-sectional observational design was used. Eligible participants were aged 3 or 5 years with a repaired cleft palate. The Diagnostic Evaluation of Articulation and Phonology (DEAP) Articulation subtest and a non-standardised list of mono- and polysyllabic words were administered once for each child. The Profile of Phonology (PROPH) was used to analyse each child's speech. N = 51 children with cleft palate participated in the study. Three-year-old children with cleft palate produced significantly more speech errors than their typically-developing peers, but no difference was apparent at 5 years. The 5-year-olds demonstrated greater phonetic and phonological accuracy than the 3-year-old children. Polysyllabic words were more affected by errors than monosyllables in the 3-year-old group only. Children with cleft palate are prone to phonetic and phonological speech errors in their preschool years. Most of these speech errors approximate those of typically-developing children by 5 years. At 3 years, word shape has an influence upon phonological speech accuracy. Speech pathology intervention is indicated to support the intelligibility of these children from their earliest stages of development. Copyright © 2017 Elsevier B.V. All rights reserved.
Evaluating rainfall errors in global climate models through cloud regimes
NASA Astrophysics Data System (ADS)
Tan, Jackson; Oreopoulos, Lazaros; Jakob, Christian; Jin, Daeho
2017-07-01
Global climate models suffer from a persistent shortcoming in their simulation of rainfall by producing too much drizzle and too little intense rain. This erroneous distribution of rainfall is a result of deficiencies in the representation of the underlying processes of rainfall formation. In the real world, clouds are precursors to rainfall, and the distribution of clouds is intimately linked to the rainfall over an area. This study examines the model representation of tropical rainfall using the cloud regime concept. In observations, these cloud regimes are derived from cluster analysis of joint histograms of cloud properties retrieved from passive satellite measurements. With the implementation of satellite simulators, comparable cloud regimes can be defined in models. This enables us to contrast the rainfall distributions of cloud regimes in 11 CMIP5 models with observations and to decompose the rainfall errors by cloud regime. Many models underestimate the rainfall from the organized convective cloud regime, which in observations provides half of the total rain in the tropics. Furthermore, these rainfall errors are relatively independent of the model's accuracy in representing this cloud regime. Error decomposition reveals that the biases are compensated in some models by a more frequent occurrence of the cloud regime, and most models exhibit substantial cancellation of rainfall errors from different regimes and regions. Therefore, underlying the relatively accurate total rainfall in models is a significant cancellation of rainfall errors from different cloud types and regions. The fact that a good representation of clouds does not lead to appreciable improvement in rainfall suggests a certain disconnect in the cloud-precipitation processes of global climate models.
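The regime decomposition can be illustrated with toy numbers, treating each regime's rainfall contribution as frequency of occurrence times mean rain rate when present. The regimes and values below are invented; the point is how large opposing per-regime biases can cancel into a small total bias:

```python
# Toy decomposition (made-up numbers) of a total rainfall bias into
# per-cloud-regime contributions: rain = frequency x rate-when-present.
obs = {"organized_convection": (0.10, 20.0),   # (frequency, mm/day present)
       "shallow_cumulus":      (0.50, 1.0),
       "stratocumulus":        (0.40, 0.5)}
model = {"organized_convection": (0.16, 10.0),  # too frequent, too weak
         "shallow_cumulus":      (0.44, 2.0),   # drizzles too much
         "stratocumulus":        (0.40, 0.5)}

def regime_errors(model, obs):
    # Per-regime rainfall bias: model contribution minus observed contribution.
    return {k: model[k][0] * model[k][1] - obs[k][0] * obs[k][1] for k in obs}

errs = regime_errors(model, obs)
total_error = sum(errs.values())
print(errs, total_error)  # large opposing regime biases, near-zero total
```

Here a -0.4 mm/day deficit from organized convection is almost exactly offset by a +0.38 mm/day drizzle excess, leaving a deceptively small total bias of -0.02 mm/day.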
New architecture for dynamic frame-skipping transcoder.
Fung, Kai-Tat; Chan, Yui-Lam; Siu, Wan-Chi
2002-01-01
Transcoding is a key technique for reducing the bit rate of a previously compressed video signal. A high transcoding ratio may result in an unacceptable picture quality when the full frame rate of the incoming video bitstream is used. Frame skipping is often used as an efficient scheme to allocate more bits to the representative frames, so that an acceptable quality for each frame can be maintained. However, a skipped frame must still be decompressed completely, because it may act as a reference frame for reconstructing nonskipped frames. The newly quantized discrete cosine transform (DCT) coefficients of the prediction errors need to be re-computed for the nonskipped frame with reference to the previous nonskipped frame; this can create undesirable complexity as well as introduce re-encoding errors. In this paper, we propose new algorithms and a novel architecture for frame-rate reduction to improve picture quality and to reduce complexity. The proposed architecture operates mainly in the DCT domain to achieve a transcoder with low complexity. With the direct addition of DCT coefficients and an error compensation feedback loop, re-encoding errors are reduced significantly. Furthermore, we propose a frame-rate control scheme which can dynamically adjust the number of skipped frames according to the incoming motion vectors and the re-encoding errors due to transcoding, such that the decoded sequence can have smooth motion as well as better transcoded pictures. Experimental results show that, compared to the conventional transcoder, the new architecture for a frame-skipping transcoder is more robust, produces fewer requantization errors, and has reduced computational complexity.
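The direct addition of DCT coefficients relies on the linearity of the transform. The sketch below, with random 8x8 blocks standing in for prediction-error blocks, verifies that summing coefficients in the DCT domain matches the pixel-domain route, which is what lets a transcoder fold a skipped frame's residual into the next frame without an inverse transform:

```python
import numpy as np

# Build the orthonormal 8x8 DCT-II matrix, as used for video blocks.
N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def dct2(block):
    # 2-D DCT as a separable pair of matrix multiplies.
    return C @ block @ C.T

rng = np.random.default_rng(3)
resid_skipped = rng.normal(size=(N, N))   # stand-in: skipped frame's residual
resid_current = rng.normal(size=(N, N))   # stand-in: kept frame's residual

direct = dct2(resid_skipped) + dct2(resid_current)   # DCT-domain addition
reference = dct2(resid_skipped + resid_current)      # pixel-domain route
print(np.allclose(direct, reference))
```

In the real transcoder the summed coefficients are then requantized, which is where the re-encoding error arises and why the paper adds an error compensation feedback loop.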
Khwaileh, Tariq; Body, Richard; Herbert, Ruth
2015-01-01
Within the domain of inflectional morpho-syntax, differential processing of regular and irregular forms has been found in healthy speakers and in aphasia. One view assumes that irregular forms are retrieved as full entities, while regular forms are compiled on-line. An alternative view holds that a single mechanism oversees both regular and irregular forms. Arabic offers an opportunity to study this phenomenon, as Arabic nouns contain a consonantal root, delivering lexical meaning, and a vocalic pattern, delivering syntactic information such as gender and number. The aim of this study is to investigate morpho-syntactic processing of regular (sound) and irregular (broken) Arabic plurals in patients with morpho-syntactic impairment. Three participants with acquired agrammatic aphasia produced plural forms in a picture-naming task. We measured overall response accuracy, then analysed lexical errors and morpho-syntactic errors separately. Error analysis revealed different patterns of morpho-syntactic errors depending on the type of pluralization (sound vs broken). Omissions formed the vast majority of errors in sound plurals, while substitution was the only error mechanism that occurred in broken plurals. The dissociation was statistically significant for retrieval of morpho-syntactic information (vocalic pattern) but not for lexical meaning (consonantal root), suggesting that the participants' selective impairment was an effect of the morpho-syntax of plurals. These results suggest that irregular plural forms are stored, while regular forms are derived. The current findings support findings from other languages and provide a new analysis technique for data from languages with non-concatenative morpho-syntax.
COBE DMR results and implications. [Differential Microwave Radiometer
NASA Technical Reports Server (NTRS)
Smoot, George F.
1992-01-01
This lecture presents early results obtained from the first six months of measurements of the Cosmic Microwave Background (CMB) by Differential Microwave Radiometers (DMR) aboard COBE and discusses significant cosmological implications. The DMR maps show the dipole anisotropy and some galactic emission but otherwise a spatially smooth early universe. The measurements are sufficiently precise that we must pay careful attention to potential systematic errors. Maps of galactic and local emission such as those produced by the FIRAS and DIRBE instruments will be needed to identify foregrounds from extragalactic emission and thus to interpret the results in terms of events in the early universe. The current DMR results are significant for Cosmology.
Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA
NASA Astrophysics Data System (ADS)
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2018-04-01
External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation for estimating the more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. respectively for model construction and for model testing. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce reliable estimation. The aim of this preliminary work is to compare the performance of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF, error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (APF, error); (b) AP on the training set (APS, error); and (c) ET on the respective test set (ETS, error). A good PLS2-DA model is expected to produce APS, error and ETS, error values similar to the APF, error. Bearing that in mind, the similarities between (a) APS, error vs. APF, error; (b) ETS, error vs. APF, error; and (c) APS, error vs. ETS, error were evaluated using correlation tests (i.e. Pearson and Spearman's rank tests) on series of PLS2-DA models computed from the KS-set and IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarity between the internal and external error rates than those from the respective KS-set, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS for sampling a representative training set.
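The Kennard-Stone split discussed above follows a simple max-min distance rule: seed with the two most distant samples, then repeatedly add the candidate farthest from the already-selected set. A minimal NumPy sketch (a generic implementation on invented data, not the authors' code), producing a 7:3 split:

```python
import numpy as np

def kennard_stone(X, n_train):
    """Select n_train row indices of X by the Kennard-Stone (CADEX) rule."""
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Seed with the two mutually most distant samples.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [i, j]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # For each candidate, the distance to its nearest selected sample;
        # pick the candidate for which that distance is largest.
        d_min = dist[np.ix_(remaining, selected)].min(axis=1)
        pick = remaining[int(np.argmax(d_min))]
        selected.append(pick)
        remaining.remove(pick)
    return selected

# Toy demo: split 10 hypothetical spectra in a 7:3 ratio, as in the study.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
train_idx = kennard_stone(X, 7)
test_idx = [k for k in range(10) if k not in train_idx]
print(len(train_idx), len(test_idx))  # 7 3
```

Because the selection is deterministic and coverage-driven, KS tends to place extreme samples in the training set, which is one plausible reason its external error estimates can diverge from internal ones.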
State estimation for autopilot control of small unmanned aerial vehicles in windy conditions
NASA Astrophysics Data System (ADS)
Poorman, David Paul
The use of small unmanned aerial vehicles (UAVs) in both the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large-scale aircraft, which usually involve a complex array of expensive high-accuracy sensors, have been well known and understood for decades. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than for larger aircraft because small UAVs employ limited sensor suites to keep costs down and are more susceptible to wind. The purpose of this research is to evaluate the ability of existing methods of state estimation for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings-level ascent. It is shown that in zero wind, the first method produces significant steady-state attitude errors in both a coordinated turn and in a wings-level ascent.
In Dryden wind, it produces large noise on its attitude estimates and a non-zero mean error that grows as gyro bias increases. The second method is shown not to exhibit any design-inherent steady-state error in the tested scenarios. It can correct for attitude errors that arise from both integration error and gyro bias, but it suffers from a lack of attitude error observability. The attitude errors are shown to be more observable in wind, but the increased integration error in wind outweighs the improved attitude corrections that this observability brings, resulting in larger attitude errors in wind. Overall, this work highlights technical deficiencies of both state estimation methods that could be addressed in the future to enhance state estimation for small UAVs in windy conditions.
Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan
2007-02-01
The aim of this study was to compare the accuracy of endodontic file lengths after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted in 20 single-rooted extracted permanent mandibular premolar teeth and standardized images were obtained. Original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). Paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). Decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.
NASA Astrophysics Data System (ADS)
Greenough, J. A.; Rider, W. J.
2004-05-01
A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory (WENO5) method to a modern piecewise-linear, second-order version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem, run times, density error norms, and convergence rates are reported for both methods, as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against an exact solution or, when an analytic solution is not available, against a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6.
If the CPU cost is taken as fixed, that is, run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO5 on the test problems considered here.
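The convergence rates reported in studies like this one are typically observed orders of accuracy, computed from error norms on successively refined grids: p = log(E_coarse/E_fine)/log(r) for refinement ratio r. A small sketch (the error values are invented for illustration):

```python
import math

def convergence_rate(err_coarse, err_fine, refinement=2.0):
    """Observed order of accuracy from error norms at two grid resolutions."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# A fifth-order scheme on a smooth problem drops the error ~32x per grid
# doubling; near a shock, self-convergence degrades to roughly first order.
print(round(convergence_rate(3.2e-3, 1.0e-4), 3))  # 5.0
print(round(convergence_rate(3.2e-3, 1.6e-3), 3))  # 1.0
```

This is the standard diagnostic behind the paper's observation that both schemes self-converge at first order on problems with discontinuities, regardless of their formal order on smooth flow.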
Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C; Lin, T; Caflisch, R
2007-05-22
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time-step errors. Error comparisons between the two methods are presented.
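The Takizuka-Abe model pairs particles at random each step and rotates each pair's relative velocity by a scattering angle theta with tan(theta/2) drawn from a Gaussian whose variance scales with the time step. A schematic NumPy sketch of that structure (simplified: equal masses, a generic magnitude-preserving rotation in place of the published component formulas):

```python
import numpy as np

def rotate(u, theta, phi):
    """Rotate 3-vector u by polar angle theta and azimuth phi about its own
    direction, preserving its magnitude."""
    umag = np.linalg.norm(u)
    u_hat = u / umag
    # Build an orthonormal frame (e1, e2, u_hat).
    helper = np.array([1.0, 0.0, 0.0]) if abs(u_hat[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(u_hat, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(u_hat, e1)
    return umag * (np.cos(theta) * u_hat + np.sin(theta) * (np.cos(phi) * e1 + np.sin(phi) * e2))

def ta_collision_step(v, delta_var, rng):
    """One Takizuka-Abe-style collision step for like particles of equal mass.
    delta_var plays the role of the time-step-dependent variance of tan(theta/2)."""
    idx = rng.permutation(len(v))
    for a, b in zip(idx[0::2], idx[1::2]):
        u = v[a] - v[b]                     # relative velocity of the pair
        theta = 2.0 * np.arctan(rng.normal(0.0, np.sqrt(delta_var)))
        phi = rng.uniform(0.0, 2.0 * np.pi)
        d = 0.5 * (rotate(u, theta, phi) - u)
        v[a] += d   # equal and opposite changes conserve momentum;
        v[b] -= d   # |u| is unchanged, so kinetic energy is conserved too
    return v

rng = np.random.default_rng(1)
v = rng.normal(size=(100, 3))
p0, e0 = v.sum(axis=0).copy(), (v**2).sum()
v = ta_collision_step(v, delta_var=0.05, rng=rng)
print(np.allclose(v.sum(axis=0), p0), np.isclose((v**2).sum(), e0))  # True True
```

The pairwise, magnitude-preserving update is what makes both models conservative by construction; the time-step error the paper analyzes comes from how the scattering-angle distribution approximates the true Coulomb operator over a finite step.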
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
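The contrast between the two error models can be illustrated on synthetic data: when measurement noise actually scales with rain rate, additive-model residuals are strongly heteroscedastic, while residuals of a multiplicative (log-domain) model are nearly constant. A sketch with invented parameters (not the letter's satellite data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily precipitation "truth" (mm), heavy-tailed like real rain.
truth = rng.gamma(shape=0.5, scale=8.0, size=5000) + 0.1

# Simulate a sensor whose error is multiplicative (noise scales with rate).
measured = truth * np.exp(rng.normal(-0.1, 0.3, size=truth.size))

# Additive model: measured = truth + e  ->  residual spread grows with truth.
resid_add = measured - truth

# Multiplicative model: ln(measured) = a + b*ln(truth) + e.
b, a = np.polyfit(np.log(truth), np.log(measured), 1)
resid_mul = np.log(measured) - (a + b * np.log(truth))

# Compare residual spread in the lightest vs heaviest terciles of truth.
lo, hi = np.quantile(truth, [1 / 3, 2 / 3])
def spread_ratio(r):
    return r[truth > hi].std() / r[truth < lo].std()

print(spread_ratio(resid_add) > 3)    # additive residuals: heteroscedastic
print(spread_ratio(resid_mul) < 1.5)  # log-domain residuals: near-constant
```

The non-constant variance of the additive residuals is exactly the "systematic errors leaking into random errors" weakness noted above.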
Sources of Error in Substance Use Prevalence Surveys
Johnson, Timothy P.
2014-01-01
Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
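The flavor of such a priori round-off bounds can be shown with the standard floating-point error model fl(a op b) = (a op b)(1 + d), |d| <= u. The following is a generic textbook bound for recursive summation (not PRECiSA's semantics), checked against an actual computation:

```python
import numpy as np

u = np.finfo(np.float64).eps / 2  # unit round-off for binary64

def sum_roundoff_bound(xs):
    """A priori bound on |fl(sum) - sum| for left-to-right summation,
    via the constant gamma_{n-1} = (n-1)u / (1 - (n-1)u)."""
    n = len(xs)
    gamma = (n - 1) * u / (1 - (n - 1) * u)
    return gamma * sum(abs(x) for x in xs)

xs = [0.1] * 1000
bound = sum_roundoff_bound(xs)
err = abs(sum(xs) - 100.0)  # true sum of these addends is 100 (to within one ulp)
print(err <= bound)  # True: the observed error respects the verified-style bound
```

A tool like PRECiSA tightens and certifies bounds of this kind symbolically, per program path, and ships a proof certificate rather than relying on a hand-derived constant.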
COBE - New sky maps of the early universe
NASA Technical Reports Server (NTRS)
Smoot, G. F.
1991-01-01
This paper presents early results obtained from the first six months of measurements of the cosmic microwave background (CMB) by instruments aboard NASA's Cosmic Background Explorer (COBE) satellite and discusses the implications for cosmology. The three instruments (FIRAS, DMR, and DIRBE) have operated well and produced significant new results. The FIRAS measurement of the CMB spectrum supports the standard big bang nucleosynthesis model. The maps made from the DMR instrument measurements show a surprisingly smooth early universe. The measurements are sufficiently precise that we must pay careful attention to potential systematic errors. The maps of galactic and local emission produced by the DIRBE instrument will be needed to separate foregrounds from extragalactic emission and thus to interpret the results in terms of events in the early universe.
General Aviation Avionics Statistics.
1980-12-01
designed to produce standard errors on these variables at levels specified by the FAA. No controls were placed on the standard errors of the non-design variables. The avionics covered include: Transponder with Mode C Automatic Altitude Reporting Capability (the Transponder Encoding Requirement has been deleted); Two-way Radio; VOR or TACAN Receiver. Remaining 42
Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin
2009-09-01
Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead-spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_e that operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and on the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_e from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.
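The central idea, removing a systematic O(1/N) bias by combining estimates from several chain lengths, can be illustrated generically: plot single-N estimates against 1/N and extrapolate to the intercept. A sketch with invented numbers (the paper's actual estimators are derived from entanglement statistics, not a plain linear fit):

```python
import numpy as np

# Hypothetical single-chain-length estimates with an O(1/N) bias:
# Ne(N) = Ne_inf * (1 + c/N), here with assumed Ne_inf = 85 and c = -30.
N = np.array([100.0, 200.0, 400.0, 800.0])
Ne_single = 85.0 * (1.0 - 30.0 / N)

# Fit Ne(N) = Ne_inf + b/N and read off the intercept, i.e. the 1/N -> 0 limit.
b, Ne_inf = np.polyfit(1.0 / N, Ne_single, 1)
print(round(Ne_inf, 1))  # 85.0: the systematic O(1/N) bias is removed
```

Any single row of Ne_single would be biased by up to 30%; the extrapolation recovers the asymptotic value even from marginally entangled (small-N) systems, which is the practical payoff claimed above.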
Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang
2016-10-14
First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, the dynamic experiments of two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5%, the total flowrate is 5-60 m³/d, and the water-cut is higher than 60%. The maximum absolute value of the full-scale errors is better than 7%, the total flowrate is 2-60 m³/d, and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.
Accurate characterisation of hole size and location by projected fringe profilometry
NASA Astrophysics Data System (ADS)
Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.
2018-06-01
The ability to accurately estimate the location and geometry of holes is often required in the field of quality control and automated assembly. Projected fringe profilometry is a potentially attractive technique on account of being non-contacting, of lower cost, and orders of magnitude faster than the traditional coordinate measuring machine. However, we demonstrate in this paper that fringe projection is susceptible to significant (hundreds of µm) measurement artefacts in the neighbourhood of hole edges, which give rise to errors of a similar magnitude in the estimated hole geometry. A mechanism for the phenomenon is identified based on the finite size of the imaging system’s point spread function and the resulting bias produced near to sample discontinuities in geometry and reflectivity. A mathematical model is proposed, from which a post-processing compensation algorithm is developed to suppress such errors around the holes. The algorithm includes a robust and accurate sub-pixel edge detection method based on a Fourier descriptor of the hole contour. The proposed algorithm was found to reduce significantly the measurement artefacts near the hole edges. As a result, the errors in estimated hole radius were reduced by up to one order of magnitude, to a few tens of µm for hole radii in the range 2–15 mm, compared to those from the uncompensated measurements.
Data mining: Potential applications in research on nutrition and health.
Batterham, Marijka; Neale, Elizabeth; Martin, Allison; Tapsell, Linda
2017-02-01
Data mining enables further insights from nutrition-related research, but caution is required. The aim of this analysis was to demonstrate and compare the utility of data mining methods in classifying a categorical outcome derived from a nutrition-related intervention. Baseline data (23 variables, 8 categorical) on participants (n = 295) in an intervention trial were used to classify participants in terms of meeting the criteria of achieving 10 000 steps per day. Results from classification and regression trees (CARTs), random forests, adaptive boosting, logistic regression, support vector machines and neural networks were compared using area under the curve (AUC) and error assessments. The CART produced the best model when considering the AUC (0.703), overall error (18%) and within-class error (28%). Logistic regression also performed reasonably well compared to the other models (AUC 0.675, overall error 23%, within-class error 36%). All the methods gave different rankings of variables' importance. CART found that body fat, quality of life using the SF-12 Physical Component Summary (PCS) and the cholesterol:HDL ratio were the most important predictors of meeting the 10 000 steps criteria, while logistic regression showed the SF-12 PCS, glucose levels and level of education to be the most significant predictors (P ≤ 0.01). Differing outcomes suggest caution is required with a single data mining method, particularly in a dataset with nonlinear relationships and outliers and when exploring relationships that were not the primary outcomes of the research. © 2017 Dietitians Association of Australia.
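A comparison of this kind can be sketched with scikit-learn on synthetic stand-in data (the predictors, outcome rule, and sizes below are invented, not the trial's), reporting AUC and overall error for a CART-style tree versus logistic regression:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 295 "participants", a few continuous predictors,
# and a binary outcome with one linear and one nonlinear component.
rng = np.random.default_rng(0)
X = rng.normal(size=(295, 5))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=295) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

for name, model in [("CART", cart), ("logistic", logit)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    err = 1.0 - model.score(X_te, y_te)
    print(f"{name}: AUC={auc:.3f} overall error={err:.0%}")
```

With a nonlinear term in the outcome, the two models can rank predictors differently even at similar AUC, which mirrors the paper's caution against relying on a single method.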
Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system, obtained by linear ray tracing, is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system, causing an immediate reduction in the total wavefront error observed at the exit pupil.
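The least-squares logic described above can be sketched in a few lines of NumPy (dimensions and matrices invented for illustration): if sensitivity matrix S maps actuator commands to wavefront changes, minimizing ||e + S u||^2 over commands u yields the gain matrix G = -pinv(S):

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.normal(size=(50, 8))       # sensitivity: 8 actuators -> 50 wavefront samples
G = -np.linalg.pinv(S)             # control gain from the least-squares condition

e = S @ rng.normal(size=8)         # an error state within the actuators' range
u = G @ e                          # "exact control variables" for this error vector
residual = e + S @ u               # wavefront error after applying the commands
print(np.allclose(residual, 0.0))  # True: the correctable error is fully removed
```

For error components outside the actuators' range space, the same gain matrix still gives the minimum-norm least-squares correction, which is the sense in which the total wavefront error is minimized rather than zeroed.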
Multiple imputation of missing fMRI data in whole brain analysis
Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.
2012-01-01
Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. 
Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limit the extent of whole brain coverage and interpretations of results. PMID:22500925
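The pooling step that distinguishes multiple imputation from single replacement can be sketched generically (toy one-voxel data and a simple mean/sd sampling distribution, not the authors' fMRI pipeline), using Rubin's rules to combine estimates across imputed datasets:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for one voxel's group data: 20 subjects, 3 missing values.
beta = rng.normal(loc=0.4, scale=1.0, size=20)
miss = np.array([2, 9, 17])
obs = np.delete(beta, miss)

# Multiple imputation: draw m completed datasets from an informed sampling
# distribution (here simply the observed mean/sd), analyse each, then pool.
m = 50
means, variances = [], []
for _ in range(m):
    imputed = rng.normal(obs.mean(), obs.std(ddof=1), size=miss.size)
    full = np.concatenate([obs, imputed])
    means.append(full.mean())
    variances.append(full.var(ddof=1) / full.size)

qbar = np.mean(means)                       # pooled point estimate
within = np.mean(variances)                 # average within-imputation variance
between = np.var(means, ddof=1)             # between-imputation variance
total_var = within + (1 + 1 / m) * between  # Rubin's total variance
print(total_var > within)  # True: pooling reflects the imputation uncertainty
```

The inflated total variance is why multiple imputation, unlike single mean or neighbor replacement, does not overstate confidence at voxels where data were missing.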
Response effects in the perception of conjunctions of colour and form.
Chmiel, N
1989-01-01
Two experiments addressed the question of whether visual search for a target defined by a conjunction of colour and form requires a central, serial, attentional process, whereas detection of a single feature, such as colour, is preattentive, as proposed by the feature-integration theory of attention. Experiment 1 investigated conjunction and feature search using small array sizes of up to five elements, under conditions which precluded eye movements, in contrast to previous studies. The results were consistent with the theory. Conjunction search showed the effect of adding distractors to the display; the slopes of the curves relating RT to array size were in the approximate ratio of 2:1, consistent with a central, serial search process, exhaustive for absence responses and self-terminating for presence responses. Feature search showed no significant effect of distractors for presence responses. Experiment 2 manipulated the response requirements in conjunction search, using vocal responses in a GO-NO GO procedure, in contrast to Experiment 1, which used key-press responses in a YES-NO procedure. Strikingly, presence-response RT was not affected significantly by the number of distractors in the array. The slope relating RT to array size was 3.92; the absence RT slope was 30.56, producing a slope ratio of approximately 8:1. There was no interaction of errors with array size and the presence and absence conditions, implying that RT-error trade-offs did not produce this slope ratio. This result suggests that feature-integration theory is at least incomplete.
Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay
2016-04-01
Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. 
Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
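A greedy D-optimal selection of validation cases, using only the predictor values, can be sketched as follows (a simplified stand-in for the authors' algorithm, evaluated at the null fit, where the logistic Fisher information is proportional to X'X over the chosen cases):

```python
import numpy as np

def d_optimal_subset(X, n_select, ridge=1e-6):
    """Greedily pick cases that most increase log det of the (regularised)
    information matrix; only predictor values are used, never the responses."""
    n, p = X.shape
    M = ridge * np.eye(p)          # regularised information matrix
    chosen, pool = [], list(range(n))
    for _ in range(n_select):
        best, best_gain = None, -np.inf
        for i in pool:
            x = X[i]
            # Matrix determinant lemma: det(M + x x') = det(M) * (1 + x' M^-1 x)
            gain = 1.0 + x @ np.linalg.solve(M, x)
            if gain > best_gain:
                best, best_gain = i, gain
        M += np.outer(X[best], X[best])
        chosen.append(best)
        pool.remove(best)
    return chosen

# Toy demo: choose 20 of 200 hypothetical records for manual chart review.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 4))
idx = d_optimal_subset(X, 20)
print(len(idx), len(set(idx)))  # 20 20
```

Because selection depends only on X, the error-prone response values never influence which charts get validated, matching the DSCVR premise that only validated responses are trusted for model fitting.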
Isobaric Reconstruction of the Baryonic Acoustic Oscillation
NASA Astrophysics Data System (ADS)
Wang, Xin; Yu, Hao-Ran; Zhu, Hong-Ming; Yu, Yu; Pan, Qiaoyin; Pen, Ue-Li
2017-06-01
In this Letter, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the nonlinear matter density field. Assuming that only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the nonlinear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent and is caused only by the emergence of the transverse component after shell-crossing. As it circumvents the strongest nonlinearity of the density evolution, the reconstructed field is well described by linear theory and immune from the bulk-flow smearing of the BAO signature. Therefore, this algorithm could significantly improve the measurement accuracy of the sound horizon scale s. For a perfect large-scale structure survey at redshift zero without Poisson or instrumental noise, the fractional error Δs/s is reduced by a factor of ~2.7, very close to the ideal limit with the linear power spectrum and Gaussian covariance matrix.
Explaining errors in children's questions.
Rowland, Caroline F
2007-07-01
The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
ERIC Educational Resources Information Center
Broth, Mathias; Lundell, Fanny Forsberg
2013-01-01
In this paper, we consider a student error produced in a French foreign language small-group seminar, involving four Swedish L1 first-term university students of French and a native French teacher. The error in question consists of a mispronunciation of the second vowel of the name "Napoléon" in the midst of a student presentation on the…
Against Structural Constraints in Subject-Verb Agreement Production
ERIC Educational Resources Information Center
Gillespie, Maureen; Pearlmutter, Neal J.
2013-01-01
Syntactic structure has been considered an integral component of agreement computation in language production. In agreement error studies, clause-boundedness (Bock & Cutting, 1992) and hierarchical feature-passing (Franck, Vigliocco, & Nicol, 2002) predict that local nouns within clausal modifiers should produce fewer errors than do those within…
NASA Technical Reports Server (NTRS)
Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.
1989-01-01
The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.
Scene-based nonuniformity correction algorithm based on interframe registration.
Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao
2011-06-01
In this paper, we present a simple and effective scene-based nonuniformity correction (NUC) method for infrared focal plane arrays based on interframe registration. This method estimates the global translation between two adjacent frames and minimizes the mean square error between the two properly registered images, so that any two detectors observing the same scene produce the same output value. In this way, the accumulation of the registration error can be avoided and the NUC can be achieved. The advantages of the proposed algorithm lie in its low computational complexity, low storage requirements, and ability to capture temporal drifts in the nonuniformity parameters. The performance of the proposed technique is thoroughly studied with infrared image sequences with simulated nonuniformity and infrared imagery with real nonuniformity. It shows a significantly fast and reliable fixed-pattern noise reduction and obtains an effective frame-by-frame adaptive estimation of each detector's gain and offset.
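The registration-based correction described above can be sketched as a per-detector update step. This is a simplified, offset-only illustration under an assumed LMS-style update rule; the paper's method also estimates per-detector gain:

```python
import numpy as np

def nuc_offset_update(prev, curr, shift, offset, lr=1.0):
    """One step of scene-based nonuniformity correction by interframe
    registration (illustrative sketch, offsets only).  After a global
    shift (dy, dx) registers two adjacent frames, the two detectors that
    viewed the same scene point should agree once corrected; an
    LMS-style step moves each detector's offset estimate to shrink
    their difference."""
    dy, dx = shift
    h, w = curr.shape
    # Overlapping region of the registered pair:
    # curr[y, x] sees what prev[y - dy, x - dx] saw.
    y0, y1 = max(0, dy), min(h, h + dy)
    x0, x1 = max(0, dx), min(w, w + dx)
    a = curr[y0:y1, x0:x1] - offset[y0:y1, x0:x1]
    b = prev[y0 - dy:y1 - dy, x0 - dx:x1 - dx] \
        - offset[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    err = a - b                       # same scene, so this should be ~0
    offset[y0:y1, x0:x1] += lr * err  # nudge offsets toward agreement
    return offset
```

Repeating this over many frame pairs lets the offset field converge while avoiding the accumulation of registration error, since each update only compares two properly registered frames.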
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, S.; Jark, W.; Takacs, P.Z.
1995-02-01
Metrology requirements for optical components for third generation synchrotron sources are taxing the state-of-the-art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long Trace Profiler II (LTP II), and have demonstrated that, with some simple modifications, we can significantly reduce their effect and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ader, C.; Voirin, E.; McGee, M.
An error was found in an edge stress coefficient used to calculate stresses in thin windows. This error is present in "Roark's Formulas for Stress and Strain" 7th and 8th Editions; the 6th Edition is correct. This guideline specifically discusses the difference in a coefficient used to calculate edge stress in the 6th Edition compared with the 7th and 8th Editions. The case in question is in Chapter 10 (Flat Plates), under "Circular plates under distributed load producing large deflections," Case 3: "Fixed and held. Uniform pressure q over entire plate." The coefficient for a fixed edge condition in the 6th Edition is K4 = 0.476, while in the 7th and 8th Editions the coefficient is 1.73, which is a significant difference.
Bilton, Timothy P.; Schofield, Matthew R.; Black, Michael A.; Chagné, David; Wilcox, Phillip L.; Dodds, Ken G.
2018-01-01
Next-generation sequencing is an efficient method that allows for substantially more markers than previous technologies, providing opportunities for building high-density genetic linkage maps, which facilitate the development of nonmodel species’ genomic assemblies and the investigation of their genes. However, constructing genetic maps using data generated via high-throughput sequencing technology (e.g., genotyping-by-sequencing) is complicated by the presence of sequencing errors and genotyping errors resulting from missing parental alleles due to low sequencing depth. If unaccounted for, these errors lead to inflated genetic maps. In addition, map construction in many species is performed using full-sibling family populations derived from the outcrossing of two individuals, where unknown parental phase and varying segregation types further complicate construction. We present a new methodology for modeling low coverage sequencing data in the construction of genetic linkage maps using full-sibling populations of diploid species, implemented in a package called GUSMap. Our model is based on the Lander–Green hidden Markov model but extended to account for errors present in sequencing data. We were able to obtain accurate estimates of the recombination fractions and overall map distance using GUSMap, while most existing mapping packages produced inflated genetic maps in the presence of errors. Our results demonstrate the feasibility of using low coverage sequencing data to produce genetic maps without requiring extensive filtering of potentially erroneous genotypes, provided that the associated errors are correctly accounted for in the model. PMID:29487138
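The error-aware likelihood such a model needs can be illustrated with a generic binomial read-count emission probability. This is an assumed, minimal model of the low-depth/sequencing-error idea, not GUSMap's actual implementation:

```python
from math import comb

def read_emission(ref_reads, alt_reads, dosage, eps=0.01):
    """P(observed allele read counts | true genotype) at one biallelic
    SNP -- a sketch of the kind of emission term a sequencing-error-aware
    HMM uses.  `dosage` is the true alternate-allele count (0, 1 or 2);
    each read samples one of the two homologs at random and is miscalled
    with probability eps.  At low depth a heterozygote can easily show
    reads for only one allele, which is why ignoring this inflates maps."""
    n = ref_reads + alt_reads
    p_alt = {0: eps, 1: 0.5, 2: 1.0 - eps}[dosage]  # chance a read shows alt
    return comb(n, alt_reads) * p_alt**alt_reads * (1.0 - p_alt)**(n - alt_reads)
```

Plugging such emissions into a Lander-Green style hidden Markov model lets erroneous and missing-allele genotypes contribute probabilistically instead of being treated as recombinations.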
Defining and Verifying Research Grade Airborne Laser Swath Mapping (ALSM) Observations
NASA Astrophysics Data System (ADS)
Carter, W. E.; Shrestha, R. L.; Slatton, C. C.
2004-12-01
The first and primary goal of the National Science Foundation (NSF) supported Center for Airborne Laser Mapping (NCALM), operated jointly by the University of Florida and the University of California, Berkeley, is to make "research grade" ALSM data widely available at affordable cost to the national scientific community. Cost aside, researchers need to know what NCALM considers research grade data and how the quality of the data is verified, to be able to determine the likelihood that the data they receive will meet their project specific requirements. Given the current state of the technology it is reasonable to expect a well planned and executed survey to produce surface elevations with uncertainties less than 10 centimeters and horizontal uncertainties of a few decimeters. Various components of the total error are generally associated with the aircraft trajectory, aircraft orientation, or laser vectors. Aircraft trajectory error is dependent largely on the Global Positioning System (GPS) observations, aircraft orientation on Inertial Measurement Unit (IMU) observations, and laser vectors on the scanning and ranging instrumentation. In addition to the issue of the precision or accuracy of the coordinates of the surface points, consideration must also be given to the point-to-point spacing and voids in the coverage. The major sources of error produce distinct artifacts in the data set. For example, aircraft trajectory errors tend to change slowly as the satellite constellation geometry varies, producing slopes within swaths and offsets between swaths. Roll, pitch and yaw biases in the IMU observations tend to persist through whole flights, and create distinctive artifacts in the swath overlap areas. Errors in the zero-point and scale of the laser scanner cause the edges of swaths to turn up or down. Range walk errors cause offsets between bright and dark surfaces, making painted stripes appear to float above the dark surfaces of roads.
The three keys to producing research grade ALSM observations are calibration, calibration, calibration. In this paper we discuss our general calibrations procedures, give examples of project specific calibration procedures, and discuss the use of ground truth data to verify the accuracy of ALSM surface coordinates.
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such slight improvement in accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
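The comparison metrics named above, Mean Absolute Percentage Error plus Bland-Altman bias and limits of agreement, can be sketched as follows (a generic implementation with caller-supplied data, not the study's values):

```python
import numpy as np

def agreement_stats(predicted, reference):
    """Mean absolute percentage error (MAPE), Bland-Altman bias, and the
    95% limits of agreement for a predictor (e.g. an equation-based TBW
    estimate) against a reference method (e.g. dilution)."""
    predicted = np.asarray(predicted, float)
    reference = np.asarray(reference, float)
    mape = np.mean(np.abs(predicted - reference) / reference) * 100.0
    diff = predicted - reference
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mape, bias, (bias - half_width, bias + half_width)
```

MAPE weights each subject's error relative to their own reference value, which is why it can separate methods whose population-level limits of agreement look similar.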
Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki
2017-02-01
This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of whole-body and body segments in children including overweight individuals. The FFM and impedance (Z) values of arms, trunk, legs, and whole body were determined using a dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (BI index) for each of the whole-body and 3 segments to develop the prediction equations of the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and whole body was significantly correlated to the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values and systematic errors, with an exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of each of whole-body and body segments in children including overweight individuals, although the application for estimating arm FFM in overweight individuals requires a certain modification.
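The BI-index regression step described above can be sketched as a simple linear fit of measured FFM on (length)²/Z. This is purely illustrative; the coefficients come from whatever data is passed in, not from the paper:

```python
import numpy as np

def fit_segmental_ffm(length_cm, impedance_ohm, ffm_kg):
    """Fit FFM = slope * (length^2 / Z) + intercept for one body segment,
    the form of prediction equation developed per segment in the study.
    Returns the fitted coefficients and in-sample predictions."""
    bi_index = np.asarray(length_cm, float) ** 2 / np.asarray(impedance_ohm, float)
    slope, intercept = np.polyfit(bi_index, np.asarray(ffm_kg, float), 1)
    predicted = slope * bi_index + intercept
    return slope, intercept, predicted
```

In practice the equation would be developed on one group and then checked on a cross-validation group for systematic error, as the abstract describes.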
Working memory load impairs the evaluation of behavioral errors in the medial frontal cortex.
Maier, Martin E; Steinhauser, Marco
2017-10-01
Early error monitoring in the medial frontal cortex enables error detection and the evaluation of error significance, which helps prioritize adaptive control. This ability has been assumed to be independent from central capacity, a limited pool of resources assumed to be involved in cognitive control. The present study investigated whether error evaluation depends on central capacity by measuring the error-related negativity (Ne/ERN) in a flanker paradigm while working memory load was varied on two levels. We used a four-choice flanker paradigm in which participants had to classify targets while ignoring flankers. Errors could be due to responding either to the flankers (flanker errors) or to none of the stimulus elements (nonflanker errors). With low load, the Ne/ERN was larger for flanker errors than for nonflanker errors, an effect that has previously been interpreted as reflecting differential significance of these error types. With high load, no such effect of error type on the Ne/ERN was observable. Our findings suggest that working memory load does not impair the generation of an Ne/ERN per se but rather impairs the evaluation of error significance. They demonstrate that error monitoring is composed of capacity-dependent and capacity-independent mechanisms. © 2017 Society for Psychophysiological Research.
National suicide rates a century after Durkheim: do we know enough to estimate error?
Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W
2010-06-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.
The Outcome of ATC Message Length and Complexity on En Route Pilot Readback Performance
2009-01-01
…ratings as ordinal data produced α = .945, indicating high inter-coder agreement. Sector Descriptions: Chicago ARTCC. The transcriptions are of pilot/con… were categorized according to three types of errors: Errors of omission only (67.4%), Readback errors only (0.9…
Inducing Speech Errors in Dysarthria Using Tongue Twisters
ERIC Educational Resources Information Center
Kember, Heather; Connaghan, Kathryn; Patel, Rupal
2017-01-01
Although tongue twisters have been widely use to study speech production in healthy speakers, few studies have employed this methodology for individuals with speech impairment. The present study compared tongue twister errors produced by adults with dysarthria and age-matched healthy controls. Eight speakers (four female, four male; mean age =…
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
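The setting above, only a handful of observed errors in a huge number of trials, is exactly where normal-approximation confidence intervals break down. An exact binomial (Clopper-Pearson style) upper limit illustrates the problem setting; this is a standard construction, not Massey's own extension:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def error_prob_upper(k, n, conf=0.95):
    """One-sided upper confidence limit on the error probability when
    only k errors were observed in n decoding trials, found by bisection
    on the binomial CDF: the largest p still consistent with seeing so
    few errors at the given confidence level."""
    lo, hi = 0.0, 1.0
    for _ in range(60):                      # 60 halvings: ~1e-18 precision
        mid = (lo + hi) / 2.0
        if binom_cdf(k, n, mid) > 1.0 - conf:
            lo = mid                         # p too small to be excluded
        else:
            hi = mid
    return hi
```

For k = 0 this reduces to the familiar closed form 1 - (1 - conf)^(1/n), and even two observed errors shift the bound substantially, in line with the abstract's point about the significance of a few decoding errors.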
A map overlay error model based on boundary geometry
Gaeuman, D.; Symanzik, J.; Schmidt, J.C.
2005-01-01
An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.
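The boundary-displacement simulation idea can be sketched as a Monte Carlo experiment: perturb a polygon's vertices with spatially correlated noise and observe the spread of the resulting areas. The AR(1) correlation along the boundary is an assumed stand-in for the paper's segment-scale spatial dependence:

```python
import numpy as np

def shoelace(poly):
    """Polygon area by the shoelace formula (vertices as an (n, 2) array)."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def overlay_area_error(poly, sigma, rho=0.8, trials=2000, seed=0):
    """Monte Carlo propagation of boundary coordinate error to polygon
    area.  Vertex displacements are correlated along the boundary via an
    AR(1) process with parameter rho, rather than independent, mimicking
    segment-scale spatial dependence.  Returns (area std, |mean area
    shift|) over the simulated realizations."""
    rng = np.random.default_rng(seed)
    base = shoelace(poly)
    n = len(poly)
    areas = np.empty(trials)
    for t in range(trials):
        noise = np.empty((n, 2))
        noise[0] = rng.normal(0.0, sigma, 2)
        for i in range(1, n):   # correlated displacement along the boundary
            innov = rng.normal(0.0, sigma * (1.0 - rho**2) ** 0.5, 2)
            noise[i] = rho * noise[i - 1] + innov
        areas[t] = shoelace(poly + noise)
    return areas.std(), abs(areas.mean() - base)
```

Raising rho makes neighboring vertices move together, which is the mechanism by which correlated coordinate errors generate less sliver area than independent ones.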
Low energy atmospheric muon neutrinos in MACRO
NASA Astrophysics Data System (ADS)
Ambrosio, M.; Antolini, R.; Auriemma, G.; Bakari, D.; Baldini, A.; Barbarino, G. C.; Barish, B. C.; Battistoni, G.; Bellotti, R.; Bemporad, C.; Bernardini, P.; Bilokon, H.; Bisi, V.; Bloise, C.; Bower, C.; Brigida, M.; Bussino, S.; Cafagna, F.; Calicchio, M.; Campana, D.; Carboni, M.; Cecchini, S.; Cei, F.; Chiarella, V.; Choudhary, B. C.; Coutu, S.; De Cataldo, G.; Dekhissi, H.; De Marzo, C.; De Mitri, I.; Derkaoui, J.; De Vincenzi, M.; Di Credico, A.; Erriquez, O.; Favuzzi, C.; Forti, C.; Fusco, P.; Giacomelli, G.; Giannini, G.; Giglietto, N.; Giorgini, M.; Grassi, M.; Gray, L.; Grillo, A.; Guarino, F.; Gustavino, C.; Habig, A.; Hanson, K.; Heinz, R.; Iarocci, E.; Katsavounidis, E.; Katsavounidis, I.; Kearns, E.; Kim, H.; Kyriazopoulou, S.; Lamanna, E.; Lane, C.; Levin, D. S.; Lipari, P.; Longley, N. P.; Longo, M. J.; Loparco, F.; Maaroufi, F.; Mancarella, G.; Mandrioli, G.; Margiotta, A.; Marini, A.; Martello, D.; Marzari-Chiesa, A.; Mazziotta, M. N.; Michael, D. G.; Mikheyev, S.; Miller, L.; Monacelli, P.; Montaruli, T.; Monteno, M.; Mufson, S.; Musser, J.; Nicolò, D.; Nolty, R.; Orth, C.; Osteria, G.; Ouchrif, M.; Palamara, O.; Patera, V.; Patrizii, L.; Pazzi, R.; Peck, C. W.; Perrone, L.; Petrera, S.; Pistilli, P.; Popa, V.; Rainò, A.; Reynoldson, J.; Ronga, F.; Satriano, C.; Satta, L.; Scapparone, E.; Scholberg, K.; Sciubba, A.; Serra, P.; Sioli, M.; Sirri, G.; Sitta, M.; Spinelli, P.; Spinetti, M.; Spurio, M.; Steinberg, R.; Stone, J. L.; Sulak, L. R.; Surdo, A.; Tarlè, G.; Togo, V.; Vakili, M.; Vilela, E.; Walter, C. W.; Webb, R.
2000-04-01
We present the measurement of two event samples induced by atmospheric νμ of average energy
Monte Carlo errors with less errors
NASA Astrophysics Data System (ADS)
Wolff, Ulli; Alpha Collaboration
2004-01-01
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica), allowing one to judge their consistency.
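The core of the method, summing the estimated autocorrelation function into an integrated autocorrelation time that inflates the naive error of the mean, can be sketched as follows. This omits the paper's automatic windowing and replica handling and uses a fixed, caller-chosen window:

```python
import numpy as np

def autocorr_error(series, w_max):
    """Error of the mean for a correlated Monte Carlo time series by
    summing the estimated autocorrelation function up to lag w_max.
    Returns (mean, error, tau_int).  For independent data tau_int ~ 0.5
    and the error reduces to the usual sigma / sqrt(N)."""
    a = np.asarray(series, float)
    n = len(a)
    d = a - a.mean()
    # Autocovariance Gamma(t) estimated at each lag up to the window
    gamma = np.array([np.dot(d[:n - t], d[t:]) / (n - t) for t in range(w_max)])
    tau_int = 0.5 + gamma[1:].sum() / gamma[0]   # integrated autocorrelation time
    err = np.sqrt(2.0 * tau_int * gamma[0] / n)  # naive error inflated by 2*tau
    return a.mean(), err, tau_int
```

Choosing w_max is the delicate part: too small truncates real correlations and underestimates the error, too large adds noise, which is precisely what the paper's automatic windowing addresses.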
The Incorporation and Initialization of Cloud Water/ice in AN Operational Forecast Model
NASA Astrophysics Data System (ADS)
Zhao, Qingyun
Quantitative precipitation forecasts have been one of the weakest aspects of numerical weather prediction models. Theoretical studies show that the errors in precipitation calculation can arise from three sources: errors in the large-scale forecasts of primary variables, errors in the crude treatment of condensation/evaporation and precipitation processes, and errors in the model initial conditions. A new precipitation parameterization scheme has been developed to investigate the forecast value of improved precipitation physics via the introduction of cloud water and cloud ice into a numerical prediction model. The main feature of this scheme is the explicit calculation of cloud water and cloud ice in both the convective and stratiform precipitation parameterization. This scheme has been applied to the eta model at the National Meteorological Center. Four extensive tests have been performed. The statistical results showed a significant improvement in the model precipitation forecasts. Diagnostic studies suggest that the inclusion of cloud ice is important in transferring water vapor to precipitation and in the enhancement of latent heat release; the latter subsequently affects the vertical motion field significantly. Since three-dimensional cloud data is absent from the analysis/assimilation system for most numerical models, a method has been proposed to incorporate observed precipitation and nephanalysis data into the data assimilation system to obtain the initial cloud field for the eta model. In this scheme, the initial moisture and vertical motion fields are also improved at the same time as cloud initialization. The physical initialization is performed in a dynamical initialization framework that uses the Newtonian dynamical relaxation method to nudge the model's wind and mass fields toward analyses during a 12-hour data assimilation period. 
Results from a case study showed that a realistic cloud field was produced by this method at the end of the data assimilation period. Precipitation forecasts have been significantly improved as a result of the improved initial cloud, moisture and vertical motion fields.
Trommer, J.T.; Loper, J.E.; Hammett, K.M.
1996-01-01
Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the rational method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent.
The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model, produced average errors of 44.6 and 42.7 percent respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454.
The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for ea
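The Rational Method evaluated above computes peak discharge as Q = C * i * A. A minimal sketch, with example values rather than the study's data:

```python
def rational_peak_discharge(runoff_coeff, intensity_in_per_hr, area_acres):
    """Rational Method peak discharge, Q = C * i * A: with rainfall
    intensity i in inches/hour and drainage area A in acres, Q comes out
    in cubic feet per second (the exact unit conversion factor is 1.008,
    customarily taken as 1).  C is the dimensionless runoff coefficient."""
    return runoff_coeff * intensity_in_per_hr * area_acres
```

Calibration in the study amounted to adjusting C per watershed (averaging 0.39 for urban basins) until computed and observed peaks matched; the formula itself is unchanged.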
Transient Faults in Computer Systems
NASA Technical Reports Server (NTRS)
Masson, Gerald M.
1993-01-01
A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
A spectral filter for ESMR's sidelobe errors
NASA Technical Reports Server (NTRS)
Chesters, D.
1979-01-01
Fourier analysis was used to remove periodic errors from a series of NIMBUS-5 electronically scanned microwave radiometer brightness temperatures. The observations were all taken from the midnight orbits over fixed sites in the Australian grasslands. The angular dependence of the data indicates calibration errors consisted of broad sidelobes and some miscalibration as a function of beam position. Even though an angular recalibration curve cannot be derived from the available data, the systematic errors can be removed with a spectral filter. The 7 day cycle in the drift of the orbit of NIMBUS-5, coupled to the look-angle biases, produces an error pattern with peaks in its power spectrum at the weekly harmonics. About plus or minus 4 K of error is removed by simply blocking the variations near two- and three-cycles-per-week.
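Blocking the variations near whole-cycle-per-week harmonics amounts to a simple FFT notch filter. A generic sketch of that operation (the bin bookkeeping here is an assumption about evenly sampled daily data, not the paper's exact processing):

```python
import numpy as np

def block_harmonics(series, period, n_harmonics=3):
    """Spectral notch filter: FFT an evenly sampled series, zero the
    bins at the first n_harmonics harmonics of the given period (e.g.
    a 7-day orbital drift cycle), and inverse-FFT.  Removes periodic
    systematic error while leaving the rest of the spectrum intact."""
    f = np.fft.rfft(series)
    n = len(series)
    for h in range(1, n_harmonics + 1):
        k = round(n * h / period)   # bin index of h cycles per period
        if k < len(f):
            f[k] = 0.0
    return np.fft.irfft(f, n)
```

This works cleanly when the record length is an integer multiple of the period, so each harmonic falls on a single FFT bin; otherwise the notch must span neighboring bins.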
Task-dependent signal variations in EEG error-related potentials for brain-computer interfaces
NASA Astrophysics Data System (ADS)
Iturrate, I.; Montesano, L.; Minguez, J.
2013-04-01
Objective. A major difficulty of brain-computer interface (BCI) technology is dealing with the noise of EEG and its signal variations. Previous works studied time-dependent non-stationarities for BCIs in which the user's mental task was independent of the device operation (e.g., the mental task was motor imagery and the operational task was a speller). However, there are some BCIs, such as those based on error-related potentials, where the mental and operational tasks are dependent (e.g., the mental task is to assess the device action and the operational task is the device action itself). The dependence between the mental task and the device operation could introduce a new source of signal variations when the operational task changes, which has not been studied yet. The aim of this study is to analyse task-dependent signal variations and their effect on EEG error-related potentials. Approach. The work analyses the EEG variations on the three design steps of BCIs: an electrophysiology study to characterize the existence of these variations, a feature distribution analysis and a single-trial classification analysis to measure the impact on the final BCI performance. Results and significance. The results demonstrate that a change in the operational task produces variations in the potentials, even when EEG activity exclusively originated in brain areas related to error processing is considered. Consequently, the extracted features from the signals vary, and a classifier trained with one operational task presents a significant loss of performance for other tasks, requiring calibration or adaptation for each new task. In addition, a new calibration for each of the studied tasks rapidly outperforms adaptive techniques designed in the literature to mitigate the EEG time-dependent non-stationarities.
Demiral, Şükrü Barış; Golosheykin, Simon; Anokhin, Andrey P
2017-05-01
Detection and evaluation of the mismatch between the intended and actually obtained result of an action (reward prediction error) is an integral component of adaptive self-regulation of behavior. Extensive human and animal research has shown that evaluation of action outcome is supported by a distributed network of brain regions in which the anterior cingulate cortex (ACC) plays a central role, and the integration of distant brain regions into a unified feedback-processing network is enabled by long-range phase synchronization of cortical oscillations in the theta band. Neural correlates of feedback processing are associated with individual differences in normal and abnormal behavior, however, little is known about the role of genetic factors in the cerebral mechanisms of feedback processing. Here we examined genetic influences on functional cortical connectivity related to prediction error in young adult twins (age 18, n=399) using event-related EEG phase coherence analysis in a monetary gambling task. To identify prediction error-specific connectivity pattern, we compared responses to loss and gain feedback. Monetary loss produced a significant increase of theta-band synchronization between the frontal midline region and widespread areas of the scalp, particularly parietal areas, whereas gain resulted in increased synchrony primarily within the posterior regions. Genetic analyses showed significant heritability of frontoparietal theta phase synchronization (24 to 46%), suggesting that individual differences in large-scale network dynamics are under substantial genetic control. We conclude that theta-band synchronization of brain oscillations related to negative feedback reflects genetically transmitted differences in the neural mechanisms of feedback processing. To our knowledge, this is the first evidence for genetic influences on task-related functional brain connectivity assessed using direct real-time measures of neuronal synchronization. 
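Phase synchronization of the kind measured here is commonly quantified by the length of the mean phase-difference vector across trials or time, a value between 0 (no synchrony) and 1 (perfect phase locking). A generic sketch, not the study's exact pipeline:

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """Phase synchronization between two channels' instantaneous phases
    (e.g. theta-band-filtered frontal-midline and parietal EEG): the
    magnitude of the mean unit phasor of the phase differences.
    Constant phase lag gives 1; uniformly scattered lags give ~0."""
    return np.abs(np.mean(np.exp(1j * (np.asarray(phase_a) - np.asarray(phase_b)))))
```

Instantaneous phases are typically obtained by band-pass filtering to the theta band and applying a Hilbert or wavelet transform before this step.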
Copyright © 2016 Elsevier B.V. All rights reserved.
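Phase synchronization of the kind analyzed here is commonly quantified per electrode pair from instantaneous phase differences. The abstract does not name the exact estimator, so the phase-locking value below is an illustrative stand-in, not necessarily the authors' measure:

```python
import cmath

def phase_locking_value(phase_a, phase_b):
    """Phase-locking value (PLV) between two channels.

    Inputs are instantaneous phases in radians (e.g., from a Hilbert
    transform of theta-band-filtered EEG). The PLV is the modulus of the
    mean phase-difference vector: 1 means a perfectly consistent phase
    lag, values near 0 mean no stable phase relation.
    """
    vectors = [cmath.exp(1j * (a - b)) for a, b in zip(phase_a, phase_b)]
    return abs(sum(vectors) / len(vectors))
```

A constant phase lag gives PLV = 1 regardless of the lag itself, which is why such measures index synchrony rather than amplitude coupling.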
Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction
NASA Astrophysics Data System (ADS)
Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun
2018-07-01
People are increasingly becoming accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces huge technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selection of reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographical information of the crowd-sourced pictures, with the proposed method improving the median error from 256.7 m to 69.0 m, and the percentage of geo-localized query pictures under a 50 m error from 17.2% to 43.2%, compared with the previous method. Another finding is that, with respect to the causes of reconstruction error, closer distances from the cameras to the main objects in query pictures tend to produce lower errors, and the component of error parallel to the road contributes more to the total error. The proposed method is not limited to small areas, and could be expanded to cities and larger areas owing to its flexible parameters.
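The road-parallel/road-perpendicular error decomposition reported above is a projection of the 2-D error vector onto a unit vector along the road; a minimal sketch (coordinate frame and names are our own, not the paper's):

```python
import math

def decompose_error(est, truth, road_dir):
    """Split a 2-D geo-localization error into components parallel and
    perpendicular to the road direction (normalized internally).

    est, truth: (x, y) positions; road_dir: (dx, dy) along the road.
    Returns signed (parallel, perpendicular) components in the same units.
    """
    ex, ey = est[0] - truth[0], est[1] - truth[1]
    dx, dy = road_dir
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    parallel = ex * dx + ey * dy        # scalar projection onto the road
    perpendicular = -ex * dy + ey * dx  # signed rejection (across-road)
    return parallel, perpendicular
```

The two components recombine to the total error via the Pythagorean identity, so comparing their magnitudes over many query pictures shows which direction dominates.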
Ultrasonographic Fetal Weight Estimation: Should Macrosomia-Specific Formulas Be Utilized?
Porter, Blake; Neely, Cherry; Szychowski, Jeff; Owen, John
2015-08-01
This study aims to derive an estimated fetal weight (EFW) formula in macrosomic fetuses, compare its accuracy to the 1986 Hadlock IV formula, and assess whether including maternal diabetes (MDM) improves estimation. Retrospective review of nonanomalous live-born singletons with birth weight (BWT) ≥ 4 kg and biometry within 14 days of birth. Formula accuracy included: (1) mean error (ME = EFW - BWT), (2) absolute mean error (AME = absolute value of [1]), and (3) mean percent error (MPE, [1]/BWT × 100%). Using ln(BWT) as the dependent variable, multivariable linear regression produced a macrosomia-specific formula in a "training" dataset, which was verified with "validation" data. Formulas specific for MDM were also developed. Among the 403 pregnancies, birth gestational age was 39.5 ± 1.4 weeks, and median BWT was 4,240 g. The macrosomic formula from the training data (n = 201) had associated ME = 54 ± 284 g, AME = 234 ± 167 g, and MPE = 1.6 ± 6.2%; evaluation in the validation dataset (n = 202) showed similar errors. The Hadlock formula had associated ME = -369 ± 422 g, AME = 451 ± 332 g, and MPE = -8.3 ± 9.3% (all p < 0.0001). Diabetes-specific formula errors were similar to the macrosomic formula errors (all p = NS). With BWT ≥ 4 kg, the macrosomic formula was significantly more accurate than Hadlock IV, which systematically underestimates fetal weight. Diabetes-specific formulas did not improve accuracy. A macrosomia-specific formula should be considered when macrosomia is suspected.
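The three accuracy measures reduce to a few lines of arithmetic; a minimal sketch with made-up weights (not the study's data):

```python
def accuracy_metrics(efw, bwt):
    """Formula accuracy metrics as defined in the study.

    efw: estimated fetal weights (g); bwt: birth weights (g).
    Returns (ME, AME, MPE): mean error (EFW - BWT), absolute mean error,
    and mean percent error relative to birth weight.
    """
    n = len(efw)
    signed = [e - b for e, b in zip(efw, bwt)]
    me = sum(signed) / n
    ame = sum(abs(s) for s in signed) / n
    mpe = sum(s / b for s, b in zip(signed, bwt)) / n * 100.0
    return me, ame, mpe
```

A systematic underestimate, like the one reported for Hadlock IV on macrosomic fetuses, shows up as strongly negative ME and MPE even when AME looks moderate.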
NASA Astrophysics Data System (ADS)
Wu, Cheng; Zhen Yu, Jian
2018-03-01
Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated, and the degree of bias is more pronounced with a low-R² XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If the a priori error in one of the variables is unknown, or the described measurement error cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.
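Among the five techniques, Deming regression has a closed form once the error-variance ratio is fixed; a pure-Python sketch of the textbook estimator (variable names are ours; here lam = var(Y errors) / var(X errors), with lam = 1 giving orthogonal regression):

```python
import math

def deming_fit(x, y, lam=1.0):
    """Deming regression slope and intercept.

    lam is the assumed ratio of measurement-error variances,
    var(Y errors) / var(X errors). Uses the standard closed-form
    maximum-likelihood solution for an errors-in-variables line.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - lam * sxx
             + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept
```

With noise-free data the choice of lam is immaterial; with noisy data a wrong lam biases both coefficients, which is the λ sensitivity the authors probe.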
The Computational Science Environment (CSE)
2009-08-01
supported CSE platforms. Developers can also build against different versions of a particular package (e.g., Python-2.4 vs. Python-2.5) via a... 8.2.1 TK Testing Error and Workaround It has been found that TK tends to produce more testing errors when using KDE, and in some instances, the test... suite freezes when reaching the TK select test. These issues have not been seen when using Gnome. 8.2.2 VTK Testing Error and Workaround VTK test
Dissociable Genetic Contributions to Error Processing: A Multimodal Neuroimaging Study
Agam, Yigal; Vangel, Mark; Roffman, Joshua L.; Gallagher, Patience J.; Chaponis, Jonathan; Haddad, Stephen; Goff, Donald C.; Greenberg, Jennifer L.; Wilhelm, Sabine; Smoller, Jordan W.; Manoach, Dara S.
2014-01-01
Background Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN), an event-related potential, and functional MRI activation of the dorsal anterior cingulate cortex (dACC). While theorized to reflect the same neural process, recent evidence suggests that the ERN arises from the posterior cingulate cortex, not the dACC. Here, we tested the hypothesis that these two error markers also have different genetic mediation. Methods We measured both error markers in a sample of 92 participants comprising healthy individuals and those with diagnoses of schizophrenia, obsessive-compulsive disorder or autism spectrum disorder. Participants performed the same task during functional MRI and simultaneously acquired magnetoencephalography and electroencephalography. We examined the mediation of the error markers by two single nucleotide polymorphisms: dopamine D4 receptor (DRD4) C-521T (rs1800955), which has been associated with the ERN, and methylenetetrahydrofolate reductase (MTHFR) C677T (rs1801133), which has been associated with error-related dACC activation. We then compared the effects of each polymorphism on the two error markers modeled as a bivariate response. Results In the schizophrenia and obsessive-compulsive disorder groups, we replicated our previous report of a posterior cingulate source of the ERN in healthy participants. The effect of genotype on error markers did not differ significantly by diagnostic group. DRD4 C-521T allele load had a significant linear effect on ERN amplitude, but not on dACC activation, and this difference was significant. MTHFR C677T allele load had a significant linear effect on dACC activation but not on ERN amplitude, but the difference in effects on the two error markers was not significant. Conclusions DRD4 C-521T, but not MTHFR C677T, had a significant differential effect on the two canonical error markers.
Together with the anatomical dissociation between the ERN and error-related dACC activation, these findings suggest that these error markers have different neural and genetic mediation. PMID:25010186
Optimizing dynamic downscaling in one-way nesting using a regional ocean model
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun
2016-10-01
Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational forecasting of sea weather at regional scales and for projecting future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on errors resulting from differences in spatial grids, updating intervals and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain size. Finally, the suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
NASA Astrophysics Data System (ADS)
Muir, J.; Phinn, S. R.; Armston, J.; Scarth, P.; Eyre, T.
2014-12-01
Coarse woody debris (CWD) provides important habitat for many species and plays a vital role in nutrient cycling within an ecosystem. In addition, CWD makes an important contribution to forest biomass and fuel loads. Airborne or space based remote sensing instruments typically do not detect CWD beneath the forest canopy. Terrestrial laser scanning (TLS) provides a ground based method for three-dimensional (3-D) reconstruction of surface features and CWD. This research produced a 3-D reconstruction of the ground surface and automatically classified coarse woody debris from registered TLS scans. The outputs will be used to inform the development of a site-based index for the assessment of forest condition, and quantitative assessments of biomass and fuel loads. A survey grade terrestrial laser scanner (Riegl VZ400) was used to scan 13 positions in an open eucalypt woodland site at Karawatha Forest Park, near Brisbane, Australia. Scans were registered, and a digital surface model (DSM) produced using an intensity threshold and an iterative morphological filter. The DSMs produced from single scans were compared to the registered multi-scan point cloud using standard error metrics, including Root Mean Squared Error (RMSE), Mean Squared Error (MSE), range, absolute error and signed error. In addition, the DSM was compared to a Digital Elevation Model (DEM) produced from Airborne Laser Scanning (ALS). Coarse woody debris was subsequently classified from the DSM using laser pulse properties, including width and amplitude, as well as point spatial relationships (e.g. nearest neighbour slope vectors). Validation of the coarse woody debris classification was completed using true-colour photographs co-registered to the TLS point cloud. The volume and length of the coarse woody debris were calculated from the classified point cloud.
A representative network of TLS sites will allow for up-scaling to large area assessment using airborne or space based sensors to monitor forest condition, biomass and fuel loads.
NASA Technical Reports Server (NTRS)
Skidmore, Trent A.
1994-01-01
The results of several case studies using the Global Positioning System coverage model developed at Ohio University are summarized. Presented are results pertaining to outage area, outage dynamics, and availability. Input parameters to the model include the satellite orbit data, service area of interest, geometry requirements, and horizon and antenna mask angles. It is shown for precision-landing Category 1 requirements that the planned GPS 21 Primary Satellite Constellation produces significant outage area and unavailability. It is also shown that a decrease in the user equivalent range error dramatically decreases outage area and improves the service availability.
A procedure for the significance testing of unmodeled errors in GNSS observations
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling
2018-01-01
It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies mainly focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. Therefore, the first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors through a combination of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
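The time-domain check mentioned above, the Allan variance, distinguishes noise types by how variability scales with averaging time; a minimal sketch of a non-overlapping estimator applied to simulated white noise (our own toy code, not the authors' implementation):

```python
import random

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of series y at averaging factor m.

    Average y in consecutive blocks of m samples, then take the square
    root of half the mean squared difference of consecutive block means.
    White noise gives sigma(m) falling roughly as 1/sqrt(m); flicker or
    random-walk components flatten or raise the curve, which is how the
    noise type of residual (unmodeled) errors can be diagnosed.
    """
    means = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(b - a) ** 2 for a, b in zip(means, means[1:])]
    return (sum(diffs) / (2 * len(diffs))) ** 0.5

rng = random.Random(42)
white = [rng.gauss(0.0, 1.0) for _ in range(4096)]
a1, a16 = allan_deviation(white, 1), allan_deviation(white, 16)
```

For pure white noise, a16 should be close to a1 / 4; a flatter decay would instead point at correlated residuals such as multipath.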
Annoyance to Noise Produced by a Distributed Electric Propulsion High-Lift System
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Palumbo, Daniel L.; Rathsam, Jonathan; Christian, Andrew; Rafaelof, Menachem
2017-01-01
A psychoacoustic test was performed using simulated sounds from a distributed electric propulsion aircraft concept to help understand factors associated with human annoyance. A design space spanning the number of high-lift leading edge propellers and their relative operating speeds, inclusive of time varying effects associated with motor controller error and atmospheric turbulence, was considered. It was found that the mean annoyance response varies in a statistically significant manner with the number of propellers and with the inclusion of time varying effects, but does not differ significantly with the relative RPM between propellers. An annoyance model was developed, inclusive of confidence intervals, using the noise metrics of loudness, roughness, and tonality as predictors.
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber Type (cents/kg), covering three different time series, are used in the comparison process. Then, the forecasting accuracy of each model is measured by examining the prediction error produced, using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce better forecasting for the Exchange Rates, which have a narrow range from one point to another in their time series, while it cannot produce a better prediction for a longer forecasting period.
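The smoothing recursion and the three accuracy measures are compact enough to sketch directly (simple exponential smoothing shown; the study's ARIMA fitting is not reproduced here, and the data below are illustrative):

```python
def ses_forecasts(series, alpha):
    """One-step-ahead forecasts from simple exponential smoothing.

    level = alpha * y + (1 - alpha) * level; the forecast for each
    observation is the level computed just before it arrives.
    """
    level = series[0]
    forecasts = []
    for y in series[1:]:
        forecasts.append(level)
        level = alpha * y + (1 - alpha) * level
    return forecasts

def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(forecast)

def mad(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(forecast)

def mape(actual, forecast):
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(forecast)
```

Comparing MSE, MAPE and MAD on held-out points, as the study does, is what decides between the smoothing forecast and an ARIMA fit for a given series.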
Errors in imaging patients in the emergency setting
Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca
2016-01-01
Emergency and trauma care produces a “perfect storm” for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting. PMID:26838955
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T² statistic. Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
Simulated rRNA/DNA Ratios Show Potential To Misclassify Active Populations as Dormant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steven, Blaire; Hesse, Cedar; Soghigian, John
The use of rRNA/DNA ratios derived from surveys of rRNA sequences in RNA and DNA extracts is an appealing but poorly validated approach to infer the activity status of environmental microbes. To improve the interpretation of rRNA/DNA ratios, we performed simulations to investigate the effects of community structure, rRNA amplification, and sampling depth on the accuracy of rRNA/DNA ratios in classifying bacterial populations as “active” or “dormant.” Community structure was an insignificant factor. In contrast, the extent of rRNA amplification that occurs as cells transition from dormant to growing had a significant effect (P < 0.0001) on classification accuracy, with misclassification errors ranging from 16 to 28%, depending on the rRNA amplification model. The error rate increased to 47% when communities included a mixture of rRNA amplification models, but most of the inflated error was false negatives (i.e., active populations misclassified as dormant). Sampling depth also affected error rates (P < 0.001). Inadequate sampling depth produced various artifacts that are characteristic of rRNA/DNA ratios generated from real communities. These data show important constraints on the use of rRNA/DNA ratios to infer activity status. Whereas classification of populations as active based on rRNA/DNA ratios appears generally valid, classification of populations as dormant is potentially far less accurate.
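The core of this simulation approach can be conveyed with a toy model: draw a finite sequencing depth from known "active" and "dormant" populations and count how often the observed rRNA/DNA ratio misclassifies them. Everything here (amplification factors, the ratio threshold, depths) is an illustrative assumption, not the authors' parameterization:

```python
import random

def simulate_misclassification(n_taxa, depth, seed=0):
    """Toy rRNA/DNA ratio simulation.

    All taxa have equal DNA abundance; the first half are 'active' with a
    4x rRNA amplification, the rest 'dormant' with 0.25x. We draw `depth`
    reads from each pool and call a taxon active when its observed
    rRNA/DNA ratio exceeds 1. Returns the misclassification rate.
    """
    rng = random.Random(seed)
    active = [i < n_taxa // 2 for i in range(n_taxa)]
    weights = [4.0 if a else 0.25 for a in active]
    dna = [0] * n_taxa
    rrna = [0] * n_taxa
    for _ in range(depth):                 # DNA reads: uniform over taxa
        dna[rng.randrange(n_taxa)] += 1
    total = sum(weights)
    for _ in range(depth):                 # rRNA reads: amplification-weighted
        r = rng.uniform(0.0, total)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                rrna[i] += 1
                break
    wrong = sum(1 for i in range(n_taxa)
                if (rrna[i] / max(dna[i], 1) > 1.0) != active[i])
    return wrong / n_taxa
```

In this toy model, shallow depth inflates the error rate while deeper sampling drives it down, mirroring the sampling-depth effect reported above.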
Pilkington, Emma; Keidel, James; Kendrick, Luke T.; Saddy, James D.; Sage, Karen; Robson, Holly
2017-01-01
This study examined patterns of neologistic and perseverative errors during word repetition in fluent Jargon aphasia. The principal hypotheses accounting for Jargon production indicate that poor activation of a target stimulus leads to weakly activated target phoneme segments, which are outcompeted at the phonological encoding level. Voxel-lesion symptom mapping studies of word repetition errors suggest a breakdown in the translation from auditory-phonological analysis to motor activation. Behavioral analyses of repetition data were used to analyse the target relatedness (Phonological Overlap Index: POI) of neologistic errors and patterns of perseveration in 25 individuals with Jargon aphasia. Lesion-symptom analyses explored the relationship between neurological damage and jargon repetition in a group of 38 aphasia participants. Behavioral results showed that neologisms produced by 23 jargon individuals contained greater degrees of target lexico-phonological information than predicted by chance and that neologistic and perseverative production were closely associated. A significant relationship between jargon production and lesions to temporoparietal regions was identified. Region of interest regression analyses suggested that damage to the posterior superior temporal gyrus and superior temporal sulcus in combination was best predictive of a Jargon aphasia profile. Taken together, these results suggest that poor phonological encoding, secondary to impairment in sensory-motor integration, alongside impairments in self-monitoring result in jargon repetition. Insights for clinical management and future directions are discussed. PMID:28522967
Application of statistical machine translation to public health information: a feasibility study.
Kirchhoff, Katrin; Turner, Anne M; Axelrod, Amittai; Saavedra, Francisco
2011-01-01
Accurate, understandable public health information is important for ensuring the health of the nation. The large portion of the US population with Limited English Proficiency is best served by translations of public-health information into other languages. However, a large number of health departments and primary care clinics face significant barriers to fulfilling federal mandates to provide multilingual materials to Limited English Proficiency individuals. This article presents a pilot study on the feasibility of using freely available statistical machine translation technology to translate health promotion materials. The authors gathered health-promotion materials in English from local and national public-health websites. Spanish versions were created by translating the documents using a freely available machine-translation website. Translations were rated for adequacy and fluency, analyzed for errors, manually corrected by a human posteditor, and compared with exclusively manual translations. Machine translation plus postediting took 15-53 min per document, compared to the reported days or even weeks for the standard translation process. A blind comparison of machine-assisted and human translations of six documents revealed overall equivalency between machine-translated and manually translated materials. The analysis of translation errors indicated that the most important errors were word-sense errors. The results indicate that machine translation plus postediting may be an effective method of producing multilingual health materials with equivalent quality but lower cost compared to manual translations.
Integrating Six Sigma with total quality management: a case example for measuring medication errors.
Revere, Lee; Black, Ken
2003-01-01
Six Sigma is a new management philosophy that strives for a near-zero error rate. It is ripe for healthcare because many healthcare processes require a near-zero tolerance for mistakes. For most organizations, establishing a Six Sigma program requires significant resources and produces considerable stress. However, in healthcare, management can piggyback Six Sigma onto current total quality management (TQM) efforts so that minimal disruption occurs in the organization. Six Sigma is an extension of the Failure Mode and Effects Analysis that is required by JCAHO; it can easily be integrated into existing quality management efforts. Integrating Six Sigma into the existing TQM program facilitates process improvement through detailed data analysis. A drilled-down approach to root-cause analysis greatly enhances the existing TQM approach. Using the Six Sigma metrics, internal project comparisons facilitate resource allocation while external project comparisons allow for benchmarking. Thus, the application of Six Sigma makes TQM efforts more successful. This article presents a framework for including Six Sigma in an organization's TQM plan while providing a concrete example using medication errors. Using the process defined in this article, healthcare executives can integrate Six Sigma into all of their TQM projects.
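Six Sigma quantifies error rates as defects per million opportunities (DPMO) and converts them to a sigma level; a minimal sketch of the conventional conversion (with the customary 1.5-sigma shift), which could be applied to medication-error counts:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities):
    """Short-term sigma level for an observed defect rate.

    DPMO = defects per million opportunities; the conventional 1.5-sigma
    long-term shift is added, so 3.4 DPMO corresponds to 6 sigma.
    """
    dpmo = defects / opportunities * 1_000_000
    p_good = 1.0 - dpmo / 1_000_000
    return NormalDist().inv_cdf(p_good) + 1.5
```

For example, 50 medication errors in 250,000 dispensed doses is 200 DPMO, a little over 5 sigma; tracking this metric across projects is what enables the internal and external benchmarking described above.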
Temperature dependence of emission measure in solar X-ray plasmas. 1: Non-flaring active regions
NASA Technical Reports Server (NTRS)
Phillips, K. J. H.
1974-01-01
X-ray and ultraviolet line emission from hot, optically thin material forming coronal active regions on the sun may be described in terms of an emission measure distribution function, Phi (T). A relationship is developed between line flux and Phi (T); the theory assumes that the electron density is a single-valued function of temperature. The sources of error involved in deriving Phi (T) from a set of line fluxes are examined in some detail. These include errors in atomic data (collisional excitation rates, assessment of other mechanisms for populating excited states of transitions, element abundances, ion concentrations, oscillator strengths) and errors in observed line fluxes arising from poorly known instrumental responses. Two previous analyses are discussed in which Phi (T) for a non-flaring active region is derived. A least squares method of Batstone uses X-ray data of low statistical significance, a fact which appears to influence the results considerably. Two methods for finding Phi (T) ab initio are developed, with coefficients evaluated by least squares. These two methods should have application not only to active-region plasmas, but also to hot, flare-produced plasmas.
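The inversion problem behind this analysis is compactly stated in the differential-emission-measure formalism; in standard (not necessarily the paper's exact) notation, the flux of an optically thin line is

```latex
F \;=\; \frac{1}{4\pi d^{2}} \int G(T)\,\Phi(T)\, dT,
\qquad
\Phi(T) \;=\; n_e^{2}\,\frac{dV}{dT},
```

where G(T) bundles the atomic physics (excitation rates, ion fractions, abundances, oscillator strengths) and d is the observer's distance; the error sources itemized above enter either through G(T) or through the measured fluxes F.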
Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark
2016-01-01
Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.
Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P
2015-01-01
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
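The erf-based merit function with repulsion described above can be sketched as follows; the two-equation toy system and the Gaussian-style repulsion terms are our own illustrative choices, not the paper's exact formulation:

```python
import math

def system(x):
    # Toy nonlinear system F(x) = 0 with two roots, at (1, 1) and (-1, -1).
    # It stands in for the benchmark systems solved in the paper.
    return [x[0] ** 2 - 1.0, x[1] - x[0]]

def merit(x, found_roots, beta=10.0, rho=1.0):
    """Penalty-type merit: erf of each residual magnitude, plus a
    repulsion term that inflates the merit near already-located roots,
    steering the N-M search toward new roots."""
    residual = sum(math.erf(abs(fi)) for fi in system(x))
    repulsion = sum(
        beta * math.exp(-rho * math.dist(x, r) ** 2) for r in found_roots
    )
    return residual + repulsion

root1 = [1.0, 1.0]
m_clean = merit(root1, found_roots=[])          # near zero: a genuine root
m_repelled = merit(root1, found_roots=[root1])  # inflated: already found
```

Minimizing this merit repeatedly, adding each converged root to `found_roots`, is the repulsion loop in outline; the paper's factorial design then compares variants of this loop statistically.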
Theoretical Calculation and Validation of the Water Vapor Continuum Absorption
NASA Technical Reports Server (NTRS)
Ma, Qiancheng; Tipping, Richard H.
1998-01-01
The primary objective of this investigation is the development of an improved parameterization of the water vapor continuum absorption through the refinement and validation of our existing theoretical formalism. The chief advantage of our approach is the self-consistent, first principles, basis of the formalism which allows us to predict the frequency, temperature and pressure dependence of the continuum absorption as well as provide insights into the physical mechanisms responsible for the continuum absorption. Moreover, our approach is such that the calculated continuum absorption can be easily incorporated into satellite retrieval algorithms and climate models. Accurate determination of the water vapor continuum is essential for the next generation of retrieval algorithms which propose to use the combined constraints of multispectral measurements such as those under development for EOS data analysis (e.g., retrieval algorithms based on MODIS and AIRS measurements); current Pathfinder activities which seek to use the combined constraints of infrared and microwave (e.g., HIRS and MSU) measurements to improve temperature and water profile retrievals, and field campaigns which seek to reconcile spectrally-resolved and broad-band measurements such as those obtained as part of FIRE. Current widely used continuum treatments have been shown to produce spectrally dependent errors, with the magnitude of the error dependent on temperature and abundance which produces errors with a seasonal and latitude dependence. Translated into flux, current water vapor continuum parameterizations produce flux errors of order 10 W/sq m, which compared to the 4 W/sq m magnitude of the greenhouse gas forcing and the 1-2 W/sq m estimated aerosol forcing is certainly climatologically significant and unacceptably large. 
While it is possible to tune the empirical formalisms, the paucity of laboratory measurements, especially at temperatures of interest for atmospheric applications, precludes tuning the empirical continuum models over the full spectral range of interest for remote sensing and climate applications. Thus, we propose to further develop and refine our existing far-wing formalism to provide an improved treatment applicable from the near-infrared through the microwave. Based on the results of this investigation, we will provide to the remote sensing/climate modeling community a practical and accurate tabulation of the continuum absorption covering the near-infrared through the microwave region of the spectrum for the range of temperatures and pressures of interest for atmospheric applications.
Lee, Wonseok; Bae, Hyoung Won; Lee, Si Hyung; Kim, Chan Yun; Seong, Gong Je
2017-03-01
To assess the accuracy of intraocular lens (IOL) power prediction for cataract surgery with open angle glaucoma (OAG) and to identify preoperative angle parameters correlated with postoperative unpredicted refractive errors. This study comprised 45 eyes from 45 OAG subjects and 63 eyes from 63 non-glaucomatous cataract subjects (controls). We investigated differences in preoperative predicted refractive errors and postoperative refractive errors for each group. Preoperative predicted refractive errors were obtained by biometry (IOL-master) and compared to postoperative refractive errors measured by auto-refractometer 2 months postoperatively. Anterior angle parameters were determined using swept source optical coherence tomography. We investigated correlations between preoperative angle parameters [angle open distance (AOD); trabecular iris surface area (TISA); angle recess area (ARA); trabecular iris angle (TIA)] and postoperative unpredicted refractive errors. In patients with OAG, significant differences were noted between preoperative predicted and postoperative real refractive errors, with more myopia than predicted. No significant differences were recorded in controls. Angle parameters (AOD, ARA, TISA, and TIA) at the superior and inferior quadrant were significantly correlated with differences between predicted and postoperative refractive errors in OAG patients (-0.321 to -0.408, p<0.05). Superior quadrant AOD 500 was significantly correlated with postoperative refractive differences in multivariate linear regression analysis (β=-2.925, R²=0.404). Clinically unpredicted refractive errors after cataract surgery were more common in OAG than in controls. Certain preoperative angle parameters, especially AOD 500 at the superior quadrant, were significantly correlated with these unpredicted errors. PMID:28120576
First-order approximation error analysis of Risley-prism-based beam directing system.
Zhao, Yanyan; Yuan, Yan
2014-12-01
To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to determine the direction of the outgoing beam with high accuracy. Previous works have analyzed error sources and their impact on the performance of the Risley-prism system, but with limited numerical approximation accuracy. Moreover, pointing error analysis of the Risley-prism system has so far provided results only for the case in which the component errors, prism orientation errors, and assembly errors are fixed, known quantities. In this work, a prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were shown to be the sum of the errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be applied to evaluate the beam directing errors of any Risley-prism beam directing system with a similar configuration.
Liu, Jien-Wei; Ko, Wen-Chien; Huang, Cheng-Hua; Liao, Chun-Hsing; Lu, Chin-Te; Chuang, Yin-Ching; Tsao, Shih-Ming; Chen, Yao-Shen; Liu, Yung-Ching; Chen, Wei-Yu; Jang, Tsrang-Neng; Lin, Hsiu-Chen; Chen, Chih-Ming; Shi, Zhi-Yuan; Pan, Sung-Ching; Yang, Jia-Ling; Kung, Hsiang-Chi; Liu, Chun-Eng; Cheng, Yu-Jen; Chen, Yen-Hsu; Lu, Po-Liang; Sun, Wu; Wang, Lih-Shinn; Yu, Kwok-Woon; Chiang, Ping-Cherng; Lee, Ming-Hsun; Lee, Chun-Ming; Hsu, Gwo-Jong
2012-01-01
The Tigecycline In Vitro Surveillance in Taiwan (TIST) study, initiated in 2006, is a nationwide surveillance program designed to longitudinally monitor the in vitro activity of tigecycline against commonly encountered drug-resistant bacteria. This study compared the in vitro activity of tigecycline against 3,014 isolates of clinically important drug-resistant bacteria using the standard broth microdilution and disk diffusion methods. Species studied included methicillin-resistant Staphylococcus aureus (MRSA; n = 759), vancomycin-resistant Enterococcus faecium (VRE; n = 191), extended-spectrum β-lactamase (ESBL)-producing Escherichia coli (n = 602), ESBL-producing Klebsiella pneumoniae (n = 736), and Acinetobacter baumannii (n = 726) that had been collected from patients treated between 2008 and 2010 at 20 hospitals in Taiwan. MICs and inhibition zone diameters were interpreted according to the currently recommended U.S. Food and Drug Administration (FDA) criteria and the European Committee on Antimicrobial Susceptibility Testing (EUCAST) criteria. The MIC90 values of tigecycline against MRSA, VRE, ESBL-producing E. coli, ESBL-producing K. pneumoniae, and A. baumannii were 0.5, 0.125, 0.5, 2, and 8 μg/ml, respectively. The total error rates between the two methods using the FDA criteria were high: 38.4% for ESBL-producing K. pneumoniae and 33.8% for A. baumannii. Using the EUCAST criteria, the total error rate was also high (54.6%) for A. baumannii isolates. The total error rates between these two methods were <5% for MRSA, VRE, and ESBL-producing E. coli. For routine susceptibility testing of ESBL-producing K. pneumoniae and A. baumannii against tigecycline, the broth microdilution method should be used because of the poor correlation of results between these two methods. PMID:22155819
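The total error rates reported above come from tallying discordant susceptibility calls between the reference broth microdilution method and disk diffusion. A sketch of the usual categorical-agreement scheme; the sample data are made up and the actual FDA/EUCAST breakpoints are not included:

```python
def total_error_rate(pairs):
    """Tally discordance between broth-microdilution (reference) and
    disk-diffusion susceptibility categories ('S', 'I', 'R').

    Uses the conventional method-comparison scheme: very major errors
    are false susceptibility, major errors false resistance, and minor
    errors involve an intermediate call on either side.
    """
    errors = {"very_major": 0, "major": 0, "minor": 0}
    for mic_cat, disk_cat in pairs:
        if mic_cat == disk_cat:
            continue
        if mic_cat == "R" and disk_cat == "S":
            errors["very_major"] += 1   # false susceptibility
        elif mic_cat == "S" and disk_cat == "R":
            errors["major"] += 1        # false resistance
        else:
            errors["minor"] += 1        # one of the calls is 'I'
    return sum(errors.values()) / len(pairs), errors

# Hypothetical (MIC category, disk category) pairs for five isolates:
rate, breakdown = total_error_rate(
    [("S", "S"), ("R", "S"), ("S", "R"), ("I", "S"), ("R", "R")]
)
```

A total error rate above a few percent, as found here for ESBL-producing K. pneumoniae and A. baumannii, is what motivates the recommendation to rely on broth microdilution.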
NASA Astrophysics Data System (ADS)
Sigmund, Armin; Pfister, Lena; Sayde, Chadi; Thomas, Christoph K.
2017-06-01
In recent years, the spatial resolution of fiber-optic distributed temperature sensing (DTS) has been enhanced in various studies by helically coiling the fiber around a support structure. While solid polyvinyl chloride tubes are an appropriate support structure under water, they can produce considerable errors in aerial deployments due to radiative heating or cooling. We used meshed reinforcing fabric as a novel support structure to measure high-resolution vertical temperature profiles with a height of several meters above a meadow and within and above a small lake. This study aimed at quantifying the radiation error for the coiled DTS system and the contribution caused by the novel support structure via heat conduction. A quantitative and comprehensive energy balance model is proposed and tested, which includes the shortwave radiative, longwave radiative, convective, and conductive heat transfers and allows for modeling fiber temperatures as well as quantifying the radiation error. The sensitivity of the energy balance model to the conduction error caused by the reinforcing fabric is discussed in terms of its albedo, emissivity, and thermal conductivity. Modeled radiation errors amounted to -1.0 and 1.3 K at 2 m height but ranged up to 2.8 K for very high incoming shortwave radiation (1000 J s-1 m-2) and very weak winds (0.1 m s-1). After correcting for the radiation error by means of the presented energy balance, the root mean square error between DTS and reference air temperatures from an aspirated resistance thermometer or an ultrasonic anemometer was 0.42 and 0.26 K above the meadow and the lake, respectively. Conduction between the reinforcing fabric and the fiber cable had a small effect on fiber temperatures (< 0.18 K). Only at locations where the plastic rings that supported the reinforcing fabric touched the fiber-optic cable were significant temperature artifacts of up to 2.5 K observed.
Overall, the reinforcing fabric offers several advantages over conventional support structures published to date in the literature as it minimizes both radiation and conduction errors.
Cognitive deficits induced by 56Fe radiation exposure
NASA Technical Reports Server (NTRS)
Shukitt-Hale, B.; Casadesus, G.; Cantuti-Castelvetri, I.; Rabin, B. M.; Joseph, J. A.
2003-01-01
Exposing rats to particles of high energy and charge (e.g., 56Fe) disrupts neuronal systems and the behaviors mediated by them; these adverse behavioral and neuronal effects are similar to those seen in aged animals. Because cognition declines with age, and our previous study showed that radiation disrupted Morris water maze spatial learning and memory performance, the present study used an 8-arm radial maze (RAM) to further test the cognitive behavioral consequences of radiation exposure. Control rats or rats exposed to whole-body irradiation with 1.0 Gy of 1 GeV/n high-energy 56Fe particles (delivered at the alternating gradient synchrotron at Brookhaven National Laboratory) were tested nine months following exposure. Radiation adversely affected RAM performance, and the changes seen parallel those of aging. Irradiated animals entered baited arms during the first 4 choices significantly less than did controls, produced their first error sooner, and also tended to make more errors as measured by re-entries into non-baited arms. These results show that irradiation with high-energy particles produces age-like decrements in cognitive behavior that may impair the ability of astronauts to perform critical tasks during long-term space travel beyond the magnetosphere. Published by Elsevier Science Ltd on behalf of COSPAR.
Huber, Jessica E.; Darling, Meghan
2012-01-01
Purpose The purpose of the present study was to examine the effects of cognitive-linguistic deficits and respiratory physiologic changes on respiratory support for speech in Parkinson's disease (PD), using two speech tasks, reading and extemporaneous speech. Methods Five women with PD, 9 men with PD, and 14 age- and sex-matched control participants read a passage and spoke extemporaneously on a topic of their choice at comfortable loudness. Sound pressure level, syllables per breath group, speech rate, and lung volume parameters were measured. Numbers of formulation errors, disfluencies, and filled pauses were counted. Results Individuals with PD produced shorter utterances as compared to control participants. The relationships between utterance length and lung volume initiation and inspiratory duration were weaker in individuals with PD than for control participants, particularly for the extemporaneous speech task. These results suggest less consistent planning for utterance length by individuals with PD in extemporaneous speech. Individuals with PD produced more formulation errors in both tasks and significantly fewer filled pauses in extemporaneous speech. Conclusions Both respiratory physiologic and cognitive-linguistic issues affected speech production by individuals with PD. Overall, individuals with PD had difficulty planning or coordinating language formulation and respiratory support, in particular during extemporaneous speech. PMID:20844256
ERIC Educational Resources Information Center
Cole, Russell; Haimson, Joshua; Perez-Johnson, Irma; May, Henry
2011-01-01
State assessments are increasingly used as outcome measures for education evaluations. The scaling of state assessments produces variability in measurement error, with the conditional standard error of measurement increasing as average student ability moves toward the tails of the achievement distribution. This report examines the variability in…
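The way the conditional standard error of measurement grows toward the tails can be illustrated with a Rasch-model test information calculation; the item difficulties below are invented, and the report's actual state assessments are not being modeled:

```python
import math

def rasch_csem(theta, item_difficulties):
    """Conditional standard error of measurement under a Rasch model:
    CSEM(theta) = 1 / sqrt(test information at theta)."""
    info = 0.0
    for b in item_difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta - b)))  # P(correct | theta, b)
        info += p * (1.0 - p)                     # item information
    return 1.0 / math.sqrt(info)

items = [-1.0, -0.5, 0.0, 0.5, 1.0] * 8  # 40 items centered on average ability
mid_csem = rasch_csem(0.0, items)   # student near the middle of the scale
tail_csem = rasch_csem(3.0, items)  # student far above the item difficulties
```

Because test information falls off away from where the items are targeted, `tail_csem` exceeds `mid_csem`, which is the pattern of variable measurement error the report examines.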
Older, Not Younger, Children Learn More False Facts from Stories
ERIC Educational Resources Information Center
Fazio, Lisa K.; Marsh, Elizabeth J.
2008-01-01
Early school-aged children listened to stories that contained correct and incorrect facts. All ages answered more questions correctly after having heard the correct fact in the story. Only the older children, however, produced story errors on a later general knowledge test. Source errors did not drive the increased suggestibility in older…
ERIC Educational Resources Information Center
Mirandola, C.; Paparella, G.; Re, A. M.; Ghetti, S.; Cornoldi, C.
2012-01-01
Enhanced semantic processing is associated with increased false recognition of items consistent with studied material, suggesting that children with poor semantic skills could produce fewer false memories. We examined whether memory errors differed in children with Attention Deficit/Hyperactivity Disorder (ADHD) and controls. Children viewed 18…
Grammar Errors Made by ESL Tertiary Students in Writing
ERIC Educational Resources Information Center
Singh, Charanjit Kaur Swaran; Singh, Amreet Kaur Jageer; Razak, Nur Qistina Abd; Ravinthar, Thilaga
2017-01-01
The educational context in Malaysia demands students to be equipped with sound grammar so that they can produce good essays in the examination. However, despite having learnt English in primary and secondary schools, students in the higher learning institutions tend to make some grammatical errors in their writing. This study presents the…
Having Fun with Error Analysis
ERIC Educational Resources Information Center
Siegel, Peter
2007-01-01
We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…
Scheduling periodic jobs that allow imprecise results
NASA Technical Reports Server (NTRS)
Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay
1990-01-01
The problem of scheduling periodic jobs in hard real-time systems that support imprecise computations is discussed. Two workload models of imprecise computations are presented. These models differ from traditional models in that a task may be terminated any time after it has produced an acceptable result. Each task is logically decomposed into a mandatory part followed by an optional part. In a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines the result produced by the mandatory part to reduce the error in the result. Applications are classified as type N and type C, according to undesirable effects of errors. The two workload models characterize the two types of applications. The optional parts of the tasks in a type-N job need not ever be completed. The resulting quality of each type-N job is measured in terms of the average error in the results over several consecutive periods. A class of preemptive, priority-driven algorithms that leads to feasible schedules with small average error is described and evaluated.
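The mandatory/optional decomposition can be sketched as a single scheduling frame; the task format and the greedy priority-ordered optional policy here are simplifications of the paper's preemptive, priority-driven algorithms:

```python
def schedule_frame(tasks, capacity):
    """One scheduling frame for imprecise computation: every mandatory
    part must fit within the frame capacity; optional parts then consume
    leftover capacity in priority order, and any optional work left
    undone becomes that period's error for the task."""
    mandatory = sum(t["mandatory"] for t in tasks)
    if mandatory > capacity:
        raise ValueError("infeasible: mandatory work exceeds capacity")
    slack = capacity - mandatory
    errors = {}
    for t in sorted(tasks, key=lambda t: t["priority"]):
        run = min(slack, t["optional"])
        slack -= run
        errors[t["name"]] = t["optional"] - run  # unrefined work = error
    return errors

tasks = [
    {"name": "A", "mandatory": 2, "optional": 3, "priority": 0},
    {"name": "B", "mandatory": 3, "optional": 4, "priority": 1},
]
errs = schedule_frame(tasks, capacity=9)  # 4 units of slack for 7 optional units
```

Averaging each task's per-period error over consecutive frames gives the type-N quality measure described in the abstract.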
Astigmatism following retinal detachment surgery.
Goel, R; Crewdson, J; Chignell, A H
1983-01-01
Eighty-three patients on whom successful retinal detachment surgery had been performed were studied to note astigmatic changes following surgery. In the majority of cases the errors following such surgery are of no great clinical importance. However, in some situations a high degree of astigmatism may be produced. This study showed that these sequelae are particularly likely after radial buckling procedures, and surgeons favouring these techniques should be aware that astigmatic errors can be induced. The astigmatic errors may persist for several years after surgery. PMID:6838807
Research on Spectroscopy, Opacity, and Atmospheres
NASA Technical Reports Server (NTRS)
Kurucz, Robert L.
1996-01-01
I discuss errors in theory and in interpreting observations that are produced by the failure to consider resolution in space, time, and energy. I discuss convection in stellar model atmospheres and in stars. Large errors in abundances are possible such as the factor of ten error in the Li abundance for extreme Population II stars. Finally I discuss the variation of microturbulent velocity with depth, effective temperature, gravity and abundance. These variations must be dealt with in computing models and grids and in any type of photometric calibration.
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
Performance monitoring and error significance in patients with obsessive-compulsive disorder.
Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert
2010-05-01
Performance monitoring has consistently been found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. To this end, errors in a flanker task were followed by neutral (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. The results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.
Total energy based flight control system
NASA Technical Reports Server (NTRS)
Lambregts, Antonius A. (Inventor)
1985-01-01
An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle, longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate distribution error and the elevator position command is developed (26) as a function of the energy distribution error. For any vertical flight path and speed mode the outerloop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
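The split between the total energy rate error (driving thrust) and the energy rate distribution error (driving the elevator) can be sketched numerically; the gains, signs, and simple proportional form below are illustrative assumptions, not the patented control laws:

```python
G = 9.81  # gravitational acceleration, m/s^2

def tecs_commands(gamma_cmd, gamma, accel_cmd, accel, k_thrust=0.5, k_elev=0.5):
    """Sketch of the total-energy control idea: thrust follows the total
    specific energy rate error, while the elevator follows the energy
    rate distribution error (how the energy rate is split between
    flight path and speed)."""
    gamma_err = gamma_cmd - gamma        # flight path angle error, rad
    accel_err = (accel_cmd - accel) / G  # normalized longitudinal accel error
    energy_rate_err = gamma_err + accel_err    # total specific energy rate error
    distribution_err = accel_err - gamma_err   # speed vs. path imbalance
    return k_thrust * energy_rate_err, k_elev * distribution_err

# A pure climb demand (no speed error) raises the thrust command while the
# distribution channel moves in the opposite direction:
thrust_cmd, elevator_cmd = tecs_commands(0.05, 0.0, 0.0, 0.0)
```

Because the two error signals are orthogonal combinations of the path and speed errors, the two channels stay decoupled, which is the point of the design.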
Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks
Besada, Juan A.
2017-01-01
In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. The distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It is shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157
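A toy version of the distance-bias correction described above, assuming a simple additive clock term and a uniform refractive index; the paper's full parametrized model is considerably richer:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def corrected_range(measured_range_m, clock_bias_s, n_refractive=1.000300):
    """Remove two systematic range biases from a radar measurement:
    the clock offset (converted to distance) and the extra light time
    accumulated because the signal travels at c/n in the atmosphere.
    The refractive index value is an illustrative sea-level figure."""
    range_clock_free = measured_range_m - C * clock_bias_s
    # A light-time range overstates the geometric range by roughly n.
    return range_clock_free / n_refractive

# Reconstruct a hypothetical 100 km geometric range from a measurement
# corrupted by refraction and a 100 ns clock bias:
true_range = 100_000.0
measured = true_range * 1.000300 + C * 1e-7
recovered = corrected_range(measured, clock_bias_s=1e-7)
```

In the paper these bias terms become parameters of an estimation problem rather than fixed corrections, which is why the complete parametrized model improves accuracy.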
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
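The sweep-induced error and the double-sweep correction can be illustrated with a crude model in which the LIA output is the measured system's response magnitude passed through a single-pole low-pass filter; the first-order plant, its corner frequency, and the filter discretization are all assumptions for illustration:

```python
import math

def swept_lia_readout(freqs, tau):
    """Crude swept-measurement model: the output tracks the true response
    magnitude of an assumed first-order plant (corner at 10 Hz) through
    the LIA's single-pole low-pass filter (time constant tau, in samples),
    so the readout lags behind the sweep."""
    def magnitude(f):
        return 1.0 / math.sqrt(1.0 + (f / 10.0) ** 2)
    alpha = 1.0 / (1.0 + tau)
    out, y = [], magnitude(freqs[0])
    for f in freqs:
        y += alpha * (magnitude(f) - y)  # low-pass filtering lags the sweep
        out.append(y)
    return out

freqs = [i * 0.01 for i in range(3001)]  # sweep 0 -> 30 Hz
up = swept_lia_readout(freqs, tau=50.0)
down = swept_lia_readout(freqs[::-1], tau=50.0)[::-1]  # re-indexed to freqs
# Double-sweep correction: the up-sweep reads high and the down-sweep reads
# low on a falling response, so their average cancels the first-order lag.
```

This mirrors the article's double-sweep measurement: the lag error depends on sweep speed, the LIA filter, and the system's response slope, and it changes sign with sweep direction.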
Assessing Working Memory in Mild Cognitive Impairment with Serial Order Recall.
Emrani, Sheina; Libon, David J; Lamar, Melissa; Price, Catherine C; Jefferson, Angela L; Gifford, Katherine A; Hohman, Timothy J; Nation, Daniel A; Delano-Wood, Lisa; Jak, Amy; Bangen, Katherine J; Bondi, Mark W; Brickman, Adam M; Manly, Jennifer; Swenson, Rodney; Au, Rhoda
2018-01-01
Working memory (WM) is often assessed with serial order tests such as repeating digits backward. In prior dementia research using the Backward Digit Span Test (BDT), only aggregate test performance was examined. The current research tallied primacy/recency effects, out-of-sequence transposition errors, perseverations, and omissions to assess WM deficits in patients with mild cognitive impairment (MCI). Memory clinic patients (n = 66) were classified into three groups: single-domain amnestic MCI (aMCI), combined mixed domain/dysexecutive MCI (mixed/dys MCI), and non-MCI, where patients did not meet criteria for MCI. Serial order/WM ability was assessed by asking participants to repeat 7 trials of five digits backwards. Serial order position accuracy, transposition errors, perseverations, and omission errors were tallied. A 3 (group) × 5 (serial position) repeated measures ANOVA yielded a significant group × serial position interaction. Follow-up analyses found attenuation of the recency effect for mixed/dys MCI patients. Mixed/dys MCI patients scored lower than non-MCI patients for serial position 3 (p < 0.003) and serial position 4 (p < 0.002), and lower than both other groups for serial position 5 (recency; p < 0.002). Mixed/dys MCI patients also produced more transposition errors than both other groups (p < 0.010), and more omission (p < 0.020) and perseveration (p < 0.018) errors than non-MCI patients. The attenuation of a recency effect using serial order parameters obtained from the BDT may provide a useful operational definition as well as additional diagnostic information regarding working memory deficits in MCI.
Lifelong modelling of properties for materials with technological memory
NASA Astrophysics Data System (ADS)
Falaleev, AP; Meshkov, VV; Vetrogon, AA; Ogrizkov, SV; Shymchenko, AV
2016-10-01
An investigation of real automobile parts produced from dual phase steel across the standard periods of their life cycle is presented, covering such processes as stamping, exploitation, automobile accidents, and subsequent repair. A phenomenological model of the mechanical properties of such parts was developed based on the two-surface plasticity theory of Chaboche. As a consequence of the composite structure of dual phase steel, the local mechanical properties of parts produced from this material were shown to change significantly during their life cycle, depending on accumulated plastic deformations and thermal treatments. Such mechanical property changes have a considerable impact on the accuracy of computer modelling of automobile behaviour. The most significant modelling errors were obtained at critical operating conditions, such as crashes and accidents. The model developed takes into account kinematic hardening (the Bauschinger effect), isotropic hardening, non-linear elastic steel behaviour, and changes caused by thermal treatment. Using finite element analysis, the model allows the evaluation of the passive safety of a repaired car body and enables increased restoration accuracy following an accident. The model was confirmed experimentally for parts produced from dual phase steel DP780.
At the cross-roads: an on-road examination of driving errors at intersections.
Young, Kristie L; Salmon, Paul M; Lenné, Michael G
2013-09-01
A significant proportion of road trauma occurs at intersections. Understanding the nature of driving errors at intersections therefore has the potential to lead to significant injury reductions. To further understand how the complexity of modern intersections shapes driver behaviour, in this on-road study errors made at intersections were compared to errors made mid-block, and the role of wider systems failures in intersection error causation was investigated. Twenty-five participants drove a pre-determined urban route incorporating 25 intersections. Two in-vehicle observers recorded the errors made while a range of other data was collected, including driver verbal protocols, video, driver eye glance behaviour and vehicle data (e.g., speed, braking and lane position). Participants also completed a post-trial cognitive task analysis interview. Participants made 39 specific error types, with speeding violations the most common. Participants made significantly more errors at intersections than mid-block, with misjudgement, action and perceptual/observation errors more commonly observed at intersections. Traffic signal configuration played a key role in intersection error causation, with drivers making more errors at partially signalised than at fully signalised intersections.
The influence of phonological context on the sound errors of a speaker with Wernicke's aphasia.
Goldmann, R E; Schwartz, M F; Wilshire, C E
2001-09-01
A corpus of phonological errors produced in narrative speech by a speaker with Wernicke's aphasia (R.W.B.) was tested for context effects using two new methods for establishing chance baselines. A reliable anticipatory effect was found using the second method, which estimated chance from the distance between phoneme repeats in the speech sample containing the errors. Relative to this baseline, error-source distances were shorter than expected for anticipations, but not for perseverations. R.W.B.'s anticipation/perseveration ratio was intermediate between that of a nonaphasic error corpus and that of a more severe aphasic speaker (both reported in Schwartz et al., 1994), supporting the view that the anticipatory bias correlates with severity. Finally, R.W.B.'s anticipations favored word-initial segments, although errors and sources did not consistently share word or syllable position.
Analysis of frequency mixing error on heterodyne interferometric ellipsometry
NASA Astrophysics Data System (ADS)
Deng, Yuan-long; Li, Xue-jin; Wu, Yu-bin; Hu, Ju-guang; Yao, Jian-quan
2007-11-01
A heterodyne interferometric ellipsometer with no moving parts, based on a transverse Zeeman laser, is demonstrated. The modified Mach-Zehnder interferometer, characterized by a separate-frequency, common-path configuration, is designed and theoretically analyzed. The experimental data show a fluctuation resulting mainly from the frequency mixing error, which is caused by the imperfection of the polarizing beam splitters (PBS) and by the elliptical polarization and non-orthogonality of the light beams. The mechanism producing the frequency mixing error and its influence on the measurement are analyzed with the Jones matrix method; the calculation indicates that it introduces an error of up to several nanometres in the thickness measurement of thin films. The non-orthogonality contributes nothing to the phase difference error when it is relatively small; the elliptical polarization and the imperfection of the PBS have the major effect on the error.
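The scale of a frequency mixing error can be illustrated with a simple phasor model: a fraction of the wrong-frequency field leaking through an imperfect PBS adds a spurious phasor to the measurement signal, producing a periodic phase nonlinearity. This sketch is far simpler than the paper's Jones-matrix analysis, and the wavelength and leakage fraction are assumed values chosen only to show the order of magnitude.

```python
import numpy as np

WAVELENGTH_NM = 633.0   # He-Ne wavelength; an assumed value for illustration

def mixed_phase(phi, leak):
    """Measured phase when a fraction `leak` of the wrong-frequency field
    leaks through an imperfect PBS (simple phasor model)."""
    return np.angle(np.exp(1j * phi) + leak)

phi = np.linspace(0.0, 2.0 * np.pi, 1000)
# wrap the phase difference back into [-pi, pi]
error_rad = np.angle(np.exp(1j * (mixed_phase(phi, 0.05) - phi)))
# illustrative phase-to-length conversion for a double-pass interferometer
error_nm = error_rad * WAVELENGTH_NM / (4.0 * np.pi)

max_error_nm = np.max(np.abs(error_nm))
```

For a 5 % field leakage the peak phase error is about arcsin(0.05) radians, i.e. a few nanometres after conversion, consistent with the magnitude quoted in the abstract.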
NASA Astrophysics Data System (ADS)
Natividad, Gina May R.; Cawiding, Olive R.; Addawe, Rizavel C.
2017-11-01
Growth in the merchandise exports of the country offers information about the Philippines' trading role within the global economy. Merchandise exports statistics are used to monitor the country's overall production that is consumed overseas. This paper compares two models obtained by (a) clustering the commodity groups into two based on their proportional contribution to the total exports, and (b) treating the total exports as a single series. Different seasonal autoregressive integrated moving average (SARIMA) models were then developed for the clustered commodities and for the total exports based on the monthly merchandise exports of the Philippines from 2011 to 2016. The data set used in this study was retrieved from the Philippine Statistics Authority (PSA), the central statistical authority in the country responsible for primary data collection. A test for the significance of the difference between means, at the 0.05 level of significance, was then performed on the forecasts produced. The result indicates that there is a significant difference between the means of the forecasts of the two models. Moreover, upon comparison of the root mean square error (RMSE) and mean absolute error (MAE) of the models, it was found that the models used for the clustered groups outperform the model for the total exports.
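The model comparison described above reduces to computing RMSE and MAE for two forecasts of the same total series: one fitted directly, and one formed by summing forecasts of the clusters. The numbers below are hypothetical stand-ins, not the PSA export figures, and the SARIMA fitting step itself is omitted.

```python
import numpy as np

def rmse(actual, forecast):
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2)))

def mae(actual, forecast):
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(forecast))))

# Hypothetical monthly totals and forecasts (not the PSA data):
actual      = [100, 110, 105, 120]
total_model = [ 96, 118,  99, 128]      # single SARIMA model on total exports
cluster_a   = [ 60,  66,  63,  74]      # forecast for commodity cluster A
cluster_b   = [ 41,  45,  41,  48]      # forecast for commodity cluster B
clustered_model = [a + b for a, b in zip(cluster_a, cluster_b)]  # summed clusters

scores = {"total":     (rmse(actual, total_model), mae(actual, total_model)),
          "clustered": (rmse(actual, clustered_model), mae(actual, clustered_model))}
```

In this toy example the summed cluster forecasts track the total more closely on both metrics, mirroring the direction of the paper's finding.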
Selecting a restoration technique to minimize OCR error.
Cannon, M; Fugate, M; Hush, D R; Scovel, C
2003-01-01
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
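The nearest-neighbour idea above can be sketched as a k-NN vote: a new document is assigned the restoration technique that most often minimized OCR error among its nearest training documents in feature space. The features and technique names below are hypothetical, and this is a sketch in the spirit of the paper, not its exact estimator.

```python
import numpy as np

def choose_restoration(doc_features, train_features, train_best_technique, k=3):
    """Pick the restoration technique whose k nearest training documents
    (Euclidean distance in feature space) most often had it as their
    lowest-OCR-error choice."""
    d = np.linalg.norm(train_features - doc_features, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [train_best_technique[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical training set: 2-D degradation features -> best technique label
train_x = np.array([[0.10, 0.20], [0.15, 0.25], [0.80, 0.90],
                    [0.85, 0.80], [0.20, 0.10]])
train_y = ["despeckle", "despeckle", "deskew", "deskew", "despeckle"]

technique = choose_restoration(np.array([0.12, 0.22]), train_x, train_y)
```

The training labels would come from running OCR on each training document under every candidate restoration and recording which one gave the fewest character errors.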
Looking for trouble? Diagnostics expanding disease and producing patients.
Hofmann, Bjørn
2018-05-23
Novel tests give great opportunities for earlier and more precise diagnostics. At the same time, new tests expand disease, produce patients, and cause unnecessary harm in overdiagnosis and overtreatment. How can we evaluate diagnostics to obtain the benefits and avoid harm? One way is to pay close attention to the diagnostic process and its core concepts. Doing so reveals 3 errors that expand disease and increase overdiagnosis. The first error is to decouple diagnostics from harm, eg, by diagnosing insignificant conditions. The second error is to bypass proper validation of the relationship between test indicator and disease, eg, by introducing biomarkers for Alzheimer's disease before the tests are properly validated. The third error is to couple the name of disease to insignificant or indecisive indicators, eg, by lending the cancer name to preconditions, such as ductal carcinoma in situ. We need to avoid these errors to promote beneficial testing, bar harmful diagnostics, and evade unwarranted expansion of disease. Accordingly, we must stop identifying and testing for conditions that are only remotely associated with harm. We need more stringent verification of tests, and we must avoid naming indicators and indicative conditions after diseases. If not, we will end like ancient tragic heroes, succumbing because of our very best abilities.
On the importance of Task 1 and error performance measures in PRP dual-task studies.
Strobach, Tilo; Schütz, Anja; Schubert, Torsten
2015-01-01
The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.
PMID:25904890
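The central bottleneck account summarized above can be expressed as a simple stage model: Task 2's central stage cannot begin until Task 1's central stage has finished, which produces the classic RT2 slope of -1 at short SOAs while leaving Task 1 nominally unaffected. The stage durations below are illustrative values, not fitted parameters from any study.

```python
def rt2_bottleneck(soa, p1=100, c1=150, p2=100, c2=150, m2=100):
    """Response time for Task 2 (ms) under a strict central bottleneck:
    Task 2's central stage waits until Task 1's central stage is done.
    p = perceptual stage, c = central stage, m = motor stage."""
    t1_central_end = p1 + c1                   # Task 1 central stage ends (absolute time)
    c2_start = max(soa + p2, t1_central_end)   # Task 2 central stage cannot start earlier
    return c2_start + c2 + m2 - soa            # RT2 measured from Task 2 onset

rts = {soa: rt2_bottleneck(soa) for soa in (0, 100, 200, 300, 600)}
```

In this idealized model RT1 is constant across SOA; the review's point is precisely that empirical Task 1 data often violate this prediction, so Task 1 RTs and errors need to be reported and analyzed rather than assumed flat.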
XCO2 Retrieval Errors from a PCA-based Approach to Fast Radiative Transfer
NASA Astrophysics Data System (ADS)
Somkuti, Peter; Boesch, Hartmut; Natraj, Vijay; Kopparla, Pushkar
2017-04-01
Multiple-scattering radiative transfer (RT) calculations are an integral part of forward models used to infer greenhouse gas concentrations in the shortwave-infrared spectral range from satellite missions such as GOSAT or OCO-2. Such calculations are, however, computationally expensive and, combined with the recent growth in data volume, necessitate the use of acceleration methods in order to make retrievals feasible on an operational level. The principal component analysis (PCA)-based approach to fast radiative transfer introduced by Natraj et al. (2005) is a spectral binning method, in which the many line-by-line monochromatic calculations are replaced by a small set of representative ones. From the PCA performed on the optical layer properties for a scene-dependent atmosphere, the results of the representative calculations are mapped onto all spectral points in the given band. Since this RT scheme is an approximation, the computed top-of-atmosphere radiances exhibit errors compared to the "full" line-by-line calculation. These errors ultimately propagate into the final retrieved greenhouse gas concentrations, and their magnitude depends on scene-dependent parameters such as aerosol loadings or viewing geometry. An advantage of this method is the ability to choose the degree of accuracy by increasing or decreasing the number of empirical orthogonal functions used for the reconstruction of the radiances. We have performed a large set of global simulations based on real GOSAT scenes and assessed the retrieval errors induced by the fast RT approximation through linear error analysis. We find that across a wide range of geophysical parameters, the errors are for the most part smaller than ± 0.2 ppm and ± 0.06 ppm (out of roughly 400 ppm) for ocean and land scenes respectively.
A fast RT scheme that produces low errors is important, since regional biases in XCO2 even in the low sub-ppm range can cause significant changes in carbon fluxes obtained from inversions (Chevallier et al. 2007).
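The PCA-based acceleration can be sketched in miniature: project the per-wavelength optical-property vectors onto a leading empirical orthogonal function (EOF), run the expensive model only at the mean state and one EOF perturbation either side, and map those few results to every spectral point with a second-order expansion. The "expensive model" and property statistics below are toy stand-ins, not an actual RT code or GOSAT optical properties.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    """Stand-in for one multiple-scattering RT call on layer optical properties x."""
    return np.exp(-x.sum()) + 0.1 * np.cos(x[0])

# Correlated optical-property vectors for many spectral points (hypothetical)
direction = np.array([1.0, 0.8, 0.6]) / np.sqrt(2.0)
t = rng.normal(0.0, 0.08, size=5000)
X = 0.5 + np.outer(t, direction) + rng.normal(0.0, 0.005, size=(5000, 3))

# PCA of the properties: keep the leading empirical orthogonal function (EOF)
mean = X.mean(axis=0)
eof = np.linalg.svd(X - mean, full_matrices=False)[2][0]
scores = (X - mean) @ eof

# Representative calculations only: mean state and mean +/- a one-sigma EOF step
h = scores.std()
f0 = expensive_model(mean)
fp = expensive_model(mean + h * eof)
fm = expensive_model(mean - h * eof)
d1 = (fp - fm) / (2.0 * h)          # first derivative along the EOF
d2 = (fp - 2.0 * f0 + fm) / h**2    # second derivative along the EOF

approx = f0 + scores * d1 + 0.5 * scores**2 * d2   # mapped to every spectral point
exact = np.array([expensive_model(x) for x in X])  # full "line-by-line" reference
max_rel_err = np.max(np.abs(approx - exact) / np.abs(exact))
```

Three model evaluations replace five thousand, at the cost of a small reconstruction error; adding further EOFs is the accuracy knob the abstract describes.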
Vocal Generalization Depends on Gesture Identity and Sequence
Sober, Samuel J.
2014-01-01
Generalization, the brain's ability to transfer motor learning from one context to another, occurs in a wide range of complex behaviors. However, the rules of generalization in vocal behavior are poorly understood, and it is unknown how vocal learning generalizes across an animal's entire repertoire of natural vocalizations and sequences. Here, we asked whether generalization occurs in a nonhuman vocal learner and quantified its properties. We hypothesized that adaptive error correction of a vocal gesture produced in one sequence would generalize to the same gesture produced in other sequences. To test our hypothesis, we manipulated the fundamental frequency (pitch) of auditory feedback in Bengalese finches (Lonchura striata var. domestica) to create sensory errors during vocal gestures (song syllables) produced in particular sequences. As hypothesized, error-corrective learning on pitch-shifted vocal gestures generalized to the same gestures produced in other sequential contexts. Surprisingly, generalization magnitude depended strongly on sequential distance from the pitch-shifted syllables, with greater adaptation for gestures produced near to the pitch-shifted syllable. A further unexpected result was that nonshifted syllables changed their pitch in the direction opposite from the shifted syllables. This apparently antiadaptive pattern of generalization could not be explained by correlations between generalization and the acoustic similarity to the pitch-shifted syllable. These findings therefore suggest that generalization depends on the type of vocal gesture and its sequential context relative to other gestures and may reflect an advantageous strategy for vocal learning and maintenance. PMID:24741046
Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.
Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J
1993-05-01
This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data from PEF tests performed by the patients in their own homes and places of work. Unfortunately there are high error rates in data produced and recorded by the patient; most of these are transcription errors, and some patients falsify their records. The PEF measurement itself is not effort independent: the data produced depend on the way in which the patient performs the test. Patients are taught how to perform the test giving maximal effort to the expiration being measured. If the measurement is performed incorrectly then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. Investigating current methods of measuring PEF and other pulmonary quantities gave a greater understanding of the limitations of current measurement methods and of the quantities being measured. (ABSTRACT TRUNCATED AT 250 WORDS)
RECKONER: read error corrector based on KMC.
Dlugosz, Maciej; Deorowicz, Sebastian
2017-04-01
The presence of sequencing errors in data produced by next-generation sequencers affects the quality of downstream analyses. Their accuracy can be improved by performing error correction of the sequencing reads. We introduce a new correction algorithm capable of processing eukaryotic data sets with genomes close to 500 Mbp in size and high error rates, using less than 4 GB of RAM in about 35 min on a 16-core computer. The program is freely available at http://sun.aei.polsl.pl/REFRESH/reckoner . Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online.
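The k-mer-spectrum principle behind such correctors can be shown in miniature: k-mers seen many times across reads are "solid", k-mers seen rarely are likely errors, and a read is repaired by finding a substitution that makes all its k-mers solid again. This toy is far simpler than RECKONER itself (no KMC counting, no quality scores, no indels); sequences and thresholds are made up.

```python
from collections import Counter

def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def correct_read(read, counts, k=5, solid=2):
    """Try single-base substitutions at positions covered by the first weak
    k-mer and keep the first change that makes every k-mer in the read solid."""
    weak = [i for i, km in enumerate(kmers(read, k)) if counts[km] < solid]
    if not weak:
        return read
    for pos in range(weak[0], weak[0] + k):      # bases under the first weak k-mer
        for base in "ACGT":
            cand = read[:pos] + base + read[pos + 1:]
            if all(counts[km] >= solid for km in kmers(cand, k)):
                return cand
    return read  # no confident fix found

# Hypothetical coverage: ten error-free copies of a region, plus one bad read
truth = "ACGTACGGTTCA"
counts = Counter(km for _ in range(10) for km in kmers(truth, 5))
bad = "ACGTACGATTCA"          # single substitution error at position 7
fixed = correct_read(bad, counts)
```

A single substitution error corrupts up to k consecutive k-mers, which is why a run of weak k-mers localizes the error position so effectively.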
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs
NASA Astrophysics Data System (ADS)
Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.
1982-12-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.
Errors made by animals in memory paradigms are not always due to failure of memory.
Wilkie, D M; Willson, R J; Carr, J A
1999-01-01
It is commonly assumed that errors in animal memory paradigms such as delayed matching to sample, radial mazes, and food-cache recovery are due to failures in memory for information necessary to perform the task successfully. A body of research, reviewed here, suggests that this is not always the case: animals sometimes make errors despite apparently being able to remember the appropriate information. In this paper a case study of this phenomenon is described, along with a demonstration of a simple procedural modification that successfully reduced these non-memory errors, thereby producing a better measure of memory.
Adaptive constructive processes and the future of memory.
Schacter, Daniel L
2012-11-01
Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes and focuses in particular on the process of imagining or simulating events that might occur in one's personal future. Simulating future events relies on many of the same cognitive and neural processes as remembering past events, which may help to explain why imagination and memory can be easily confused. The article considers both pitfalls and adaptive aspects of future event simulation in the context of research on planning, prediction, problem solving, mind-wandering, prospective and retrospective memory, coping and positivity bias, and the interconnected set of brain regions known as the default network.
Color extended visual cryptography using error diffusion.
Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu
2011-01-01
Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black and white or gray scale VC schemes, however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
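Error diffusion, the halftoning step the scheme builds on, can be shown in its generic Floyd-Steinberg form: each pixel is thresholded and the quantization error is pushed onto unvisited neighbours so that local average intensity is preserved. This is only the base halftoning operation, not the VIP-synchronised, multi-channel version the paper develops.

```python
def error_diffuse(image, threshold=128):
    """Floyd-Steinberg error diffusion on a greyscale image (list of rows,
    values 0-255): binarise each pixel, then distribute the quantisation
    error to the right and lower neighbours with weights 7/16, 3/16, 5/16, 1/16."""
    h, w = len(image), len(image[0])
    img = [[float(p) for p in row] for row in image]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= threshold else 0
            out[y][x] = new
            err = old - new
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16), (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and ny < h:
                    img[ny][nx] += err * wgt
            # error that would fall outside the image is simply dropped
    return out

halftone = error_diffuse([[64, 64, 64, 64]] * 4)   # flat 25 % grey patch
```

On a flat grey patch the output is a scatter of white pixels whose density approximates the input intensity, which is what makes the resulting shares "pleasant to human eyes" compared with simple thresholding.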
Documentation of procedures for textural/spatial pattern recognition techniques
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Bryant, W. F.
1976-01-01
A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on this data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatially post-processing a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatially post-processing the classified image using combined spectral and textural features produces a resulting image with less error than post-processing a classified image using only spectral features, and (3) classification without spatial post-processing using the combined spectral-textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.
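One simple form of spatial post-processing on a classified image is a modal (majority) filter, which removes isolated misclassified pixels by replacing each label with the most common label in its neighbourhood. The report does not specify its exact filter, so this is an illustrative stand-in, with a made-up label map.

```python
from collections import Counter

def majority_filter(labels):
    """3x3 modal filter over a classified label map: each pixel is replaced
    by the most common label in its 3x3 neighbourhood (clipped at edges)."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for y in range(h):
        for x in range(w):
            votes = Counter(labels[ny][nx]
                            for ny in range(max(0, y - 1), min(h, y + 2))
                            for nx in range(max(0, x - 1), min(w, x + 2)))
            out[y][x] = votes.most_common(1)[0][0]
    return out

# An isolated misclassified pixel inside an otherwise uniform pine stand
classified = [["pine"] * 5 for _ in range(5)]
classified[2][2] = "oak"
cleaned = majority_filter(classified)
```

Because per-pixel spectral classification errors tend to be spatially isolated while true timber stands are spatially coherent, this kind of filter can plausibly cut the error rate by the factors the report observed.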
A theory for predicting composite laminate warpage resulting from fabrication
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1975-01-01
Linear laminate theory is used in conjunction with the moment-curvature relationship to derive equations for predicting end deflections due to warpage without solving the coupled fourth-order partial differential equations of the plate. Using these equations, it is found that a 1 deg error in the orientation angle of one ply is sufficient to produce warpage end deflection equal to two laminate thicknesses in a 10 inch by 10 inch laminate made from 8-ply Mod-I/epoxy. From a sensitivity analysis on the governing parameters, it is found that a 3 deg fiber migration or a void volume ratio of three percent in some plies is sufficient to produce laminate warpage corner deflection equal to several laminate thicknesses. Tabular and graphical data are presented which can be used to identify possible errors contributing to laminate warpage and/or to obtain an a priori assessment when unavoidable errors during fabrication are anticipated.
Measuring Parameters of Massive Black Hole Binaries with Partially-Aligned Spins
NASA Technical Reports Server (NTRS)
Lang, Ryan N.; Hughes, Scott A.; Cornish, Neil J.
2010-01-01
It is important to understand how well the gravitational-wave observatory LISA can measure parameters of massive black hole binaries. It has been shown that including spin precession in the waveform breaks degeneracies and produces smaller expected parameter errors than a simpler, precession-free analysis. However, recent work has shown that gas in binaries can partially align the spins with the orbital angular momentum, thus reducing the precession effect. We show how this degrades the earlier results, producing more pessimistic errors in gaseous mergers. However, we then add higher harmonics to the signal model; these also break degeneracies, but they are not affected by the presence of gas. The harmonics often restore the errors in partially-aligned binaries to the same as, or better than, those obtained for fully precessing binaries with no harmonics. Finally, we investigate what LISA measurements of spin alignment can tell us about the nature of gas around a binary.
NASA Astrophysics Data System (ADS)
Ha, Minsu; Nehm, Ross H.
2016-06-01
Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a pervasive feature of human-generated text and that despite improvements, spell-check and auto-replace programs continue to be characterized by significant errors. Our study explored four research questions relating to MSW and text-based computer assessments: (1) Do English language learners (ELLs) produce equivalent magnitudes and types of spelling errors as non-ELLs? (2) To what degree do MSW impact concept-specific computer scoring rules? (3) What impact do MSW have on computer scoring accuracy? and (4) Are MSW more likely to impact false-positive or false-negative feedback to students? We found that although ELLs produced twice as many MSW as non-ELLs, MSW were relatively uncommon in our corpora. The MSW in the corpora were found to be important features of the computer scoring models. Although MSW did not significantly or meaningfully impact computer scoring efficacy across nine different computer scoring models, MSW had a greater impact on the scoring algorithms for naïve ideas than key concepts. Linguistic and concept redundancy in student responses explains the weak connection between MSW and scoring accuracy. Lastly, we found that MSW tend to have a greater impact on false-positive feedback. We discuss the implications of these findings for the development of next-generation science assessments.
Effects of Foveal Ablation on Emmetropization and Form-Deprivation Myopia
Smith, Earl L.; Ramamirtham, Ramkumar; Qiao-Grider, Ying; Hung, Li-Fang; Huang, Juan; Kee, Chea-su; Coats, David; Paysse, Evelyn
2009-01-01
Purpose Because of the prominence of central vision in primates, it has generally been assumed that signals from the fovea dominate refractive development. To test this assumption, the authors determined whether an intact fovea was essential for either normal emmetropization or the vision-induced myopic errors produced by form deprivation. Methods In 13 rhesus monkeys at 3 weeks of age, the fovea and most of the perifovea in one eye were ablated by laser photocoagulation. Five of these animals were subsequently allowed unrestricted vision. For the other eight monkeys with foveal ablations, a diffuser lens was secured in front of the treated eyes to produce form deprivation. Refractive development was assessed along the pupillary axis by retinoscopy, keratometry, and A-scan ultrasonography. Control data were obtained from 21 normal monkeys and three infants reared with plano lenses in front of both eyes. Results Foveal ablations had no apparent effect on emmetropization. Refractive errors for both eyes of the treated infants allowed unrestricted vision were within the control range throughout the observation period, and there were no systematic interocular differences in refractive error or axial length. In addition, foveal ablation did not prevent form deprivation myopia; six of the eight infants that experienced monocular form deprivation developed myopic axial anisometropias outside the control range. Conclusions Visual signals from the fovea are not essential for normal refractive development or the vision-induced alterations in ocular growth produced by form deprivation. Conversely, the peripheral retina, in isolation, can regulate emmetropizing responses and produce anomalous refractive errors in response to abnormal visual experience. These results indicate that peripheral vision should be considered when assessing the effects of visual experience on refractive development. PMID:17724167
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-11-01
Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. 
Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-01-01
Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. 
Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806
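The pre/post comparisons in the records above rest on Fisher's exact test applied to 2×2 counts of nurses who did or did not err (e.g., 16/18 preintervention vs 11/19 postintervention for syringe-volume verification). The sketch below evaluates the one-sided hypergeometric tail, which reproduces the reported p=0.038 for that task; whether the authors used a one- or two-sided convention is an assumption here, not stated in the abstract.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    P(X >= a) under the hypergeometric null, where X is cell (1,1)."""
    row1, row2, col1 = a + b, c + d, a + c
    denom = comb(row1 + row2, col1)
    hi = min(row1, col1)
    return sum(comb(row1, x) * comb(row2, col1 - x)
               for x in range(a, hi + 1)) / denom

# Syringe-volume verification: 16/18 nurses erred pre, 11/19 post
p = fisher_one_sided(16, 2, 11, 8)
print(round(p, 3))  # 0.038
```

Applied to the pump-programming counts (17/18 pre vs 11/19 post), the same tail gives about 0.012, matching the other reported value.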
Moon, Jordan R; Hull, Holly R; Tobkin, Sarah E; Teramoto, Masaru; Karabulut, Murat; Roberts, Michael D; Ryan, Eric D; Kim, So Jung; Dalbo, Vincent J; Walter, Ashley A; Smith, Abbie T; Cramer, Joel T; Stout, Jeffrey R
2007-01-01
Background Methods used to estimate percent body fat can be classified as a laboratory or field technique. However, the validity of these methods compared to multiple-compartment models has not been fully established. This investigation sought to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age women compared to the Siri three-compartment model (3C). Methods Thirty Caucasian women (21.1 ± 1.5 yrs; 164.8 ± 4.7 cm; 61.2 ± 6.8 kg) had their %fat estimated by BIA using the BodyGram™ computer program (BIA-AK) and population-specific equation (BIA-Lohman), NIR (Futrex® 6100/XL), a quadratic (SF3JPW) and linear (SF3WB) skinfold equation, air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results All methods produced acceptable total error (TE) values compared to the 3C model. Both laboratory methods produced similar TE values (HW, TE = 2.4%fat; BP, TE = 2.3%fat) when compared to the 3C model, though a significant constant error (CE) was detected for HW (1.5%fat, p ≤ 0.006). The field methods produced acceptable TE values ranging from 1.8 – 3.8 %fat. BIA-AK (TE = 1.8%fat) yielded the lowest TE among the field methods, while BIA-Lohman (TE = 2.1%fat) and NIR (TE = 2.7%fat) produced lower TE values than both skinfold equations (TE > 2.7%fat) compared to the 3C model. Additionally, the SF3JPW %fat estimation equation resulted in a significant CE (2.6%fat, p ≤ 0.007). Conclusion Data suggest that the BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian women. When the use of a laboratory method is not feasible, NIR, BIA-AK, BIA-Lohman, SF3JPW, and SF3WB are acceptable field methods to estimate %fat in this population. PMID:17988393
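The total error (TE) and constant error (CE) statistics quoted above are, by the usual convention in body-composition validation work (an assumption, since the abstract does not define them), the root-mean-square and mean signed difference between a method's %fat estimates and the criterion. A minimal sketch with invented %fat values:

```python
import math

def constant_error(pred, crit):
    """CE: mean signed difference between a method and the criterion."""
    return sum(p - c for p, c in zip(pred, crit)) / len(pred)

def total_error(pred, crit):
    """TE: root of the mean squared difference from the criterion."""
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(pred, crit)) / len(pred))

# Hypothetical %fat estimates vs a 3C criterion (illustrative values only)
crit = [22.0, 25.5, 28.1, 24.3, 26.7]
pred = [23.4, 26.0, 30.2, 25.1, 27.9]
print(round(constant_error(pred, crit), 2),
      round(total_error(pred, crit), 2))  # 1.2 1.32
```

A method can show a near-zero CE yet a large TE (errors that cancel on average but scatter widely), which is why the abstract reports both against the three-compartment model.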
Analysis of error type and frequency in apraxia of speech among Portuguese speakers.
Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo
2010-01-01
Most studies characterizing errors in the speech of patients with apraxia involve the English language. To analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. 20 adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and frequencies compared. We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis, in descending order of frequency, respectively. Omission errors were among the most common, whereas addition errors were infrequent. These findings differed from those reported for English-speaking patients, probably owing to differences in the methodologies used for classifying error types; the inclusion of speakers with apraxia secondary to aphasia; and structural differences between Portuguese and English in terms of syllable onset complexity and its effect on motor control. The frequency of omission and addition errors observed differed from the frequencies reported for speakers of English.
Automated drug dispensing system reduces medication errors in an intensive care setting.
Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick
2010-12-01
We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. 
Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.
Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C
2007-09-01
To quantify needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate. A total of 10 biopsies were performed with an 18-gauge (18G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of errors in targeted biopsies was 6.5 ± 3.5 mm. Phantom experiments showed significant placement error due to needle deflection with a needle with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). Needle susceptibility artifacts produced a shift of 1.6 ± 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 ± 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for diagnosis of prostate cancer. Needle placement error due to needle deflection was the most significant cause of error, especially for needles with an asymmetrical bevel. (c) 2007 Wiley-Liss, Inc.
Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark
2012-11-01
To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.
Integrating automated structured analysis and design with Ada programming support environments
NASA Technical Reports Server (NTRS)
Hecht, Alan; Simmons, Andy
1986-01-01
Ada Programming Support Environments (APSE) include many powerful tools that address the implementation of Ada code. These tools do not address the entire software development process. Structured analysis is a methodology that addresses the creation of complete and accurate system specifications. Structured design takes a specification and derives a plan to decompose the system into subcomponents, and provides heuristics to optimize the software design to minimize errors and maintenance. It can also produce usable modules. Studies have shown that most software errors result from poor system specifications, and that these errors become more expensive to fix as the development process continues. Structured analysis and design help to uncover errors in the early stages of development. The APSE tools help to ensure that the code produced is correct, and aid in finding obscure coding errors. However, they do not have the capability to detect errors in specifications or to detect poor designs. An automated system for structured analysis and design, TEAMWORK, which can be integrated with an APSE to support software systems development from specification through implementation, is described. These tools complement each other to help developers improve quality and productivity, as well as to reduce development and maintenance costs. Complete system documentation and reusable code also result from the use of these tools. Integrating an APSE with automated tools for structured analysis and design provides capabilities and advantages beyond those realized with any of these systems used by themselves.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under double-precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's QuickBASIC compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
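The error-tolerance mode described above (fit at degree 1, then raise the degree until the fit error meets the user's criterion) can be sketched as follows. This is not the AKLSQF algorithm itself: the sketch solves the Vandermonde normal equations directly rather than fitting orthogonal factorial polynomials and converting with Stirling numbers, and it uses the maximum absolute residual as the error measure.

```python
def polyfit_to_tolerance(xs, ys, tol, max_degree=10):
    """Raise the polynomial degree until the fit error drops to tol,
    mirroring AKLSQF's error-tolerance mode (sketch only)."""
    for degree in range(1, max_degree + 1):
        coeffs = _lsq_poly(xs, ys, degree)
        err = max(abs(_eval(coeffs, x) - y) for x, y in zip(xs, ys))
        if err <= tol:
            return coeffs, degree, err
    return coeffs, max_degree, err

def _lsq_poly(xs, ys, degree):
    """Solve the normal equations (A^T A) c = A^T y for a degree-n fit."""
    n = degree + 1
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(ata[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (aty[r] - s) / ata[r][r]
    return coeffs  # coeffs[i] multiplies x**i

def _eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

# Uniformly spaced data from a quadratic: the loop should stop at degree 2
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2 + 3 * x - 0.5 * x * x for x in xs]
coeffs, degree, err = polyfit_to_tolerance(xs, ys, tol=1e-6)
print(degree)  # 2
```

Normal equations become ill-conditioned at high degrees, which is precisely why AKLSQF works in an orthogonal polynomial basis first; the direct solve above is only adequate for low degrees.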
Sharma, Ity; Kaminski, George A
2017-01-15
Our Fuzzy-Border (FB) continuum solvent model has been extended and modified to produce hydration parameters for small molecules using the POlarizable Simulations Second-order Interaction Model (POSSIM) framework with an average error of 0.136 kcal/mol. It was then used to compute pKa shifts for carboxylic and basic residues of the turkey ovomucoid third domain (OMTKY3) protein. The average unsigned errors in the acid and base pKa values were 0.37 and 0.4 pH units, respectively, versus 0.58 and 0.7 pH units as calculated with a previous version of the polarizable protein force field and a Poisson-Boltzmann continuum solvent. This POSSIM/FB result is produced with explicit refitting of the hydration parameters to the pKa values of the carboxylic and basic residues of the OMTKY3 protein; thus, the values of the acidity constants can be viewed as additional fitting target data. In addition to calculating pKa shifts for the OMTKY3 residues, we have studied aspartic acid residues of RNase Sa. This was done without any further refitting of the parameters, and agreement with the experimental pKa values is within an average unsigned error of 0.65 pH units. This result included the Asp79 residue, which is buried and thus has a high experimental pKa value of 7.37 units. Thus, the presented model is capable of reproducing pKa results for residues in an environment that is significantly different from the solvated protein surface used in the fitting. Therefore, the POSSIM force field and the FB continuum solvent parameters have been demonstrated to be sufficiently robust and transferable. © 2016 Wiley Periodicals, Inc.
Validity of the Omron HJ-112 pedometer during treadmill walking.
Hasson, Rebecca E; Haller, Jeannie; Pober, David M; Staudenmayer, John; Freedson, Patty S
2009-04-01
The purpose of this investigation was to examine the validity of step counts measured with the Omron HJ-112 pedometer and to assess the effect of pedometer placement. Ninety-two subjects (44 males and 48 females; 71 with body mass index [BMI] <30 kg·m⁻² and 21 with BMI ≥30 kg·m⁻²) completed three 12-min bouts of treadmill walking at speeds of 1.12, 1.34, and 1.56 m·s⁻¹. A subset (21 males and 23 females; 38 with BMI <30 kg·m⁻² and 6 with BMI ≥30 kg·m⁻²) completed a variable walking condition. For all conditions, participants wore an Omron HJ-112 pedometer on the hip, in the pants pocket, in the chest shirt pocket, and around the neck. Hip pedometer placement was alternated between right and left sides with the Yamax Digiwalker SW-701. During each walk, an investigator recorded actual steps with a manual hand counter. There was no substantial bias with the Omron in any speed condition (-0.1% to 0.5%). Bias was larger with the Yamax (-3.6% to 2.0%). The largest random error for the Omron was 3.7% in the variable-speed condition for the BMI <30 kg·m⁻² group, whereas random errors for the Yamax were larger and up to 20%. None of the Omron placement positions produced statistically significant bias. Hip mounting produced the smallest random error (1.2%), followed by shirt pocket (1.7%), neck (2.2%), and pants pocket (5.8%). The Omron HJ-112 pedometer validly assesses steps in different BMI groups during constant- and variable-speed walking; other than that in the pants pocket, placement of the pedometer has little effect on validity.
Huang, Juan; Hung, Li-Fang
2011-01-01
Purpose. The purpose of this study was to determine whether visual signals from the fovea contribute to the changes in the pattern of peripheral refractions associated with form deprivation myopia in monkeys. Methods. Monocular form-deprivation was produced in 18 rhesus monkeys by securing diffusers in front of their treated eyes between 22 ± 2 and 155 ± 17 days of age. In eight of these form-deprived monkeys, the fovea and most of the perifovea of the treated eye were ablated by laser photocoagulation at the start of the diffuser-rearing period. Each eye's refractive status was measured by retinoscopy along the pupillary axis and at 15° intervals along the horizontal meridian to eccentricities of 45°. Control data were obtained from 12 normal monkeys and five monkeys that had monocular foveal ablations and were subsequently reared with unrestricted vision. Results. Foveal ablation, by itself, did not produce systematic alterations in either the central or peripheral refractive errors of the treated eyes. In addition, foveal ablation did not alter the patterns of peripheral refractions in monkeys with form-deprivation myopia. The patterns of peripheral refractive errors in the two groups of form-deprived monkeys, either with or without foveal ablation, were qualitatively similar (treated eyes: F = 0.31, P = 0.74; anisometropia: F = 0.61, P = 0.59), but significantly different from those found in the normal monkeys (F = 8.46 and 9.38 respectively, P < 0.05). Conclusions. Central retinal signals do not contribute in an essential way to the alterations in eye shape that occur during the development of vision-induced axial myopia. PMID:21693598
Alzoubi, K H; Abdul-Razzak, K K; Khabour, O F; Al-Tuweiq, G M; Alzubi, M A; Alkadhi, K A
2009-12-01
The combined effects of a high-fat diet (HFD) and chronic stress on hippocampus-dependent spatial learning and memory were studied in rats using the radial arm water maze (RAWM). Chronic psychosocial stress and/or HFD were simultaneously administered for 3 months to young adult male Wistar rats. In the RAWM, rats were subjected to 12 learning trials as well as short-term and long-term memory tests. This procedure was applied on a daily basis until the animal reached days to criterion (DTC) in the 12th learning trial and in memory tests. DTC is the number of days that the animal takes to commit zero errors on two consecutive days. Groups were compared based on the number of errors per trial or test as well as on the DTC. The chronic stress, HFD and chronic stress/HFD animal groups showed impaired learning as indicated by committing significantly (P<0.05) more errors than the untreated control group in trials 6 through 9 of day 4. In memory tests, the chronic stress, HFD and chronic stress/HFD groups showed significantly impaired performance compared to the control group. Additionally, the stress/HFD group was the only group that showed significantly impaired performance in memory tests on the 5th training day, suggesting more severe memory impairment in that group. Furthermore, the DTC values for the above groups indicated that chronic stress or HFD alone resulted in a mild impairment of spatial memory, but the combination of chronic stress and HFD resulted in a more severe and long-lasting memory impairment. The data indicate that the combination of stress and HFD produced more deleterious effects on hippocampal cognitive function than either chronic stress or HFD alone.
NASA Astrophysics Data System (ADS)
Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.
2012-12-01
Snow water equivalent (SWE) estimation is a key factor in producing reliable streamflow simulations and forecasts in snow-dominated areas. However, measuring or predicting SWE has significant uncertainty. Sequential data assimilation, which updates states using both observed and modeled data based on error estimation, has been shown to reduce streamflow simulation errors but has had limited testing for forecasting applications. In the current study, a snow data assimilation framework integrated with the National Weather Service River Forecast System (NWSRFS) is evaluated for use in ensemble streamflow prediction (ESP). Seasonal water supply ESP hindcasts are generated for the North Fork of the American River Basin (NFARB) in northern California. Parameter sets from the California Nevada River Forecast Center (CNRFC), the Differential Evolution Adaptive Metropolis (DREAM) algorithm and the Multistep Automated Calibration Scheme (MACS) are tested both with and without sequential data assimilation. The traditional ESP method considers uncertainty in future climate conditions using historical temperature and precipitation time series to generate future streamflow scenarios conditioned on the current basin state. We include data uncertainty analysis in the forecasting framework through the DREAM-based parameter set, which is part of a recently developed Integrated Uncertainty and Ensemble-based data Assimilation framework (ICEA). Extensive verification of all tested approaches is undertaken using traditional forecast verification measures, including root mean square error (RMSE), the Nash-Sutcliffe efficiency coefficient (NSE), volumetric bias, joint distribution, rank probability score (RPS), and discrimination and reliability plots. In comparison to the RFC parameters, the DREAM and MACS sets show significant improvement in volumetric bias in flow.
Use of assimilation improves hindcasts of higher flows but does not significantly improve performance in the mid flow and low flow categories.
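Of the verification measures listed above, RMSE, the Nash-Sutcliffe efficiency coefficient, and volumetric (percent) bias have standard closed forms. A minimal sketch with illustrative, made-up seasonal flow volumes:

```python
import math

def rmse(sim, obs):
    """Root mean square error between simulated and observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 minus the error variance over the
    variance of observations about their mean (1 is a perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pct_bias(sim, obs):
    """Volumetric bias as a percentage of the observed total volume."""
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

# Illustrative flow volumes (invented numbers, not NFARB data)
obs = [120.0, 95.0, 180.0, 140.0, 60.0]
sim = [110.0, 100.0, 170.0, 150.0, 70.0]
print(round(rmse(sim, obs), 2), round(nse(sim, obs), 3),
      round(pct_bias(sim, obs), 1))  # 9.22 0.948 0.8
```

The rank probability score and the discrimination/reliability diagrams require the full hindcast ensemble rather than a single trace, so they are omitted here.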
Narratives of Response Error from Cognitive Interviews of Survey Questions about Normative Behavior
ERIC Educational Resources Information Center
Brenner, Philip S.
2017-01-01
That rates of normative behaviors produced by sample surveys are higher than actual behavior warrants is well evidenced in the research literature. Less well understood is the source of this error. Twenty-five cognitive interviews were conducted to probe responses to a set of common, conventional survey questions about one such normative behavior:…
Lexical Errors in Second Language Scientific Writing: Some Conceptual Implications
ERIC Educational Resources Information Center
Carrió Pastor, María Luisa; Mestre-Mestre, Eva María
2014-01-01
Nowadays, scientific writers are required not only a thorough knowledge of their subject field, but also a sound command of English as a lingua franca. In this paper, the lexical errors produced in scientific texts written in English by non-native researchers are identified to propose a classification of the categories they contain. This study…
ERIC Educational Resources Information Center
Smalle, Eleonore H. M.; Muylle, Merel; Szmalec, Arnaud; Duyck, Wouter
2017-01-01
Speech errors typically respect the speaker's implicit knowledge of language-wide phonotactics (e.g., /t/ cannot be a syllable onset in the English language). Previous work demonstrated that adults can learn novel experimentally induced phonotactic constraints by producing syllable strings in which the allowable position of a phoneme depends on…
Evaluation of Parenteral Nutrition Errors in an Era of Drug Shortages.
Storey, Michael A; Weber, Robert J; Besco, Kelly; Beatty, Stuart; Aizawa, Kumiko; Mirtallo, Jay M
2016-04-01
Ingredient shortages have forced many organizations to change practices or use unfamiliar ingredients, which creates potential for error. Parenteral nutrition (PN) has been significantly affected, as every ingredient in PN has been impacted in recent years. Ingredient errors involving PN that were reported to the national anonymous MedMARx database between May 2009 and April 2011 were reviewed. Errors were categorized by ingredient, node, and severity. Categorization was validated by experts in medication safety and PN. A timeline of PN ingredient shortages was developed and compared with the PN errors to determine if events correlated with an ingredient shortage. This information was used to determine the prevalence and change in harmful PN errors during periods of shortage, elucidating whether a statistically significant difference exists in errors during shortage as compared with a control period (ie, no shortage). There were 1311 errors identified. Nineteen errors were associated with harm. Fat emulsions and electrolytes were the PN ingredients most frequently associated with error. Insulin was the ingredient most often associated with patient harm. On individual error review, PN shortages were described in 13 errors, most of which were associated with intravenous fat emulsions; none were associated with harm. There was no correlation of drug shortages with the frequency of PN errors. Despite the significant impact that shortages have had on the PN use system, no adverse impact on patient safety could be identified from these reported PN errors. © 2015 American Society for Parenteral and Enteral Nutrition.
NASA Technical Reports Server (NTRS)
Bolton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.
2006-01-01
Synthetic Vision Systems (SVS) depict computer-generated views of the terrain surrounding an aircraft. In the assessment of textures and field of view (FOV) for SVS, no studies have directly measured the 3 levels of spatial awareness: identification of terrain, its relative spatial location, and its relative temporal location. This work introduced spatial awareness measures and used them to evaluate texture and FOV in SVS displays. Eighteen pilots made 4 judgments (relative angle, distance, height, and abeam time) regarding the location of terrain points displayed in 112 5-second, non-interactive simulations of an SVS head-down display. Texture produced significant main effects and trends for the magnitude of error in the relative distance, angle, and abeam time judgments. FOV was significant for the directional magnitude of error in the relative distance, angle, and height judgments. Pilots also provided subjective terrain awareness ratings that were compared with the judgment-based measures. The study found that elevation fishnet, photo fishnet, and photo elevation fishnet textures best supported spatial awareness for both the judgments and the subjective awareness measures.
On the sea-state bias of the Geosat altimeter
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Koblinsky, Chester J.
1991-01-01
The sea-state bias in a satellite altimeter's range measurement is caused by the influence of ocean waves on the radar return pulse; it results in an estimate of sea level that is too low according to some function of the wave height. This bias is here estimated for Geosat by correlating collinear differences of altimetric sea-surface heights with collinear differences of significant wave heights (H1/3). Corrections for satellite orbit error are estimated simultaneously with the sea-state bias. Based on twenty 17-day repeat cycles of the Geosat Exact Repeat Mission, the solution for the sea-state bias is 2.6 ± 0.2 percent of H1/3. The least-squares residuals, however, show a correlation with wind speed U, so the traditional model of the bias has been supplemented with a second term: bias = α1H1/3 + α2H1/3U. This second term produces a small, but statistically significant, reduction in variance of the residuals. Both systematic and random errors in H1/3 and U tend to bias the estimates of α1 and α2, which complicates comparisons of the results with ground-based measurements of the sea-state bias.
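The two-term bias model above, bias = α1H1/3 + α2H1/3U, is linear in its coefficients, so fitting it reduces to a two-regressor least-squares problem. A sketch with synthetic data (the H, U, and coefficient values below are invented for illustration; the study's actual estimate quoted above is about 2.6% of H1/3 for the first term):

```python
def fit_two_term(H, U, bias):
    """Solve the 2x2 normal equations for bias ~ a1*H + a2*(H*U)."""
    x1 = H
    x2 = [h * u for h, u in zip(H, U)]       # cross term H1/3 * U
    s11 = sum(a * a for a in x1)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    t1 = sum(a * y for a, y in zip(x1, bias))
    t2 = sum(b * y for b, y in zip(x2, bias))
    det = s11 * s22 - s12 * s12
    a1 = (s22 * t1 - s12 * t2) / det
    a2 = (s11 * t2 - s12 * t1) / det
    return a1, a2

# Synthetic check: generate noise-free bias with a1 = -0.026, a2 = -0.001
H = [1.0, 2.0, 3.0, 2.5, 1.5, 4.0]     # wave heights, m (invented)
U = [5.0, 7.0, 10.0, 6.0, 12.0, 8.0]   # wind speeds, m/s (invented)
bias = [-0.026 * h - 0.001 * h * u for h, u in zip(H, U)]
a1, a2 = fit_two_term(H, U, bias)
print(round(a1, 3), round(a2, 4))  # -0.026 -0.001
```

In the study itself the coefficients are estimated jointly with orbit-error corrections from collinear differences, so the real design matrix is larger; the sketch only shows why adding the H1/3·U regressor is a routine extension of the one-term fit.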
Gesture Imitation in Schizophrenia
Matthews, Natasha; Gold, Brian J.; Sekuler, Robert; Park, Sohee
2013-01-01
Recent evidence suggests that individuals with schizophrenia (SZ) are impaired in their ability to imitate gestures and movements generated by others. This impairment in imitation may be linked to difficulties in generating and maintaining internal representations in working memory (WM). We used a novel quantitative technique to investigate the relationship between WM and imitation ability. SZ outpatients and demographically matched healthy control (HC) participants imitated hand gestures. In Experiment 1, participants imitated single gestures. In Experiment 2, they imitated sequences of 2 gestures, either while viewing the gesture online or after a short delay that forced the use of WM. In Experiment 1, imitation errors were increased in SZ compared with HC. Experiment 2 revealed a significant interaction between imitation ability and WM. SZ produced more errors and required more time to imitate when that imitation depended upon WM compared with HC. Moreover, impaired imitation from WM was significantly correlated with the severity of negative symptoms but not with positive symptoms. In sum, gesture imitation was impaired in schizophrenia, especially when the production of an imitation depended upon WM and when an imitation entailed multiple actions. Such a deficit may have downstream consequences for new skill learning. PMID:21765171
Tracked ultrasound calibration studies with a phantom made of LEGO bricks
NASA Astrophysics Data System (ADS)
Soehl, Marie; Walsh, Ryan; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor
2014-03-01
In this study, spatial calibration of tracked ultrasound was compared by using a calibration phantom made of LEGO® bricks and two 3-D printed N-wire phantoms. METHODS: The accuracy and variance of calibrations were compared under a variety of operating conditions. Twenty trials were performed using an electromagnetic tracking device with a linear probe, and three trials were performed using varied probes, varied tracking devices, and the three aforementioned phantoms. The standard deviation and error of 3-D image reprojections were used to compare the accuracy and variance of the spatial calibrations produced from the phantoms. RESULTS: This study found no significant difference between the measured variables of the calibrations. The average standard deviation of multiple 3-D image reprojections with the highest performing printed phantom and those from the phantom made of LEGO® bricks differed by 0.05 mm, and the error of the reprojections differed by 0.13 mm. CONCLUSION: Given that the phantom made of LEGO® bricks is significantly less expensive, more readily available, and more easily modified than precision-machined N-wire phantoms, it promises to be a viable calibration tool, especially for quick laboratory research and proof-of-concept implementations of tracked ultrasound navigation.
On the sea-state bias of the Geosat altimeter
NASA Astrophysics Data System (ADS)
Ray, Richard D.; Koblinsky, Chester J.
1991-06-01
The sea-state bias in a satellite altimeter's range measurement is caused by the influence of ocean waves on the radar return pulse; it results in an estimate of sea level that is too low according to some function of the wave height. This bias is here estimated for Geosat by correlating collinear differences of altimetric sea-surface heights with collinear differences of significant wave heights (H1/3). Corrections for satellite orbit error are estimated simultaneously with the sea-state bias. Based on twenty 17-day repeat cycles of the Geosat Exact Repeat Mission, the solution for the sea-state bias is 2.6 ± 0.2 percent of H1/3. The least-squares residuals, however, show a correlation with wind speed U, so the traditional model of the bias has been supplemented with a second term: alpha-1 H1/3 + alpha-2 H1/3 U. This second term produces a small, but statistically significant, reduction in variance of the residuals. Both systematic and random errors in H1/3 and U tend to bias the estimates of alpha-1 and alpha-2, which complicates comparisons of the results with ground-based measurements of the sea-state bias.
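A minimal sketch of the two-term bias regression described in this abstract, assuming the model d(ssh) = alpha-1 d(H1/3) + alpha-2 d(H1/3 U) fit by ordinary least squares. All numbers are synthetic; they are not Geosat data, and the coefficient values are illustrative only.

```python
# Sketch: fit the two-term sea-state bias model by least squares.
# Synthetic collinear differences; not actual altimeter data.
import random

random.seed(42)

alpha1_true, alpha2_true = -0.026, -0.001   # bias lowers apparent sea level
n = 500
dH = [random.gauss(0.0, 1.0) for _ in range(n)]    # d(H1/3), metres
dHU = [random.gauss(0.0, 5.0) for _ in range(n)]   # d(H1/3 * U)
dssh = [alpha1_true * h + alpha2_true * hu + random.gauss(0.0, 0.005)
        for h, hu in zip(dH, dHU)]

# Normal equations for the 2-parameter least-squares problem.
sxx = sum(h * h for h in dH)
sxy = sum(h * u for h, u in zip(dH, dHU))
syy = sum(u * u for u in dHU)
bx = sum(h * y for h, y in zip(dH, dssh))
by = sum(u * y for u, y in zip(dHU, dssh))
det = sxx * syy - sxy * sxy
alpha1 = (syy * bx - sxy * by) / det
alpha2 = (sxx * by - sxy * bx) / det

print(f"alpha1 = {alpha1:.4f}, alpha2 = {alpha2:.4f}")
```

In the study itself the same normal equations would also carry simultaneous orbit-error terms; this sketch shows only the bias part.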
Predictive models of safety based on audit findings: Part 2: Measurement of model validity.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-07-01
Part 1 of this study sequence developed a human factors/ergonomics (HF/E) based classification system (termed HFACS-MA) for safety audit findings and established its measurement reliability. In Part 2, we used the human error categories of HFACS-MA as predictors of future safety performance. Audit records and monthly safety incident reports from two airlines submitted to their regulatory authority were available for analysis, covering over 6.5 years. Two participants derived consensus classifications of HF/E errors from the audit reports using HFACS-MA. We adopted Neural Network and Poisson regression methods to establish nonlinear and linear prediction models, respectively. These models were tested for the validity of their predictions of the safety data, and only the Neural Network method showed substantial, significant predictive ability for each airline. Alternative predictions from counts of audit findings and from the time sequence of safety data produced some significant results, but of much smaller magnitude than HFACS-MA. The use of HF/E analysis of audit findings provided proactive predictors of future safety performance in the aviation maintenance field. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
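As an illustration of the linear (Poisson regression) half of the approach, here is a self-contained sketch: monthly counts of audit-flagged HF/E errors predict monthly incident counts through a log-linear Poisson model fit by Newton-Raphson. All data and coefficients are synthetic, not taken from the study.

```python
# Sketch: Poisson regression of incident counts on audit-finding counts.
import math
import random

random.seed(1)

b0_true, b1_true = 0.2, 0.15
x = [random.randint(0, 10) for _ in range(200)]   # audit-finding counts

def sample_poisson(lam):
    # Knuth's method; adequate for the small means used here.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

y = [sample_poisson(math.exp(b0_true + b1_true * xi)) for xi in x]

# Newton-Raphson on the Poisson log-likelihood (canonical log link).
b0, b1 = 0.0, 0.0
for _ in range(25):
    mu = [math.exp(b0 + b1 * xi) for xi in x]
    g0 = sum(yi - mi for yi, mi in zip(y, mu))                  # gradient
    g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
    h00 = sum(mu)                                               # information matrix
    h01 = sum(mi * xi for mi, xi in zip(mu, x))
    h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det                           # Newton step
    b1 += (h00 * g1 - h01 * g0) / det

print(f"b0 = {b0:.3f}, b1 = {b1:.3f}")
```

The nonlinear Neural Network model the authors found superior is not reproduced here; the sketch shows only the baseline linear comparator.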
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2014-01-01
The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.
Monitoring Precipitation from Space: targeting Hydrology Community?
NASA Astrophysics Data System (ADS)
Hong, Y.; Turk, J.
2005-12-01
During the past decades, advances in space, sensor, and computer technology have made it possible to estimate precipitation nearly globally from a variety of observations in a relatively direct manner. The success of the Tropical Rainfall Measuring Mission (TRMM) has been a significant advance, allowing modern precipitation estimation algorithms to move toward daily, quarter-degree measurements, while the need for precipitation data at temporal-spatial resolutions compatible with hydrologic modeling has been emphasized by the end user: the hydrology community. Can the future deployment of the Global Precipitation Measurement constellation of low-altitude orbiting satellites (covering 90% of the globe with a sampling interval of less than 3 hours), in conjunction with the existing suite of geostationary satellites, result in significant improvements in the scale and accuracy of precipitation estimates suitable for hydrology applications? This presentation will review the current state of satellite-derived precipitation estimation and demonstrate early results and the primary barriers to full global high-resolution precipitation coverage. An approach to facilitating communication between data producers and users will be discussed: an 'end-to-end' uncertainty propagation analysis framework that quantifies both the precipitation estimation error structure and the error's influence on hydrological modeling.
NASA Technical Reports Server (NTRS)
Reale, O.; Lau, W.K.; Susskind, J.; Brin, E.; Liu, E.; Riishojgaard, L. P.; Rosenburg, R.; Fuentes, M.
2009-01-01
Tropical cyclones in the northern Indian Ocean pose serious challenges to operational weather forecasting systems, partly due to their shorter lifespan and more erratic tracks compared to those in the Atlantic and the Pacific. Moreover, the automated analyses of cyclones over the northern Indian Ocean, produced by operational global data assimilation systems (DASs), are generally of lower quality than those in other basins. In this work it is shown that the assimilation of Atmospheric Infrared Sounder (AIRS) temperature retrievals under partially cloudy conditions can significantly impact the representation of the cyclone Nargis (which caused devastating loss of life in Myanmar in May 2008) in a global DAS. Forecasts produced from these improved analyses by a global model exhibit substantially smaller track errors. The impact of assimilating clear-sky radiances in the same DAS and forecasting system is positive, but smaller than that obtained by ingesting AIRS retrievals, possibly due to poorer coverage.
NASA Astrophysics Data System (ADS)
Henderson, Laura S.; Subbarao, Kamesh
2017-12-01
This work presents a case wherein the selection of models when producing synthetic light curves affects the estimation of the size of unresolved space objects. Through this case, "inverse crime" (using the same model for the generation of synthetic data and for data inversion) is illustrated. This is done by using two models to produce the synthetic light curve and later invert it. It is shown here that the choice of model indeed affects the estimation of the shape/size parameters. When a higher-fidelity model (henceforth, the one that results in the smallest error residuals after the crime is committed) is used both to create and to invert the light curve, the estimates of the shape/size parameters are significantly better than those obtained when a lower-fidelity model is implemented for the estimation. It is therefore of utmost importance to consider the choice of models when producing synthetic data that will later be inverted, as the results might be misleadingly optimistic.
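The "inverse crime" effect can be sketched in a few lines, assuming a toy one-parameter light-curve model: synthetic flux is generated with a higher-fidelity (Lambertian-like) phase function, then the size parameter is recovered by least squares using either the matching model or a cruder constant-brightness model. The models and numbers are illustrative, not those used in the paper.

```python
# Sketch: same-model inversion ("inverse crime") vs mismatched-model inversion.
import math
import random

random.seed(0)

A_true = 4.0   # size-like parameter (e.g., projected area) to be recovered
phases = [2 * math.pi * k / 100 for k in range(100)]

def hi_fi(theta):
    # Higher-fidelity phase model (Lambertian-like modulation).
    return 0.5 * (1.0 + math.cos(theta))

def lo_fi(theta):
    # Lower-fidelity model: object assumed fully lit at all phases.
    return 1.0

# Synthetic light curve generated with the higher-fidelity model.
flux = [A_true * hi_fi(th) + random.gauss(0.0, 0.05) for th in phases]

def invert(model):
    # One-parameter linear least squares: A_hat = sum(f*g) / sum(g*g).
    g = [model(th) for th in phases]
    return sum(f * gi for f, gi in zip(flux, g)) / sum(gi * gi for gi in g)

A_same = invert(hi_fi)    # generate and invert with the same model
A_other = invert(lo_fi)   # mismatched, lower-fidelity inversion

print(f"same-model estimate: {A_same:.2f}, mismatched estimate: {A_other:.2f}")
```

The same-model inversion recovers the true size almost exactly (the "crime"), while the mismatched inversion is biased low by roughly a factor of two here, mirroring the paper's warning about misleadingly optimistic results.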
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Clayson, C. A.
2012-01-01
Residual forcings necessary to close the mixed layer temperature budget (MLTB) on seasonal time scales are largest in regions of strongest surface heat flux forcing. Identifying the dominant source of error (surface heat flux error, mixed layer depth estimation, or ocean dynamical forcing) remains a challenge in the eastern tropical oceans, where ocean processes are very active. Improved sub-surface observations are necessary to better constrain errors. 1. Mixed layer depth evolution is critical to the seasonal evolution of mixed layer temperatures: it determines the inertia of the mixed layer and scales the sensitivity of the MLTB to errors in surface heat flux and ocean dynamical forcing. This role produces timing impacts for errors in SST prediction. 2. Errors in the MLTB are larger than the historical 10 W m-2 target accuracy. In some regions, a larger error can be tolerated if the goal is to resolve the seasonal SST cycle.
Form Overrides Meaning When Bilinguals Monitor for Errors
Ivanova, Iva; Ferreira, Victor S.; Gollan, Tamar H.
2016-01-01
Bilinguals rarely produce unintended language switches, which may in part be because switches are detected and corrected by an internal monitor. But are language switches easier or harder to detect than within-language semantic errors? To approximate internal monitoring, bilinguals listened (Experiment 1) or read aloud (Experiment 2) stories, and detected language switches (translation equivalents or semantically unrelated to expected words) and within-language errors (semantically related or unrelated to expected words). Bilinguals detected semantically related within-language errors most slowly and least accurately, language switches more quickly and accurately than within-language errors, and (in Experiment 2), translation equivalents as quickly and accurately as unrelated language switches. These results suggest that internal monitoring of form (which can detect mismatches in language membership) completes earlier than, and is independent of, monitoring of meaning. However, analysis of reading times prior to error detection revealed meaning violations to be more disruptive for processing than language violations. PMID:28649169
Discriminative echolocation in a porpoise
Turner, Ronald N.; Norris, Kenneth S.
1966-01-01
Operant conditioning techniques were used to establish a discriminative echolocation performance in a porpoise. Pairs of spheres of disparate diameters were presented in an under-water display, and the positions of the spheres were switched according to a scrambled sequence while the blindfolded porpoise responded on a pair of submerged response levers. Responses which identified the momentary state of the display were food-reinforced, while those which did not (errors) produced a time-out. Errors were then studied in relation to decreased disparity between the spheres. As disparity was decreased, errors which terminated runs of correct responses occurred more frequently and were followed by longer strings of consecutive errors. Increased errors were associated with disruption of a stable pattern of collateral behavior. Since some sources of error other than decreased disparity were present, the porpoise's final performance did not fully reflect the acuity of its echolocation channel. PMID:5964509
NASA Technical Reports Server (NTRS)
Long, Junsheng
1994-01-01
This thesis studies a forward recovery strategy using checkpointing and optimistic execution in parallel and distributed systems. The approach uses replicated tasks executing on different processors for forward recovery and checkpoint comparison for error detection. To reduce overall redundancy, this approach employs lower static redundancy in the common error-free situation to detect errors than the standard N-Module Redundancy (NMR) scheme does to mask them. For the rare occurrence of an error, the approach uses some extra redundancy for recovery. To reduce the run-time recovery overhead, look-ahead processes are used to advance computation speculatively, and a rollback process is used to produce a diagnosis identifying the correct look-ahead processes without rolling back the whole system. Both analytical and experimental evaluations have shown that this strategy can provide nearly error-free execution time even under faults, with a lower average redundancy than NMR.
Cascading activation from lexical processing to letter-level processing in written word production.
Buchwald, Adam; Falconer, Carolyn
2014-01-01
Descriptions of language production have identified processes involved in producing language and the presence and type of interaction among those processes. In the case of spoken language production, consensus has emerged that there is interaction among lexical selection processes and phoneme-level processing. This issue has received less attention in written language production. In this paper, we present a novel analysis of the writing-to-dictation performance of an individual with acquired dysgraphia revealing cascading activation from lexical processing to letter-level processing. The individual produced frequent lexical-semantic errors (e.g., chipmunk → SQUIRREL) as well as letter errors (e.g., inhibit → INBHITI) and had a profile consistent with impairment affecting both lexical processing and letter-level processing. The presence of cascading activation is suggested by lower letter accuracy on words that are more weakly activated during lexical selection than on those that are more strongly activated. We operationalize weakly activated lexemes as those lexemes that are produced as lexical-semantic errors (e.g., lethal in deadly → LETAHL) compared to strongly activated lexemes where the intended target word (e.g., lethal) is the lexeme selected for production.
Schaeffel, F; Bartmann, M; Hagel, G; Zrenner, E
1995-05-01
We have found that the development of both deprivation-induced and lens-induced refractive errors in chickens involves changes in the diurnal growth rhythms of the eye (Fig. 1). Because the major diurnal oscillator in the eye is expressed by the retinal dopamine/melatonin system, effects of drugs were studied that change retinal dopamine and/or serotonin levels. Vehicle-injected and drug-injected eyes treated with either translucent occluders or lenses were compared to focus on visual growth mechanisms. Retinal biogenic amine levels were measured at the end of each experiment by HPLC with electrochemical detection. For reserpine (which was most extensively studied), electroretinograms were recorded to test retinal function [Fig. 3(C)] and catecholaminergic and serotonergic retinal neurons were observed by immunohistochemical labelling [Fig. 3(D)]. Deprivation myopia was readily altered by a single intravitreal injection of drugs that affected retinal dopamine or serotonin levels; reserpine, which depleted both serotonin and dopamine stores, blocked deprivation myopia very efficiently [Fig. 3(A)], whereas 5,7-dihydroxytryptamine (5,7-DHT), sulpiride, melatonin and Sch23390 could enhance deprivation myopia (Table 1, Fig. 5). In contrast to other procedures that were previously employed to block deprivation myopia (6-OHDA injections or continuous light), which had no significant effect on lens-induced refractive errors, reserpine also affected lens-induced changes in eye growth. At lower doses, the effect was selective for negative lenses (Fig. 4). We found that retinal dopamine levels varied widely among individuals but were correlated between the two eyes of an animal; a similar variability was previously found with regard to deprivation myopia.
To test a hypothesis raised by Li, Schaeffel, Kohler and Zrenner [(1992) Visual Neuroscience, 9, 483-492] that individual dopamine levels might determine the susceptibility to deprivation myopia, refractive errors were correlated with dopamine levels in occluded and untreated eyes of monocularly deprived chickens (Fig. 6). The hypothesis was rejected. Although it has been previously found that the static retinal tissue levels of dopamine are not altered by lens treatment, subtle changes in the ratio of DOPAC to dopamine were detected in the present study. The result indicates that retinal dopamine might be implicated also in lens-induced growth changes. Surprisingly, the changes were in the opposite direction for deprivation and negative lenses although both produce myopia. Currently, there is evidence that deprivation-induced and lens-induced refractive errors in chicks are produced by different mechanisms. However, findings (1), (3) and (5) suggest that there may also be common features. Although it has not yet been resolved how both mechanisms merge to produce the appropriate axial eye growth rates, we propose a scheme (Fig. 7).
Wang, Maya Zhe; Marshall, Andrew T; Kirkpatrick, Kimberly
2017-06-01
Early life experience profoundly impacts behavior and cognitive functions in rats. The present study investigated how the presence of conspecifics and/or novel objects, could independently influence individual differences in impulsivity and behavioral flexibility. Twenty-four rats were reared in an isolated condition, an isolated condition with a novel object, a pair-housed social condition, or a pair-housed social condition with a novel object. The rats were then tested on an impulsive choice task, a behavioral flexibility task, and an impulsive action task. Novelty enrichment produced an overall increase in impulsive choice, while social enrichment decreased impulsive choice in the absence of novelty enrichment and also produced an overall increase in impulsive action. In the behavioral flexibility task, social enrichment increased regressive errors, whereas both social and novelty enrichment reduced never-reinforced errors. Individual differences analyses indicated a significant relationship between performance in the behavioral flexibility and impulsive action tasks, which may reflect a common psychological correlate of action inhibition. Moreover, there was a relationship between delay sensitivity in the impulsive choice task and performance on the DRL and behavioral flexibility tasks, suggesting a dual role for timing and inhibitory processes in driving the interrelationship between these tasks. Overall, these results indicate that social and novelty enrichment produce distinct effects on impulsivity and adaptability, suggesting the need to parse out the different elements of enrichment in future studies. Further research is warranted to better understand how individual differences in sensitivity to enrichment affect individuals' interactions with and the resulting consequences of the rearing environment. Copyright © 2017 Elsevier B.V. All rights reserved.
Multiple Cognitive Control Effects of Error Likelihood and Conflict
Brown, Joshua W.
2010-01-01
Recent work on cognitive control has suggested a variety of performance-monitoring functions of the anterior cingulate cortex, such as the monitoring of errors, conflict, and error likelihood. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at NASA-MSFC and that is similar, but not identical, to the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared curvature-matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
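The Monte Carlo approach can be sketched with a drastically simplified 2-D analog: a source is located from noisy arrival times at four stations by Gauss-Newton least squares, with timing noise at the 50 ns rms level assumed in the abstract, and the rms position error is estimated over many trials. The geometry, station count, and known-emission-time assumption are all simplifications relative to a real LMA solver.

```python
# Sketch: Monte Carlo estimate of source-retrieval error for a toy
# 2-D time-of-arrival network (emission time assumed known).
import math
import random

random.seed(7)

C = 3.0e8            # propagation speed, m/s
SIGMA_T = 50e-9      # assumed rms timing error, s (as in the abstract)
stations = [(0.0, 0.0), (30e3, 0.0), (0.0, 30e3), (30e3, 30e3)]  # 30 km box
src = (12e3, 17e3)   # true source location

def retrieve(times):
    # Gauss-Newton least squares on the range residuals r_i = c*t_i - d_i.
    x, y = 15e3, 15e3  # initial guess at network centre
    for _ in range(20):
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (sx, sy), t in zip(stations, times):
            d = math.hypot(x - sx, y - sy)
            r = C * t - d                      # range residual
            jx, jy = (x - sx) / d, (y - sy) / d
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * r; b2 += jy * r
        det = a11 * a22 - a12 * a12
        x += (a22 * b1 - a12 * b2) / det       # solve 2x2 normal equations
        y += (a11 * b2 - a12 * b1) / det
    return x, y

errs = []
for _ in range(200):  # Monte Carlo over timing noise realizations
    times = [math.hypot(src[0] - sx, src[1] - sy) / C + random.gauss(0.0, SIGMA_T)
             for sx, sy in stations]
    x, y = retrieve(times)
    errs.append(math.hypot(x - src[0], y - src[1]))

rms = math.sqrt(sum(e * e for e in errs) / len(errs))
print(f"rms retrieval error: {rms:.1f} m")
```

With 50 ns timing noise (about 15 m in range per station), the retrieved position scatters by tens of metres for this geometry; the curvature-matrix alternative would instead predict the error covariance analytically from the same Jacobian.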
Disturbance torque rejection properties of the NASA/JPL 70-meter antenna axis servos
NASA Technical Reports Server (NTRS)
Hill, R. E.
1989-01-01
Analytic methods for evaluating pointing errors caused by external disturbance torques are developed and applied to determine the effects of representative values of wind and friction torque. The expressions relating pointing errors to disturbance torques are shown to be strongly dependent upon the state estimator parameters, as well as upon the state feedback gain and the flow versus pressure characteristics of the hydraulic system. Under certain conditions, when control is derived from an uncorrected estimate of integral position error, the desired type 2 servo properties are not realized and finite steady-state position errors result. Methods for reducing these errors to negligible proportions through the proper selection of control gain and estimator correction parameters are demonstrated. The steady-state error produced by a disturbance torque is found to be directly proportional to the hydraulic internal leakage. This property can be exploited to provide a convenient method of determining system leakage from field measurements of estimator error, axis rate, and hydraulic differential pressure.
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days.
The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
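The composite idea above (a weighted average of forward forecasts and backcasts across a gap) can be sketched with a toy AR(1) series standing in for daily log-flow anomalies. The series, gap position, and decay-based forecasts are hypothetical simplifications of the ARIMA/TFN machinery in the study.

```python
# Sketch: composite gap-filling from a forward forecast and a backcast.
import math
import random

random.seed(3)

phi = 0.9  # AR(1) persistence of the toy daily anomaly series
series = [0.0]
for _ in range(120):
    series.append(phi * series[-1] + random.gauss(0.0, 0.3))

gap_lo, gap_hi = 50, 70        # pretend these 20 days were not measured
truth = series[gap_lo:gap_hi]
L = len(truth)

# Lead-l forecast forward from the last observation before the gap, and
# backcast backward from the first observation after it (AR(1) decay).
fwd = [series[gap_lo - 1] * phi ** (k + 1) for k in range(L)]
back = [series[gap_hi] * phi ** (L - k) for k in range(L)]

# Composite: weights shift linearly from the forecast to the backcast,
# giving a smooth transition into and out of the measured record.
comp = [((L - 1 - k) * f + k * b) / (L - 1)
        for k, (f, b) in enumerate(zip(fwd, back))]

def rms(est):
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / L)

print(f"rms: fwd={rms(fwd):.3f} back={rms(back):.3f} comp={rms(comp):.3f}")
```

The composite matches the forward forecast at the start of the gap and the backcast at the end, which is exactly the "gradual transition between periods of estimated and measured flows" noted in the abstract.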
Method for Real-Time Model Based Structural Anomaly Detection
NASA Technical Reports Server (NTRS)
Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)
2015-01-01
A system and methods for real-time, model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for that location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
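The detection logic described in the abstract (error signal, significance, persistence, threshold) can be sketched directly. The noise level, significance cutoff, and persistence threshold below are hypothetical placeholders, not values from the patent.

```python
# Sketch: model-based anomaly detection via error significance + persistence.
SIGMA = 0.1                 # assumed measurement noise (hypothetical)
SIG_THRESHOLD = 3.0         # flag samples whose |error| exceeds 3 sigma
PERSISTENCE_THRESHOLD = 5   # consecutive significant samples => anomaly

def detect(measured, expected):
    persistence = 0
    for i, (m, e) in enumerate(zip(measured, expected)):
        significance = abs(m - e) / SIGMA            # modeling-error significance
        persistence = persistence + 1 if significance > SIG_THRESHOLD else 0
        if persistence >= PERSISTENCE_THRESHOLD:
            return i                                 # anomaly indicated here
    return None

# Synthetic strain-gauge-style data: small alternating noise, then a
# sustained structural offset starting at sample 60.
expected = [1.0] * 100
measured = [1.0 + (0.02 if i % 2 else -0.02) for i in range(100)]
measured[60:] = [1.8] * 40

idx = detect(measured, expected)
print(f"anomaly indicated at sample {idx}")  # -> 64 (60 + 5 - 1)
```

The persistence counter is what suppresses isolated noise spikes: a single 8-sigma sample does not trigger, but five in a row do.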
A Self-Tuning Kalman Filter for Autonomous Navigation Using the Global Positioning System (GPS)
NASA Technical Reports Server (NTRS)
Truong, Son H.
1999-01-01
Most navigation systems currently operated by NASA are ground-based, and require extensive support to produce accurate results. Recently developed systems that use Kalman filter and GPS (Global Positioning Systems) data for orbit determination greatly reduce dependency on ground support, and have potential to provide significant economies for NASA spacecraft navigation. These systems, however, still rely on manual tuning from analysts. A sophisticated neuro-fuzzy component fully integrated with the flight navigation system can perform the self-tuning capability for the Kalman filter and help the navigation system recover from estimation errors in real time.
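A minimal sketch of the self-tuning idea, assuming a scalar random-walk state and innovation-based adaptation of the measurement-noise variance R as a simple stand-in for the neuro-fuzzy tuner described in the abstract. All dynamics and noise values are synthetic.

```python
# Sketch: scalar Kalman filter that re-tunes its measurement noise R
# from the statistics of its own innovations.
import random

random.seed(5)

Q, R_TRUE = 0.01, 4.0
x_true, x_est, P = 0.0, 0.0, 1.0
R_est = 1.0                  # deliberately mistuned initial guess
history = []                 # (innovation, predicted variance) pairs

for _ in range(400):
    # Truth propagation and measurement.
    x_true += random.gauss(0.0, Q ** 0.5)
    z = x_true + random.gauss(0.0, R_TRUE ** 0.5)

    # Standard predict step for a random-walk state.
    P_pred = P + Q
    innov = z - x_est
    history.append((innov, P_pred))

    # Self-tuning step: innovation variance should equal P_pred + R,
    # so re-estimate R from a sliding window of recent innovations.
    window = history[-100:]
    if len(window) >= 20:
        R_est = max(0.1, sum(v * v - pp for v, pp in window) / len(window))

    # Update step using the adapted R.
    K = P_pred / (P_pred + R_est)
    x_est += K * innov
    P = (1.0 - K) * P_pred

print(f"adapted R = {R_est:.2f} (true R = {R_TRUE})")
```

Starting from a badly mistuned R of 1, the filter converges toward the true value of 4 without any analyst intervention, which is the kind of recovery from estimation error the abstract attributes to the integrated tuning component.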
Marzban, Caren; Viswanathan, Raju; Yurtsever, Ulvi
2014-01-09
A recent study argued, based on data on the functional genome size of major phyla, that there is evidence life may have originated significantly prior to the formation of the Earth. Here a more refined regression analysis is performed in which 1) measurement error is systematically taken into account, and 2) interval estimates (e.g., confidence or prediction intervals) are produced. It is shown that models in which the interval estimate for the time of origin of the genome includes the age of the Earth are consistent with the observed data. The appearance of life after the formation of the Earth is thus consistent with the data set under examination.
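The interval-estimation step can be illustrated with a toy version of the regression: fit a line to synthetic (time, log genome size) data and bootstrap a 95% interval for the extrapolated time at which the regression line crosses zero. The data, coefficients, and units are entirely hypothetical, and the bootstrap stands in for the study's more refined measurement-error treatment.

```python
# Sketch: percentile-bootstrap interval for an extrapolated x-intercept.
import random

random.seed(11)

a_true, b_true = 2.0, 0.5
t = [i * 0.5 for i in range(30)]
y = [a_true + b_true * ti + random.gauss(0.0, 0.1) for ti in t]

def ols(ts, ys):
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    b = (sum((ti - mt) * (yi - my) for ti, yi in zip(ts, ys))
         / sum((ti - mt) ** 2 for ti in ts))
    return my - b * mt, b          # intercept, slope

def origin(ts, ys):
    a, b = ols(ts, ys)
    return -a / b                  # time at which the fitted line crosses y = 0

# Bootstrap the sampling distribution of the extrapolated origin time.
boots = []
for _ in range(1000):
    idx = [random.randrange(len(t)) for _ in t]
    boots.append(origin([t[i] for i in idx], [y[i] for i in idx]))
boots.sort()
lo, hi = boots[25], boots[974]     # 95% percentile interval

print(f"origin time: point {origin(t, y):.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

Whether such an interval includes a given benchmark (here, the true value of -4; in the study, the age of the Earth) is exactly the consistency check the abstract describes.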
Intrusion errors in visuospatial working memory performance.
Cornoldi, Cesare; Mammarella, Nicola
2006-02-01
This study tested the hypothesis that failure in active visuospatial working memory tasks involves a difficulty in avoiding intrusions due to information that is already activated. Two experiments are described, in which participants were required to process several series of locations on a 4 x 4 matrix and then to produce only the final location of each series. Results revealed a higher number of errors due to already activated locations (intrusions) compared with errors due to new locations (inventions). Moreover, when participants were required to pay extra attention to some irrelevant (non-final) locations by tapping on the table, intrusion errors increased. Results are discussed in terms of current models of working memory functioning.
Mapping GRACE Accelerometer Error
NASA Astrophysics Data System (ADS)
Sakumura, C.; Harvey, N.; McCullough, C. M.; Bandikova, T.; Kruizinga, G. L. H.
2017-12-01
After more than fifteen years in orbit, instrument noise, and accelerometer noise in particular, remains one of the limiting error sources for the NASA/DLR Gravity Recovery and Climate Experiment mission. The recent V03 Level-1 reprocessing campaign used a Kalman filter approach to produce a high fidelity, smooth attitude solution fusing star camera and angular acceleration data. This process provided an unprecedented method for analysis and error estimation of each instrument. The accelerometer exhibited signal aliasing, differential scale factors between electrode plates, and magnetic effects. By applying the noise model developed for the angular acceleration data to the linear measurements, we explore the magnitude and geophysical pattern of gravity field error due to the electrostatic accelerometer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogson, E; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW; Ingham Institute for Applied Medical Research, Sydney, NSW
Purpose: To quantify the impact of differing magnitudes of simulated linear accelerator errors on the dose to the target volume and organs at risk for nasopharynx VMAT. Methods: Ten nasopharynx cancer patients were retrospectively replanned twice with one full-arc VMAT plan by two institutions. Treatment uncertainties (gantry angle and collimator in degrees, MLC field size and MLC shifts in mm) were introduced into these plans at increments of 5, 2, 1, −1, −2, and −5. This was completed using an in-house Python script within Pinnacle3 and analysed using 3DVH and MatLab. The mean and maximum dose were calculated for the Planning Target Volume (PTV1), parotids, brainstem, and spinal cord and then compared to the original baseline plan. The D1cc was also calculated for the spinal cord and brainstem. Patient-averaged results were compared across institutions. Results: Introduced gantry angle errors had the smallest effect on dose; no tolerances were exceeded at one institution, and the second institution's VMAT plans exceeded tolerances only for gantry angle errors of ±5°, affecting different sided parotids by 14–18%. PTV1, brainstem, and spinal cord tolerances were exceeded for collimator angles of ±5°, and for MLC shifts and MLC field-size errors of ±1 and beyond, at the first institution. At the second institution, sensitivity to errors was marginally higher for some errors, including the collimator error producing doses exceeding tolerances above ±2°, and marginally lower for others, with tolerances exceeded above MLC shifts of ±2. The largest differences occurred with MLC field-size errors, with both institutions reporting exceeded tolerances for all introduced errors (±1 and beyond). Conclusion: The plan robustness of VMAT nasopharynx plans has been demonstrated. Gantry errors have the least impact on patient doses; however, MLC field-size errors exceed tolerances even with relatively small introduced errors and also produce the largest errors. This was consistent across both departments.
The authors acknowledge funding support from the NSW Cancer Council.
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Eas M.
2003-01-01
The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.
Error analysis and experiments of attitude measurement using laser gyroscope
NASA Astrophysics Data System (ADS)
Ren, Xin-ran; Ma, Wen-li; Jiang, Ping; Huang, Jin-long; Pan, Nian; Guo, Shuai; Luo, Jun; Li, Xiao
2018-03-01
The precision of photoelectric tracking and measuring equipment on vehicles and vessels is degraded by the platform's movement. Specifically, the platform's movement leads to deviation or loss of the target; it also causes jitter of the visual axis and thus produces image blur. In order to improve the precision of photoelectric equipment, the attitude of the photoelectric equipment fixed to the platform must be measured. Currently, laser gyroscopes are widely used to measure the attitude of the platform. However, the measurement accuracy of a laser gyro is affected by its zero bias, scale factor, installation error, and random error. In this paper, these errors were analyzed and compensated based on the laser gyro's error model. Static and dynamic experiments were carried out on a single-axis turntable, and the error model was verified by comparing the gyro's output with an encoder with an accuracy of 0.1 arc sec. The accuracy of the gyroscope improved from 7000 arc sec to 5 arc sec over an hour after error compensation. The method used in this paper is suitable for decreasing laser gyro errors in inertial measurement applications.
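The scale-factor and bias compensation described above can be sketched with a toy calibration. The linear error model `measured = (1 + k)·true + b + noise`, the numeric values of `k` and `b`, and the least-squares fit against a turntable reference are illustrative assumptions, not the paper's calibration results.

```python
import numpy as np

# Assumed gyro error model: measured = (1 + k) * true + b + noise,
# with k a scale-factor error and b a zero bias (values are illustrative).
rng = np.random.default_rng(0)
true_rate = np.linspace(-10.0, 10.0, 500)          # deg/s reference rates
k, b = 2e-3, 0.05                                  # assumed scale error, bias
measured = (1.0 + k) * true_rate + b + rng.normal(0.0, 1e-3, true_rate.size)

# Calibrate against the reference (e.g. a turntable encoder) by least
# squares, then invert the fitted model to compensate the measurements.
A = np.vstack([true_rate, np.ones_like(true_rate)]).T
(gain, bias), *_ = np.linalg.lstsq(A, measured, rcond=None)
compensated = (measured - bias) / gain

# Residual error after compensation is down at the random-noise level.
print(np.max(np.abs(measured - true_rate)),
      np.max(np.abs(compensated - true_rate)))
```

After inverting the deterministic part of the model, only the random error remains, which is why the paper's residual accuracy is limited by gyro noise rather than bias or scale factor.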
NASA Astrophysics Data System (ADS)
Sharma, Navneet; Rawat, Tarun Kumar; Parthasarathy, Harish; Gautam, Kumar
2016-06-01
The aim of this paper is to design a current source, obtained as a representation of p information symbols {I_k}, so that the electromagnetic (EM) field generated interacts with a quantum atomic system, producing after a fixed duration T a unitary gate U(T) that is as close as possible to a given unitary gate U_g. The design procedure involves calculating the EM field produced by {I_k}, hence the perturbing Hamiltonian produced by {I_k}, and finally the evolution operator produced by {I_k} up to cubic order based on the Dyson series expansion. The gate error energy is thus obtained as a cubic polynomial in {I_k}, which is minimized using a gravitational search algorithm. The signal-to-noise ratio (SNR) in the designed gate is higher than that obtained using the quadratic Dyson series expansion. The SNR is calculated as the ratio of the squared Frobenius norm of the desired gate to that of the desired gate error.
Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays
NASA Astrophysics Data System (ADS)
Baek, Sangwook; Lee, Chulhee
2015-03-01
In this paper, we investigate two error issues in stereo images, which may produce visual fatigue. When two cameras are used to produce 3D video sequences, vertical misalignment can be a problem. Although this problem may not occur in professionally produced 3D programs, it is still a major issue in many low-cost 3D programs. Recently, efforts have been made to produce 3D video programs using smart phones or tablets, which may present the vertical alignment problem. Also, in 2D-3D conversion techniques, the simulated frame may have blur effects, which can also introduce visual fatigue in 3D programs. In this paper, to investigate the relationship between these two errors (vertical misalignment and blurring in one image), we performed a subjective test using simulated 3D video sequences that include stereo video sequences with various vertical misalignments and blurring in a stereo image. We present some analyses along with objective models to predict the degree of visual fatigue from vertical misalignment and blurring.
Correcting evaluation bias of relational classifiers with network cross validation
Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...
2011-01-04
Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that combined with paired t-tests produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1–Type II error).
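The Type I error inflation described above can be demonstrated with a toy Monte Carlo: a t-test applied to positively correlated (non-i.i.d.) observations rejects a true null far more often than the nominal rate. The equicorrelated data generator, sample sizes, and the fixed critical value below are illustrative assumptions, not the paper's network-sampling setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def t_test_rejection_rate(rho, n=30, trials=4000):
    """Empirical Type I error of a one-sample t-test of mean = 0 on
    equicorrelated data whose true mean IS 0 (rho = pairwise correlation)."""
    crit = 2.045  # approx. two-sided t critical value, df = 29, alpha = 0.05
    rejections = 0
    for _ in range(trials):
        shared = rng.normal()  # common component induces correlation rho
        x = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=n)
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        if abs(t) > crit:
            rejections += 1
    return rejections / trials

iid_rate = t_test_rejection_rate(rho=0.0)   # near the nominal 0.05
corr_rate = t_test_rejection_rate(rho=0.5)  # far above the nominal 0.05
print(iid_rate, corr_rate)
```

The correlated case rejects a true null the large majority of the time, which is the same mechanism by which overlapping network samples bias paired t-tests; disjoint folds, as in the proposed network cross-validation, restore independence between evaluation sets.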
Evaluating the utility of dynamical downscaling in agricultural impacts projections
Glotter, Michael; Elliott, Joshua; McInerney, David; Best, Neil; Foster, Ian; Moyer, Elisabeth J.
2014-01-01
Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling—nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output—to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections. PMID:24872455
Soriano, Vincent V; Tesoro, Eljim P; Kane, Sean P
2017-08-01
The Winter-Tozer (WT) equation has been shown to reliably predict free phenytoin levels in healthy patients. In patients with end-stage renal disease (ESRD), phenytoin-albumin binding is altered and, thus, affects interpretation of total serum levels. Although an ESRD WT equation was historically proposed for this population, there is a lack of data evaluating its accuracy. The objective of this study was to determine the accuracy of the ESRD WT equation in predicting free serum phenytoin concentration in patients with ESRD on hemodialysis (HD). A retrospective analysis of adult patients with ESRD on HD and concurrent free and total phenytoin concentrations was conducted. Each patient's true free phenytoin concentration was compared with a calculated value using the ESRD WT equation and a revised version of the ESRD WT equation. A total of 21 patients were included for analysis. The ESRD WT equation produced a percentage error of 75% and a root mean square error of 1.76 µg/mL. Additionally, 67% of the samples had an error >50% when using the ESRD WT equation. A revised equation was found to have high predictive accuracy, with only 5% of the samples demonstrating >50% error. The ESRD WT equation was not accurate in predicting free phenytoin concentration in patients with ESRD on HD. A revised ESRD WT equation was found to be significantly more accurate. Given the small study sample, further studies are required to fully evaluate the clinical utility of the revised ESRD WT equation.
NASA Technical Reports Server (NTRS)
Siu, Marie-Michele; Martos, Borja; Foster, John V.
2013-01-01
As part of a joint partnership between the NASA Aviation Safety Program (AvSP) and the University of Tennessee Space Institute (UTSI), research on advanced air data calibration methods has been in progress. This research was initiated to expand a novel pitot-static calibration method that was developed to allow rapid in-flight calibration for the NASA Airborne Subscale Transport Aircraft Research (AirSTAR) facility. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. Subscale flight tests demonstrated small 2-σ error bounds with a significant reduction in test time compared to other methods. Recent UTSI full-scale flight tests have shown airspeed calibrations with the same accuracy as or better than the Federal Aviation Administration (FAA) accepted GPS 'four-leg' method, in a smaller test area and in less time. The current research was motivated by the desire to extend this method to in-flight calibration of angle of attack (AOA) and angle of sideslip (AOS) flow vanes. An instrumented Piper Saratoga research aircraft from the UTSI was used to collect the flight test data and evaluate flight test maneuvers. Results showed that the output-error approach produces good results for flow vane calibration. In addition, maneuvers for pitot-static and flow vane calibration can be integrated to enable simultaneous and efficient testing of each system.
Headaches associated with refractive errors: myth or reality?
Gil-Gouveia, R; Martins, I P
2002-04-01
Headache and refractive errors are very common conditions in the general population, and those with headache often attribute their pain to a visual problem. The International Headache Society (IHS) criteria for the classification of headache include an entity of headache associated with refractive errors (HARE), but indicate that its importance is widely overestimated. To compare overall headache frequency and HARE frequency in healthy subjects with uncorrected or miscorrected refractive errors and a control group, we interviewed 105 individuals with uncorrected refractive errors and a control group of 71 subjects (with properly corrected or without refractive errors) regarding their headache history. We compared the occurrence of headache and its diagnosis in both groups and assessed its relation to their habits of visual effort and type of refractive errors. Headache frequency was similar in both subjects and controls. Headache associated with refractive errors was the only headache type significantly more common in subjects with refractive errors than in controls (6.7% versus 0%). It was associated with hyperopia and was unrelated to visual effort or to the severity of visual error. With adequate correction, 72.5% of the subjects with headache and refractive error reported improvement in their headaches, and 38% had complete remission of headache. Regardless of the type of headache present, headache frequency was significantly reduced in these subjects (t = 2.34, P = .02). Headache associated with refractive errors was rarely identified in individuals with refractive errors. In those with chronic headache, proper correction of refractive errors significantly improved headache complaints and did so primarily by decreasing the frequency of headache episodes.
Wade T. Tinkham; Alistair M. S. Smith; Chad Hoffman; Andrew T. Hudak; Michael J. Falkowski; Mark E. Swanson; Paul E. Gessler
2012-01-01
Light detection and ranging, or LiDAR, effectively produces products spatially characterizing both terrain and vegetation structure; however, development and use of those products has outpaced our understanding of the errors within them. LiDAR's ability to capture three-dimensional structure has led to interest in conducting or augmenting forest inventories with...
When Is a Failure to Replicate Not a Type II Error?
ERIC Educational Resources Information Center
Vasconcelos, Marco; Urcuioli, Peter J.; Lionello-DeNolf, Karen M.
2007-01-01
Zentall and Singer (2007) challenge our conclusion that the work-ethic effect reported by Clement, Feltus, Kaiser, and Zentall (2000) may have been a Type I error by arguing that (a) the effect has been extensively replicated and (b) the amount of overtraining our pigeons received may not have been sufficient to produce it. We believe that our…
Quantum error-correcting codes from algebraic geometry codes of Castle type
NASA Astrophysics Data System (ADS)
Munuera, Carlos; Tenório, Wanderson; Torres, Fernando
2016-10-01
We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.
Analysis of Covariance: Is It the Appropriate Model to Study Change?
ERIC Educational Resources Information Center
Marston, Paul T.; Borich, Gary D.
The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed that true score analysis produced a large number of Type I errors. When corrected for this error, this method showed the least power of the four. This outcome was clearly the result of the…
ERIC Educational Resources Information Center
Wichmann, Astrid; Funk, Alexandra; Rummel, Nikol
2018-01-01
The act of revising is an important aspect of academic writing. Although revision is crucial for eliminating writing errors and producing high-quality texts, research on writing expertise shows that novices rarely engage in revision activities. Providing information on writing errors by means of peer feedback has become a popular method in writing…
The effects of time-varying observation errors on semi-empirical sea-level projections
Ruckert, Kelsey L.; Guan, Yawen; Bakker, Alexander M. R.; ...
2016-11-30
Sea-level rise is a key driver of projected flooding risks. The design of strategies to manage these risks often hinges on projections that inform decision-makers about the surrounding uncertainties. Producing semi-empirical sea-level projections is difficult, for example, due to the complexity of the error structure of the observations, such as time-varying (heteroskedastic) observation errors and autocorrelation of the data-model residuals. This raises the question of how neglecting the error structure impacts hindcasts and projections. Here, we quantify this effect on sea-level projections and parameter distributions by using a simple semi-empirical sea-level model. Specifically, we compare three model-fitting methods: a frequentist bootstrap as well as a Bayesian inversion with and without considering heteroskedastic residuals. All methods produce comparable hindcasts, but the parametric distributions and projections differ considerably based on methodological choices. In conclusion, our results show that the differences based on the methodological choices are enhanced in the upper tail projections. For example, the Bayesian inversion accounting for heteroskedasticity increases the sea-level anomaly with a 1% probability of being equaled or exceeded in the year 2050 by about 34% and about 40% in the year 2100 compared to a frequentist bootstrap. These results indicate that neglecting known properties of the observation errors and the data-model residuals can lead to low-biased sea-level projections.
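A toy fit illustrates why modeling heteroskedastic observation errors matters: weighted least squares down-weights the noisier observations that ordinary least squares treats as equally informative. The linear trend, the assumed error-growth schedule, and the data below are illustrative, not the paper's semi-empirical sea-level model.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 300)
sigma = 0.05 + 0.5 * t                 # assumed: observation error grows with time
y = 1.0 + 2.0 * t + rng.normal(0.0, sigma)

A = np.vstack([np.ones_like(t), t]).T  # design matrix for intercept + slope

# Ordinary least squares ignores the known error structure.
ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Weighted least squares uses weights 1/sigma, down-weighting noisy points.
w = 1.0 / sigma
wls, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)

print(ols, wls)  # both near (intercept 1, slope 2); WLS is more efficient
```

Both estimators are unbiased here, but their uncertainty (and hence the tails of any projection built on them) differs, which mirrors the paper's finding that methodological choices matter most in the upper-tail projections.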
Bilingual language intrusions and other speech errors in Alzheimer's disease.
Gollan, Tamar H; Stasenko, Alena; Li, Chuchu; Salmon, David P
2017-11-01
The current study investigated how Alzheimer's disease (AD) affects the production of speech errors in reading aloud. Twelve Spanish-English bilinguals with AD and 19 matched controls read aloud 8 paragraphs in four conditions: (a) English-only, (b) Spanish-only, (c) English-mixed (mostly English with 6 Spanish words), and (d) Spanish-mixed (mostly Spanish with 6 English words). Reading elicited language intrusions (e.g., saying la instead of the) and several types of within-language errors (e.g., saying their instead of the). Patients produced more intrusions (and self-corrected less often) than controls, particularly when reading non-dominant language paragraphs with switches into the dominant language. Patients also produced more within-language errors than controls, but differences between groups for these were not consistently larger with dominant versus non-dominant language targets. These results illustrate the potential utility of speech errors for diagnosis of AD, suggest a variety of linguistic and executive control impairments in AD, and reveal multiple cognitive mechanisms needed to mix languages fluently. The observed pattern of deficits, and the unique sensitivity of intrusions to AD in bilinguals, suggests an intact ability to select a default language with contextual support and to rapidly translate and switch languages in the production of connected speech, but an impaired ability to monitor language membership while regulating inhibitory control. Copyright © 2017 Elsevier Inc. All rights reserved.
Down's syndrome and the acquisition of phonology by Cantonese-speaking children.
So, L K; Dodd, B J
1994-10-01
The phonological abilities of two groups of 4-9-year-old intellectually impaired Cantonese-speaking children are described. Children with Down's syndrome did not differ from matched non-Down's syndrome controls in terms of a lexical comprehension measure, the size of their phoneme repertoires, the range of sounds affected by articulatory imprecision, or the number of consonants, vowels or tones produced in error. However, the types of errors made by the Down's syndrome children were different from those made by the control subjects. Cantonese-speaking children with Down's syndrome, as compared with controls, made a greater number of inconsistent errors, were more likely to produce non-developmental errors and were better in imitation than in spontaneous production. Despite extensive differences between the phonological structures of Cantonese and English, children with Down's syndrome acquiring these languages show the same characteristic pattern of speech errors. One unexpected finding was that the control group of non-Down's syndrome children failed to present with delayed phonological development typically reported for their English-speaking counterparts. The argument made is that cross-linguistic studies of intellectually impaired children's language acquisition provide evidence concerning language-specific characteristics of impairment, as opposed to those characteristics that, remaining constant across languages, are an integral part of the disorder. The results reported here support the hypothesis that the speech disorder typically associated with Down's syndrome arises from impaired phonological planning, i.e. a cognitive linguistic deficit.
Learning, memory, and the role of neural network architecture.
Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M
2011-06-01
The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
A Benchmark Study on Error Assessment and Quality Control of CCS Reads Derived from the PacBio RS
Jiao, Xiaoli; Zheng, Xin; Ma, Liang; Kutty, Geetha; Gogineni, Emile; Sun, Qiang; Sherman, Brad T.; Hu, Xiaojun; Jones, Kristine; Raley, Castle; Tran, Bao; Munroe, David J.; Stephens, Robert; Liang, Dun; Imamichi, Tomozumi; Kovacs, Joseph A.; Lempicki, Richard A.; Huang, Da Wei
2013-01-01
PacBio RS, a newly emerging third-generation DNA sequencing platform, is based on a real-time, single-molecule, nano-nitch sequencing technology that can generate very long reads (up to 20 kb) in contrast to the shorter reads produced by the first- and second-generation sequencing technologies. As a new platform, it is important to assess the sequencing error rate, as well as the quality control (QC) parameters associated with the PacBio sequence data. In this study, a mixture of 10 previously known, closely related DNA amplicons was sequenced using the PacBio RS sequencing platform. After aligning Circular Consensus Sequence (CCS) reads derived from the above sequencing experiment to the known reference sequences, we found that the median error rate was 2.5% without read QC, and improved to 1.3% with an SVM-based multi-parameter QC method. In addition, a De Novo assembly was used as a downstream application to evaluate the effects of different QC approaches. This benchmark study indicates that even though CCS reads are already error-corrected, it is still necessary to perform appropriate QC on CCS reads in order to produce successful downstream bioinformatics analytical results. PMID:24179701
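The per-read error rates reported above come from comparing aligned reads to known references. One common convention (edit operations over aligned length) can be sketched as follows; the counts in the example are hypothetical, and the paper's exact definition may differ.

```python
def read_error_rate(matches, mismatches, insertions, deletions):
    """Per-read error rate as alignment edit operations over aligned length.

    This is one common convention for alignment-based error estimation;
    the benchmark study's exact formula may differ, and the counts used
    below are hypothetical.
    """
    errors = mismatches + insertions + deletions
    return errors / (matches + mismatches + insertions + deletions)

# Hypothetical CCS read: 1000 aligned columns, 30 edit operations.
print(read_error_rate(matches=970, mismatches=15, insertions=8, deletions=7))
# prints 0.03, i.e. a 3% per-read error rate
```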
Mass load estimation errors utilizing grab sampling strategies in a karst watershed
Fogle, A.W.; Taraba, J.L.; Dinger, J.S.
2003-01-01
Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
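The diurnal-timing effect described above can be illustrated with a toy load calculation: a grab sample taken at the diurnal peak, extrapolated to the whole day, overestimates the true load, while one taken near the daily mean does not. The sinusoidal concentration cycle and constant flow below are assumptions, not the study watershed's data.

```python
import numpy as np

# Hypothetical diurnal concentration cycle (mg/L) with constant flow.
hours = np.arange(0, 24, 0.25)
conc = 10.0 + 3.0 * np.sin(2 * np.pi * (hours - 6) / 24)  # peaks at noon
flow = 2.0                                                 # m^3/s, assumed constant

# "True" daily load: integrate C*Q over the day.
# C [mg/L] * Q [m^3/s] = C*Q [g/s], so C*Q*dt is in grams; /1e3 gives kg.
dt = 0.25 * 3600
true_load = np.sum(conc * flow * dt) / 1e3                 # kg/day

def grab_load(hour):
    """Daily load (kg) estimated from a single grab sample at `hour`."""
    c = 10.0 + 3.0 * np.sin(2 * np.pi * (hour - 6) / 24)
    return c * flow * 24 * 3600 / 1e3

err_mean_time = abs(grab_load(6.0) - true_load) / true_load   # at the daily mean
err_peak_time = abs(grab_load(12.0) - true_load) / true_load  # at the diurnal peak
print(err_mean_time, err_peak_time)  # peak-time sampling is off by about 30%
```

With a 30% diurnal amplitude, sampling at the peak inflates the load estimate by roughly the same fraction, which is the error the study warns about when the time of day of grab sampling is not controlled.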
Sleep, mental health status, and medical errors among hospital nurses in Japan.
Arimura, Mayumi; Imai, Makoto; Okawa, Masako; Fujimura, Toshimasa; Yamada, Naoto
2010-01-01
Medical error involving nurses is a critical issue since nurses' actions will have a direct and often significant effect on the prognosis of their patients. To investigate the significance of nurse health in Japan and its potential impact on patient services, a questionnaire-based survey amongst nurses working in hospitals was conducted, with the specific purpose of examining the relationship between shift work, mental health and self-reported medical errors. Multivariate analysis revealed significant associations between the shift work system, General Health Questionnaire (GHQ) scores and nurse errors: the odds ratios for shift system and GHQ were 2.1 and 1.1, respectively. It was confirmed that both sleep and mental health status among hospital nurses were relatively poor, and that shift work and poor mental health were significant factors contributing to medical errors.
NASA Technical Reports Server (NTRS)
Brown, W. H.; Ahuja, K. K.
1989-01-01
The effects of mechanical protrusions on the jet mixing characteristics of rectangular nozzles for heated and unheated subsonic and supersonic jet plumes were studied. The characteristics of a rectangular nozzle of aspect ratio 4 without the mechanical protrusions were first investigated. Intrusive probes were used to make the flow measurements. Possible errors introduced by intrusive probes in making shear flow measurements were also examined. Several scaled sizes of mechanical tabs were then tested, configured around the perimeter of the rectangular jet. Both the number and the location of the tabs were varied. From this, the best configuration was selected. The conclusions derived were: (1) intrusive probes can produce significant errors in the measurements of the velocity of jets if they are large in diameter and penetrate beyond the jet center; (2) rectangular jets without tabs, compared to circular jets of the same exit area, provide faster jet mixing; and (3) further mixing enhancement is possible by using mechanical tabs.
Integrating aerodynamics and structures in the minimum weight design of a supersonic transport wing
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.; Wrenn, Gregory A.; Dovi, Augustine R.; Coen, Peter G.; Hall, Laura E.
1992-01-01
An approach is presented for determining the minimum weight design of aircraft wing models which takes into consideration aerodynamics-structure coupling when calculating both zeroth order information needed for analysis and first order information needed for optimization. When performing sensitivity analysis, coupling is accounted for by using a generalized sensitivity formulation. The results presented show that the aeroelastic effects are calculated properly and noticeably reduce constraint approximation errors. However, for the particular example selected, the error introduced by ignoring aeroelastic effects is not sufficient to significantly affect the convergence of the optimization process. Trade studies are reported that consider different structural materials, internal spar layouts, and panel buckling lengths. For the formulation, model and materials used in this study, an advanced aluminum material produced the lightest design while satisfying the problem constraints. Also, shorter panel buckling lengths resulted in lower weights by permitting smaller panel thicknesses and generally, by unloading the wing skins and loading the spar caps. Finally, straight spars required slightly lower wing weights than angled spars.
Oceanic geoid and tides derived from GEOS 3 satellite data in the Northwestern Atlantic Ocean
NASA Technical Reports Server (NTRS)
Won, I. J.; Miller, L. S.
1979-01-01
Two sets of GEOS 3 altimeter data which fall within about a 2.5-deg width are analyzed for the ocean geoid and tides. One set covers a path from Newfoundland to Cuba, and the other a path from Puerto Rico to the North Carolina coast. Forty different analyses using various parameters are performed in order to investigate convergence. Profiles of the geoid and four tides, M2, O1, S2, and K1, are derived along the two strips. While the analyses produced convergent solutions for all 40 cases, the uncertainty caused by the linear orbital bias error of the satellite is too large to claim that the solutions represent the true ocean tides in the area. A spot check of the results against the MODE deep-sea tide gauge data shows poor agreement. A positive conclusion of this study is that, despite the uncertain orbital error, the oceanic geoid obtained through this analysis can significantly improve the short-wavelength structure over existing spherical harmonic geoid models.
The penta-prism LTP: A long-trace-profiler with stationary optical head and moving penta prism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, S.; Jark, W.; Takacs, P.Z.
1995-03-01
Metrology requirements for optical components for third-generation synchrotron sources are taxing the state of the art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long-Trace-Profiler II, and have demonstrated that, with some simple modifications, we can significantly reduce the effect of error sources and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, as in the original pencil-beam interferometer design of von Bieren, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.
Van Nguyen; Javaid, Abdul Q; Weitnauer, Mary Ann
2014-01-01
We introduce the Spectrum-averaged Harmonic Path (SHAPA) algorithm for estimation of heart rate (HR) and respiration rate (RR) with Impulse Radio Ultrawideband (IR-UWB) radar. Periodic movement of the human torso caused by respiration and heartbeat induces fundamental frequencies and their harmonics at the respiration and heart rates. IR-UWB enables capture of these spectral components, and frequency-domain processing enables a low-cost implementation. Most existing methods identify the fundamental component in either the frequency or time domain to estimate the HR and/or RR, which leads to significant error if the fundamental is distorted or cancelled by interference. The SHAPA algorithm (1) takes advantage of the HR harmonics, where there is less interference, and (2) exploits the information in previous spectra to achieve more reliable and robust estimation of the fundamental frequency in the spectrum under consideration. Example experimental results for HR estimation demonstrate how our algorithm eliminates errors caused by interference and produces 16% to 60% more valid estimates.
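The harmonic-summation idea can be illustrated with a toy sketch (the published SHAPA algorithm differs in detail, e.g. it also averages over previous spectra): score each candidate heart rate by the summed spectral magnitude at its fundamental and harmonics, so a weak or distorted fundamental is compensated by its harmonics.

```python
# Toy harmonic-sum HR estimator; signal model and amplitudes are synthetic.
import numpy as np

fs = 100.0                          # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)        # 30 s window
hr = 1.2                            # true heart rate, Hz (72 bpm)
rr = 0.25                           # respiration rate, Hz (15 breaths/min)
# Radar-like signal: strong respiration, weak fundamental, stronger 2nd harmonic
x = (2.0 * np.sin(2 * np.pi * rr * t)
     + 0.1 * np.sin(2 * np.pi * hr * t)
     + 0.3 * np.sin(2 * np.pi * 2 * hr * t)
     + 0.05 * np.random.default_rng(2).normal(size=t.size))

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def harmonic_score(f0, n_harm=3):
    """Sum spectral magnitude at f0 and its first harmonics (nearest bins)."""
    idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
    return spec[idx].sum()

candidates = np.arange(0.8, 2.0, 0.01)          # plausible HR band, Hz
est = candidates[np.argmax([harmonic_score(f) for f in candidates])]
print(f"estimated HR: {est * 60:.0f} bpm")
```

Here the 2nd harmonic carries most of the cardiac energy, yet the estimator still recovers the fundamental, which is the property the abstract highlights.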
Hickok, Gregory; Pickell, Herbert; Klima, Edward; Bellugi, Ursula
2009-01-01
We examine the hemispheric organization for the production of two classes of ASL signs, lexical signs and classifier signs. Previous work has found strong left hemisphere dominance for the production of lexical signs, but several authors have speculated that classifier signs may involve the right hemisphere to a greater degree because they can represent spatial information in a topographic, non-categorical manner. Twenty-one unilaterally brain-damaged signers (13 left hemisphere damaged, 8 right hemisphere damaged) were presented with a story narration task designed to elicit both lexical and classifier signs. Relative frequencies of the two types of errors were tabulated. Left hemisphere damaged signers produced significantly more lexical errors than did right hemisphere damaged signers, whereas the reverse pattern held for classifier signs. Our findings argue for different patterns of hemispheric asymmetry for these two classes of ASL signs. We suggest that the requirement to encode analogue spatial information in the production of classifier signs results in increased involvement of right-hemisphere systems.
Moisen, Gretchen G.; Freeman, E.A.; Blackard, J.A.; Frescino, T.S.; Zimmermann, N.E.; Edwards, T.C.
2006-01-01
Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in RuleQuest's See5 and Cubist (for binary and continuous responses, respectively) are the tools of choice in many of these applications. These tools are widely used in large remote sensing applications, but are not easily interpretable, do not have ties with survey estimation methods, and use proprietary unpublished algorithms. Consequently, three alternative modelling techniques were compared for mapping presence and basal area of 13 species located in the mountain ranges of Utah, USA. The modelling techniques compared included the widely used See5/Cubist, generalized additive models (GAMs), and stochastic gradient boosting (SGB). Model performance was evaluated using independent test data sets. Evaluation criteria for mapping species presence included specificity, sensitivity, Kappa, and area under the curve (AUC). Evaluation criteria for the continuous basal area variables included correlation and relative mean squared error. For predicting species presence (setting thresholds to maximize Kappa), SGB had higher values for the majority of the species for specificity and Kappa, while GAMs had higher values for the majority of the species for sensitivity. In evaluating resultant AUC values, GAM and/or SGB models had significantly better results than the See5 models where significant differences could be detected between models. For nine out of 13 species, basal area prediction results for all modelling techniques were poor (correlations less than 0.5 and relative mean squared errors greater than 0.8), but SGB provided the most stable predictions in these instances. 
SGB and Cubist performed equally well for modelling basal area for three species with moderate prediction success, while all three modelling tools produced comparably good predictions (correlation of 0.68 and relative mean squared error of 0.56) for one species. © 2006 Elsevier B.V. All rights reserved.
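As a hedged illustration of one of the compared techniques, the following fits stochastic gradient boosting for species presence on synthetic predictors and scores it by AUC on held-out data (the predictor names are hypothetical; the See5 and GAM counterparts are omitted):

```python
# Sketch of SGB (GradientBoostingClassifier with subsampling) for a binary
# species-presence response, evaluated by AUC on an independent test split.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# Hypothetical predictors, e.g. elevation, summer precipitation, NDVI
X = rng.normal(size=(n, 3))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated presence/absence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
sgb = GradientBoostingClassifier(subsample=0.5, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, sgb.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.2f}")
```

The `subsample < 1` setting is what makes the boosting "stochastic"; thresholds for presence maps would then be chosen to maximize Kappa, as in the study.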
Chen, Yi-Ching; Lin, Linda L; Lin, Yen-Ting; Hu, Chia-Ling; Hwang, Ing-Shiou
2017-01-01
Error amplification (EA) feedback is a promising approach to advance visuomotor skill. As error detection and visuomotor processing at short time scales decline with age, this study examined whether older adults could benefit from EA feedback that included higher-frequency information to guide a force-tracking task. Fourteen young and 14 older adults performed low-level static isometric force-tracking with visual guidance of typical visual feedback and EA feedback containing augmented high-frequency errors. Stabilogram diffusion analysis was used to characterize force fluctuation dynamics. Also, the discharge behaviors of motor units and pooled motor unit coherence were assessed following the decomposition of multi-channel surface electromyography (EMG). EA produced different behavioral and neurophysiological impacts on young and older adults. Older adults exhibited inferior task accuracy with EA feedback than with typical visual feedback, whereas young adults did not. Although stabilogram diffusion analysis revealed that EA led to a significant decrease in critical time points for both groups, EA potentiated the critical point of force fluctuations ⟨ΔFc²⟩, short-term effective diffusion coefficients (Ds), and short-term exponent scaling only for the older adults. Moreover, in older adults, EA added to the size of discharge variability of motor units and discharge regularity of cumulative discharge rate, but suppressed the pooled motor unit coherence in the 13-35 Hz band. Virtual EA alters the strategic balance between open-loop and closed-loop controls for force-tracking. Contrary to expectations, the prevailing use of closed-loop control with EA that contained high-frequency error information enhanced the motor unit discharge variability and undermined the force steadiness in the older group, concerning declines in physiological complexity in the neurobehavioral system and the common drive to the motoneuronal pool against force destabilization. PMID:29167637
Devakumar, Delan; Grijalva-Eternod, Carlos S; Roberts, Sebastian; Chaube, Shiva Shankar; Saville, Naomi M; Manandhar, Dharma S; Costello, Anthony; Osrin, David; Wells, Jonathan C K
2015-01-01
Background. Body composition is important as a marker of both current and future health. Bioelectrical impedance analysis (BIA) is a simple and accurate method for estimating body composition, but requires population-specific calibration equations. Objectives. (1) To generate population-specific calibration equations to predict lean mass (LM) from BIA in Nepalese children aged 7-9 years. (2) To explore methodological changes that may extend the range and improve accuracy. Methods. BIA measurements were obtained from 102 Nepalese children (52 girls) using the Tanita BC-418. Isotope dilution with deuterium oxide was used to measure total body water and to estimate LM. Prediction equations for estimating LM from BIA data were developed using linear regression, and estimates were compared with those obtained from the Tanita system. We assessed the effects of flexing the arms of children to extend the range of coverage towards lower weights. We also estimated the potential error if the number of children included in the study was reduced. Findings. Prediction equations were generated, incorporating height, impedance index, weight and sex as predictors (R² = 93%). The Tanita system tended to underestimate LM, with a mean error of 2.2% but extending up to 25.8%. Flexing the arms to 90° extended the range towards lower weights, but produced a small error that was not significant when applied to children <16 kg (p = 0.42). Reducing the number of children increased the error at the tails of the weight distribution. Conclusions. Population-specific isotope calibration of BIA for Nepalese children has high accuracy. Arm position is important and can be used to extend the range of low weight covered. Smaller samples reduce resource requirements, but lead to larger errors at the tails of the weight distribution.
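A calibration equation of the form described (LM regressed on impedance index, height, weight, and sex) can be sketched with simulated data; the coefficients below are illustrative, not the study's:

```python
# Sketch of a BIA calibration regression: lean mass (LM) predicted from
# impedance index (height^2/Z), height, weight, and sex. Data are simulated.
import numpy as np

rng = np.random.default_rng(4)
n = 102
height = rng.normal(1.25, 0.07, n)          # m (children ~7-9 y)
Z = rng.normal(700, 60, n)                  # whole-body impedance, ohms
weight = rng.normal(22, 3, n)               # kg
sex = rng.integers(0, 2, n)                 # 0 = boy, 1 = girl

h_cm = 100 * height
zidx = h_cm**2 / Z                          # impedance index, cm^2/ohm
# Simulated "true" LM from isotope dilution (hypothetical coefficients)
lm = 2.0 + 0.55 * zidx + 0.20 * weight - 0.8 * sex + rng.normal(0, 0.6, n)

A = np.column_stack([np.ones(n), zidx, h_cm, weight, sex])
coef, *_ = np.linalg.lstsq(A, lm, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((lm - pred) ** 2) / np.sum((lm - lm.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

In the real study, the response would come from deuterium-dilution LM rather than a simulated formula, and the fitted equation would replace the manufacturer's built-in prediction.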
Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.
2015-09-28
Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
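The form of such a single-predictor regional regression, and the conversion of the log-space standard error to percent, can be sketched as follows (synthetic data; the fitted coefficients are not the report's, and ordinary rather than generalized least squares is used for simplicity):

```python
# Sketch of a regional peak-flow regression: log10(Qp) = b0 + b1*log10(A),
# with the log10 residual standard error converted to average percent error.
import numpy as np

rng = np.random.default_rng(5)
n = 40
area = 10 ** rng.uniform(0.5, 3.3, n)            # contributing area, mi^2
log_q = 1.8 + 0.6 * np.log10(area) + rng.normal(0, 0.25, n)  # log10 peak flow

X = np.column_stack([np.ones(n), np.log10(area)])
b, *_ = np.linalg.lstsq(X, log_q, rcond=None)
resid = log_q - X @ b
s = np.sqrt(resid @ resid / (n - 2))             # log10 standard error

# Convert a log10 standard error to an average standard error in percent
se_pct = 100 * np.sqrt(np.exp((np.log(10) * s) ** 2) - 1)
print(f"Qp ≈ {10 ** b[0]:.0f} * A^{b[1]:.2f}, standard error ≈ {se_pct:.0f}%")
```

The percent conversion is the standard lognormal-variance identity applied to base-10 residuals; errors in the 80-150% range, as reported above, correspond to log10 standard errors of roughly 0.3-0.45.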
Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump
NASA Astrophysics Data System (ADS)
Gontcharov, G. A.
2017-08-01
Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars, or by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance in kpc: 0.18R² mas. Allowance for this error significantly reduces the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49 ± 0.04 mag, mode(M_Ks) = -1.63 ± 0.03 mag, mode(M_W1) = -1.67 ± 0.05 mag, mode(M_W2) = -1.67 ± 0.05 mag, mode(M_W3) = -1.66 ± 0.02 mag, mode(M_W4) = -1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.
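Applying the quoted distance-dependent term is straightforward because, as stated, it depends on the *observed* distance; the sign of the correction below is an assumption for illustration only.

```python
# Sketch of applying a systematic parallax term of 0.18*R^2 mas, with R the
# observed distance in kpc. The sign of the correction is an assumption,
# not taken from the abstract.
def corrected_parallax(pi_obs_mas, k=0.18):
    """Subtract an assumed systematic term k * R_obs^2 mas (R_obs in kpc)."""
    r_obs_kpc = 1.0 / pi_obs_mas          # observed distance, kpc
    return pi_obs_mas - k * r_obs_kpc**2

pi_obs = 1.25                             # observed parallax, mas (~800 pc)
pi_c = corrected_parallax(pi_obs)
print(f"observed {pi_obs:.3f} mas -> corrected {pi_c:.4f} mas "
      f"(shift {pi_obs - pi_c:.4f} mas, cf. ~0.1 mas at 800 pc quoted above)")
```

At 800 pc the term evaluates to 0.18 × 0.8² ≈ 0.115 mas, consistent with the "reaches 0.1 mas within 800 pc" statement.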
Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?
Kiernan, D; Hosking, J; O'Brien, T
2016-03-01
Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that comparable errors with children in absolute HJC position may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.
Guan, W; Meng, X F; Dong, X M
2014-12-01
Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error use only one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce constant acceleration and vibration simultaneously, and the rectification error due to these composite accelerations was tested. First, we derived an expression for the rectification error from the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with the rectification error model. A detailed experimental procedure and testing results are then described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of the vibration. The experimental results showed the distinguishing characteristics of the rectification error caused by the composite accelerations. A linear relation between the constant acceleration and the rectification error was demonstrated. The experimental procedure and results presented here can be referenced for the investigation of the characteristics of accelerometers with multiple inputs.
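The rectification mechanism itself can be demonstrated numerically with a quadratic static model (coefficients illustrative, not a real device's): a vibration of amplitude A rectifies through the quadratic term K2 into a DC output shift of K2·A²/2.

```python
# Numerical sketch of rectification error for a quadratic static model
# out = K0 + K1*a + K2*a^2 driven by a(t) = a0 + A*sin(2*pi*f*t).
import numpy as np

K0, K1, K2 = 0.0, 1.0, 5e-4        # bias, scale factor, quadratic coefficient
a0, A, f = 2.0, 10.0, 50.0         # constant accel (g), vib amplitude (g), Hz

t = np.arange(0, 1.0, 1e-4)        # 1 s at 10 kHz: integer number of periods
a = a0 + A * np.sin(2 * np.pi * f * t)
out = K0 + K1 * a + K2 * a**2      # static accelerometer model

# Rectification error = DC shift of the mean output relative to vibration-free
rect_err = out.mean() - (K0 + K1 * a0 + K2 * a0**2)
print(f"measured {rect_err:.4f} g vs theory K2*A^2/2 = {K2 * A**2 / 2:.4f} g")
```

With this simple model the rectification term is independent of a0; the linear dependence on constant acceleration reported above arises from the fuller static model with cross-coupling terms.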
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Pouliot, J
2015-06-15
Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student's t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions, within 1-6% for 10 virtual phantoms and 9% for the physical phantom. For one of the cases, though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance.
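The Student's t step described in Methods can be sketched for a single voxel: given a handful of test-deformation errors with known ground truth, form a t-based interval on the local dose-mapping error (numbers synthetic, not from the tool).

```python
# Per-voxel uncertainty sketch: a Student-t confidence interval from a small
# number of test registrations with known ground-truth error. Values synthetic.
import numpy as np
from scipy import stats

test_errors = np.array([0.8, 1.4, -0.3, 1.1])   # Gy; 4 test deformations, one voxel
n = test_errors.size
mean, sem = test_errors.mean(), stats.sem(test_errors)
lo, hi = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"voxel dose-mapping error: {mean:.2f} Gy (95% CI {lo:.2f} to {hi:.2f} Gy)")
```

With only 4 samples the t interval is wide, which is appropriate: the heavy tails of the t distribution guard against overconfident per-voxel estimates.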
LACIE performance predictor final operational capability program description, volume 1
NASA Technical Reports Server (NTRS)
1976-01-01
The program EPHEMS computes the orbital parameters for up to two vehicles orbiting the earth for up to 549 days. The data represents a continuous swath about the earth, producing tables which can be used to determine when and if certain land segments will be covered. The program GRID processes NASA's climatology tape to obtain the weather indices along with associated latitudes and longitudes. The program LUMP takes substrata historical data and sample segment ID, crop window, crop window error and statistical data, checks for valid input parameters and generates the segment ID file, crop window file and the substrata historical file. Finally, the System Error Executive (SEE) Program checks YES error and truth data, CAMS error data, and signature extension data for validity and missing elements. A message is printed for each error found.
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-04
A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and used to compensate for the errors produced by laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that our system has a standard deviation of 0.5 µm over a range of ±100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" over a range of ±100" for pitch, yaw, and roll measurements, respectively.
Maurer, Willi; Jones, Byron; Chen, Ying
2018-05-10
In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided test (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and 3 variations of doing this were proposed by Potvin et al. Using simulation, the operating characteristics, including the empirical type I error rate, of the 4 variations (called Methods A, B, C, and D) were assessed by Potvin et al and Methods B and C were recommended. However, none of these 4 variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results. 
This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST testing procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
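For reference, the fixed-design TOST procedure underlying all of these methods can be sketched on the log scale against the standard 80-125% limits (log-differences below are made up; this omits the two-stage sample size reestimation that is the paper's subject):

```python
# Minimal TOST sketch for average bioequivalence on the log scale:
# reject both one-sided hypotheses at level alpha against margins ±log(1.25).
import numpy as np
from scipy import stats

# Hypothetical per-subject log(test) - log(reference) differences, n = 12
d = np.array([0.05, -0.10, 0.12, 0.00, -0.04, 0.08,
              -0.06, 0.03, 0.10, -0.02, 0.01, 0.07])
n = d.size
theta = np.log(1.25)                   # equivalence margin on the log scale
se = d.std(ddof=1) / np.sqrt(n)
t_lower = (d.mean() + theta) / se      # tests H0: mean <= -theta
t_upper = (d.mean() - theta) / se      # tests H0: mean >= +theta
p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
p_upper = stats.t.cdf(t_upper, df=n - 1)
bioequivalent = max(p_lower, p_upper) < 0.05
print(f"p_lower={p_lower:.2e}, p_upper={p_upper:.2e}, ABE concluded: {bioequivalent}")
```

Rejecting both one-sided tests at level 0.05 is equivalent to the 90% confidence interval for the geometric mean ratio lying inside 80-125%; the paper's contribution is controlling this test's level when the sample size is re-estimated mid-trial.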
Tang, Jing; Thorhauer, Eric; Marsh, Chelsea; Fu, Freddie H.
2013-01-01
Purpose Femoral tunnel angle (FTA) has been proposed as a metric for evaluating whether ACL reconstruction was performed anatomically. In the clinic, radiographic images are typically acquired with an uncertain amount of internal/external knee rotation. The extent to which knee rotation influences FTA measurement is unclear. Furthermore, differences in FTA measurement between the two common positions (0° and 45° knee flexion) have not been established. The purpose of this study was to investigate the influence of knee rotation on FTA measurement after ACL reconstruction. Methods Knee CT data from 16 subjects were segmented to produce 3D bone models. Central axes of tunnels were identified. The 0° and 45° flexion angles were simulated. Knee internal/external rotations were simulated over a range of ±20°. FTA was defined as the angle between the tunnel axis and the femoral shaft axis, orthogonally projected into the coronal plane. Results Femoral tunnel angle was positively correlated with knee rotation angle at 0° knee flexion and negatively correlated at 45° knee flexion. At 0° knee flexion, FTA for anteromedial (AM) tunnels was significantly decreased at 20° of external knee rotation. At 45° knee flexion, more than 16° external or 19° internal rotation significantly altered FTA measurements for single-bundle tunnels; smaller rotations (±9° for AM, ±5° for PL) created significant errors in FTA measurements after double-bundle reconstruction. Conclusion Femoral tunnel angle measurements were correlated with knee rotation. Relatively small imaging malalignment introduced significant errors with the knee flexed 45°. This study supports using the 0° flexion position for knee radiographs to reduce errors in FTA measurement due to knee internal/external rotation. Level of evidence Case-control study, Level III. PMID:23589127
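The geometric effect reported above can be sketched directly: rotate a hypothetical tunnel axis about the femoral shaft (superior) axis and recompute the coronal-plane projection angle. The coordinate conventions and tunnel direction below are illustrative assumptions.

```python
# Sketch: FTA as the coronal-plane angle between a tunnel axis and the femoral
# shaft, under simulated internal/external knee rotation about the shaft axis.
import numpy as np

def fta_deg(tunnel_axis, rot_deg):
    """Coronal-plane angle (deg) between tunnel axis and shaft (z) after
    rotating the knee by rot_deg about the superior (z) axis."""
    th = np.radians(rot_deg)
    Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
    v = Rz @ tunnel_axis
    vx, vz = v[0], v[2]                # coronal plane spanned by x (medial), z (superior)
    return np.degrees(np.arctan2(abs(vx), abs(vz)))

tunnel = np.array([0.5, 0.3, 0.8])     # hypothetical tunnel direction
tunnel = tunnel / np.linalg.norm(tunnel)
for rot in (-20, 0, 20):
    print(f"rotation {rot:+d} deg -> FTA {fta_deg(tunnel, rot):.1f} deg")
```

Because rotation moves tunnel direction between the coronal and sagittal planes, the projected angle shifts by several degrees over ±20°, mirroring the rotation sensitivity the study quantifies.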
Design Techniques for Power-Aware Combinational Logic SER Mitigation
NASA Astrophysics Data System (ADS)
Mahatme, Nihaar N.
The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. Along with improved performance, speed and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing radiation particle strikes on reverse-biased semiconductor junctions. At terrestrial level these radiation-induced errors are caused by (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88], and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment radiation-induced errors are a much bigger threat and are mainly caused by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, they can lead to an inversion of the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions that are part of combinational logic gates, they produce transient voltage spikes or glitches called single-event transients (SETs) that can be latched by receiving flip-flops. As circuits are clocked faster, there are more clocking edges, which increases the likelihood of latching these transients.
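The frequency dependence in the last sentence is often captured with a first-order "window of vulnerability" model (a standard textbook approximation, not a result from this dissertation; the setup/hold values below are illustrative):

```python
def set_latch_probability(pulse_width_ps, f_clk_hz, t_setup_ps=20.0, t_hold_ps=10.0):
    """First-order window-of-vulnerability model for SET latching.

    A transient is captured only if it overlaps the latching window around
    a clock edge, so the capture probability grows linearly with clock
    frequency until it saturates at 1.
    """
    period_ps = 1e12 / f_clk_hz
    window_ps = pulse_width_ps + t_setup_ps + t_hold_ps
    return min(1.0, window_ps / period_ps)

p_1ghz = set_latch_probability(100.0, 1e9)  # 100 ps transient at 1 GHz
p_2ghz = set_latch_probability(100.0, 2e9)  # same transient at 2 GHz
```

Doubling the clock frequency doubles the capture probability in this regime, which is why latched SETs from combinational logic become a growing fraction of the chip-level soft error rate as designs are clocked faster.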
In older technology generations the probability of errors in flip-flops due to SETs being latched was much lower compared to direct strikes on flip-flops or SRAMs leading to SEUs. This was mainly because operating frequencies were much lower in older technology generations. The Intel Pentium II, for example, was fabricated in a 0.35 μm technology and operated between 200-330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02] [Bu97]. Therefore there is a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily promises the blessing of smaller and faster transistors with each technology generation, it fails to highlight that the power density increases exponentially as well. The power density problem was partially addressed in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Since then, however, the technology miniaturization that enabled high-speed, multicore and parallel computing has steadily worsened the power density and power consumption problem. Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, all-pervasive sensor networks and future eco-bio-sensors. Low power consumption is now regularly part of design philosophies for digital products with applications as diverse as computing, communication and healthcare.
Thus designers today are left grappling with both a "power wall" and a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches are invariably saddled with overheads in terms of area, speed and, more importantly, power. Thus, the cost of protecting combinational logic through power-hungry mitigation approaches can disrupt the power budget significantly. Therefore there is a strong need for techniques that provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.
Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O
2014-03-01
Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. The aim was to test the effectiveness of four types of learning practices, namely non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of 4 hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and was measured by a validated structured observation sheet. Wards' use of medication administration technologies, location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and a quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate=.03, p<.05 and estimate=-.17, p<.01, respectively), while workload was significantly linked to inflated medication administration errors (estimate=.04, p<.05). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate=-.04, p<.05). Integrated and patchy learning were significantly linked to higher levels of medication administration errors (estimate=-.03, p<.05 and estimate=-.04, p<.01, respectively). Non-integrated learning was not significantly associated with errors (p>.05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technology risk factors.
Head nurses can facilitate learning from errors by "management by walking around" and by monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M
2015-06-01
Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers: 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
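The reported absolute risk reduction and confidence interval can be reproduced with the standard two-proportion normal (Wald) approximation, on the assumption that this is the method the authors used:

```python
import math

def arr_ci(p1, n1, p2, n2, z=1.96):
    """Absolute risk reduction (p1 - p2) with a Wald 95% confidence interval."""
    arr = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return arr, arr - z * se, arr + z * se

# error rates per prescription before/after CPOE, from the abstract
arr, lo, hi = arr_ci(10.4 / 100, 7268, 7.3 / 100, 7292)
```

Plugging in the abstract's rates (10.4 vs. 7.3 per 100) and sample sizes reproduces the reported interval of 2.2 to 4.0 per 100 prescriptions.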
Derivation and precision of mean field electrodynamics with mesoscale fluctuations
NASA Astrophysics Data System (ADS)
Zhou, Hongzhe; Blackman, Eric G.
2018-06-01
Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.
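For orientation, the standard two-scale mean field induction equation and electromotive force (EMF) closure that the paper generalizes take the textbook form:

```latex
\frac{\partial \bar{\mathbf{B}}}{\partial t}
  = \nabla \times \left( \bar{\mathbf{V}} \times \bar{\mathbf{B}}
      + \boldsymbol{\mathcal{E}} \right)
    + \eta \nabla^2 \bar{\mathbf{B}},
\qquad
\boldsymbol{\mathcal{E}} \equiv \overline{\mathbf{v}' \times \mathbf{b}'}
  \simeq \alpha \bar{\mathbf{B}} - \beta \nabla \times \bar{\mathbf{B}},
```

where overbars denote averaged quantities, primes denote fluctuations, and α and β are the usual turbulent transport coefficients. The paper's contribution is the mesoscale correction terms to this closure and the quantification of the precision of its predictions.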
NASA Astrophysics Data System (ADS)
Merino, G. G.; Jones, D.; Stooksbury, D. E.; Hubbard, K. G.
2001-06-01
In this paper, linear and spherical semivariogram models were determined for use in kriging hourly and daily solar irradiation for every season of the year. The data used to generate the models were from 18 weather stations in western Nebraska. The models generated were tested using cross validation. The performance of the spherical and linear semivariogram models was compared with each other and also with semivariogram models based on the best fit to the sample semivariogram of a particular day or hour. There were no significant differences in the performance of the three models. This result, and the comparable errors produced by the models in kriging, indicated that the linear and spherical models could be used to perform kriging at any hour and day of the year without deriving an individual semivariogram model for that day or hour. The seasonal mean absolute errors associated with kriging within the network, when using the spherical or linear semivariogram models, were between 10% and 13% of the mean irradiation for daily irradiation and between 12% and 20% for hourly irradiation. These errors represent an improvement of 1%-2% when compared with replacing data at a given site with the data of the nearest weather station.
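The two semivariogram models named above have standard closed forms; a minimal sketch follows (the nugget, sill, and range values used below are illustrative, not the Nebraska fits):

```python
def linear_semivariogram(h, nugget, slope):
    """Linear model: gamma(h) = c0 + b*h for lag distance h > 0."""
    return nugget + slope * h

def spherical_semivariogram(h, nugget, sill, rng):
    """Spherical model: rises smoothly to nugget + sill at the range,
    then stays flat for all larger lags."""
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)
```

Either function can be plugged into a kriging system to supply the semivariance between any pair of stations as a function of their separation.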
Vauhkonen, P J; Vauhkonen, M; Kaipio, J P
2000-02-01
In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.
Gesture production and comprehension in children with specific language impairment.
Botting, Nicola; Riches, Nicholas; Gaynor, Marguerite; Morgan, Gary
2010-03-01
Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed comparably to peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary, and this group also showed stronger associations between gesture and language than TD children. When comprehension breaks down in SLI, gesture may be relied on over speech, whilst TD children prefer spoken cues. The findings suggest that for children with SLI, gesture scaffolds remain more closely tied to language development than for TD peers, who have outgrown their earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom-based gesture support for clinical groups.
Albin, Thomas J
2013-01-01
Designers and ergonomists occasionally must produce anthropometric models of workstations with only summary percentile data available regarding the intended users. Until now, the only option available was adding or subtracting percentiles of the anthropometric elements, e.g. heights and widths, used in the model, despite the known resultant errors in the estimate of the percentage of users accommodated. This paper introduces a new method, the Median Correlation Method (MCM), that reduces this error. The objectives are to compare the relative accuracy of MCM to combining percentiles for anthropometric models comprised of all possible pairs of five anthropometric elements, and to describe the mathematical basis of the greater accuracy of MCM. MCM is described. Estimates of 95th percentile accommodation are calculated for the sums and differences of all combinations of five anthropometric elements, both by combining percentiles and by using MCM. The resulting estimates are compared with empirical values of the 95th percentiles, and the relative errors are reported. MCM is shown to be significantly more accurate than adding percentiles, and it is demonstrated to have a mathematical advantage in estimating accommodation relative to adding or subtracting percentiles. MCM should be used in preference to adding or subtracting percentiles when limited data prevent more sophisticated anthropometric models.
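The error that MCM is designed to reduce can be illustrated exactly for jointly normal elements (the element means, SDs, and correlation below are hypothetical; this demonstrates the problem with adding percentiles, not MCM itself):

```python
import math
from statistics import NormalDist

def p95_of_sum(mu1, sd1, mu2, sd2, rho):
    """Exact 95th percentile of the sum of two jointly normal elements."""
    sd_sum = math.sqrt(sd1 ** 2 + sd2 ** 2 + 2 * rho * sd1 * sd2)
    return NormalDist(mu1 + mu2, sd_sum).inv_cdf(0.95)

def sum_of_p95(mu1, sd1, mu2, sd2):
    """The naive estimate: add the two elements' own 95th percentiles."""
    return NormalDist(mu1, sd1).inv_cdf(0.95) + NormalDist(mu2, sd2).inv_cdf(0.95)

# two hypothetical anthropometric elements (cm), moderately correlated
naive = sum_of_p95(60.0, 3.0, 45.0, 2.5)
exact = p95_of_sum(60.0, 3.0, 45.0, 2.5, rho=0.4)
```

The naive sum of 95th percentiles equals the true 95th percentile of the combined dimension only when the elements are perfectly correlated (rho = 1); for any realistic correlation it overestimates, which is why percentile addition misstates the percentage of users accommodated.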
Combined Henyey-Greenstein and Rayleigh phase function.
Liu, Quanhua; Weng, Fuzhong
2006-10-01
The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. The Henyey-Greenstein (HG) approximation preserves only the correct asymmetry factor (i.e., the first moment), which is essentially important for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function applies only to intensity (scalar) radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function plays the role of a modulator, extending the application of the Rayleigh phase function to weakly asymmetric scattering. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for polarized radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications for both intensity and polarimetric radiative transfer. For the microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weakly asymmetric scattering are generally below 0.02 K when the HG-Rayleigh phase function is used. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
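The two ingredient phase functions have standard forms whose defining properties (unit normalization and asymmetry factor) can be checked numerically; the combined HG-Rayleigh function itself is the paper's contribution and is not reproduced here:

```python
def hg(mu, g):
    """Henyey-Greenstein phase function of mu = cos(scattering angle),
    normalized so that (1/2) * integral_{-1}^{1} p(mu) dmu = 1."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

def rayleigh(mu):
    """Rayleigh phase function for unpolarized light, same normalization."""
    return 0.75 * (1.0 + mu * mu)

def moment(p, k, n=20001):
    """(1/2) * integral_{-1}^{1} mu^k p(mu) dmu via the trapezoid rule."""
    h = 2.0 / (n - 1)
    total = 0.0
    for i in range(n):
        mu = -1.0 + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * (mu ** k) * p(mu)
    return 0.5 * total * h
```

The zeroth moment of each function is 1, the first moment of HG is its asymmetry parameter g, and the first moment of Rayleigh is 0, which is why HG alone cannot represent the near-symmetric scattering of small particles without error.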
Studies in automatic speech recognition and its application in aerospace
NASA Astrophysics Data System (ADS)
Taylor, Michael Robinson
Human communication is characterized in terms of the spectral and temporal dimensions of speech waveforms. Electronic speech recognition strategies based on Dynamic Time Warping and Markov Model algorithms are described and typical digit recognition error rates are tabulated. The application of Direct Voice Input (DVI) as an interface between man and machine is explored within the context of civil and military aerospace programmes. Sources of physical and emotional stress affecting speech production within high-performance military aircraft are identified. Experimental results are reported which quantify fundamental frequency and coarse temporal dimensions of male speech as a function of the vibration, linear acceleration and noise levels typical of aerospace environments; preliminary indications of acoustic-phonetic variability reported by other researchers are summarized. Connected whole-word pattern recognition error rates are presented for digits spoken under controlled Gz sinusoidal whole-body vibration. Correlations are made between significant increases in recognition error rate and resonance of the abdomen-thorax and head subsystems of the body. The phenomenon of vibrato-style speech produced under low-frequency whole-body Gz vibration is also examined. Interactive DVI system architectures and avionic data bus integration concepts are outlined, together with design procedures for the efficient development of pilot-vehicle command and control protocols.
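The Dynamic Time Warping strategy mentioned above aligns two utterances by minimizing cumulative local cost over monotone warping paths; a minimal textbook implementation (not the thesis's recognizer, and using scalar absolute difference in place of spectral frame distances) looks like this:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible path steps
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because the warp absorbs differences in speaking rate, a stretched utterance can still match its template at zero or low cost, which is exactly the property that made DTW attractive for digit recognition under stress-induced timing variability.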
Calibrating First-Order Strong Lensing Mass Estimates in Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Reed, Brendan; Remolian, Juan; Sharon, Keren; Li, Nan; SPT Clusters Collaboration
2018-01-01
We investigate methods to reduce the statistical and systematic errors inherent in using the Einstein radius as a first-order mass estimate in strong lensing galaxy clusters. By finding an empirical universal calibration function, we aim to enable a first-order mass estimate of large cluster data sets in a fraction of the time and effort of full-scale strong lensing mass modeling. We use 74 simulated clusters from the Argonne National Laboratory in a lens redshift slice of [0.159, 0.667] with various source redshifts in the range of [1.23, 2.69]. From the simulated density maps, we calculate the exact mass enclosed within the Einstein radius. We find that the mass inferred from the Einstein radius alone produces an error width of ~39% with respect to the true mass. We explore an array of polynomial and exponential correction functions with dependence on cluster redshift and projected radii of the lensed images, aiming to reduce the statistical and systematic uncertainty. We find that the error on the mass inferred from the Einstein radius can be reduced significantly by using a universal correction function. Our study has implications for current and future large galaxy cluster surveys aiming to measure cluster mass and the mass-concentration relation.
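The first-order estimate in question is the standard strong lensing relation between the Einstein radius and the projected mass it encloses:

```latex
M(\theta_E) = \Sigma_{\rm cr}\, \pi \left( D_d\, \theta_E \right)^2,
\qquad
\Sigma_{\rm cr} = \frac{c^2}{4\pi G} \frac{D_s}{D_d\, D_{ds}},
```

where D_d, D_s, and D_ds are the angular diameter distances to the lens, to the source, and between lens and source. Deviations of real clusters from the circular symmetry this estimate assumes are what the empirical correction function calibrates away.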
An Introduction to SPEAR (Seismogram Picking Error from Analyst Review)
NASA Astrophysics Data System (ADS)
Zeiler, C. P.; Velasco, A. A.; Anderson, D.; Pingitore, N. E.
2008-12-01
A grassroots initiative began in February of 2008 at the University of Texas at El Paso to understand how seismologists measure earthquakes. The Seismogram Picking Error from Analyst Review (SPEAR) project is designed to be a forum where seismologists can propose, discuss and experimentally test theories on proper procedures to identify and measure seismic phases. We outline the history of seismogram analysis and explore areas of seismogram analysis that still need to be defined. The main concern for SPEAR, at this time, is the impact of picking errors produced by merging earthquake catalogs. Our initial effort has been to establish a common data set for seismologists to pick. Preliminary studies from this data set have shown that significant biases between catalog authors may exist. We provide techniques to ensure that these biases can be identified and correctly managed so as to produce accurate mergers of earthquake measurements. The overall goal of SPEAR is to provide a repository of information to aid seismologists in comparing and sharing measurements. We want to document and explore in the repository all aspects of the picking process, from the basics of learning how to read a seismogram to complex transformations and enhancements of signals. Your participation in SPEAR will help the seismological community close the knowledge gaps that exist in seismogram analysis.
The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations
NASA Astrophysics Data System (ADS)
Orf, L.
2017-12-01
In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain.
We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory overhead needed to store the virtual files before they are flushed to disk.
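ZFP's fixed-accuracy mode guarantees a user-specified maximum absolute error per value. That contract (though not ZFP's actual block-transform algorithm) can be illustrated with a stdlib-only sketch that quantizes to the tolerance, delta-encodes, and deflates:

```python
import array
import math
import zlib

def compress_abs_err(values, tol):
    """Lossy-compress floats with a guaranteed max absolute error <= tol."""
    step = 2.0 * tol
    q = [round(v / step) for v in values]          # quantize: error <= tol
    deltas = [q[0]] + [q[i] - q[i - 1] for i in range(1, len(q))]
    return zlib.compress(array.array("q", deltas).tobytes(), 9)

def decompress_abs_err(blob, tol):
    step = 2.0 * tol
    deltas = array.array("q")
    deltas.frombytes(zlib.decompress(blob))
    out, acc = [], 0
    for d in deltas:                                # undo delta encoding
        acc += d
        out.append(acc * step)
    return out

# smooth, low-variability data (like a quiescent model region)
data = [math.sin(2 * math.pi * i / 10000) for i in range(10000)]
blob = compress_abs_err(data, 1e-3)
recon = decompress_abs_err(blob, 1e-3)
```

As in the simulations described above, smooth low-variability data compresses dramatically under an error-bounded scheme, while regions of high spatial variability (large deltas) compress poorly; the error bound itself holds everywhere regardless.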
Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa
2017-09-01
Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, thus presenting increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution for these techniques. Our objective was to quantify the expected accuracy error associated with 2MM and 4MM thickness Sterile-Z Patient Drape® using the Medtronic O-Arm® Surgical Imaging with StealthStation® S7® Navigation System. Camera distance to the reference frame was investigated for its contribution to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three different drape configurations: no drape, 2MM drape and 4MM drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained and seven 3D acquisitions for each drape configuration were measured. Data were analyzed by a two-factor analysis of variance (ANOVA), and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape at each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2MM drape at the 'far' camera distance. The 4MM drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to no-drape testing, regardless of camera distance.
The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the 'no drape' condition, the accuracy error of 0.11 mm when using the 4MM film drape is minimal and clinically insignificant.
Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja
2016-12-21
Interruptions and errors during the medication process are common, but the published literature provides no evidence on whether separate medication rooms are effective as a single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting by nurses via questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room, from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention: after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajagopal, K. R.; Rao, I. J.
The procedures in place for producing materials in order to optimize their performance with respect to creep characteristics, oxidation resistance, elevation of melting point, thermal and electrical conductivity and other thermal and electrical properties are essentially trial-and-error experimentation that tends to be tremendously time consuming and expensive. A computational approach has been developed that can replace these trial-and-error procedures, so that one can efficiently design and engineer materials for the application in question, leading to enhanced performance of the material, a significant decrease in costs, and a reduction in the time necessary to produce such materials. The work has relevance to the design and manufacture of turbine blades operating at high operating temperature; the development of armor and missile heads; corrosion-resistant tanks and containers; better conductors of electricity; and the numerous other applications envisaged for specially structured nanocrystalline solids. A robust thermodynamic framework is developed within which the computational approach is formulated. The procedure takes into account microstructural features such as the dislocation density, lattice mismatch, stacking faults, volume fractions of inclusions, interfacial area, etc. A robust model for single-crystal superalloys that takes into account the microstructure of the alloy within the context of a continuum model is developed. Having developed the model, we then implement it in a computational scheme using the software ABAQUS/STANDARD. The results of the simulation are compared against experimental data in realistic geometries.
Discovery of error-tolerant biclusters from noisy gene expression data.
Gupta, Rohit; Rao, Navneet; Kumar, Vipin
2011-11-24
An important analysis performed on microarray gene-expression data is to discover biclusters, which denote groups of genes that are coherently expressed for a subset of conditions. Various biclustering algorithms have been proposed to find different types of biclusters from these real-valued gene-expression data sets. However, these algorithms suffer from several limitations: an inability to explicitly handle errors/noise in the data; difficulty in discovering small biclusters due to their top-down approach; and an inability of some of the approaches to find overlapping biclusters, which is crucial as many genes participate in multiple biological processes. Association pattern mining also produces biclusters and can naturally address some of these limitations. However, traditional association mining only finds exact biclusters, which limits its applicability in real-life data sets where the biclusters may be fragmented due to random noise/errors. Moreover, as it only works with binary or boolean attributes, its application to gene-expression data requires transforming real-valued attributes to binary attributes, which often results in loss of information. Many past approaches have tried to address the issues of noise and of handling real-valued attributes independently, but there is no systematic approach that addresses both of these issues together. In this paper, we first propose a novel error-tolerant biclustering model, 'ET-bicluster', and then propose a bottom-up heuristic-based mining algorithm to sequentially discover error-tolerant biclusters directly from real-valued gene-expression data. The efficacy of our proposed approach is illustrated by comparing it with a recent approach, RAP, in the context of two biological problems: discovery of functional modules and discovery of biomarkers.
For the first problem, two real-valued S. cerevisiae microarray gene-expression data sets are used to demonstrate that the biclusters obtained from the ET-bicluster approach not only recover a larger set of genes than those obtained from the RAP approach but also have higher functional coherence, as evaluated using GO-based functional enrichment analysis. The statistical significance of the discovered error-tolerant biclusters, estimated using two randomization tests, reveals that they are indeed biologically meaningful and statistically significant. For the second problem, biomarker discovery, we used four real-valued breast cancer microarray gene-expression data sets and evaluated the biomarkers obtained using MSigDB gene sets. The results obtained for both problems, functional module discovery and biomarker discovery, clearly demonstrate the usefulness of the proposed ET-bicluster approach and illustrate the importance of explicitly incorporating noise/errors in discovering coherent groups of genes from gene-expression data.
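To make the error-tolerance idea concrete: on binarized data (which the paper's real-valued approach avoids needing), a submatrix can be accepted when every row of it is mostly ones. The check below is a hypothetical simplification of such a criterion, not the authors' ET-bicluster algorithm; the matrix, row/column indices, and tolerance `eps` are illustrative.

```python
import numpy as np

def is_error_tolerant_bicluster(data, rows, cols, eps=0.25):
    """Accept the submatrix data[rows][:, cols] as an error-tolerant
    bicluster when every row of the submatrix contains at least a
    (1 - eps) fraction of ones. A simplified, row-wise tolerance
    criterion for illustration only."""
    sub = data[np.ix_(rows, cols)]
    row_density = sub.mean(axis=1)  # fraction of ones per row
    return bool(np.all(row_density >= 1.0 - eps))
```

With `eps=0` this reduces to the exact biclusters of traditional association mining; a nonzero `eps` lets fragmented patterns (rows with a few noisy zeros) still be recovered.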
Effect of contrast on human speed perception
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Thompson, Peter
1992-01-01
This study is part of an ongoing collaborative research effort between the Life Science and Human Factors Divisions at NASA ARC to measure the accuracy of human motion perception in order to predict potential errors in human perception/performance and to facilitate the design of display systems that minimize the effects of such deficits. The study describes how contrast manipulations can produce significant errors in human speed perception. Specifically, when two simultaneously presented parallel gratings are moving at the same speed within stationary windows, the lower-contrast grating appears to move more slowly. This contrast-induced misperception of relative speed is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g., a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). The misperception is large: a 70 percent contrast grating must, on average, be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, it is largely independent of the absolute contrast level and is a quasilinear function of log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, the relative orientation of the two gratings is important. Finally, the effect depends on the temporal presentation of the stimuli: the effects of contrast on perceived speed appear lessened when the stimuli to be matched are presented sequentially. These data constrain both physiological models of visual cortex and models of human performance. We conclude that viewing conditions that affect contrast, such as fog, may cause significant errors in speed judgments.
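The reported quasilinearity in log contrast ratio suggests a simple descriptive model: the physical slowing needed for a perceptual match grows linearly with the log of the contrast ratio. The function below is such a sketch; the slope `k` is fit loosely by hand to the reported ~35 percent slowing for the 70 vs. 10 percent pair, and is not the authors' actual model or parameter.

```python
import math

def matching_speed(v_ref, c_ref, c_test, k=0.41):
    """Physical speed at which a grating of contrast c_test perceptually
    matches a reference grating of contrast c_ref moving at v_ref deg/s.
    Assumes slowing proportional to log10 of the contrast ratio, with an
    illustrative slope k (a hypothetical fit, not the authors' model)."""
    return v_ref * (1.0 - k * math.log10(c_test / c_ref))
```

For equal contrasts the function returns the reference speed unchanged; for a 70 percent grating matched to a 10 percent one at 2 deg/s, it reproduces a slowing near the reported 35 percent.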
Skinner, Kenneth D.
2009-01-01
Elevation data in riverine environments can be used in various applications for which different levels of accuracy are required. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging) - or EAARL - system was used to obtain topographic and bathymetric data along the lower Boise River, southwestern Idaho, for use in hydraulic and habitat modeling. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL data collection, real-time kinematic global positioning system and total station ground-survey data were collected in three areas within the lower Boise River basin to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived elevation data, determined in open, flat terrain, to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.082 to 0.138 m. Accuracies for bank, floodplain, and in-stream bathymetric data had root mean square errors ranging from 0.090 to 0.583 m. The greater root mean square errors for the latter data are the result of high levels of turbidity in the downstream ground-survey area, dense tree canopy, and horizontal location discrepancies between the EAARL and ground-survey data in steeply sloping areas such as riverbanks. The EAARL point to ground-survey comparisons produced results similar to those for the EAARL raster to ground-survey comparisons, indicating that the interpolation of the EAARL points to rasters did not introduce significant additional error. The mean percent error for the wetted cross-sectional areas of the two upstream ground-survey areas was 1 percent. The mean percent error increases to -18 percent if the downstream ground-survey area is included, reflecting the influence of turbidity in that area.
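The vertical accuracy figures quoted above are root mean square errors between lidar-derived and ground-survey elevations at coincident points. A minimal sketch of that comparison (the report's raster sampling and horizontal co-registration steps are not shown):

```python
import numpy as np

def vertical_rmse(lidar_z, survey_z):
    """Root mean square error between lidar-derived elevations and
    ground-survey elevations at coincident comparison points."""
    d = np.asarray(lidar_z, float) - np.asarray(survey_z, float)
    return float(np.sqrt(np.mean(d ** 2)))
```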
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Marchand, R.; Ackerman, T. P.
2016-12-01
Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
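A subcolumn generator of the maximum-random-overlap kind criticized above can be sketched in a few lines: adjacent cloudy layers share the same overlap deviate (maximum overlap), while layers separated by clear air draw a fresh, rescaled deviate (random overlap). This is a simplified generator in the spirit of the standard schemes, with homogeneous condensate assumed; it is not the improved generator the abstract introduces.

```python
import numpy as np

def subcolumns_max_random(cloud_frac, n_sub=1000, seed=0):
    """Generate boolean subcolumn cloud masks (n_sub x n_levels) whose
    per-level cloud fractions reproduce the input profile under the
    maximum-random overlap rule. Levels are ordered top-down."""
    rng = np.random.default_rng(seed)
    cf = np.asarray(cloud_frac, dtype=float)
    mask = np.zeros((n_sub, cf.size), dtype=bool)
    r = rng.random(n_sub)                 # overlap deviate at the top level
    mask[:, 0] = r > 1.0 - cf[0]
    for k in range(1, cf.size):
        fresh = rng.random(n_sub)
        # keep the deviate where the layer above was cloudy (maximum
        # overlap); redraw, rescaled into the clear part, where it was not
        r = np.where(mask[:, k - 1], r, fresh * (1.0 - cf[k - 1]))
        mask[:, k] = r > 1.0 - cf[k]
    return mask
```

A generalized-overlap generator would instead blend maximum and random overlap with a decorrelation length, which is the replacement the abstract advocates.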
New developments in spatial interpolation methods of Sea-Level Anomalies in the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Troupin, Charles; Barth, Alexander; Beckers, Jean-Marie; Pascual, Ananda
2014-05-01
The gridding of along-track Sea-Level Anomalies (SLA) measured by a constellation of satellites has numerous applications in oceanography, such as model validation, data assimilation or eddy tracking. Optimal Interpolation (OI) is often the preferred method for this task, as it leads to the lowest expected error and provides an error field associated with the analysed field. However, the numerical cost of the method may limit its utilization in situations where the number of data points is large. Furthermore, the separation of non-adjacent regions with OI requires adaptation of the code, leading to a further increase of the numerical cost. To solve these issues, the Data-Interpolating Variational Analysis (DIVA), a technique designed to produce gridded fields from sparse in situ measurements, is applied to SLA data in the Mediterranean Sea. DIVA and OI have been shown to be equivalent (provided some assumptions on the covariances are made). The main difference lies in the covariance function, which is not explicitly formulated in DIVA. The particular spatial and temporal distributions of measurements required adaptations in the software tool (data format, parameter determinations, ...). These adaptations are presented in the poster. The daily analysed and error fields obtained with this technique are compared with available products such as the gridded field from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) data server. The comparison reveals an overall good agreement between the products. The time evolution of the mean error field evidences the need for a large number of simultaneous altimetry satellites: in periods during which 4 satellites are available, the mean error is on the order of 17.5%, while when only 2 satellites are available, the error exceeds 25%. Finally, we propose the use of sea currents to improve the results of the interpolation, especially in the coastal area.
These currents can be constructed from the bathymetry or extracted from a HF radar located in the Balearic Sea.
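The OI analysis and its associated error field can be illustrated in a few lines: the analysis is a covariance-weighted combination of the observations, and the expected error falls toward zero near data and rises toward the background variance far from it. The Gaussian covariance, length scale, and noise level below are assumptions for illustration, not those of the DIVA or AVISO processing.

```python
import numpy as np

def oi_analysis(obs_xy, obs_val, grid_xy, L=100.0, noise=0.1):
    """Minimal Optimal Interpolation of scattered anomalies onto grid
    points, using an isotropic Gaussian background covariance with length
    scale L (same units as the coordinates) and observational error
    variance `noise`. Returns the analysed values and the expected
    relative error variance at each grid point."""
    obs_xy = np.asarray(obs_xy, float)
    grid_xy = np.asarray(grid_xy, float)
    d_oo = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
    d_go = np.linalg.norm(grid_xy[:, None] - obs_xy[None, :], axis=-1)
    B = np.exp(-(d_oo / L) ** 2)            # obs-obs background covariance
    C = np.exp(-(d_go / L) ** 2)            # grid-obs covariance
    A = B + noise * np.eye(len(obs_xy))
    w = np.linalg.solve(A, np.asarray(obs_val, float))
    analysis = C @ w
    K = np.linalg.solve(A, C.T)
    err = 1.0 - np.einsum('ij,ji->i', C, K)  # relative error variance
    return analysis, err
```

The cubic cost of the solve in the number of observations is exactly the scaling issue that motivates DIVA for large along-track data sets.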
Improving NGDC Track-line Data Quality Control
NASA Astrophysics Data System (ADS)
Chandler, M. T.; Wessel, P.
2004-12-01
Ship-board gravity, magnetic and bathymetry data archived at the National Geophysical Data Center (NGDC) represent decades of seagoing research, containing over 4,500 cruises. Cruise data remain relevant despite the prominence of satellite altimetry-derived global grids because many geologic processes remain resolvable by oceanographic research alone. Due to the tremendous investment put forth by scientists and taxpayers to compile this vast archive and the significant errors found within it, additional quality assessment and corrections are warranted. These can best be accomplished by adding to existing quality control measures at NGDC. We are currently developing open source software to provide additional quality control. Along with NGDC's current sanity checking, new data at NGDC will also be subjected to an along-track ``sniffer'' which will detect and flag suspicious data for later graphical inspection using a visual editor. If new data pass these tests, they will undergo further scrutiny using a crossover error (COE) calculator which will compare new data values to existing values at points of intersection within the archive. Data passing these tests will be deemed ``quality data'' and suitable for permanent addition to the archive, while data that fail will be returned to the source institution for correction. Crossover errors will be stored and an online COE database will be available. The COE database will allow users to apply corrections to the NGDC track-line database to produce corrected data files. At no time will the archived data itself be modified. An attempt will also be made to reduce navigational errors for pre-GPS navigated cruises. Upon completion these programs will be used to explore and model systematic errors within the archive, generate correction tables for all cruises, and to quantify the error budget in marine geophysical observations.
Software will be released and these procedures will be implemented in cooperation with NGDC staff.
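The core of a COE calculation is geometric: intersect two track segments, interpolate each track's measured value to the crossing point, and difference them. The planar sketch below ignores spherical geometry and along-track time interpolation, which a real calculator over ocean-basin scales would need.

```python
def crossover_error(p1, p2, v1, v2, q1, q2, w1, w2):
    """Crossover error between track segments p1->p2 and q1->q2 carrying
    measured values v1, v2 and w1, w2. Returns None when the segments do
    not cross; otherwise the difference of the two tracks' linearly
    interpolated values at the crossing point."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    den = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if den == 0:
        return None                      # parallel tracks: no crossover
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / den
    s = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / den
    if not (0 <= t <= 1 and 0 <= s <= 1):
        return None                      # lines cross outside the segments
    va = v1 + t * (v2 - v1)              # track A value at the crossing
    vb = w1 + s * (w2 - w1)              # track B value at the crossing
    return va - vb
```

Systematic errors (e.g. a constant gravimeter drift on one cruise) show up as a consistent sign in that cruise's crossover errors, which is what makes a COE database useful for generating correction tables.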
A systematic comparison of error correction enzymes by next-generation sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.
2017-08-01
Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as ErrASE preferentially correcting C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.
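The two headline metrics, average per-base error rate and fraction of perfect assemblies, can be computed from aligned reads in a few lines. The toy below assumes equal-length, pre-aligned sequences and ignores indels, which the paper's NGS pipeline of course handles.

```python
def error_stats(reads, reference):
    """Per-base substitution error rate and fraction of perfect sequences
    for equal-length reads aligned to a reference. A toy substitute for a
    full NGS error-quantification pipeline (no indels, no alignment)."""
    total = errors = perfect = 0
    for r in reads:
        mism = sum(a != b for a, b in zip(r, reference))
        errors += mism
        total += len(reference)
        perfect += (mism == 0)
    return errors / total, perfect / len(reads)
```

The fold improvements reported above are then just the ratio of these statistics before and after enzymatic correction.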
Inverting Image Data For Optical Testing And Alignment
NASA Technical Reports Server (NTRS)
Shao, Michael; Redding, David; Yu, Jeffrey W.; Dumont, Philip J.
1993-01-01
Data from images produced by slightly incorrectly figured concave primary mirror in telescope processed into estimate of spherical aberration of mirror, by use of algorithm finding nonlinear least-squares best fit between actual images and synthetic images produced by multiparameter mathematical model of telescope optical system. Estimated spherical aberration, in turn, converted into estimate of deviation of reflector surface from nominal precise shape. Algorithm devised as part of effort to determine error in surface figure of primary mirror of Hubble Space Telescope, so that corrective lens could be designed. Modified versions of algorithm also used to find optical errors in other components of telescope or of other optical systems, for purposes of testing, alignment, and/or correction.
Mirandola, Chiara; Toffalini, Enrico; Grassano, Massimo; Cornoldi, Cesare; Melinder, Annika
2014-01-01
The present experiment was conducted to investigate whether negative emotionally charged and arousing content of to-be-remembered scripted material would affect propensity towards memory distortions. We further investigated whether elaboration of the studied material through free recall would affect the magnitude of memory errors. In this study participants saw eight scripts. Each of the scripts included an effect of an action, the cause of which was not presented. Effects were either negatively emotional or neutral. Participants were assigned to either a yes/no recognition test group (recognition), or to a recall and yes/no recognition test group (elaboration + recognition). Results showed that participants in the recognition group produced fewer memory errors in the emotional condition. Conversely, elaboration + recognition participants had lower accuracy and produced more emotional memory errors than the other group, suggesting a mediating role of semantic elaboration on the generation of false memories. The role of emotions and semantic elaboration on the generation of false memories is discussed.
Pragmatics abilities in narrative production: a cross-disorder comparison.
Norbury, Courtenay Frazier; Gemmell, Tracey; Paul, Rhea
2014-05-01
We aimed to disentangle contributions of socio-pragmatic and structural language deficits to narrative competence by comparing the narratives of children with autism spectrum disorder (ASD; n = 25), non-autistic children with language impairments (LI; n = 23), and children with typical development (TD; n = 27). Groups were matched for age (6½ to 15 years; mean: 10;6) and non-verbal ability; ASD and TD groups were matched on standardized language scores. Despite distinct clinical presentation, children with ASD and LI produced similarly simple narratives that lacked semantic richness and omitted important story elements, when compared to TD peers. Pragmatic errors were common across groups. Within the LI group, pragmatic errors were negatively correlated with story macrostructure scores and with an index of semantic-pragmatic relevance. For the group with ASD, pragmatic errors consisted of comments that, though extraneous, did not detract from the gist of the narrative. These findings underline the importance of both language and socio-pragmatic skill for producing coherent, appropriate narratives.
Emergency Control Aircraft System Using Thrust Modulation
NASA Technical Reports Server (NTRS)
Burken, John J. (Inventor); Burcham, Frank W., Jr. (Inventor)
2000-01-01
A digital longitudinal Aircraft Propulsion Control (APC) system of a multiengine aircraft is provided by engine thrust modulation in response to comparing an input flightpath angle signal (gamma)c, from a pilot thumbwheel or an ILS system, with a sensed flightpath angle (gamma) to produce an error signal (gamma)e that is then integrated (with reasonable limits) to generate a drift correction signal to be added to the error signal (gamma)e after first subtracting a lowpass-filtered velocity signal Vel(sub f) for phugoid damping. The output error signal is multiplied by a constant to produce an aircraft thrust control signal ATC of suitable amplitude to drive a throttle servo for all engines, each of which includes its own full-authority digital engine control (FADEC) computer. An alternative APC system omits sensed flightpath-angle feedback and instead controls the flightpath angle by feedback of the lowpass-filtered velocity signal Vel(sub f), which also inherently provides phugoid damping. The feature of drift compensation is retained.
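The control law described above (flightpath-angle error, a limited integrator for drift correction, and low-pass-filtered velocity feedback for phugoid damping) can be sketched as a discrete-time loop. The gains, integrator limit, filter constant, and time step below are illustrative placeholders, not the values of the patented system.

```python
class ThrustOnlyAPC:
    """Discrete-time sketch of a thrust-modulation flightpath controller:
    ATC = k_atc * (gamma_error + drift_correction - filtered_velocity)."""

    def __init__(self, k_atc=1.0, k_i=0.1, i_limit=5.0, alpha=0.05, dt=0.02):
        self.k_atc, self.k_i, self.i_limit = k_atc, k_i, i_limit
        self.alpha, self.dt = alpha, dt
        self.integ = 0.0   # drift-correction integrator state
        self.vel_f = 0.0   # low-pass-filtered velocity state

    def step(self, gamma_cmd, gamma_sensed, velocity):
        err = gamma_cmd - gamma_sensed
        # limited integration of the error produces the drift correction
        self.integ = max(-self.i_limit,
                         min(self.i_limit, self.integ + self.k_i * err * self.dt))
        # first-order low-pass filter of velocity for phugoid damping
        self.vel_f += self.alpha * (velocity - self.vel_f)
        # common thrust command driving the throttle servo of all engines
        return self.k_atc * (err + self.integ - self.vel_f)
```

The integrator limit plays the role of the patent's "reasonable limits" on integration, preventing windup when the commanded flightpath angle cannot be achieved.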
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1991-01-01
Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.
Dynamic dielectrophoresis model of multi-phase ionic fluids.
Yan, Ying; Luo, Jing; Guo, Dan; Wen, Shizhu
2015-01-01
Ionic-based dielectrophoretic microchips have attracted significant attention due to their wide-ranging applications in electrokinetic and biological experiments. In this work, a numerical method is used to simulate the dynamic behaviors of ionic droplets in a microchannel under the effect of dielectrophoresis. When a discrete liquid dielectric is encompassed within a continuous fluid dielectric placed in an electric field, an electric force is produced due to the dielectrophoresis effect. If either or both of the fluids are ionic liquids, the magnitude and even the direction of the force will be changed because the net ionic charge induced by an electric field can affect the polarization degree of the dielectrics. However, using a dielectrophoresis model that assumes ideal dielectrics results in significant errors. To avoid the inaccuracy caused by the model, this work incorporates the electrode kinetic equation and defines a relationship between the polarization charge and the net ionic charge. According to the simulation conditions presented herein, the electric force obtained in this work has an error exceeding 70% of the actual value if the effect of the net ionic charge is not accounted for, which would result in significant issues in the design and optimization of experimental parameters. Therefore, there is a clear motivation for developing a model adapted to ionic liquids to provide precise control for the dielectrophoresis of multi-phase ionic liquids.
Evaluating approaches to find exon chains based on long reads.
Kuosmanen, Anna; Norri, Tuukka; Mäkinen, Veli
2018-05-01
Transcript prediction can be modeled as a graph problem where exons are modeled as nodes and reads spanning two or more exons are modeled as exon chains. Pacific Biosciences third-generation sequencing technology produces significantly longer reads than earlier second-generation sequencing technologies, which gives valuable information about longer exon chains in a graph. However, with the high error rates of third-generation sequencing, aligning long reads correctly around the splice sites is a challenging task. Incorrect alignments lead to spurious nodes and arcs in the graph, which in turn lead to incorrect transcript predictions. We survey several approaches to find the exon chains corresponding to long reads in a splicing graph, and experimentally study the performance of these methods using simulated data to allow for sensitivity/precision analysis. Our experiments show that short reads from second-generation sequencing can be used to significantly improve exon chain correctness either by error-correcting the long reads before splicing graph creation, or by using them to create a splicing graph on which the long-read alignments are then projected. We also study the memory and time consumption of various modules, and show that accurate exon chains lead to significantly increased transcript prediction accuracy. The simulated data and in-house scripts used for this article are available at http://www.cs.helsinki.fi/group/gsa/exon-chains/exon-chains-bib.tar.bz2.
NASA Astrophysics Data System (ADS)
Zhang, Yu; Chen, Changsheng; Beardsley, Robert C.; Gao, Guoping; Qi, Jianhua; Lin, Huichan
2016-11-01
A high-resolution (up to 2 km), unstructured-grid, fully ice-sea coupled Arctic Ocean Finite-Volume Community Ocean Model (AO-FVCOM) was used to simulate the sea ice in the Arctic over the period 1978-2014. The spatially varying horizontal model resolution was designed to better resolve both topographic and baroclinic dynamics scales over the Arctic slope and narrow straits. The model-simulated sea ice was in good agreement with available observed sea ice extent, concentration, drift velocity and thickness, not only in seasonal and interannual variability but also in spatial distribution. Compared with six other Arctic Ocean models (ECCO2, GSFC, INMOM, ORCA, NAME, and UW), the AO-FVCOM-simulated ice thickness showed a higher mean correlation coefficient of ~0.63 and a smaller residual with observations. Model-produced ice drift speed and direction errors varied with wind speed: as the wind speed increased, the speed errors increased while the direction errors decreased. Efforts were made to examine the influences of parameterizations of air-ice external and ice-water interfacial stresses on the model-produced bias. The ice drift direction was more sensitive to air-ice drag coefficients and turning angles than the ice drift speed. Increasing or decreasing either the water-ice drag coefficient by 10% or the water-ice turning angle by 10° did not show a significant influence on the ice drift velocity simulation results, although the sea ice drift speed was more sensitive to these two parameters than the sea ice drift direction. Using the COARE 4.0-derived parameterization of air-water drag coefficient for wind stress did not significantly influence the ice drift velocity simulation.
NASA Technical Reports Server (NTRS)
LaBonte, Barry J.
2004-01-01
A small amount of work has been done on this project; the strategy to be adopted has been better defined, though no experimental work has been started. 1) Wavefront error signals: The best choice appears to be to use a lenslet array at a pupil image to produce defocused image pairs for each subaperture, and then to use the method proposed by Molodij et al. to produce subaperture curvature signals. Basically, this method samples a moderate number of locations in the image where the value of the image Laplacian is high, then takes the curvature signal from the difference of the Laplacians of the extrafocal images at those locations. The tip-tilt error is obtained from the temporal dependence of the first spatial derivatives of an in-focus image, at selected locations where these derivatives are significant. The wavefront tilt can be obtained from the full-aperture image. 2) Extrafocal image generation: The important aspect here is to generate symmetrically defocused images, with dynamically adjustable defocus. The adjustment is needed because larger defocus is required before the feedback loop is closed, and at times when the seeing is worse. It may be that the usual membrane mirror is the best choice, though other options should be explored. 3) Detector: Since the proposed sensor is to work on solar granulation, rather than a point source, an array detector for each subaperture is required. A fast CMOS camera such as that developed by the National Solar Observatory would be a satisfactory choice. 4) Processing: Processing requirements have not been defined in detail, though significantly fewer operations per cycle are required than for a correlation tracker.
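The curvature-signal step described in item 1 can be sketched numerically: compute the Laplacian of each image of a symmetrically defocused pair, keep the pixels where the Laplacian structure is strongest, and form the normalized difference there. This is an illustrative sketch in the spirit of the scheme attributed to Molodij et al., not their algorithm; the 5-point Laplacian treats image boundaries as periodic for simplicity.

```python
import numpy as np

def curvature_signal(intra, extra, n_points=50):
    """Wavefront curvature signal from a symmetrically defocused image
    pair: normalized difference of image Laplacians, sampled at the
    n_points pixels with the strongest Laplacian magnitude."""
    def laplacian(img):
        # 5-point stencil with periodic boundaries (np.roll wraps edges)
        return (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)

    li, le = laplacian(intra), laplacian(extra)
    # sample only where the image structure (Laplacian magnitude) is high
    idx = np.argsort((np.abs(li) + np.abs(le)).ravel())[-n_points:]
    num = le.ravel()[idx] - li.ravel()[idx]
    den = le.ravel()[idx] + li.ravel()[idx]
    signal = np.zeros(n_points)
    nz = den != 0
    signal[nz] = num[nz] / den[nz]
    return signal
```

For identical intra- and extrafocal images the signal vanishes, as expected for a flat wavefront within each subaperture.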