Space shuttle navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.
1976-01-01
A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.
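The covariance-analysis machinery behind such an error budget can be sketched generically. The snippet below is illustrative only (the two-state model, noise levels, and function name are invented, not the report's formulation): it propagates an error covariance through a linear system with periodic measurement updates and logs the resulting 1-sigma error budget, which is the core of a covariance analysis for IMU calibration and alignment.

```python
import numpy as np

def covariance_analysis(F, H, Q, R, P0, steps):
    """Kalman covariance analysis: propagate the error covariance P (no
    actual measurements are needed) and log the 1-sigma error budget."""
    P = P0.copy()
    history = []
    for _ in range(steps):
        P = F @ P @ F.T + Q                       # time update
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        P = (np.eye(P.shape[0]) - K @ H) @ P      # measurement update
        history.append(np.sqrt(np.diag(P)))       # 1-sigma state errors
    return np.array(history)
```

Running this with a hypothetical two-state (bias and drift) model shows how the predicted 1-sigma errors shrink as calibration measurements accumulate.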
Recognition errors suggest fast familiarity and slow recollection in rhesus monkeys
Basile, Benjamin M.; Hampton, Robert R.
2013-01-01
One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses can demonstrate dual processes has been repeatedly challenged. Here, we present independent converging evidence for the dual-process model from analyses of recognition errors made by rhesus monkeys. Recognition choices were made in three different ways depending on processing duration. Short-latency errors were disproportionately false alarms to familiar lures, suggesting control by familiarity. Medium-latency responses were less likely to be false alarms and were more accurate, suggesting onset of a recollective process that could correctly reject familiar lures. Long-latency responses were guesses. A response deadline increased false alarms, suggesting that limiting processing time weakened the contribution of recollection and strengthened the contribution of familiarity. Together, these findings suggest fast familiarity and slow recollection in monkeys, that monkeys use a “recollect to reject” strategy to countermand false familiarity, and that primate recognition performance is well-characterized by a dual-process model consisting of recollection and familiarity. PMID:23864646
Probabilistic wind/tornado/missile analyses for hazard and fragility evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y.J.; Reich, M.
Detailed analysis procedures and examples are presented for the probabilistic evaluation of hazard and fragility against high wind, tornado, and tornado-generated missiles. In the tornado hazard analysis, existing risk models are modified to incorporate various uncertainties including modeling errors. A significant feature of this paper is the detailed description of the Monte-Carlo simulation analyses of tornado-generated missiles. A simulation procedure, which includes the wind field modeling, missile injection, solution of flight equations, and missile impact analysis, is described with application examples.
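The missile-flight step of such a Monte-Carlo simulation can be sketched in miniature: sample an uncertain wind speed, integrate planar drag flight equations until impact, and collect the range distribution. Everything numerical below (drag parameter, wind distribution, launch height) is an invented placeholder, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def missile_range(v_wind, cd_a_over_m=0.003, h0=10.0, dt=0.01):
    """Integrate planar flight equations for a wind-borne missile:
    drag accelerates the object toward the wind velocity until impact."""
    x, z = 0.0, h0          # horizontal range, altitude (m)
    vx, vz = 0.0, 0.0       # missile velocity (m/s)
    g = 9.81
    while z > 0.0:
        rvx, rvz = v_wind - vx, -vz          # velocity relative to air
        rmag = np.hypot(rvx, rvz)
        vx += cd_a_over_m * rmag * rvx * dt  # drag toward wind vector
        vz += (cd_a_over_m * rmag * rvz - g) * dt
        x += vx * dt
        z += vz * dt
    return x

# Monte Carlo over an assumed tornado wind-speed distribution
winds = rng.normal(70.0, 10.0, 500)          # m/s, illustrative
ranges = np.array([missile_range(v) for v in winds])
```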
Application of Monte-Carlo Analyses for the Microwave Anisotropy Probe (MAP) Mission
NASA Technical Reports Server (NTRS)
Mesarch, Michael A.; Rohrbaugh, David; Schiff, Conrad; Bauer, Frank H. (Technical Monitor)
2001-01-01
The Microwave Anisotropy Probe (MAP) is the third launch in the National Aeronautics and Space Administration's (NASA's) Medium Class Explorers (MIDEX) program. MAP will measure, in greater detail, the cosmic microwave background radiation from an orbit about the Sun-Earth-Moon L2 Lagrangian point. Maneuvers will be required to transition MAP from its initial highly elliptical orbit to a lunar encounter, which will provide the remaining energy to send MAP out to a Lissajous orbit about L2. Monte-Carlo analysis methods were used to evaluate the potential maneuver error sources and determine their effect on the fixed MAP propellant budget. This paper will discuss the results of the analyses on three separate phases of the MAP mission: recovering from launch vehicle errors, responding to phasing loop maneuver errors, and evaluating the effect of maneuver execution errors and orbit determination errors on stationkeeping maneuvers at L2.
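A stripped-down version of this kind of Monte-Carlo propellant analysis might look as follows; the burn magnitudes, error sigmas, and cleanup model are hypothetical stand-ins, not MAP mission numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

def total_dv(nominal_dvs, mag_sigma=0.01, point_sigma_deg=0.5, n=10_000):
    """Monte Carlo total delta-v: each burn receives a random magnitude
    error, and the cross-track component caused by a random pointing
    error is charged to the budget as a later cleanup burn."""
    point_sigma = np.radians(point_sigma_deg)
    totals = np.zeros(n)
    for dv in nominal_dvs:
        executed = dv * (1.0 + rng.normal(0.0, mag_sigma, n))
        pointing = rng.normal(0.0, point_sigma, n)
        totals += executed + executed * np.abs(np.sin(pointing))
    return totals

totals = total_dv([20.0, 15.0, 5.0])        # m/s, hypothetical burns
budget_99 = float(np.percentile(totals, 99))  # 99th-percentile budget
```

Sizing the propellant budget to a high percentile of the Monte-Carlo distribution, rather than to the nominal sum, is the standard way such execution-error analyses feed a fixed budget.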
On the use of log-transformation vs. nonlinear regression for analyzing biological power laws.
Xiao, Xiao; White, Ethan P; Hooten, Mevin B; Durham, Susan L
2011-10-01
Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain.
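The LR-vs-NLR comparison is easy to reproduce in miniature. The sketch below is not the authors' published code (their guidelines and code accompany the paper); the gradient-descent NLR is a crude stand-in for a proper optimizer, and all constants are arbitrary. Data are generated with multiplicative, lognormal error, the case for which the paper finds log-transformed LR appropriate.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_lr(x, y):
    """Linear regression on log-transformed data: log y = log a + b log x."""
    b, loga = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(loga), b

def fit_nlr(x, y, iters=2000, lr=1e-4):
    """Crude gradient-descent nonlinear least squares for y = a * x**b
    (a stand-in for a proper NLR optimizer)."""
    a, b = fit_lr(x, y)                 # warm start from the LR estimate
    for _ in range(iters):
        pred = a * x ** b
        r = pred - y                    # additive residuals
        a -= lr * np.mean(r * x ** b)
        b -= lr * np.mean(r * pred * np.log(x))
    return a, b

x = np.linspace(1.0, 10.0, 200)
# multiplicative, heteroscedastic, lognormal error
y_mult = 2.0 * x ** 0.75 * np.exp(rng.normal(0.0, 0.3, x.size))
a_lr, b_lr = fit_lr(x, y_mult)
a_nlr, b_nlr = fit_nlr(x, y_mult)
```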
Wensveen, Paul J; Thomas, Len; Miller, Patrick J O
2015-01-01
Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. 
By systematically accounting for the observation errors in the position fixes, our model provides a quantitative estimate of location uncertainty that can be appropriately incorporated into analyses of animal movement. This generic method has potential application for a wide range of marine animal species and data recording systems.
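A toy version of dead-reckoning with correction to position fixes, using a simple linear error-distribution scheme rather than the authors' Bayesian state-space model (all names and the correction rule here are illustrative), might look like:

```python
import numpy as np

def dead_reckon(speed, heading, dt=1.0):
    """Integrate speed (m/s) and heading (rad) into an x-y track."""
    return (np.cumsum(speed * np.cos(heading) * dt),
            np.cumsum(speed * np.sin(heading) * dt))

def correct_to_fixes(x, y, fix_idx, fix_x, fix_y):
    """Spread the position error found at each fix linearly back over the
    preceding segment, so the corrected track passes through every fix."""
    x, y = x.copy(), y.copy()
    prev = 0
    for i, fx, fy in zip(fix_idx, fix_x, fix_y):
        w = np.linspace(0.0, 1.0, i - prev + 1)[1:]   # 0 -> 1 ramp
        x[prev + 1:i + 1] += w * (fx - x[i])
        y[prev + 1:i + 1] += w * (fy - y[i])
        prev = i
    return x, y
```

Unlike this deterministic ramp, the state-space approach also yields the location uncertainty between fixes, which is the quantity the paper emphasizes.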
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through this model. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10⁻⁵° for each error source when the error amplitude is 0.1°. Detailed analyses indicate that different error sources affect the pointing accuracy to varying degrees, and that the major error source is the incident beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilting in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.
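The building block of such a model is ray deviation under refraction. The generic sketch below (a single thin wedge, not the paper's transmission-matrix formulation; the 10° wedge angle and n = 1.5 are arbitrary) applies the vector form of Snell's law and recovers the classic small-angle deviation δ ≈ (n − 1)α ≈ 5°.

```python
import numpy as np

def refract(d, n_vec, n1, n2):
    """Vector form of Snell's law: refract unit ray d at a surface with
    unit normal n_vec (pointing against the ray), from index n1 into n2."""
    d = d / np.linalg.norm(d)
    n_vec = n_vec / np.linalg.norm(n_vec)
    mu = n1 / n2
    cos_i = -np.dot(n_vec, d)
    sin2_t = mu ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    cos_t = np.sqrt(1.0 - sin2_t)
    return mu * d + (mu * cos_i - cos_t) * n_vec

alpha = np.radians(10.0)                 # wedge (apex) angle
n_glass = 1.5
d0 = np.array([0.0, 0.0, 1.0])           # incoming ray along +z
n_entry = np.array([0.0, -np.sin(alpha), -np.cos(alpha)])  # tilted face
n_exit = np.array([0.0, 0.0, -1.0])                        # flat face
d1 = refract(d0, n_entry, 1.0, n_glass)
d2 = refract(d1, n_exit, n_glass, 1.0)
deviation = np.degrees(np.arccos(np.clip(np.dot(d0, d2), -1.0, 1.0)))
```

Chaining such refractions through two rotating wedges, with perturbed surface normals for the assembly and rotational errors, is the kind of bookkeeping the paper's transmission matrices formalize.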
Imaging phased telescope array study
NASA Technical Reports Server (NTRS)
Harvey, James E.
1989-01-01
The problems encountered in obtaining a wide field-of-view with large, space-based direct imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources, and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top-level image quality requirement as a goal, a preliminary top-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottom-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.
Mars Exploration Rover Potentiometer Problems, Failures and Lessons Learned
NASA Technical Reports Server (NTRS)
Balzer, Mark
2006-01-01
During qualification testing of three types of non-wire-wound precision potentiometers for the Mars Exploration Rover, a variety of problems and failures were encountered. This paper will describe some of the more interesting problems, detail their investigations and present their final solutions. The failures were found to be caused by design errors, manufacturing errors, improper handling, test errors, and carelessness. A trend of decreasing total resistance was noted, and a resistance histogram was used to identify an outlier. A gang fixture is described for simultaneously testing multiple pots, and real time X-ray imaging was used extensively to assist in the failure analyses. Lessons learned are provided.
NASA Astrophysics Data System (ADS)
Lange, Heiner; Craig, George
2014-05-01
This study uses the Local Ensemble Transform Kalman Filter (LETKF) to perform storm-scale data assimilation of simulated Doppler radar observations into the non-hydrostatic, convection-permitting COSMO model. In perfect-model experiments (OSSEs), it is investigated how the limited predictability of convective storms affects precipitation forecasts. The study compares a fine analysis scheme with small RMS errors to a coarse scheme that allows for errors in position, shape and occurrence of storms in the ensemble. The coarse scheme uses superobservations, a coarser grid for analysis weights, a larger localization radius and a larger observation error that allow a broadening of the Gaussian error statistics. Three-hour forecasts of convective systems (with typical lifetimes exceeding 6 hours) from the detailed analyses of the fine scheme are found to be advantageous compared with those of the coarse scheme during the first 1-2 hours, with respect to the predicted storm positions. After 3 hours in the convective regime used here, the forecast quality of the two schemes appears indiscernible, judging by RMSE and by verification methods for rain fields and objects. It is concluded that, for operational assimilation systems, the analysis scheme might not necessarily need to be detailed to the grid scale of the model. Depending on the forecast lead time, and on the presence of orographic or synoptic forcing that enhances the predictability of storm occurrences, analyses from a coarser scheme might suffice.
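The role of the assumed observation error can be seen in a minimal stochastic ensemble Kalman update (a generic EnKF sketch, not LETKF/COSMO code; the ensemble size and variances are invented). Inflating the observation error, as the coarse scheme effectively does, weakens the pull of the analysis toward the observation.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_analysis(X, y, H, r_var):
    """Stochastic ensemble Kalman update of ensemble X (n_state x n_ens)
    with one observation y seen through the row vector H."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble perturbations
    HX = H @ X                                   # predicted observations
    HA = HX - HX.mean()
    pht = A @ HA.T / (n_ens - 1)                 # P H^T, shape (n_state, 1)
    hpht = (HA @ HA.T).item() / (n_ens - 1)      # H P H^T, scalar
    K = pht / (hpht + r_var)                     # Kalman gain
    y_pert = y + rng.normal(0.0, np.sqrt(r_var), n_ens)  # perturbed obs
    return X + K @ (y_pert - HX)

X = rng.normal(0.0, 1.0, (2, 50))                # 50-member, 2-state prior
H = np.array([[1.0, 0.0]])
m_fine = enkf_analysis(X, 2.0, H, 0.1)[0].mean()     # small obs error
m_coarse = enkf_analysis(X, 2.0, H, 10.0)[0].mean()  # inflated obs error
```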
Development of steady-state model for MSPT and detailed analyses of receiver
NASA Astrophysics Data System (ADS)
Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi
2016-05-01
The molten salt parabolic trough system (MSPT) uses molten salt as the heat transfer fluid (HTF) instead of synthetic oil. A demonstration plant of the MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of the incident solar light and finite element modeling of the thermal energy transferred into the medium. This report describes the verification of the model using test data from the demonstration plant, detailed analyses of the relation between flow rate and temperature difference on the metal tube of the receiver, and the effect of defocus angle on the concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to within 2.0% systematic error and 4.2% random error. The relationships between flow rate and temperature difference on the metal tube, and the effect of defocus angle on the concentrated power rate, are shown.
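The flow-rate/temperature-difference relation follows from a steady-state energy balance on the receiver tube. A minimal sketch (the cp ≈ 1500 J/kg·K for molten solar salt and the power and flow numbers are illustrative assumptions, not plant data):

```python
def temp_rise(q_absorbed_w, mass_flow_kg_s, cp_j_per_kg_k=1500.0):
    """Steady-state temperature rise of the HTF across the receiver,
    from the energy balance Q = m_dot * cp * dT."""
    return q_absorbed_w / (mass_flow_kg_s * cp_j_per_kg_k)

# Halving the flow rate doubles the temperature difference for fixed Q
dt_full = temp_rise(300e3, 2.0)   # 300 kW absorbed, 2 kg/s
dt_half = temp_rise(300e3, 1.0)
```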
Advanced Microwave Radiometer (AMR) for SWOT mission
NASA Astrophysics Data System (ADS)
Chae, C. S.
2015-12-01
The objective of the SWOT (Surface Water & Ocean Topography) satellite mission is to measure wide-swath, high-resolution ocean topography and terrestrial surface waters. Since the main payload radar will use interferometric SAR technology, a conventional microwave radiometer with a single nadir-looking antenna beam (e.g., the OSTM/Jason-2 AMR) is not ideally suited to the mission's wet-tropospheric delay correction. Therefore, the SWOT AMR incorporates two antenna beams along the cross-track direction. In addition to the cross-track design of the AMR radiometer, the wet-tropospheric error requirement is expressed in the spatial frequency domain (in cy/km), in other words, as a power spectral density (PSD). Thus, instrument error allocation and design are carried out in terms of PSD, which is not the conventional approach to microwave radiometer requirement allocation and design. A few of the novel analyses include: 1. the effects of antenna beam size on PSD error and land/ocean contamination; 2. receiver error allocation and the contributions of radiometric count averaging, NEDT, gain variation, etc.; 3. the effect of thermal design in the frequency domain. In the presentation, the detailed AMR design and analysis results will be discussed.
Meurier, C E
2000-07-01
Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
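The essence of a VHF time-of-arrival retrieval can be sketched as a Gauss-Newton fit of (x, y, z, t) to station arrival times; this is a generic illustration, not the MSFC or New Mexico Tech algorithm, and the station layout and source location below are synthetic.

```python
import numpy as np

C = 2.998e8  # speed of light (m/s)

def retrieve_source(stations, arrivals, guess, sigma=50e-9, iters=50):
    """Gauss-Newton retrieval of a VHF source (x, y, z, t) from arrival
    times at N stations.  J.T @ J is the chi-squared curvature matrix
    whose inverse estimates the retrieval errors."""
    p = np.asarray(guess, dtype=float)
    for _ in range(iters):
        dvec = stations - p[:3]                        # (N, 3)
        dist = np.linalg.norm(dvec, axis=1)
        r = (arrivals - (p[3] + dist / C)) / sigma     # weighted residuals
        J = np.empty((len(arrivals), 4))
        J[:, :3] = dvec / (C * dist[:, None] * sigma)  # d r / d (x, y, z)
        J[:, 3] = -1.0 / sigma                         # d r / d t
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss-Newton step
        p = p + step
    return p

stations = np.array([[0.0, 0.0, 0.0], [30e3, 0.0, 0.0], [0.0, 30e3, 0.0],
                     [30e3, 30e3, 0.0], [15e3, 15e3, 0.0],
                     [-20e3, 10e3, 0.0], [10e3, -20e3, 0.0]])
truth = np.array([10e3, 5e3, 8e3, 1.0e-4])             # synthetic source
arrivals = truth[3] + np.linalg.norm(stations - truth[:3], axis=1) / C
p_hat = retrieve_source(stations, arrivals, [8e3, 4e3, 6e3, 0.0])
```

With noiseless synthetic arrivals the fit recovers the source essentially exactly; Monte-Carlo perturbation of the arrivals by the 50 ns timing error, or inversion of the curvature matrix, then yields the error distributions the study maps.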
Error mechanism analyses of an ultra-precision stage for high speed scan motion over a large stroke
NASA Astrophysics Data System (ADS)
Wang, Shaokai; Tan, Jiubin; Cui, Jiwen
2015-02-01
The reticle stage (RS) is designed to perform scan motion at high speed with nanometer-scale accuracy over a large stroke. Compared with the allowable scan accuracy of a few nanometers, errors caused by any internal or external disturbances are critical and must not be ignored. In this paper, the RS is first introduced in terms of its mechanical structure, forms of motion, and control method. On that basis, the mechanisms by which disturbances transfer into the final servo-related error in the scan direction are analyzed, including feedforward error, coupling between the large-stroke stage (LS) and the short-stroke stage (SS), and movement of the measurement reference. In particular, the different forms of coupling between the SS and LS are discussed in detail. Following the theoretical analysis, the contributions of these disturbances to the final error are simulated numerically. The residual positioning error caused by feedforward error in the acceleration process is about 2 nm after the settling time, that caused by coupling between the SS and LS about 2.19 nm, and that caused by movement of the measurement reference about 0.6 nm.
Bunt, C.M.; Castro-Santos, Theodore R.; Haro, Alexander
2016-01-01
Detailed re-examination of the datasets that were used for a meta-analysis of fishway attraction and passage revealed a number of errors that we addressed and corrected. We subsequently re-analysed the revised dataset, and results showed no significant changes in the primary conclusions of the original study; for most species, effective performance cannot be assured for any fishway type.
Doytchev, Doytchin E; Szwillus, Gerd
2009-11-01
Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian hydropower plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the aforementioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.
Park, Hame; Lueckmann, Jan-Matthis; von Kriegstein, Katharina; Bitzer, Sebastian; Kiebel, Stefan J.
2016-01-01
Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of a dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling precisely the stimuli at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models. PMID:26752272
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Read, Gemma J M; Lenné, Michael G; Moss, Simon A
2012-09-01
Rail accidents can be understood in terms of the systemic and individual contributions to their causation. The current study was undertaken to determine whether errors and violations are more often associated with different local and organisational factors that contribute to rail accidents. The Contributing Factors Framework (CFF), a tool developed for the collection and codification of data regarding rail accidents and incidents, was applied to a sample of investigation reports. In addition, a more detailed categorisation of errors was undertaken. Ninety-six investigation reports into Australian accidents and incidents occurring between 1999 and 2008 were analysed. Each report was coded independently by two experienced coders. Task demand factors were significantly more often associated with skill-based errors, knowledge and training deficiencies significantly associated with mistakes, and violations significantly linked to social environmental factors. Copyright © 2012 Elsevier Ltd. All rights reserved.
New Methods for Assessing and Reducing Uncertainty in Microgravity Studies
NASA Astrophysics Data System (ADS)
Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.
2017-12-01
Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, errors in drift estimation, and timing errors. We find that some error sources that are commonly ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and free-air correction depending on the survey set-up. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.
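Drift estimation, one of the error sources discussed, is commonly handled by least squares over repeated occupations of a base station. A minimal sketch (not the authors' proposed method; the units, function name, and numbers are illustrative):

```python
import numpy as np

def estimate_drift(times_hr, readings_mgal):
    """Least-squares linear instrument drift from repeated occupations of
    one base station: reading = g0 + drift * t + noise.  Returns the
    drift rate and its 1-sigma uncertainty."""
    A = np.column_stack([np.ones_like(times_hr), times_hr])
    coef, *_ = np.linalg.lstsq(A, readings_mgal, rcond=None)
    resid = readings_mgal - A @ coef
    dof = len(times_hr) - 2
    s2 = resid @ resid / dof                 # residual variance
    cov = s2 * np.linalg.inv(A.T @ A)        # parameter covariance
    return coef[1], np.sqrt(cov[1, 1])
```

Propagating the drift-rate uncertainty into the drift-corrected station differences, rather than treating the drift as exactly known, is the kind of accounting the paper argues has been missing.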
Matsuda, F; Lan, W C; Tanimura, R
1999-02-01
In Matsuda's 1996 study, 4- to 11-yr.-old children (N = 133) watched two cars running on two parallel tracks on a CRT display and judged whether their durations and distances were equal and, if not, which was larger. In the present paper, the relative contributions of the four critical stimulus attributes (whether temporal starting points, temporal stopping points, spatial starting points, and spatial stopping points were the same or different between two cars) to the production of errors were quantitatively estimated based on the data for rates of errors obtained by Matsuda. The present analyses made it possible not only to understand numerically the findings about qualitative characteristics of the critical attributes described by Matsuda, but also to add more detailed findings about them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.
The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that lead to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, are evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days, and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the re-analyses. At the Southern Great Plains (SGP) site, the model biases are shown to not be confined to the surface, but to extend several kilometres into the atmosphere. In most of the models there is a strong diurnal cycle in the screen-temperature bias; in some models the biases are largest around midday, while in others they are largest during the night.
While the different physical processes that are contributing to a given model having a screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3) the fact that there is a spatial coherence in the phase of the diurnal cycle of the error across wide regions and that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP suggest that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morcrette, Cyril J.; Van Weverberg, Kwinten; Ma, H
2018-02-16
The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that are leading to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, are evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showingmore » that within a day or so all models have produced a warm bias that is representative of their bias after 5 days, and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the reanalyses. At the Southern Great Plains site, the model biases are shown to not be confined to the surface, but extend several kilometres into the atmosphere. In most of the models, there is a strong diurnal cycle in the screen-temperature bias and in some models the biases are largest around midday, while in the others it is largest during the night. 
While the different physical processes that are contributing to a given model having a screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3) the fact that there is a spatial coherence in the phase of the diurnal cycle of the error across wide regions and that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP suggest that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.« less
Unfavourable results with distraction in craniofacial skeleton
Agarwal, Rajiv
2013-01-01
Distraction osteogenesis has revolutionised the management of craniofacial abnormalities. The technique, however, requires precise planning, patient selection, execution and follow-up to achieve consistently positive results. Unfavourable results with craniofacial distraction stem from many factors, ranging from improper patient selection and planning to the use of an inappropriate distraction device or vector. The present study analyses the current standards and techniques of distraction and details in depth the various errors and complications that may occur with this technique. The commonly observed complications of distraction are detailed, along with measures and suggestions for avoiding them in clinical practice. PMID:24501455
Space shuttle Ku-band integrated rendezvous radar/communications system study
NASA Technical Reports Server (NTRS)
1976-01-01
The results are presented of work performed on the Space Shuttle Ku-Band Integrated Rendezvous Radar/Communications System Study. The recommendations and conclusions are included as well as the details explaining the results. The requirements upon which the study was based are presented along with the predicted performance of the recommended system configuration. In addition, shuttle orbiter vehicle constraints (e.g., size, weight, power, stowage space) are discussed. The tradeoffs considered and the operation of the recommended configuration are described for an optimized, integrated Ku-band radar/communications system. Basic system tradeoffs, communication design, radar design, antenna tradeoffs, antenna gimbal and drive design, antenna servo design, and deployed assembly packaging design are discussed. The communications and radar performance analyses necessary to support the system design effort are presented. Detailed derivations of the communications thermal noise error, the radar range, range rate, and angle tracking errors, and the communications transmitter distortion parameter effect on crosstalk between the unbalanced quadriphase signals are included.
Sensitivity of planetary cruise navigation to earth orientation calibration errors
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Folkner, W. M.
1995-01-01
A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.
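The "percentage of the total navigation error budget" figures above can be illustrated with a root-sum-square budget decomposition. This is a hedged sketch with invented contribution values, not the actual Mars Pathfinder covariance analysis; it assumes independent error sources (the abstract's greater-than-100% case arises precisely when unadjusted, correlated errors violate that assumption).

```python
import numpy as np

# Hypothetical 1-sigma contributions (km) from independent error sources;
# these values are invented, not the Mars Pathfinder covariance results.
contributions = {
    "earth_orientation": 1.2,
    "media_calibration": 0.9,
    "station_locations": 0.5,
    "nongrav_accels": 1.5,
}

sigmas = np.array(list(contributions.values()))
total = np.sqrt(np.sum(sigmas**2))  # root-sum-square total error

# Percentage of the total error budget attributed to each source (by variance)
budget = {name: 100.0 * (s / total) ** 2 for name, s in contributions.items()}
```

By construction the variance shares sum to 100% when sources are independent; correlated or unadjusted errors break this accounting.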
Huff, Mark J; Umanath, Sharda
2018-06-01
In 2 experiments, we assessed age-related suggestibility to additive and contradictory misinformation (i.e., remembering of false details from an external source). After reading a fictional story, participants answered questions containing misleading details that were either additive (misleading details that supplemented an original event) or contradictory (errors that changed original details). On a final test, suggestibility was greater for additive than for contradictory misinformation, and older adults endorsed fewer false contradictory details than younger adults. To mitigate suggestibility in Experiment 2, participants were warned about potential errors, instructed to detect errors, or instructed to detect errors after exposure to examples of additive and contradictory details. Again, suggestibility to additive misinformation was greater than to contradictory misinformation, and older adults endorsed less contradictory misinformation. Only after detection instructions with misinformation examples were younger adults able to reduce contradictory misinformation effects, and they reduced these effects to the level of older adults. Additive misinformation, however, was immune to all warning and detection instructions. Thus, older adults were less susceptible to contradictory misinformation errors, and younger adults could match this misinformation rate when warning/detection instructions were strong.
Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.
Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J
2001-09-01
In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.
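A minimal simulation illustrates why multiplicative Berkson error can leave a linear dose-response slope nearly unbiased, consistent with the negligible effect the authors report. All parameter values here are invented for illustration; this is not the paper's actual error model, which also includes classical measurement error and missing-data components.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0    # true linear dose-response slope
sigma = 0.3   # scale of multiplicative Berkson error

assigned = rng.uniform(0.5, 5.0, n)  # dose assigned by the dosimetry system
# Berkson structure: the true dose varies around the assigned dose,
# with a lognormal factor u chosen so that E[u] = 1.
u = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=n)
true_dose = assigned * u
response = beta * true_dose + rng.normal(0, 1.0, n)

# OLS slope of response on the *assigned* (error-prone) dose
x = assigned - assigned.mean()
y = response - response.mean()
slope = (x @ y) / (x @ x)
```

Because E[response | assigned] = beta * assigned under this structure, the slope estimate stays close to beta; classical (non-Berkson) error on the dose would instead attenuate it.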
NASA Astrophysics Data System (ADS)
Murillo Feo, C. A.; Martínez Martínez, L. J.; Correa Muñoz, N. A.
2016-06-01
The accuracy of locating attributes on topographic surfaces when using GPS in mountainous areas is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluated the accuracy and spatial distribution of the horizontal error of GPS positioning in the tertiary road network of six municipalities located in mountainous areas of the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and associations between the variables, and an analysis of the spatial variability and calculation of accuracy, considering the effect of non-Gaussian error distributions. The evaluation of the internal validity of the data provided metrics, at a confidence level of 95%, of between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtainable with the evaluated equipment was on the order of 1:20,000, the level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.
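Horizontal-accuracy figures of the kind quoted above are computed from easting/northing residuals. A sketch: the 1.7308 factor is the NSSDA/ASPRS circular-error assumption for Gaussian errors, while the empirical 95th percentile is the distribution-free alternative that a non-Gaussian analysis calls for. The residuals here are synthetic, not the Cauca survey data.

```python
import numpy as np

def horizontal_accuracy(de, dn):
    """Horizontal error metrics from easting/northing residuals (metres)."""
    r = np.hypot(de, dn)                  # radial (horizontal) error per point
    rmse_h = np.sqrt(np.mean(de**2 + dn**2))
    p95_empirical = np.quantile(r, 0.95)  # robust to non-Gaussian errors
    p95_gaussian = 1.7308 * rmse_h        # NSSDA circular-Gaussian assumption
    return rmse_h, p95_empirical, p95_gaussian

# Synthetic residuals for demonstration
rng = np.random.default_rng(1)
de = rng.normal(0, 1.0, 10_000)
dn = rng.normal(0, 1.0, 10_000)
rmse_h, p95_emp, p95_gauss = horizontal_accuracy(de, dn)
```

For truly Gaussian residuals the two 95% statistics agree; a gap between them is one quick diagnostic of non-Gaussian error behaviour.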
Statistical analysis of the determinations of the Sun's Galactocentric distance
NASA Astrophysics Data System (ADS)
Malkin, Zinovy
2013-02-01
Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
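A weighted least-squares trend fit is one way to test whether R0 estimates drift with publication date, as in the bandwagon-effect check described above. This sketch uses synthetic measurements (a constant true R0 with formal errors that shrink over time), not Malkin's actual compilation of 53 values.

```python
import numpy as np

def weighted_trend(t, r0, sigma):
    """Weighted LS fit r0 = a + b*t; returns slope b and its formal error."""
    w = 1.0 / sigma**2
    tbar = (w * t).sum() / w.sum()
    rbar = (w * r0).sum() / w.sum()
    s_tt = (w * (t - tbar) ** 2).sum()
    b = (w * (t - tbar) * (r0 - rbar)).sum() / s_tt
    return b, np.sqrt(1.0 / s_tt)

# Synthetic measurement set: constant true R0 = 8.0 kpc, improving errors
rng = np.random.default_rng(2)
t = np.arange(1992, 2013, dtype=float)
sigma = np.linspace(0.8, 0.3, t.size)  # formal errors improve with time
r0 = 8.0 + rng.normal(0, sigma)

b, se_b = weighted_trend(t, r0, sigma)
z = b / se_b  # |z| > ~2 would suggest a statistically significant trend
```

A statistically negligible slope (small |z|) is the kind of evidence cited above against a bandwagon effect.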
Joshi, Anuradha; Buch, Jatin; Kothari, Nitin; Shah, Nishal
2016-06-01
Introduction: A prescription order is an important therapeutic transaction between physician and patient. A good-quality prescription is an extremely important factor for minimizing errors in dispensing medication, and it should adhere to guidelines for prescription writing for the benefit of the patient. Aim: To evaluate the frequency and types of errors in outpatient prescriptions and to determine whether prescription writing abides by WHO standards. Materials and Methods: A cross-sectional observational study was conducted at Anand city. Allopathic private practitioners of different specialities practising at Anand city were included in the study. Collection of prescriptions was started a month after consent was obtained, to minimize bias in prescription writing. The prescriptions were collected from local pharmacy stores of Anand city over a period of six months and analysed for errors in standard information according to the WHO guide to good prescribing. Statistical Analysis: Descriptive analysis was performed to estimate the frequency of errors; data were expressed as numbers and percentages. Results: A total of 749 prescriptions (549 handwritten and 200 computerised) were collected. Abundant omission errors were identified in the handwritten prescriptions: the OPD number was mentioned in only 6.19%, the patient's age in 25.50%, gender in 17.30%, address in 9.29% and weight in 11.29%, while among drug items only 2.97% of drugs were prescribed by generic name. Route and dosage form were mentioned in 77.35%-78.15%, dose in 47.25%, unit in 13.91% and regimen in 72.93%, while signa (directions for drug use) appeared in 62.35%. In clinician and patient details, a total of 4384 errors were found in the 549 handwritten prescriptions and 501 errors in the 200 computerized prescriptions; in drug-item details, 5015 and 621 errors were identified in handwritten and computerized prescriptions, respectively.
Conclusion: Compared to handwritten prescriptions, computerized prescriptions appeared to be associated with relatively lower error rates. Since outpatient prescription errors are abundant and often occur in handwritten prescriptions, prescribers need to adopt computerized prescription order entry in their daily practice. PMID:27504305
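The omission-error percentages in the Results above are simple field-completeness tallies over the collected prescriptions. A hypothetical sketch (field names and records are invented, not the study's actual data schema):

```python
# Standard fields a prescription should contain (illustrative subset)
REQUIRED = ["age", "gender", "address", "weight", "dose", "unit", "signa"]

def omission_summary(prescriptions):
    """Count and percentage of prescriptions omitting each required field."""
    n = len(prescriptions)
    summary = {}
    for field in REQUIRED:
        missing = sum(1 for p in prescriptions if not p.get(field))
        summary[field] = (missing, round(100.0 * missing / n, 2))
    return summary

# Two invented records: dicts of the fields each prescription actually has
sample = [
    {"age": 34, "gender": "F", "dose": "500 mg"},
    {"age": None, "gender": "M", "address": "12 Main Rd", "dose": "250 mg", "unit": "mg"},
]
result = omission_summary(sample)  # e.g. result["weight"] == (2, 100.0)
```

The study's percentages are the complements of such omission rates, computed per field over 549 handwritten and 200 computerized prescriptions.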
Reproducing American Sign Language sentences: cognitive scaffolding in working memory
Supalla, Ted; Hauser, Peter C.; Bavelier, Daphne
2014-01-01
The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects' recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies when they failed to recall a sentence correctly. A qualitative error analysis allows us to capture generalizations about the relationship between error patterns and the cognitive scaffolding that governs the sentence reproduction process. Highly fluent signers and less-fluent signers share common chokepoints on particular words in sentences, but they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains: they produce syntactically correct sentences with meaning equivalent to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less-fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less-fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are considered.
PMID:25152744
NASA Technical Reports Server (NTRS)
Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)
1992-01-01
Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human-body and mechanical-system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both the analysis of analog signals (e.g., force plate data collection) and the digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data on position and constant-angular-velocity movements under several operating conditions. Two-dimensional and three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to quantify the accuracy impact of a single-axis camera offset. Segment length and positional data exhibited errors within 3 percent using three-dimensional analysis and within 8 percent using two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent using three-dimensional analyses and within 12 percent using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations, their impacts on the methodology issues of kinematic data collection and analysis, and the accuracy levels observed are presented in detail.
NASA Astrophysics Data System (ADS)
Cohen, W. B.; Yang, Z.; Stehman, S.; Huang, C.; Healey, S. P.
2013-12-01
Forest ecosystem process models require spatially and temporally detailed disturbance data to accurately predict fluxes of carbon or changes in biodiversity over time. A variety of new mapping algorithms using dense Landsat time series show great promise for providing disturbance characterizations at an annual time step. These algorithms provide unprecedented detail with respect to the timing, magnitude, and duration of individual disturbance events, and their causal agents. But all maps have error, and disturbance maps in particular can have significant omission error because many disturbances are relatively subtle. Because disturbance, although ubiquitous, can be a spatially rare event in any given year, omission errors can have a great impact on mapped rates. Using a high-quality reference disturbance dataset, it is possible not only to characterize map errors but also to adjust mapped disturbance rates to provide unbiased rate estimates with confidence intervals. We present results from a national-level disturbance mapping project (the North American Forest Dynamics, NAFD, project) based on the Vegetation Change Tracker (VCT) with annual Landsat time series, together with uncertainty analyses that consist of three basic components: response design, statistical design, and analysis. The response design describes the reference data collection in terms of the tool used (TimeSync), a formal description of interpretations, and the approach to data collection. The statistical design defines the selection of plot samples to be interpreted, whether stratification is used, and the sample size. Analysis involves derivation of standard agreement matrices between the map and the reference data, and the use of inclusion probabilities and post-stratification to adjust mapped disturbance rates. Because NAFD uses annual time series, both mapped and adjusted rates are provided at an annual time step from ~1985 to present.
Preliminary evaluations indicate that VCT captures most of the higher intensity disturbances, but that many of the lower intensity disturbances (thinnings, stress related to insects and disease, etc.) are missed. Because lower intensity disturbances are a large proportion of the total set of disturbances, adjusting mapped disturbance rates to include these can be important for inclusion in ecosystem process models. The described statistical disturbance rate adjustments are aspatial in nature, such that the basic underlying map is unchanged. For spatially explicit ecosystem modeling, such adjustments, although important, can be difficult to directly incorporate. One approach for improving the basic underlying map is an ensemble modeling approach that uses several different complementary maps, each derived from a different algorithm and having their own strengths and weaknesses relative to disturbance magnitude and causal agent of disturbance. We will present results from a pilot study associated with the Landscape Change Monitoring System (LCMS), an emerging national-level program that builds upon NAFD and the well-established Monitoring Trends in Burn Severity (MTBS) program.
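The aspatial rate adjustment described above can be sketched as a post-stratified area estimator: map-class weights times within-stratum reference proportions. The numbers below are invented, and real applications also compute standard errors from the reference sample; that part is omitted here.

```python
import numpy as np

def adjusted_disturbance_rate(map_area_frac, confusion_frac):
    """
    Error-adjusted area-proportion estimator (post-stratified by map class).
    map_area_frac[i]   : fraction of the map in class i (0 = stable, 1 = disturbed)
    confusion_frac[i,j]: among reference samples in map class i,
                         fraction whose reference label is j
    Returns the estimated area proportion of each reference class.
    """
    return map_area_frac @ confusion_frac

W = np.array([0.95, 0.05])  # map: 5% mapped as disturbed
# Hypothetical reference interpretation (e.g. via TimeSync): 3% of "stable"
# pixels were actually disturbed (omission); 10% of "disturbed" were stable.
P = np.array([[0.97, 0.03],
              [0.10, 0.90]])
rate = adjusted_disturbance_rate(W, P)[1]  # adjusted disturbed fraction
```

With these numbers the adjusted rate (0.0735) exceeds the naive mapped rate (0.05), mirroring how omission of subtle disturbances inflates true rates above mapped ones.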
Group sequential designs for stepped-wedge cluster randomised trials
Grayling, Michael J; Wason, James MS; Mander, Adrian P
2017-10-01
Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error-spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or both efficacy and futility. We first describe how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses to stepped-wedge cluster randomised trials could help guard against time-consuming trials of poorly performing treatments and help expedite the implementation of efficacious treatments.
In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial. PMID:28653550
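The error-spending approach mentioned above allocates the overall type-I error across interim looks through a spending function f(t) with f(0) = 0 and f(1) = alpha. A sketch using the rho-family f(t) = alpha * t**rho (rho = 3 roughly mimics O'Brien-Fleming-style conservatism at early looks); computing the actual stopping boundaries from these increments requires multivariate normal integration (e.g. as implemented in packages such as gsDesign) and is not shown.

```python
def alpha_spent(information_fractions, alpha=0.05, rho=3.0):
    """Cumulative and incremental type-I error spent under f(t) = alpha * t**rho."""
    cum = [alpha * t**rho for t in information_fractions]
    inc = [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
    return cum, inc

# Four equally spaced analyses (three interim, one final)
cum, inc = alpha_spent([0.25, 0.5, 0.75, 1.0])
```

With rho = 3, very little alpha is spent at the first look, so early stopping for efficacy requires strong evidence while the final analysis retains most of the error budget.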
Monitoring Building Deformation with InSAR: Experiments and Validation.
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-12-20
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR for building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering diversity in both direction and spatial distribution, and were compared with true leveling values using both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between the InSAR results and the leveling values, and the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that millimetre-level accuracy can be achieved with the InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement-of-error analyses, and compare them with the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. Finally, we assess the advantages and limitations of InSAR techniques in monitoring buildings and evaluate further applications.
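The OLS-regression and RMSE comparison between InSAR and leveling can be sketched as follows. The deformation values are invented for illustration, not the Bohai Building or China Theater measurements.

```python
import numpy as np

def compare_insar_leveling(insar, leveling):
    """OLS fit insar = a + b*leveling, correlation, and RMSE of differences (mm)."""
    b, a = np.polyfit(leveling, insar, 1)      # slope, intercept
    r = np.corrcoef(leveling, insar)[0, 1]     # linear correlation
    rmse = np.sqrt(np.mean((insar - leveling) ** 2))
    return a, b, r, rmse

# Hypothetical deformation at six co-located points (mm over the period)
leveling = np.array([-5.2, -3.1, -1.0, 0.4, 2.3, 4.8])
insar = np.array([-5.0, -3.4, -0.7, 0.2, 2.6, 4.5])
a, b, r, rmse = compare_insar_leveling(insar=insar, leveling=leveling)
```

A slope near 1, correlation near 1, and sub-millimetre RMSE are the pattern the validation above reports for the two buildings.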
Multidisciplinary optimization of an HSCT wing using a response surface methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giunta, A.A.; Grossman, B.; Mason, W.H.
1994-12-31
Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes: for example, observation error, model covariances, ensemble size, and perturbation distribution in the initial conditions. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
Good coupling for the multiscale patch scheme on systems with microscale heterogeneity
NASA Astrophysics Data System (ADS)
Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.
2017-05-01
Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.
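As an illustration of the canonical microscale problem named above, here is a minimal sketch of heterogeneous diffusion on a periodic 1-D lattice (diffusivities drawn at random; this is the full-domain microscale simulation, not the patch scheme itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Heterogeneous ('rough') diffusivities on the lattice edges, periodic domain.
kappa = rng.uniform(0.5, 1.5, size=n)

u = np.exp(-((np.arange(n) - n / 2) ** 2) / 50.0)  # initial hump of material
mass0 = u.sum()
dt = 0.2  # satisfies the explicit-scheme stability bound dt * 2 * max(kappa) <= 1

for _ in range(500):
    flux = kappa * (np.roll(u, -1) - u)       # flux across edge i -> i+1
    u = u + dt * (flux - np.roll(flux, 1))    # conservative update

mass_error = abs(u.sum() - mass0)             # periodic + conservative => ~0
```

The conservative flux form preserves total mass exactly; a patch scheme aims to reproduce the emergent smooth macroscale behaviour of such a simulation while computing on only small fractions of the domain.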
Frontal Theta Links Prediction Errors to Behavioral Adaptation in Reinforcement Learning
Cavanagh, James F.; Frank, Michael J.; Klein, Theresa J.; Allen, John J.B.
2009-01-01
Investigations into action monitoring have consistently detailed a fronto-central voltage deflection in the Event-Related Potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the Feedback Related Negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Medio-frontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations: with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice. PMID:19969093
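The single-trial reward prediction errors described above come from a standard Q-learning update, delta = r - Q(a). A minimal sketch with an assumed learning rate and reward schedule (not the paper's fitted parameters):

```python
import random

random.seed(0)
alpha = 0.1               # learning rate (assumed, not a fitted value)
p_reward = [0.8, 0.2]     # reward probability of each choice
Q = [0.0, 0.0]
deltas = []               # the single-trial prediction errors

for _ in range(1000):
    a = random.randrange(2)                    # random policy, for illustration
    r = 1.0 if random.random() < p_reward[a] else 0.0
    delta = r - Q[a]                           # reward prediction error
    Q[a] += alpha * delta
    deltas.append(delta)
# Q converges toward the true reward probabilities, and delta shrinks on average.
```

In the study, delta on each trial is the quantity regressed against single-trial frontal theta power.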
A Unified Approach to Measurement Error and Missing Data: Details and Extensions
ERIC Educational Resources Information Center
Blackwell, Matthew; Honaker, James; King, Gary
2017-01-01
We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model…
Metrics to quantify the importance of mixing state for CCN activity
Ching, Joseph; Fast, Jerome; West, Matthew; ...
2017-06-21
It is commonly assumed that models are more prone to errors in predicted cloud condensation nuclei (CCN) concentrations when the aerosol populations are externally mixed. In this work we investigate this assumption by using the mixing state index (χ) proposed by Riemer and West (2013) to quantify the degree of external and internal mixing of aerosol populations. We combine this metric with particle-resolved model simulations to quantify error in CCN predictions when mixing state information is neglected, exploring a range of scenarios that cover different conditions of aerosol aging. We show that mixing state information does indeed become unimportant for more internally mixed populations, more precisely for populations with χ larger than 75%. For more externally mixed populations (χ below 20%) the relationship of χ and the error in CCN predictions is not unique and ranges from lower than -40% to about 150%, depending on the underlying aerosol population and the environmental supersaturation. We explain the reasons for this behavior with detailed process analyses.
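The mixing state index χ of Riemer and West (2013) is computed from per-particle species masses via mass-fraction entropies; a sketch of the standard definition (variable names are ours):

```python
import numpy as np

def mixing_state_index(masses):
    """Mixing state index chi of Riemer and West (2013).

    masses: (n_particles, n_species) array of per-particle species masses.
    Returns chi in [0, 1]; 0 = fully external, 1 = fully internal mixture.
    """
    m = np.asarray(masses, dtype=float)
    mu = m.sum(axis=1)                 # per-particle total mass
    p_i = mu / mu.sum()                # particle mass fractions
    p_ia = m / mu[:, None]             # species fractions within each particle
    p_a = m.sum(axis=0) / m.sum()      # bulk species mass fractions
    with np.errstate(divide="ignore", invalid="ignore"):
        H_i = -np.nansum(np.where(p_ia > 0, p_ia * np.log(p_ia), 0.0), axis=1)
        H_gamma = -np.nansum(np.where(p_a > 0, p_a * np.log(p_a), 0.0))
    D_alpha = np.exp(np.sum(p_i * H_i))   # average particle species diversity
    D_gamma = np.exp(H_gamma)             # bulk population species diversity
    return (D_alpha - 1.0) / (D_gamma - 1.0)
```

Two pure particles of different species give χ = 0 (fully external); particles with identical composition give χ = 1 (fully internal).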
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL
2016-09-15
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering, and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
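The inverse problem iTOUGH2 solves has the generic weighted-least-squares form described above. A toy sketch with a one-parameter forward model and a derivative-free minimizer standing in for a TOUGH2 run and iTOUGH2's optimization routines (all values invented):

```python
import numpy as np

# Toy "forward model": exponential decay with unknown rate k, a stand-in for a
# TOUGH2 run; the iTOUGH2-style objective is a weighted sum of squared residuals.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
obs = np.exp(-0.7 * t)      # synthetic observations, true rate k = 0.7
sigma = 0.05                # assumed observation standard deviation

def objective(k):
    r = (np.exp(-k * t) - obs) / sigma    # weighted residuals
    return float(np.sum(r ** 2))

# Derivative-free golden-section search, standing in for iTOUGH2's minimizers.
lo, hi = 0.0, 2.0
phi = (np.sqrt(5.0) - 1.0) / 2.0
for _ in range(60):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if objective(a) < objective(b):
        hi = b
    else:
        lo = a
k_hat = 0.5 * (lo + hi)     # recovers k = 0.7 on this noiseless synthetic data
```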
Monitoring Building Deformation with InSAR: Experiments and Validation
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-01-01
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings: the Bohai Building and the China Theater, both located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values using both Ordinary Least Squares (OLS) regression and measurement of error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and the leveling values, and the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that millimeter-level accuracy can be achieved with the InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement of error analyses, and compare them with the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated. PMID:27999403
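The two accuracy analyses named in the abstract, OLS regression of InSAR against leveling and an RMSE of their differences, can be sketched as follows (the paired displacements are invented for illustration):

```python
import numpy as np

# Hypothetical paired displacements (mm): leveling (reference) versus InSAR.
leveling = np.array([-3.2, -1.1, 0.4, 1.8, 2.9, 4.1])
insar = np.array([-3.0, -1.3, 0.2, 2.1, 2.7, 4.3])

# OLS regression: insar = slope * leveling + intercept.
slope, intercept = np.polyfit(leveling, insar, 1)
r = np.corrcoef(leveling, insar)[0, 1]

# Measurement-of-error analysis: RMSE of the InSAR-minus-leveling differences.
rmse = np.sqrt(np.mean((insar - leveling) ** 2))
```

A slope near 1, a high correlation coefficient, and a sub-millimeter RMSE together correspond to the agreement the study reports.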
A field evaluation of a piezo-optical dosimeter for environmental monitoring of nitrogen dioxide.
Wright, John D; Schillinger, Eric F J; Cazier, Fabrice; Nouali, Habiba; Mercier, Agnes; Beaugard, Charles
2004-06-01
Measurements of 8-hour time-weighted average NO(2) concentrations are reported at 7 different locations in the region of Dunkirk over 5 consecutive days using PiezOptic monitoring badges previously calibrated for the range 0-70 ppb, together with data from chemiluminescent analysers at 5 sites (4 fixed and 1 mobile). The latter facilities also provided data on ozone and NO concentrations and meteorological conditions. Daily averages from the two pairs of badges in different types of sampling cover at each site were compared with data from the chemiluminescent analysers and found largely to agree within error margins of +/-30%. Although NO(2) and ozone concentrations were low, rendering detailed discussion impossible, the general features followed expected patterns.
Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2; Applications
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.
2005-01-01
The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.
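The retrieval step above rests on a linear mill model, m = M E, with M the 6 x 3 calibration matrix; given M, the 3-D field follows by least squares. A sketch under that assumption (M and the field values are invented; this is not the Lagrange-multiplier calibration itself):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed linear mill model: each of 6 mills responds linearly to the 3-D
# field, m = M @ E. M here is an arbitrary stand-in for a calibrated matrix.
M = rng.normal(size=(6, 3))
E_true = np.array([12000.0, -3000.0, 45000.0])   # storm field components, V/m

# Mill readings with ~1 V/m measurement noise, as in the simulations above.
m = M @ E_true + rng.normal(scale=1.0, size=6)

# Least-squares retrieval of the 3-D field from the 6 mill outputs.
E_hat, *_ = np.linalg.lstsq(M, m, rcond=None)
retrieval_error = np.linalg.norm(E_hat - E_true) / np.linalg.norm(E_true)
```

With six mills constraining three field components, the overdetermined least-squares inversion keeps the retrieval error far below the percent-level errors quoted for the full calibration problem.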
Interobserver error involved in independent attempts to measure cusp base areas of Pan M1s
Bailey, Shara E; Pilbrow, Varsha C; Wood, Bernard A
2004-01-01
Cusp base areas measured from digitized images increase the amount of detailed quantitative information one can collect from post-canine crown morphology. Although this method is gaining wide usage for taxonomic analyses of extant and extinct hominoids, the techniques for digitizing images and taking measurements differ between researchers. The aim of this study was to investigate interobserver error in order to help assess the reliability of cusp base area measurement within extant and extinct hominoid taxa. Two of the authors measured individual cusp base areas and total cusp base area of 23 maxillary first molars (M1) of Pan. From these, relative cusp base areas were calculated. No statistically significant interobserver differences were found for either absolute or relative cusp base areas. On average the hypocone and paracone showed the least interobserver error (< 1%) whereas the protocone and metacone showed the most (2.6–4.5%). We suggest that the larger measurement error in the metacone/protocone is due primarily to either weakly defined fissure patterns and/or the presence of accessory occlusal features. Overall, levels of interobserver error are similar to those found for intraobserver error. The results of our study suggest that if certain prescribed standards are employed then cusp and crown base areas measured by different individuals can be pooled into a single database. PMID:15447691
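Interobserver error of the kind reported above can be expressed as the absolute difference between observers as a percentage of their mean; a sketch with invented paired measurements:

```python
import numpy as np

# Hypothetical paired cusp base areas (mm^2) from two observers.
obs1 = np.array([24.1, 18.7, 21.3, 16.9])
obs2 = np.array([24.4, 18.2, 21.9, 17.3])

# Interobserver error per specimen, as a percent of the observer mean.
percent_error = 100.0 * np.abs(obs1 - obs2) / ((obs1 + obs2) / 2.0)
mean_percent_error = percent_error.mean()
```

Values of a few percent or less, as in this sketch, are in the range the study reports for the better-defined cusps.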
Haghighi, Mohammad Hosein Hayavi; Dehghani, Mohammad; Teshnizi, Saeid Hoseini; Mahmoodi, Hamid
2014-01-01
Accurate cause of death coding leads to organised and usable death information but there are some factors that influence documentation on death certificates and therefore affect the coding. We reviewed the role of documentation errors on the accuracy of death coding at Shahid Mohammadi Hospital (SMH), Bandar Abbas, Iran. We studied the death certificates of all deceased patients in SMH from October 2010 to March 2011. Researchers determined and coded the underlying cause of death on the death certificates according to the guidelines issued by the World Health Organization in Volume 2 of the International Statistical Classification of Diseases and Health Related Problems-10th revision (ICD-10). Necessary ICD coding rules (such as the General Principle, Rules 1-3, the modification rules and other instructions about death coding) were applied to select the underlying cause of death on each certificate. Demographic details and documentation errors were then extracted. Data were analysed with descriptive statistics and chi square tests. The accuracy rate of causes of death coding was 51.7%, demonstrating a statistically significant relationship (p=.001) with major errors but not such a relationship with minor errors. Factors that result in poor quality of Cause of Death coding in SMH are lack of coder training, documentation errors and the undesirable structure of death certificates.
NASA Astrophysics Data System (ADS)
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of mean and mean square performance for the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. The simulation results also demonstrate a good match with the derived analytical expressions.
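A single-node sketch of RLS with a variable forgetting factor adapted from the a posteriori error (a generic gradient-style VFF rule on one node, not the paper's diffusion algorithm; step sizes and bounds are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, order = 2000, 4
w_true = np.array([0.5, -1.0, 0.25, 0.75])    # unknown parameter vector

x = rng.normal(size=n + order)
d = np.array([w_true @ x[k:k + order] for k in range(n)])
d += 0.01 * rng.normal(size=n)                # desired signal with noise

lam, lam_min, lam_max = 0.98, 0.9, 0.9999
rho = 1e-4                                     # VFF adaptation step (assumed)
w = np.zeros(order)
P = 1e3 * np.eye(order)
for k in range(n):
    u = x[k:k + order]
    e_pri = d[k] - w @ u                       # a priori error
    g = P @ u / (lam + u @ P @ u)              # RLS gain
    w = w + g * e_pri
    P = (P - np.outer(g, u @ P)) / lam
    e_post = d[k] - w @ u                      # a posteriori error
    # Variable forgetting factor: shrink lambda when the error is large
    # (track faster), grow it toward lam_max when the error is small.
    lam = np.clip(lam - rho * (e_post ** 2 - 1e-4), lam_min, lam_max)

msd = np.linalg.norm(w - w_true) ** 2          # mean square deviation
```

The MSD here plays the same role as the analytical MSD expressions derived in the paper.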
Structure of the Upper Troposphere-Lower Stratosphere (UTLS) in GEOS-5
NASA Technical Reports Server (NTRS)
Pawson, Steven
2011-01-01
This study examines the structure of the upper troposphere and lower stratosphere in the GEOS-5 data assimilation system. Near-real-time analyses, with a horizontal resolution of one-half or one-quarter degree and a vertical resolution of about 1 km in the tropopause region, are examined with an emphasis on spatial structures at and around the tropopause. The contributions of in-situ observations of temperature and microwave and infrared radiances to the analyses are discussed, with some focus on the interplay between these types of observations. For a historical analysis (MERRA) performed with GEOS-5, the impacts of changing observations on the assimilation system are examined in some detail; this documents some aspects of the time dependence of the analyses that must be taken into account in the isolation of true geophysical trends. Finally, some sensitivities of the ozone analyses to input data and correlated errors between temperature and ozone are discussed.
Quality control and conduct of genome-wide association meta-analyses.
Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C; Wood, Andrew R; Locke, Adam E; Mägi, Reedik; Ferreira, Teresa; Fall, Tove; Graff, Mariaelisa; Justice, Anne E; Luan, Jian'an; Gustafsson, Stefan; Randall, Joshua C; Vedantam, Sailaja; Workalemahu, Tsegaselassie; Kilpeläinen, Tuomas O; Scherag, André; Esko, Tonu; Kutalik, Zoltán; Heid, Iris M; Loos, Ruth J F
2014-05-01
Rigorous organization and quality control (QC) are necessary to facilitate successful genome-wide association meta-analyses (GWAMAs) of statistics aggregated across multiple genome-wide association studies. This protocol provides guidelines for (i) organizational aspects of GWAMAs, and for (ii) QC at the study file level, the meta-level across studies and the meta-analysis output level. Real-world examples highlight issues experienced and solutions developed by the GIANT Consortium that has conducted meta-analyses including data from 125 studies comprising more than 330,000 individuals. We provide a general protocol for conducting GWAMAs and carrying out QC to minimize errors and to guarantee maximum use of the data. We also include details for the use of a powerful and flexible software package called EasyQC. Precise timings will be greatly influenced by consortium size. For consortia of comparable size to the GIANT Consortium, this protocol takes a minimum of about 10 months to complete.
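One routine QC check of the kind EasyQC performs at the study file level is comparing reported p-values against those implied by the effect estimate and its standard error; a generic sketch (not EasyQC's implementation, and the summary statistics are invented):

```python
import math

# Toy GWAS summary statistics: (beta, se, reported_p) per variant.
rows = [
    (0.12, 0.03, 6.3e-5),
    (-0.05, 0.04, 0.211),
    (0.20, 0.02, 1.0e-4),   # inconsistent: beta/se implies a far smaller p
]

def implied_p(beta, se):
    """Two-sided normal p-value implied by the effect estimate and its SE."""
    z = abs(beta / se)
    return math.erfc(z / math.sqrt(2.0))

flagged = []
for i, (beta, se, p_rep) in enumerate(rows):
    # Flag variants whose reported and implied p disagree by >0.5 on log10 scale.
    if abs(math.log10(p_rep) - math.log10(implied_p(beta, se))) > 0.5:
        flagged.append(i)
```

Systematic disagreement across a study file typically indicates mislabeled columns or analysis errors, which is exactly what file-level QC is meant to catch before meta-analysis.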
First order error corrections in common introductory physics experiments
NASA Astrophysics Data System (ADS)
Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team
As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Students usually ignore these types of errors and give little thought to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.
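A concrete example of such a first-order correction (our illustration, not necessarily one from the abstract): the simple pendulum's period with the leading finite-amplitude term, T ≈ T0(1 + θ0²/16), which the small-angle model omits:

```python
import math

g = 9.81                        # local gravitational acceleration, m/s^2
L = 1.0                         # pendulum length, m
theta0 = math.radians(20.0)     # release amplitude

T0 = 2.0 * math.pi * math.sqrt(L / g)            # small-angle period
T_corrected = T0 * (1.0 + theta0 ** 2 / 16.0)    # first-order amplitude correction
relative_shift = (T_corrected - T0) / T0         # just under 1% at 20 degrees
```

A shift of this size is resolvable with an ordinary stopwatch over many swings, so ignoring it produces a systematic error rather than random scatter.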
Factors that influence the generation of autobiographical memory conjunction errors
Devitt, Aleea L.; Monk-Fromont, Edwin; Schacter, Daniel L.; Addis, Donna Rose
2015-01-01
The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory may be incorrectly incorporated into another, forming autobiographical memory conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of autobiographical memory conjunction errors. PMID:25611492
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarazona, David; Berz, Martin; Hipple, Robert
The main goal of the Muon g-2 Experiment (g-2) at Fermilab is to measure the muon anomalous magnetic moment to unprecedented precision. This new measurement will allow testing of the completeness of the Standard Model (SM) and validation of other theoretical models beyond the SM. The close interplay between understanding particle beam dynamics and preparing the beam properties for the experimental measurement is essential to reducing systematic errors in the determination of the muon anomalous magnetic moment. We describe progress in developing detailed calculations and modeling of the muon beam delivery system in order to obtain a better understanding of spin-orbit correlations, nonlinearities, and more realistic aspects that contribute to the systematic errors of the g-2 measurement. Our simulation is meant to provide, among other things, statistical studies of error effects and quick analyses of running conditions while g-2 is taking beam. We are using COSY, a differential algebra solver developed at Michigan State University that will also serve as an alternative to compare results obtained by other simulation teams of the g-2 Collaboration.
[Analysis of causes of incorrect use of dose aerosols].
Petro, W; Gebert, P; Lauber, B
1994-03-01
Preparations administered by inhalation make relatively high demands on the skill and knowledge of the patient in handling this form of application, since the effectiveness of the therapy is inseparably linked to its faultless application. The present article aims at analysing possible mistakes in handling and at finding the most effective way of avoiding them. Several groups of patients with different previous knowledge were analysed with respect to handling skill and the influence of training on its improvement; the patients' self-assessment was analysed by questioning them. Most mistakes are committed by patients whose only information consists of the contents of the package insert. Written instructions alone cannot convey sufficient information, especially on how to synchronize the release operations. Major mistakes are insufficient expiration before application in 85.6% of the patients and lack of synchronisation in 55.9%, while the lowest rate of handling errors was seen in patients who had undergone training and instruction. Training in application associated with demonstration and subsequent exercise reduces the error ratio to a tolerable level. Propellant-free powder inhalers and preparations applied by means of a spacer are clearly superior to others in respect of a comparatively low error rate. 99.3% of all patients believe they are correctly following the instructions, but on going into the question more deeply it becomes apparent that 37.1% of them make incorrect statements. Hence, practical training in application should get top priority in the treatment of obstructive diseases of the airways. The individual steps of the inhalation technique must be explained in detail and demonstrated by means of a placebo dosage aerosol. (ABSTRACT TRUNCATED AT 250 WORDS)
Retrieving Storm Electric Fields from Aircraft Field Mill Data. Part II: Applications
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.
2006-01-01
The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m(sup -1) and a 5 V m(sup -1) error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m(sup -l), the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m(sup -1) of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch down method developed in Part I, and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.
NASA Astrophysics Data System (ADS)
Zarychta, R.; Zarychta, A.
2013-12-01
Extraction of mineral resources, including rocks, usually causes significant changes to the landscape. Transformation of the relief, whose character and scale can be analysed by means of cartographic materials, seems to be the most interesting. Reconstruction of the relief of the period prior to the exploitation is a starting point for such investigation. It can be done based on archival cartographic materials, which are difficult to obtain. However, overly varied morphological material can lead to erroneous conclusions, which suggests interpreting three-dimensional models of the relief instead. Hence, the paper deals with reconstruction and visualisation of the relief (in the period before the exploitation) of four sand fields of the old sand mine excavation "Siemonia". A geological map of Poland (Wojkowice sheet) has been used for the purpose. A geostatistical analysis by means of the programmes Surfer 8 and ArcGIS 10.1 has been performed on the map. An estimation method called ordinary kriging has been applied; it is a B.L.U.E. (best linear unbiased estimator) satisfying the unbiasedness condition that the weights sum to 1. The calculated values of errors (mean error, mean squared error and mean squared standardised error) obtained as a result of applying the cross-validation procedure are, to a large extent, in agreement with the predetermined values of errors given by numerous authors in the scientific literature. This confirms proper "manual" fitting of the two spherical variogram models to the empirical variograms.
The generated contour map of the investigated area (based on points estimated at the nodes of the interpolation grid), together with its three-dimensional digital model, matches the previous state of the investigated area more adequately (owing to clearer rendering of the relief) than the two other presented types of cartographic visualisation, which were made without geostatistical methods. The latter can therefore only be used to visualise the relief, without any detailed geomorphological interpretation, owing to their inaccuracy. Detailed analyses should evidently be performed on a digital terrain model, accompanied by its contour map, obtained when the relief is reconstructed by means of geostatistical methods (especially ordinary kriging).
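The kriging procedure described above can be sketched in a few lines. The following is a minimal illustration of ordinary kriging with a spherical variogram and leave-one-out cross-validation; the variogram parameters and sample data are invented for illustration and are not taken from the study.

```python
import numpy as np

def spherical_variogram(h, nugget=0.1, sill=1.0, rng=50.0):
    """Spherical variogram model: gamma(0) = 0, levelling off at nugget+sill beyond the range."""
    g = np.where(h <= rng,
                 nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 nugget + sill)
    return np.where(h == 0.0, 0.0, g)

def ordinary_kriging(xy, z, x0):
    """Predict z at x0; the Lagrange multiplier enforces sum(weights) = 1 (unbiasedness)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d)
    A[:n, n] = 1.0
    A[n, :n] = 1.0          # unbiasedness constraint row/column
    A[n, n] = 0.0
    b = np.append(spherical_variogram(np.linalg.norm(xy - x0, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z, np.sqrt(max(w @ b[:n] + mu, 0.0))  # prediction, kriging std. dev.

def cross_validate(xy, z):
    """Leave-one-out cross-validation errors used to judge the variogram fit."""
    errs, std_errs = [], []
    for i in range(len(z)):
        keep = np.arange(len(z)) != i
        zhat, s = ordinary_kriging(xy[keep], z[keep], xy[i])
        errs.append(zhat - z[i])
        std_errs.append((zhat - z[i]) / s)
    errs, std_errs = np.array(errs), np.array(std_errs)
    # mean error, mean squared error, mean squared standardised error (target ~0, small, ~1)
    return errs.mean(), np.mean(errs ** 2), np.mean(std_errs ** 2)

rng_gen = np.random.default_rng(0)
xy = rng_gen.uniform(0.0, 100.0, size=(40, 2))
z = np.sin(xy[:, 0] / 30.0) + 0.1 * rng_gen.standard_normal(40)
me, mse, msse = cross_validate(xy, z)
```

Because the variogram is zero at zero lag, kriging reproduces the data exactly at sample locations, which is a quick sanity check on the solver.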
NASA Technical Reports Server (NTRS)
Shapiro, I. I.; Counselman, C. C., III
1975-01-01
The uses of radar observations of planets and very-long-baseline radio interferometric observations of extragalactic objects to test theories of gravitation are described in detail, with special emphasis on sources of error. The accuracy achievable in these tests with data already obtained can be summarized in terms of: retardation of signal propagation (radar), deflection of radio waves (interferometry), advance of planetary perihelia (radar), gravitational quadrupole moment of the sun (radar), and time variation of the gravitational constant (radar). The analyses completed to date have yielded no significant disagreement with the predictions of general relativity.
Improving designer productivity
NASA Technical Reports Server (NTRS)
Hill, Gary C.
1992-01-01
Designer and design team productivity improves with skill, experience, and the tools available. The design process involves numerous trials and errors, analyses, refinements, and addition of details. Computerized tools have greatly speeded the analysis, and now new theories and methods, emerging under the label Artificial Intelligence (AI), are being used to automate skill and experience. These tools improve designer productivity by capturing experience, emulating recognized skillful designers, and making the essence of complex programs easier to grasp. This paper outlines the aircraft design process in today's technology and business climate, presenting some of the challenges ahead and some of the promising AI methods for meeting those challenges.
Collaborative recall of details of an emotional film.
Wessel, Ineke; Zandstra, Anna Roos E; Hengeveld, Hester M E; Moulds, Michelle L
2015-01-01
Collaborative inhibition refers to the phenomenon that when several people work together to produce a single memory report, they typically produce fewer items than when the unique items in the individual reports of the same number of participants are combined (i.e., nominal recall). Yet, apart from this negative effect, collaboration may be beneficial in that group members remove errors from a collaborative report. Collaborative inhibition studies on memory for emotional stimuli are scarce. Therefore, the present study examined both collaborative inhibition and collaborative error reduction in the recall of the details of emotional material in a laboratory setting. Female undergraduates (n = 111) viewed a film clip of a fatal accident and subsequently engaged in either collaborative (n = 57) or individual recall (n = 54) in groups of three. The results show that, across several detail categories, collaborating groups recalled fewer details than nominal groups. However, overall, nominal recall produced more errors than collaborative recall. The present results extend earlier findings on both collaborative inhibition and error reduction to the recall of affectively laden material. These findings may have implications for the applied fields of forensic and clinical psychology.
Kim, ChungYun; Mazan, Jennifer L; Quiñones-Boex, Ana C
To determine pharmacists' attitudes and behaviors on medication errors and their disclosure and to compare community and hospital pharmacists on such views. An online questionnaire was developed from previous studies on physicians' disclosure of errors. Questionnaire items included demographics, environment, personal experiences, and attitudes on medication errors and the disclosure process. An invitation to participate along with the link to the questionnaire was electronically distributed to members of two Illinois pharmacy associations. A follow-up reminder was sent 4 weeks after the original message. Data were collected for 3 months, and statistical analyses were performed with the use of IBM SPSS version 22.0. The overall response rate was 23.3% (n = 422). The average employed respondent was a 51-year-old white woman with a BS Pharmacy degree working in a hospital pharmacy as a clinical staff member. Regardless of practice settings, pharmacist respondents agreed that medication errors were inevitable and that a disclosure process is necessary. Respondents from community and hospital settings were further analyzed to assess any differences. Community pharmacist respondents were more likely to agree that medication errors were inevitable and that pharmacists should address the patient's emotions when disclosing an error. Community pharmacist respondents were also more likely to agree that the health care professional most closely involved with the error should disclose the error to the patient and thought that it was the pharmacists' responsibility to disclose the error. Hospital pharmacist respondents were more likely to agree that it was important to include all details in a disclosure process and more likely to disagree on putting a "positive spin" on the event. Regardless of practice setting, responding pharmacists generally agreed that errors should be disclosed to patients. 
There were, however, significant differences in their attitudes and behaviors depending on their particular practice setting. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Davis, Matthew H.
2016-01-01
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. 
The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209
Standardising analysis of carbon monoxide rebreathing for application in anti-doping.
Alexander, Anthony C; Garvican, Laura A; Burge, Caroline M; Clark, Sally A; Plowman, James S; Gore, Christopher J
2011-03-01
Determination of total haemoglobin mass (Hbmass) via carbon monoxide (CO) rebreathing depends critically on repeatable measurement of percent carboxyhaemoglobin (%HbCO) in blood with a hemoximeter. The main aim of this study was to determine, for an OSM3 hemoximeter, the number of replicate measures, as well as the theoretical change in percent carboxyhaemoglobin, required to yield a random error of analysis (Analyser Error) of ≤1%. Before and after inhalation of CO, nine participants provided a total of 576 blood samples that were each analysed five times for percent carboxyhaemoglobin on one of three OSM3 hemoximeters, with approximately one-third of blood samples analysed on each OSM3. The Analyser Error was calculated for the first two (duplicate), first three (triplicate) and first four (quadruplicate) measures on each OSM3, as well as for all five measures (quintuplicates). Two methods of CO-rebreathing, a 2-min and a 10-min procedure, were evaluated for Analyser Error. For duplicate analyses of blood, the Analyser Error for the 2-min method was 3.7, 4.0 and 5.0% for the three OSM3s when the percent carboxyhaemoglobin increased by 2 above resting values. With quintuplicate analyses of blood, the corresponding errors reduced to 0.8, 0.9 and 1.0% for the 2-min method when the percent carboxyhaemoglobin increased by 5.5 above resting values. In summary, to minimise the Analyser Error to approximately ≤1% on an OSM3 hemoximeter, researchers should make ≥5 replicate measurements of percent carboxyhaemoglobin, and the volume of CO administered should be sufficient to increase percent carboxyhaemoglobin by ≥5.5 above baseline levels. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
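The reduction from duplicate to quintuplicate error follows from averaging: the random error of a mean falls with the square root of the number of replicates, and the relative error falls as the %HbCO change grows. A sketch with an illustrative replicate standard deviation (not the study's data):

```python
import math

def analyser_error_pct(replicate_sd, n_replicates, delta_hbco):
    """Random error of the mean %HbCO reading, expressed relative to the rise
    in %HbCO produced by the CO dose. Illustrative model only."""
    se_mean = replicate_sd / math.sqrt(n_replicates)  # SD of the mean of n replicates
    return 100.0 * se_mean / delta_hbco

# Illustrative replicate SD of 0.1 %HbCO units
duplicate = analyser_error_pct(0.1, 2, 2.0)      # 2-unit rise, duplicate analyses
quintuplicate = analyser_error_pct(0.1, 5, 5.5)  # 5.5-unit rise, quintuplicate analyses
```

With these invented numbers the duplicate error is roughly 3.5% and the quintuplicate error roughly 0.8%, the same order as the abstract's reported values.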
PID-based error signal modeling
NASA Astrophysics Data System (ADS)
Yohannes, Tesfay
1997-10-01
This paper introduces PID-based error signal modeling. The modeling is based on the betterment process. The resulting iterative learning algorithm is presented, and a detailed proof is provided for both linear and nonlinear systems.
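The betterment process referred to above updates the control input from trial to trial using the previous trial's error signal. The sketch below shows a PD-type iterative learning update on an assumed first-order discrete plant; the plant and learning gains are illustrative choices, not the paper's.

```python
import numpy as np

def simulate(u, a=0.5, b=1.0, y0=0.0):
    """Illustrative first-order plant: y[t+1] = a*y[t] + b*u[t]."""
    y = np.zeros(len(u) + 1)
    y[0] = y0
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y

N = 20
t = np.arange(1, N + 1)
r = np.sin(2 * np.pi * t / N)   # reference trajectory for y[1..N]
u = np.zeros(N)                 # first trial: no control knowledge
kp, kd = 0.8, 0.1               # proportional and derivative learning gains

max_errs = []
for trial in range(50):
    y = simulate(u)
    e = r - y[1:]                               # error at the outputs each u[t] drives
    de = np.diff(np.concatenate(([0.0], e)))    # discrete derivative of the error
    u = u + kp * e + kd * de                    # betterment (PD-type learning) update
    max_errs.append(np.abs(e).max())
```

Because the learning gains here keep the trial-to-trial error map a contraction for this plant, the peak tracking error shrinks toward zero over repeated trials.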
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lon N. Haney; David I. Gertman
2003-04-01
Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the absence of comprehensive error and performance-shaping-factor taxonomies and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability, as well as comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE), with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved valuable for qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.
Prevalence of refractive errors in children in India: a systematic review.
Sheeladevi, Sethu; Seelam, Bharani; Nukella, Phanindra B; Modi, Aditi; Ali, Rahul; Keay, Lisa
2018-04-22
Uncorrected refractive error is an avoidable cause of visual impairment which affects children in India. The objective of this review is to estimate the prevalence of refractive errors in children ≤ 15 years of age. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed in this review. A detailed literature search was performed to include all population and school-based studies published from India between January 1990 and January 2017, using the Cochrane Library, Medline and Embase. The quality of the included studies was assessed based on a critical appraisal tool developed for systematic reviews of prevalence studies. Four population-based studies and eight school-based studies were included. The overall prevalence of refractive error per 100 children was 8.0 (CI: 7.4-8.1) and in schools it was 10.8 (CI: 10.5-11.2). The population-based prevalence of myopia, hyperopia (≥ +2.00 D) and astigmatism was 5.3 per cent, 4.0 per cent and 5.4 per cent, respectively. Combined refractive error and myopia alone were higher in urban areas compared to rural areas (odds ratio [OR]: 2.27 [CI: 2.09-2.45]) and (OR: 2.12 [CI: 1.79-2.50]), respectively. The prevalence of combined refractive errors and myopia alone in schools was higher among girls than boys (OR: 1.2 [CI: 1.1-1.3] and OR: 1.1 [CI: 1.1-1.2]), respectively. However, hyperopia was more prevalent among boys than girls in schools (OR: 2.1 [CI: 1.8-2.4]). Refractive error in children in India is a major public health problem and requires concerted efforts from various stakeholders including the health care workforce, education professionals and parents, to manage this issue. © 2018 Optometry Australia.
Li, Zhijian; Xu, Keke; Wu, Shubin; Lv, Jia; Jin, Di; Song, Zhen; Wang, Zhongliang; Liu, Ping
2014-01-01
The prevalence of refractive error in the north of China is unknown. The study aimed to estimate the prevalence and associated factors of refractive error in school-aged children in a rural area of northern China. Cross-sectional study. The cluster random sampling method was used to select the sample. A total of 1700 subjects 5 to 18 years of age were examined. All participants underwent ophthalmic evaluation. Refraction was performed under cycloplegia. Associations of refractive errors with age, sex, and education were analysed. The main outcome measure was the prevalence rate of refractive error among school-aged children. Of the 1700 responders, 1675 were eligible. The prevalence of uncorrected, presenting, and best-corrected visual acuity of 20/40 or worse in the better eye was 6.3%, 3.0% and 1.2%, respectively. The prevalence of myopia was 5.0% (84/1675; 95% CI, 4.8%-5.4%) and of hyperopia was 1.6% (27/1675; 95% CI, 1.0%-2.2%). Astigmatism was evident in 2.0% of the subjects. Myopia increased with increasing age, whereas hyperopia and astigmatism were associated with younger age. Myopia, hyperopia and astigmatism were more common in females. We also found that the prevalence of refractive error was associated with education. Myopia and astigmatism were more common in those with higher levels of education. This report has provided details of the refractive status of a rural school-aged population. Although the prevalence of refractive errors is relatively low in this population, the unmet need for spectacle correction remains a significant challenge for refractive eye-care services. © 2013 Royal Australian and New Zealand College of Ophthalmologists.
C-band radar pulse Doppler error: Its discovery, modeling, and elimination
NASA Technical Reports Server (NTRS)
Krabill, W. B.; Dempsey, D. J.
1978-01-01
The discovery of a C Band radar pulse Doppler error is discussed and use of the GEOS 3 satellite's coherent transponder to isolate the error source is described. An analysis of the pulse Doppler tracking loop is presented and a mathematical model for the error was developed. Error correction techniques were developed and are described including implementation details.
Convergence of methods for coupling of microscopic and mesoscopic reaction-diffusion simulations
NASA Astrophysics Data System (ADS)
Flegg, Mark B.; Hellander, Stefan; Erban, Radek
2015-05-01
In this paper, three multiscale methods for coupling mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods that are discussed in detail have been previously reported in the literature: the two-regime method (TRM) and the compartment-placement method (CPM). The third method, introduced and analysed in this paper, is called the ghost cell method (GCM), since it works by constructing a "ghost cell" in which molecules can disappear and jump into the compartment-based simulation. A comparison of the sources of error is presented. The convergence of this error is studied as the time step Δt (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter h, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) Δt → 0 with h fixed; and (ii) Δt → 0 and h → 0 such that √Δt/h is fixed. The error of the previously developed approaches (the TRM and CPM) converges to zero only in limiting case (ii), but not in case (i). It is shown that the error of the GCM converges in limiting case (i). Thus the GCM is superior to previous coupling techniques if the mesoscopic description is much coarser than the microscopic part of the model.
Distorting the Historical Record: One Detailed Example from the Albert Shanker Institute's Report
ERIC Educational Resources Information Center
American Educator, 2012
2012-01-01
This article presents a detailed example from the Albert Shanker Institute's report that shows the error of U.S. history textbooks and how it is distorting the historical record. One of the most glaring errors in textbooks is the treatment of the role that unions and labor activists played as key participants in the civil rights movement. The…
Temperature dependence of emission measure in solar X-ray plasmas. 1: Non-flaring active regions
NASA Technical Reports Server (NTRS)
Phillips, K. J. H.
1974-01-01
X-ray and ultraviolet line emission from the hot, optically thin material forming coronal active regions on the sun may be described in terms of an emission measure distribution function, Phi(T). A relationship is developed between line flux and Phi(T) under the assumption that the electron density is a single-valued function of temperature. The sources of error involved in deriving Phi(T) from a set of line fluxes are examined in some detail. These include errors in atomic data (collisional excitation rates, assessment of other mechanisms for populating the excited states of transitions, element abundances, ion concentrations, oscillator strengths) and errors in observed line fluxes arising from poorly known instrumental responses. Two previous analyses in which Phi(T) for a non-flaring active region is derived are discussed. A least-squares method due to Batstone uses X-ray data of low statistical significance, a fact which appears to influence the results considerably. Two methods for finding Phi(T) ab initio are developed, with the coefficients evaluated by least squares. These two methods should apply not only to active-region plasmas, but also to hot, flare-produced plasmas.
NASA Technical Reports Server (NTRS)
1974-01-01
Field measurements performed simultaneously with Skylab overpasses, in order to provide comparative calibration and performance evaluation measurements for the EREP sensors, are presented. Wavelength regions covered include solar radiation (400 to 1300 nanometers) and thermal radiation (8 to 14 micrometers). Measurements consisted of general conditions and near-surface meteorology, atmospheric temperature and humidity vs. altitude, thermal brightness temperature, total and diffuse solar radiation, direct solar radiation (subsequently analyzed for optical depth/transmittance), and target reflectivity/radiance. The particular instruments used are discussed along with the analyses performed. Detailed instrument operation, calibrations, techniques, and errors are given.
Han, Kyunghwa; Jung, Inkyung
2018-05-01
This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical methods. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes and 139 applications of statistical methods with categorical outcomes. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.
NASA Astrophysics Data System (ADS)
DeLong, S. B.; Avdievitch, N. N.
2014-12-01
As high-resolution topographic data become increasingly available, comparison of multitemporal and disparate datasets (e.g. airborne and terrestrial lidar) enables high-accuracy quantification of landscape change and detailed mapping of surface processes. However, if these data are not properly managed and aligned with maximum precision, results may be spurious. Often this is due to slight differences in coordinate systems that require complex geographic transformations, and to systematic error that is difficult to diagnose and correct. Here we present an analysis of four airborne and three terrestrial lidar datasets collected between 2003 and 2014 that we use to quantify change at an active earthflow in Mill Gulch, Sonoma County, California. We first identify and address systematic error internal to each dataset, such as registration offset between flight lines or scan positions. We then use a variant of an iterative closest point (ICP) algorithm to align point cloud data by maximizing use of stable portions of the landscape with minimal internal error. Using products derived from the aligned point clouds, we make our geomorphic analyses. These methods may be especially useful for change detection analyses in which accurate georeferencing is unavailable, as is often the case with some terrestrial lidar or "structure from motion" data. Our results show that the Mill Gulch earthflow has been active throughout the study period. We see continuous downslope flow, ongoing incorporation of new hillslope material into the flow, sediment loss from hillslopes, episodic fluvial erosion of the earthflow toe, and an indication of increased activity during periods of high precipitation.
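A point-to-point variant of the ICP alignment step mentioned above can be sketched as follows. The point clouds and transform here are synthetic; a production workflow would use a k-d tree for the nearest-neighbour search and would restrict matching to stable terrain, as the abstract describes.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping rows of P onto Q (least squares)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=30):
    """Point-to-point ICP: match each source point to its nearest destination
    point, solve for the rigid transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree would be used at scale)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = kabsch(cur, matched)
        cur = cur @ R.T + t
    d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    return cur, np.sqrt(d2.min(axis=1).mean())   # aligned cloud, RMS residual

# Synthetic test: the same cloud under a small rotation about z plus a translation
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, size=(200, 3))
th = 0.05
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
moved = pts @ Rz.T + np.array([0.3, -0.2, 0.1])
aligned, rms = icp(moved, pts)
```

ICP converges only from a reasonable initial alignment, which is why the study first removes internal systematic error before the clouds are compared.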
The GRAPE aerosol retrieval algorithm
NASA Astrophysics Data System (ADS)
Thomas, G. E.; Poulsen, C. A.; Sayer, A. M.; Marsh, S. H.; Dean, S. M.; Carboni, E.; Siddans, R.; Grainger, R. G.; Lawrence, B. N.
2009-11-01
The aerosol component of the Oxford-Rutherford Aerosol and Cloud (ORAC) combined cloud and aerosol retrieval scheme is described and the theoretical performance of the algorithm is analysed. ORAC is an optimal estimation retrieval scheme for deriving cloud and aerosol properties from measurements made by imaging satellite radiometers and, when applied to cloud free radiances, provides estimates of aerosol optical depth at a wavelength of 550 nm, aerosol effective radius and surface reflectance at 550 nm. The aerosol retrieval component of ORAC has several incarnations - this paper addresses the version which operates in conjunction with the cloud retrieval component of ORAC (described by Watts et al., 1998), as applied in producing the Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) data-set. The algorithm is described in detail and its performance examined. This includes a discussion of errors resulting from the formulation of the forward model, sensitivity of the retrieval to the measurements and a priori constraints, and errors resulting from assumptions made about the atmospheric/surface state.
Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas
ERIC Educational Resources Information Center
Herzberg, Tina
2010-01-01
In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…
ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.
1999-03-01
ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flow in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter of the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
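The nonlinear least-squares estimation at the core of such inverse modeling can be illustrated with a basic Gauss-Newton iteration. The forward model below is a stand-in exponential decline with invented parameters, not a TOUGH2 simulation.

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, iters=20):
    """Minimise sum(residual(p)**2) with undamped Gauss-Newton steps."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)  # normal-equations step
    return p

# Illustrative forward model: y(t) = amp * exp(-k*t), "observed" without noise
t_obs = np.linspace(0.0, 5.0, 30)
true_params = np.array([2.0, 0.7])                 # amp, k
y_obs = true_params[0] * np.exp(-true_params[1] * t_obs)

def residual(p):
    return p[0] * np.exp(-p[1] * t_obs) - y_obs

def jacobian(p):
    e = np.exp(-p[1] * t_obs)
    return np.column_stack([e, -p[0] * t_obs * e])  # d(residual)/d(amp), d(residual)/dk

est = gauss_newton(residual, jacobian, [1.5, 0.5])
```

Codes such as ITOUGH2 wrap this idea with damping or trust-region safeguards and follow the fit with a residual and error analysis; the sketch omits both.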
Matsunaga, Hiroko; Goto, Mari; Arikawa, Koji; Shirai, Masataka; Tsunoda, Hiroyuki; Huang, Huan; Kambara, Hideki
2015-02-15
Analyses of gene expressions in single cells are important for understanding detailed biological phenomena. Here, a highly sensitive and accurate method by sequencing (called "bead-seq") to obtain a whole gene expression profile for a single cell is proposed. A key feature of the method is to use a complementary DNA (cDNA) library on magnetic beads, which enables adding washing steps to remove residual reagents in a sample preparation process. By adding the washing steps, the next steps can be carried out under the optimal conditions without losing cDNAs. Error sources were carefully evaluated to conclude that the first several steps were the key steps. It is demonstrated that bead-seq is superior to the conventional methods for single-cell gene expression analyses in terms of reproducibility, quantitative accuracy, and biases caused during sample preparation and sequencing processes. Copyright © 2014 Elsevier Inc. All rights reserved.
DOT National Transportation Integrated Search
1998-08-01
The purpose of this study was to identify the factors that contribute to pilot-controller communication errors. Reports submitted to the Aviation Safety Reporting System (ASRS) offer detailed accounts of specific types of errors and a great deal of ...
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The concept can also be applied to any navigation system. In a first step, all error sources were identified and, where applicable, corresponding target registration errors were computed. By applying error propagation calculations to these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as cone beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study, but significantly smaller than results from patient evaluations. We probably underestimated human influences such as positioning of the US scan head by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of positioning accuracy for computer guidance systems.
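Assuming the error components in a transformation chain are independent, a common first-order approach combines them in quadrature. A minimal sketch with invented error magnitudes (not the paper's values):

```python
import math

def rss(components_mm):
    """Root-sum-of-squares combination of independent 1-sigma error components (mm)."""
    return math.sqrt(sum(c * c for c in components_mm))

# Hypothetical chain: probe calibration, image-to-image registration, tracking
chain = {"calibration": 0.8, "registration": 1.1, "tracking": 0.5}
overall = rss(chain.values())
```

A rotational tracking error additionally scales with the lever arm between tracker and target, which is why the relative position of tracker and probe matters in the abstract's conclusion; the quadrature sum above captures only the translational part.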
Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris; Bjørn, Brian; Lilja, Beth; Mogensen, Torben
2011-03-01
Poor teamwork and communication between healthcare staff are correlated with patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. The objective of this study is to review RCA reports (RCARs) for characteristics of verbal communication errors between hospital staff from an organisational perspective. Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions and characteristics of verbal communication errors such as handover errors and errors during teamwork. Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between units and consults from other specialties, were particularly vulnerable processes. With the risk of bias in mind, it is concluded that more than half of the RCARs described erroneous verbal communication between staff members as root causes of, or contributing factors to, severe patient safety incidents. The RCARs' rich descriptions of the incidents revealed the organisational factors and needs related to these errors.
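The inter-rater agreement reported above (kappa values of 0.44-0.78) refers to Cohen's kappa, which corrects observed agreement between two raters for the agreement expected by chance. A toy sketch for binary codings (the example ratings are made up):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary codings (lists of 0/1).
    Illustrates the chance-corrected agreement statistic; the
    example ratings below are invented."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # expected agreement under independent raters with the same marginals
    pe = (sum(a) / n) * (sum(b) / n) + ((n - sum(a)) / n) * ((n - sum(b)) / n)
    return (po - pe) / (1 - pe)

rater1 = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = "communication error described"
rater2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(cohens_kappa(rater1, rater2))  # prints 0.5
```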
Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten
2015-10-07
Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to reference values than the other 2 BGMS. Insulin dosing errors were also lower for the Contour Next USB than for the other 2 systems. All BGMS fulfilled the ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
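The MARD statistic used to compare the meters is simply the mean of the absolute relative differences from the reference method, expressed as a percentage. A minimal sketch with invented readings:

```python
def mard(meter, reference):
    """Mean absolute relative difference (%) of meter readings versus
    laboratory reference values, the accuracy summary statistic used
    to compare the BGMS. The readings below are illustrative."""
    diffs = [abs(m - r) / r for m, r in zip(meter, reference)]
    return 100 * sum(diffs) / len(diffs)

reference = [100, 150, 200, 80]  # mg/dL, hexokinase method
meter     = [ 95, 156, 190, 84]  # mg/dL, hypothetical BGMS readings
print(round(mard(meter, reference), 2))  # prints 4.75
```

Lower MARD means the meter tracks the reference more closely; it complements the error-grid analyses, which weight errors by clinical risk rather than magnitude alone.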
Subthreshold muscle twitches dissociate oscillatory neural signatures of conflicts from errors.
Cohen, Michael X; van Gaal, Simon
2014-02-01
We investigated the neural systems underlying conflict detection and error monitoring during rapid online error correction. We combined data from four separate cognitive tasks and 64 subjects in which EEG and EMG (muscle activity from the thumb used to respond) were recorded. In typical neuroscience experiments, behavioral responses are classified as "error" or "correct"; however, closer inspection of our data revealed that correct responses were often accompanied by "partial errors" - a muscle twitch of the incorrect hand ("mixed correct trials," ~13% of the trials). We found that these muscle twitches dissociated conflicts from errors in time-frequency domain analyses of the EEG data. In particular, both mixed-correct trials and full error trials were associated with enhanced theta-band power (4-9 Hz) compared to correct trials. However, full errors were additionally associated with increased power and frontal-parietal synchrony in the delta band. Single-trial robust multiple regression analyses revealed a significant modulation of theta power as a function of partial error correction time, thus linking trial-to-trial fluctuations in power to conflict. Furthermore, single-trial correlation analyses revealed a qualitative dissociation between conflict and error processing, such that mixed correct trials were associated with positive theta-RT correlations whereas full error trials were associated with negative delta-RT correlations. These findings shed new light on the local and global network mechanisms of conflict monitoring and error detection, and their relationship to online action adjustment. © 2013.
Analyzing contentious relationships and outlier genes in phylogenomics.
Walker, Joseph F; Brown, Joseph W; Smith, Stephen A
2018-06-08
Recent studies have demonstrated that conflict is common among gene trees in phylogenomic studies, and that less than one percent of genes may ultimately drive species tree inference in supermatrix analyses. Here, we examined two datasets where supermatrix and coalescent-based species trees conflict. We identified two highly influential "outlier" genes in each dataset. When removed from each dataset, the inferred supermatrix trees matched the topologies obtained from coalescent analyses. We also demonstrate that, while the outlier genes in the vertebrate dataset have been shown in a previous study to be the result of errors in orthology detection, the outlier genes from a plant dataset did not exhibit any obvious systematic error and therefore may be the result of some biological process yet to be determined. While topological comparisons among a small set of alternate topologies can be helpful in discovering outlier genes, they can be limited in several ways, such as assuming all genes share the same topology. Coalescent species tree methods relax this assumption but do not explicitly facilitate the examination of specific edges. Coalescent methods often also assume that conflict is the result of incomplete lineage sorting (ILS). Here we explored a framework that allows for quickly examining alternative edges and support for large phylogenomic datasets that does not assume a single topology for all genes. For both datasets, these analyses provided detailed results confirming the support for coalescent-based topologies. This framework suggests that we can improve our understanding of the underlying signal in phylogenomic datasets by asking more targeted edge-based questions.
Strategies for Detecting and Correcting Errors in Accounting Problems.
ERIC Educational Resources Information Center
James, Marianne L.
2003-01-01
Reviews common errors in accounting tests that students commit resulting from deficiencies in fundamental prior knowledge, ineffective test taking, and inattention to detail and provides solutions to the problems. (JOW)
Wanner, Adrian A; Genoud, Christel; Friedrich, Rainer W
2016-11-08
Large-scale reconstructions of neuronal populations are critical for structural analyses of neuronal cell types and circuits. Dense reconstructions of neurons from image data require ultrastructural resolution throughout large volumes, which can be achieved by automated volumetric electron microscopy (EM) techniques. We used serial block face scanning EM (SBEM) and conductive sample embedding to acquire an image stack from an olfactory bulb (OB) of a zebrafish larva at a voxel resolution of 9.25×9.25×25 nm³. Skeletons of 1,022 neurons, 98% of all neurons in the OB, were reconstructed by manual tracing and efficient error correction procedures. An ergonomic software package, PyKNOSSOS, was created in Python for data browsing, neuron tracing, synapse annotation, and visualization. The reconstructions allow for detailed analyses of morphology, projections and subcellular features of different neuron types. The high density of reconstructions enables geometrical and topological analyses of the OB circuitry. Image data can be accessed and viewed through the neurodata web services (http://www.neurodata.io). Raw data and reconstructions can be visualized in PyKNOSSOS.
Wanner, Adrian A.; Genoud, Christel; Friedrich, Rainer W.
2016-01-01
Large-scale reconstructions of neuronal populations are critical for structural analyses of neuronal cell types and circuits. Dense reconstructions of neurons from image data require ultrastructural resolution throughout large volumes, which can be achieved by automated volumetric electron microscopy (EM) techniques. We used serial block face scanning EM (SBEM) and conductive sample embedding to acquire an image stack from an olfactory bulb (OB) of a zebrafish larva at a voxel resolution of 9.25×9.25×25 nm³. Skeletons of 1,022 neurons, 98% of all neurons in the OB, were reconstructed by manual tracing and efficient error correction procedures. An ergonomic software package, PyKNOSSOS, was created in Python for data browsing, neuron tracing, synapse annotation, and visualization. The reconstructions allow for detailed analyses of morphology, projections and subcellular features of different neuron types. The high density of reconstructions enables geometrical and topological analyses of the OB circuitry. Image data can be accessed and viewed through the neurodata web services (http://www.neurodata.io). Raw data and reconstructions can be visualized in PyKNOSSOS. PMID:27824337
COCAP: a carbon dioxide analyser for small unmanned aircraft systems
NASA Astrophysics Data System (ADS)
Kunz, Martin; Lavric, Jost V.; Gerbig, Christoph; Tans, Pieter; Neff, Don; Hummelgård, Christine; Martin, Hans; Rödjegård, Henrik; Wrenger, Burkhard; Heimann, Martin
2018-03-01
Unmanned aircraft systems (UASs) could provide a cost-effective way to close gaps in the observation of the carbon cycle, provided that small yet accurate analysers are available. We have developed a COmpact Carbon dioxide analyser for Airborne Platforms (COCAP). The accuracy of COCAP's carbon dioxide (CO2) measurements is ensured by calibration in an environmental chamber, regular calibration in the field and chemical drying of sampled air. In addition, the package contains a lightweight thermal stabilisation system that reduces the influence of ambient temperature changes on the CO2 sensor by 2 orders of magnitude. During validation of COCAP's CO2 measurements in simulated and real flights we found a measurement error of 1.2 µmol mol-1 or better, with no indication of bias. COCAP is a self-contained package that has proven well suited for operation on board small UASs. Besides carbon dioxide dry air mole fraction, it also measures air temperature, humidity and pressure. We describe the measurement system and our calibration strategy in detail to support others in tapping the potential of UASs for atmospheric trace gas measurements.
GPS Data Filtration Method for Drive Cycle Analysis Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, A.; Earleywine, M.
2013-02-01
When employing GPS data acquisition systems to capture vehicle drive-cycle information, a number of errors often appear in the raw data samples, such as sudden signal loss, extraneous or outlying data points, speed drifting, and signal white noise, all of which limit the quality of field data for use in downstream applications. Unaddressed, these errors significantly impact the reliability of source data and limit the effectiveness of traditional drive-cycle analysis approaches and vehicle simulation software. Without reliable speed and time information, the validity of metrics derived from drive cycles, such as acceleration, power, and distance, becomes questionable. This study explores some of the common sources of error present in raw onboard GPS data and presents a detailed filtering process designed to correct for these issues. Test data from both light-duty and medium/heavy-duty applications are examined to illustrate the effectiveness of the proposed filtration process across the range of vehicle vocations. Graphical comparisons of raw and filtered cycles are presented, and statistical analyses are performed to determine the effects of the proposed filtration process on raw data. Finally, the overall benefits of filtration of raw GPS data are evaluated, and potential areas for continued research are presented.
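One common filtration step of the kind the report describes is removing physically implausible speed spikes and smoothing residual noise. A simplified sketch (the acceleration cap and smoothing window are illustrative choices, not the report's actual parameters):

```python
def filter_speed_trace(speeds, max_accel=4.0, dt=1.0):
    """Clean a 1 Hz GPS speed trace (m/s): cap the implied acceleration
    to remove outlier spikes, then apply a 3-point moving average.
    A simplified sketch; real pipelines also handle signal loss and drift."""
    capped = [speeds[0]]
    for v in speeds[1:]:
        prev = capped[-1]
        # limit the speed change to what max_accel allows in one step
        step = max(-max_accel * dt, min(max_accel * dt, v - prev))
        capped.append(prev + step)
    # 3-point moving average; endpoints kept as-is
    smooth = capped[:1] + [
        (capped[i - 1] + capped[i] + capped[i + 1]) / 3
        for i in range(1, len(capped) - 1)
    ] + capped[-1:]
    return smooth

trace = [10.0, 10.5, 35.0, 11.0, 11.5]  # 35.0 is an outlier spike
print([round(v, 2) for v in filter_speed_trace(trace)])
```

The capped trace keeps derived metrics such as acceleration and power within physically meaningful bounds before they feed downstream drive-cycle analysis.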
The ASSET intercomparison of stratosphere and lower mesosphere humidity analyses
NASA Astrophysics Data System (ADS)
Thornton, H. E.; Jackson, D. R.; Bekki, S.; Bormann, N.; Errera, Q.; Geer, A. J.; Lahoz, W. A.; Rharmili, S.
2008-07-01
This paper presents results from the first detailed intercomparison of stratosphere-lower mesosphere water vapour analyses; it builds on earlier results from the "Assimilation of ENVISAT Data" (ASSET) project. With the availability of high resolution, good quality Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) water vapour profiles, the ability of four different atmospheric models to assimilate these data is tested. MIPAS data have been assimilated over September 2003 into the models of the European Centre for Medium Range Weather Forecasts (ECMWF), the Belgian Institute for Space and Aeronomy (BIRA-IASB), the French Service d'Aéronomie (SA-IPSL) and the UK Met Office. The resultant middle atmosphere humidity analyses are compared against independent satellite data from the Halogen Occultation Experiment (HALOE), the Polar Ozone and Aerosol Measurement (POAM III) and the Stratospheric Aerosol and Gas Experiment (SAGE II). The MIPAS water vapour profiles are generally well assimilated in the ECMWF, BIRA-IASB and SA systems, producing stratosphere-mesosphere water vapour fields where the main features compare favourably with the independent observations. However, the models are less capable of assimilating the MIPAS data where water vapour values are locally extreme or in regions of strong humidity gradients, such as the Southern Hemisphere lower stratosphere polar vortex. Differences in the analyses can be attributed to the choice of humidity control variable, how the background error covariance matrix is generated, the model resolution and its complexity, the degree of quality control of the observations and the use of observations near the model boundaries. Due to the poor performance of the Met Office analyses the results are not included in the intercomparison, but are discussed separately. 
The Met Office results highlight the pitfalls in humidity assimilation, and provide lessons that should be learnt by developers of stratospheric humidity assimilation systems. In particular, they underline the importance of the background error covariances in generating a realistic troposphere to mesosphere water vapour analysis.
Quadratic electro-optic effects and electro-absorption process in multilayer nanoshells
NASA Astrophysics Data System (ADS)
Bahari, Ali; Rahimi Moghadam, Fereshteh
2011-07-01
In this corrigendum, the authors would like to report typographic errors in the first name of the second author and in equation 7. The details of these errors can be found in the PDF. The authors would like to express their sincere apologies for these errors in the article.
Moments of inclination error distribution computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
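The closed-form approach can be illustrated by first-order error propagation from insertion errors into inclination error, using the spherical-trigonometry relation cos(i) = cos(latitude) × sin(azimuth). This sketch keeps only the latitude and azimuth terms; the actual program also carries altitude, velocity, flight-path-angle and longitude errors, and the numbers below are invented:

```python
import math

def inclination_error_sigma(lat_deg, az_deg, sigma_lat_deg, sigma_az_deg):
    """First-order propagation of 1-sigma insertion errors in latitude
    and flight azimuth into a 1-sigma orbital inclination error, from
    cos(i) = cos(lat) * sin(az). Inputs and output in degrees."""
    lat, az = math.radians(lat_deg), math.radians(az_deg)
    cos_i = math.cos(lat) * math.sin(az)
    sin_i = math.sqrt(1 - cos_i ** 2)
    di_dlat = math.sin(lat) * math.sin(az) / sin_i   # partial wrt latitude
    di_daz = -math.cos(lat) * math.cos(az) / sin_i   # partial wrt azimuth
    var = (di_dlat * math.radians(sigma_lat_deg)) ** 2 \
        + (di_daz * math.radians(sigma_az_deg)) ** 2
    return math.degrees(math.sqrt(var))

# Hypothetical due-east insertion from 28.5 deg latitude
print(round(inclination_error_sigma(28.5, 90.0, 0.01, 0.02), 4))  # → 0.01
```

For a due-east launch the azimuth partial vanishes, so the inclination error reduces to the latitude error, which matches the standard intuition for that geometry.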
Retrieval Failure Contributes to Gist-Based False Recognition
Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.
2011-01-01
People often falsely recognize items that are similar to previously encountered items. This robust memory error is referred to as gist-based false recognition. A widely held view is that this error occurs because the details fade rapidly from our memory. Contrary to this view, an initial experiment revealed that, following the same encoding conditions that produce high rates of gist-based false recognition, participants overwhelmingly chose the correct target rather than its related foil when given the option to do so. A second experiment showed that this result is due to increased access to stored details provided by reinstatement of the originally encoded photograph, rather than to increased attention to the details. Collectively, these results suggest that details needed for accurate recognition are, to a large extent, still stored in memory and that a critical factor determining whether false recognition will occur is whether these details can be accessed during retrieval. PMID:22125357
Buhay, W.M.; Simpson, S.; Thorleifson, H.; Lewis, M.; King, J.; Telka, A.; Wilkinson, Philip M.; Babb, J.; Timsic, S.; Bailey, D.
2009-01-01
A short sediment core (162 cm), covering the period AD 920-1999, was sampled from the south basin of Lake Winnipeg for a suite of multi-proxy analyses leading towards a detailed characterisation of the recent millennial lake environment and hydroclimate of southern Manitoba, Canada. Information on the frequency and duration of major dry periods in southern Manitoba, in light of the changes that are likely to occur as a result of an increasingly warming atmosphere, is of specific interest in this study. Intervals of relatively enriched lake sediment cellulose oxygen isotope values (δ18Ocellulose) were found to occur from AD 1180 to 1230 (error range: AD 1104-1231 to 1160-1280), 1610-1640 (error range: AD 1571-1634 to 1603-1662), 1670-1720 (error range: AD 1643-1697 to 1692-1738) and 1750-1780 (error range: AD 1724-1766 to 1756-1794). Regional water balance, inferred from calculated Lake Winnipeg water oxygen isotope values (δ18Oinf-lw), suggests that the ratio of lake evaporation to catchment input may have been 25-40% higher during these isotopically distinct periods. Associated with the enriched δ18Ocellulose intervals are depleted carbon isotope values of the more abundantly preserved sediment organic matter (δ13COM). These suggest reduced microbial oxidation of terrestrially derived organic matter and/or subdued lake productivity during periods of minimised input of nutrients from the catchment area. With reference to other corroborating evidence, it is suggested that the AD 1180-1230, 1610-1640, 1670-1720 and 1750-1780 intervals represent four distinctly drier periods (droughts) in southern Manitoba, Canada. Additionally, dry periods of lower magnitude and duration may also have occurred from 1320 to 1340 (error range: AD 1257-1363), 1530-1540 (error range: AD 1490-1565 to 1498-1572) and 1570-1580 (error range: AD 1531-1599 to 1539-1606). © 2009 John Wiley & Sons, Ltd.
Extended and refined multi sensor reanalysis of total ozone for the period 1970-2012
NASA Astrophysics Data System (ADS)
van der A, R. J.; Allaart, M. A. F.; Eskes, H. J.
2015-07-01
The ozone multi-sensor reanalysis (MSR) is a multi-decadal ozone column data record constructed using all available ozone column satellite data sets, surface Brewer and Dobson observations and a data assimilation technique with detailed error modelling. The result is a high-resolution time series of 6-hourly global ozone column fields and forecast error fields that may be used for ozone trend analyses as well as detailed case studies. The ozone MSR is produced in two steps. First, the latest reprocessed versions of all available ozone column satellite data sets are collected and then are corrected for biases as a function of solar zenith angle (SZA), viewing zenith angle (VZA), time (trend), and stratospheric temperature using surface observations of the ozone column from Brewer and Dobson spectrophotometers from the World Ozone and Ultraviolet Radiation Data Centre (WOUDC). Subsequently the de-biased satellite observations are assimilated within the ozone chemistry and data assimilation model TMDAM. The MSR2 (MSR version 2) reanalysis upgrade described in this paper consists of an ozone record for the 43-year period 1970-2012. The chemistry transport model and data assimilation system have been adapted to improve the resolution, error modelling and processing speed. Backscatter ultraviolet (BUV) satellite observations have been included for the period 1970-1977. The total record is extended by 13 years compared to the first version of the ozone multi sensor reanalysis, the MSR1. The latest total ozone retrievals of 15 satellite instruments are used: BUV-Nimbus4, TOMS-Nimbus7, TOMS-EP, SBUV-7, -9, -11, -14, -16, -17, -18, -19, GOME, SCIAMACHY, OMI and GOME-2. The resolution of the model runs, assimilation and output is increased from 2° × 3° to 1° × 1°. The analysis is driven by 3-hourly meteorology from the ERA-Interim reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF) starting from 1979, and ERA-40 before that date. 
The chemistry parameterization has been updated. The performance of the MSR2 analysis is studied with the help of observation-minus-forecast (OmF) departures from the data assimilation, by comparisons with the individual station observations and with ozone sondes. The OmF statistics show that the mean bias of the MSR2 analyses is less than 1 % with respect to de-biased satellite observations after 1979.
Using snowball sampling method with nurses to understand medication administration errors.
Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In
2009-02-01
We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. 
Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operational procedures for known high-alert situations.
Koutstaal, Wilma
2003-03-01
Investigations of memory deficits in older individuals have concentrated on their increased likelihood of forgetting events or details of events that were actually encountered (errors of omission). However, mounting evidence demonstrates that normal cognitive aging also is associated with an increased propensity for errors of commission--shown in false alarms or false recognition. The present study examined the origins of this age difference. Older and younger adults each performed three types of memory tasks in which details of encountered items might influence performance. Although older adults showed greater false recognition of related lures on a standard (identical) old/new episodic recognition task, older and younger adults showed parallel effects of detail on repetition priming and meaning-based episodic recognition (decreased priming and decreased meaning-based recognition for different relative to same exemplars). The results suggest that the older adults encoded details but used them less effectively than the younger adults in the recognition context requiring their deliberate, controlled use.
ERIC Educational Resources Information Center
Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.
2010-01-01
This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…
NASA Astrophysics Data System (ADS)
Flanagan, S.; Schachter, J. M.; Schissel, D. P.
2001-10-01
A Data Analysis Monitoring (DAM) system has been developed to monitor between-pulse physics analysis at the DIII-D National Fusion Facility. The system allows for rapid detection of discrepancies in diagnostic measurements or in the results from physics analysis codes. This enables problems to be detected, and possibly fixed, between pulses rather than after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the stored energy obtained by integrating the measured kinetic profiles to that calculated from magnetic measurements by EFIT. This new system also tracks the progress of MDSplus dispatching of software for data analysis and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, CLIPS to implement the expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse. A demonstration of this system, including a simulated DIII-D pulse cycle, will be presented.
Soil sampling strategies for site assessments in petroleum-contaminated areas.
Kim, Geonha; Chowdhury, Saikat; Lin, Yen-Min; Lu, Chih-Jen
2017-04-01
Environmental site assessments are frequently executed for monitoring and remediation performance evaluation purposes, especially in total petroleum hydrocarbon (TPH)-contaminated areas, such as gas stations. As a key issue, reproducibility of the assessment results must be ensured, especially if attempts are made to compare results between different institutions. Although it is widely known that uncertainties associated with soil sampling are much higher than those with chemical analyses, field guides or protocols to deal with these uncertainties are not stipulated in detail in the relevant regulations, causing serious errors and distortion of the reliability of environmental site assessments. In this research, uncertainties associated with soil sampling and sample reduction for chemical analysis were quantified using laboratory-scale experiments and the theory of sampling. The research results showed that the TPH mass assessed by sampling tends to be overestimated and sampling errors are high, especially for the low range of TPH concentrations. Homogenization of soil was found to be an efficient method to suppress uncertainty, but high-resolution sampling could be an essential way to minimize this.
Analysis of DSN software anomalies
NASA Technical Reports Server (NTRS)
Galorath, D. D.; Hecht, H.; Hecht, M.; Reifer, D. J.
1981-01-01
A categorized database of software errors discovered during the various stages of development and operational use of the Deep Space Network DSN/Mark 3 System was developed. A study team identified several existing error classification schemes (taxonomies), prepared a detailed annotated bibliography of the error taxonomy literature, and produced a new classification scheme that was tuned to the DSN anomaly reporting system and encapsulated the work of others. Based upon the DSN/RCI error taxonomy, error data on approximately 1000 reported DSN/Mark 3 anomalies were analyzed, interpreted and classified. Finally, the error data were summarized and histograms produced, highlighting key tendencies.
NASA Astrophysics Data System (ADS)
Yu, H.; Russell, A. G.; Mulholland, J. A.
2017-12-01
In air pollution epidemiologic studies with spatially resolved air pollution data, exposures are often estimated using the home locations of individual subjects. Due primarily to lack of data or logistic difficulties, the spatiotemporal mobility of subjects is mostly neglected, which is expected to result in exposure misclassification errors. In this study, we applied detailed cell phone location data to characterize potential exposure misclassification errors associated with home-based exposure estimation of air pollution. The cell phone data sample consists of 9,886 unique simcard IDs collected on one mid-week day in October 2013 from Shenzhen, China. The Community Multiscale Air Quality model was used to simulate hourly ambient concentrations of six chosen pollutants at 3 km spatial resolution; these were then fused with observational data to correct for potential modeling biases and errors. Air pollution exposure for each simcard ID was estimated by matching hourly pollutant concentrations with the detailed location data for the corresponding IDs. Finally, the results were compared with exposure estimates obtained using the home location method to assess potential exposure misclassification errors. Our results show that the home-based method is likely to have substantial exposure misclassification errors, over-estimating exposures for subjects with higher exposure levels and under-estimating exposures for those with lower exposure levels. This has the potential to introduce a bias toward the null in health effect estimates. Our findings suggest that the use of cell phone data has the potential to improve the characterization of exposure and exposure misclassification in air pollution epidemiology studies.
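The contrast between home-based and mobility-informed exposure estimates can be sketched as a time-weighted average over the grid cells a subject visits, versus the home cell alone. All cell names and concentrations below are illustrative:

```python
def time_weighted_exposure(visits, conc):
    """Time-weighted average exposure from (grid_cell, hours) visit
    records and a dict of mean concentrations per grid cell. The
    home-based estimate, by contrast, uses only the home cell."""
    total_h = sum(h for _, h in visits)
    return sum(conc[c] * h for c, h in visits) / total_h

conc = {"home": 30.0, "work": 55.0, "commute": 70.0}  # e.g. ug/m3
visits = [("home", 14), ("work", 8), ("commute", 2)]  # 24-hour day

home_based = conc["home"]
mobile = time_weighted_exposure(visits, conc)
print(home_based, round(mobile, 2))  # prints 30.0 41.67
```

In this toy case the home-based estimate understates the mobility-weighted exposure, mirroring the paper's finding that ignoring mobility misclassifies exposure in a level-dependent way.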
Error Generation in CATS-Based Agents
NASA Technical Reports Server (NTRS)
Callantine, Todd
2003-01-01
This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.
Grün, Rainer; Spooner, Nigel; Magee, John; Thorne, Alan; Simpson, John; Yan, Ge; Mortimer, Graham
2011-05-01
We present a detailed description of the geological setting of the burial site of the WLH 50 human remains along with attempts to constrain the age of this important human fossil. Freshwater shells collected at the surface of Unit 3, which is most closely associated with the human remains, and a carbonate sample that encrusted the human bone were analysed. Gamma spectrometry was carried out on the WLH 50 calvaria and TIMS U-series analysis on a small post-cranial bone fragment. OSL dating was applied to a sample from Unit 3 at a level from which the WLH 50 remains may have eroded, as well as from the underlying sediments. Considering the geochemistry of the samples analysed, as well as the possibility of reworking or burial from younger layers, the age of the WLH 50 remains lies between 12.2 ± 1.8 and 32.8 ± 4.6 ka (2-σ errors). Copyright © 2010 Elsevier Ltd. All rights reserved.
Characterisation of the PXIE Allison-type emittance scanner
D'Arcy, R.; Alvarez, M.; Gaynier, J.; ...
2016-01-26
An Allison-type emittance scanner has been designed for PXIE at FNAL with the goal of providing fast and accurate phase space reconstruction. The device has been modified from previous LBNL/SNS designs to operate in both pulsed and DC modes with the addition of water-cooled front slits. Extensive calibration techniques and error analysis allowed confinement of uncertainty to the <5% level (with known caveats). With a 16-bit, 1 MHz electronics scheme the device is able to analyse a pulse with a resolution of 1 μs, allowing for analysis of neutralisation effects. As a result, this paper describes a detailed breakdown of the R&D, as well as post-run analysis techniques.
Field errors in hybrid insertion devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlueter, R.D.
1995-02-01
Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, unlike for other, simpler instruments, error compensation is not standardized. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
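As a rough illustration of the vectorial-composition idea, the sketch below combines per-axis length (scale) errors into a corrected length and a first-order length error along the measurement direction. The model and the parameter names (`kx`, `ky`, `kz`) are hypothetical simplifications for illustration, not the paper's actual formulation.

```python
import math

def corrected_length(dx, dy, dz, kx, ky, kz):
    """Correct a probed point-to-point length using per-axis scale errors.

    dx, dy, dz : components of the measured displacement (mm)
    kx, ky, kz : dimensionless per-axis length (scale) errors from
                 calibration, e.g. 1e-5 means the axis reads 10 um/m long.
    """
    cx, cy, cz = dx * (1 - kx), dy * (1 - ky), dz * (1 - kz)
    return math.sqrt(cx**2 + cy**2 + cz**2)

def length_error(dx, dy, dz, kx, ky, kz):
    """First-order length error from vectorial composition of the per-axis
    errors, projected on the unit measurement direction."""
    L = math.sqrt(dx**2 + dy**2 + dz**2)
    # each axis contributes k_i * d_i along its own direction; projecting
    # onto the measurement direction gives sum(k_i * d_i^2) / L
    return (kx * dx**2 + ky * dy**2 + kz * dz**2) / L
```

For a displacement purely along X, the composed error reduces to `kx * dx`, and the correction removes exactly that amount to first order.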
Solomon, Keith R; Stephenson, Gladys L
2017-01-01
A quantitative weight of evidence (QWoE) methodology was developed and used to assess many higher-tier studies on the effects of three neonicotinoid insecticides: clothianidin (CTD), imidacloprid (IMI), and thiamethoxam (TMX) on honeybees. A general problem formulation, a conceptual model for exposures of honeybees, and an analysis plan were developed. A QWoE methodology was used to characterize the quality of the available studies from the literature and unpublished reports of studies conducted by or for the registrants. These higher-tier studies focused on the exposures of honeybees to neonicotinoids via several matrices as measured in the field as well as the effects in experimentally controlled field studies. Reports provided by Bayer Crop Protection and Syngenta Crop Protection and papers from the open literature were assessed in detail, using predefined criteria for quality and relevance to develop scores (on a relative scale of 0-4) to separate the higher-quality from lower-quality studies and the relevant from less-relevant results. The scores from the QWoEs were summarized graphically to illustrate the overall quality of the studies and their relevance. Through means and standard errors, this method provided graphical and numerical indications of the quality and relevance of the responses observed in the studies and of the uncertainty associated with these two metrics. All analyses were conducted transparently and the derivations of the scores were fully documented. The results of these analyses are presented in three companion papers and the QWoE analyses for each insecticide are presented in detailed supplemental information (SI) in these papers.
Supersymmetry without prejudice at the LHC
NASA Astrophysics Data System (ADS)
Conley, John A.; Gainer, James S.; Hewett, JoAnne L.; Le, My Phuong; Rizzo, Thomas G.
2011-07-01
The discovery and exploration of Supersymmetry in a model-independent fashion will be a daunting task due to the large number of soft-breaking parameters in the MSSM. In this paper, we explore the capability of the ATLAS detector at the LHC (√s = 14 TeV, 1 fb⁻¹) to find SUSY within the 19-dimensional pMSSM subspace of the MSSM using their standard transverse missing energy and long-lived particle searches that were essentially designed for mSUGRA. To this end, we employ a set of ~71k previously generated model points in the 19-dimensional parameter space that satisfy all of the existing experimental and theoretical constraints. Employing ATLAS-generated SM backgrounds and following their approach in each of 11 missing energy analyses as closely as possible, we explore all of these 71k model points for a possible SUSY signal. To test our analysis procedure, we first verify that we faithfully reproduce the published ATLAS results for the signal distributions for their benchmark mSUGRA model points. We then show that, requiring all sparticle masses to lie below 1(3) TeV, almost all (two-thirds) of the pMSSM model points are discovered with a significance S>5 in at least one of these 11 analyses assuming a 50% systematic error on the SM background. If this systematic error can be reduced to only 20% then this parameter space coverage is increased. These results indicate that the ATLAS SUSY search strategy is robust under a broad class of Supersymmetric models. We then explore in detail the properties of the kinematically accessible model points which remain unobservable by these search analyses in order to ascertain problematic cases which may arise in general SUSY searches.
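The quoted reach depends on how the background systematic enters the significance. A minimal sketch, assuming the common counting-experiment estimator S = s / sqrt(b + (f·b)²), which is a convention adopted here for illustration and not necessarily the exact ATLAS prescription:

```python
import math

def significance(s, b, sys_frac):
    """Approximate discovery significance for a counting experiment with a
    fractional systematic uncertainty sys_frac on the expected background b.
    Uses S = s / sqrt(b + (sys_frac * b)**2), a common approximation
    (assumed here, not necessarily the ATLAS prescription)."""
    return s / math.sqrt(b + (sys_frac * b) ** 2)
```

With 30 expected signal events over a background of 25, a 50% background systematic gives S ≈ 2.2, while reducing it to 20% gives S ≈ 4.2, illustrating why tighter background control enlarges the discoverable parameter space.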
Small, J R
1993-01-01
This paper is a study of the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even when the assumptions underlying the fitted function did not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Berwid, Olga G.; Halperin, Jeffrey M.; Johnson, Ray E.; Marks, David J.
2013-01-01
Background Attention-Deficit/Hyperactivity Disorder has been associated with deficits in self-regulatory cognitive processes, some of which are thought to lie at the heart of the disorder. Slowing of reaction times (RTs) for correct responses following errors made during decision tasks has been interpreted as an indication of intact self-regulatory functioning and has been shown to be attenuated in school-aged children with ADHD. This study attempted to examine whether ADHD symptoms are associated with an early-emerging deficit in post-error slowing. Method A computerized two-choice RT task was administered to an ethnically diverse sample of preschool-aged children classified as either ‘control’ (n = 120) or ‘hyperactive/inattentive’ (HI; n = 148) using parent- and teacher-rated ADHD symptoms. Analyses were conducted to determine whether HI preschoolers exhibit a deficit in this self-regulatory ability. Results HI children exhibited reduced post-error slowing relative to controls on the trials selected for analysis. Supplementary analyses indicated that this may have been due to a reduced proportion of trials following errors on which HI children slowed rather than to a reduction in the absolute magnitude of slowing on all trials following errors. Conclusions High levels of ADHD symptoms in preschoolers may be associated with a deficit in error processing as indicated by post-error slowing. The results of supplementary analyses suggest that this deficit is perhaps more a result of failures to perceive errors than of difficulties with executive control. PMID:23387525
Devil in the details? Developmental dyslexia and visual long-term memory for details.
Huestegge, Lynn; Rohrßen, Julia; van Ermingen-Marbach, Muna; Pape-Neumann, Julia; Heim, Stefan
2014-01-01
Cognitive theories on causes of developmental dyslexia can be divided into language-specific and general accounts. While the former assume that words are special in that associated processing problems are rooted in language-related cognition (e.g., phonology) deficits, the latter propose that dyslexia is rather rooted in a general impairment of cognitive (e.g., visual and/or auditory) processing streams. In the present study, we examined to what extent dyslexia (typically characterized by poor orthographic representations) may be associated with a general deficit in visual long-term memory (LTM) for details. We compared object- and detail-related visual LTM performance (and phonological skills) between dyslexic primary school children and IQ-, age-, and gender-matched controls. The results revealed that while the overall amount of LTM errors was comparable between groups, dyslexic children exhibited a greater portion of detail-related errors. The results suggest that not only phonological, but also general visual resolution deficits in LTM may play an important role in developmental dyslexia.
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.
Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J
2015-02-01
The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. 
Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association between the underlying error-free measurement process and the hazard for survival is of scientific interest. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
2010-03-15
The classification of errors is based on Reason's "Swiss cheese" model of human error causation (1990). Figure 1 describes how an accident is likely to occur when all of the errors, or "holes," align. A detailed description of HFACS can be found in Wiegmann and Shappell (2003).
Quantification of the Uncertainties for the Ares I A106 Ascent Aerodynamic Database
NASA Technical Reports Server (NTRS)
Houlden, Heather P.; Favaregh, Amber L.
2010-01-01
A detailed description of the quantification of uncertainties for the Ares I ascent aero 6-DOF wind tunnel database is presented. The database was constructed from wind tunnel test data and CFD results. The experimental data came from tests conducted in the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The major sources of error for this database were: experimental error (repeatability), database modeling errors, and database interpolation errors.
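If the listed error sources are treated as independent, a common convention is to combine their 1-sigma values in root-sum-square fashion. This is an assumption for illustration; the report's exact combination method may differ.

```python
import math

def combined_uncertainty(*sigmas):
    """Root-sum-square combination of independent 1-sigma error sources,
    a common convention for wind tunnel database uncertainties (assumed
    here; the report's exact method may differ)."""
    return math.sqrt(sum(s * s for s in sigmas))

# e.g. repeatability, database modeling error, interpolation error
total = combined_uncertainty(0.010, 0.020, 0.005)
```

Because the contributions add in quadrature, the largest source (here the modeling error) dominates the combined value.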
Implementation errors in the GingerALE Software: Description and recommendations.
Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T
2017-01-01
Neuroscience imaging is a burgeoning, highly sophisticated field the growth of which has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software. Hum Brain Mapp 38:7-11, 2017. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Using cell phone location to assess misclassification errors in air pollution exposure estimation.
Yu, Haofei; Russell, Armistead; Mulholland, James; Huang, Zhijiong
2018-02-01
Air pollution epidemiologic and health impact studies often rely on home addresses to estimate individual subjects' pollution exposure. In this study, we used detailed cell phone location data, the call detail record (CDR), to account for the impact of spatiotemporal subject mobility on estimates of ambient air pollutant exposure. This approach was applied to a sample with 9886 unique simcard IDs in Shenzhen, China, on one mid-week day in October 2013. Hourly ambient concentrations of six chosen pollutants were simulated by the Community Multi-scale Air Quality model fused with observational data, and matched with detailed location data for these IDs. The results were compared with exposure estimates using home addresses to assess potential exposure misclassification errors. We found that misclassification errors are likely to be substantial when home location alone is used. The CDR-based approach indicates that the home-based approach tends to over-estimate exposures for subjects with higher exposure levels and under-estimate exposures for those with lower exposure levels. Our results show that the cell phone location based approach can be used to assess exposure misclassification error and has the potential for improving exposure estimates in air pollution epidemiology studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
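A toy sketch of the comparison between home-based and mobility-based exposure estimates, assuming a hypothetical hourly location trace and a concentration lookup keyed by (hour, grid cell); real CDR data and model output are far richer than this:

```python
def mobility_exposure(trace, conc):
    """Mean exposure over an hourly location trace for one simcard ID.

    trace : list of (hour, grid_cell) pairs (hypothetical structure)
    conc  : dict mapping (hour, grid_cell) -> pollutant concentration
    """
    return sum(conc[(h, cell)] for h, cell in trace) / len(trace)

def home_exposure(home_cell, hours, conc):
    """Mean exposure assuming the subject never leaves the home grid cell."""
    return sum(conc[(h, home_cell)] for h in hours) / len(hours)

# Toy data: a commuter spends both hours in a cleaner cell than home
conc = {(8, "home"): 60.0, (8, "work"): 30.0,
        (9, "home"): 70.0, (9, "work"): 35.0}
trace = [(8, "work"), (9, "work")]
home_est = home_exposure("home", [8, 9], conc)    # ignores mobility
mobile_est = mobility_exposure(trace, conc)       # follows the trace
```

In this toy case the home-based estimate exceeds the mobility-based one, matching the paper's finding that home-only estimation tends to over-estimate exposure for some subjects.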
Attitude Determination Error Analysis System (ADEAS) mathematical specifications document
NASA Technical Reports Server (NTRS)
Nicholson, Mark; Markley, F.; Seidewitz, E.
1988-01-01
The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.
A median filter approach for correcting errors in a vector field
NASA Technical Reports Server (NTRS)
Schultz, H.
1985-01-01
Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
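A minimal sketch of the median-filter screening idea described above, not the NSCAT algorithm itself: each vector component is compared with its local median, and vectors with anomalously large residuals are flagged and replaced by the local median vector.

```python
import numpy as np

def local_median(a, window=3):
    """Median of each element's window x window neighborhood (edge-padded)."""
    p = window // 2
    ap = np.pad(a, p, mode="edge")
    shifts = [ap[i:i + a.shape[0], j:j + a.shape[1]]
              for i in range(window) for j in range(window)]
    return np.median(np.stack(shifts), axis=0)

def correct_vector_field(u, v, threshold=2.0, window=3):
    """Flag vectors whose residual against the local median field is large,
    and replace them by the local median vector (a sketch of the general
    approach, not the NSCAT error detection and replacement algorithm)."""
    u_med, v_med = local_median(u, window), local_median(v, window)
    resid = np.hypot(u - u_med, v - v_med)
    scale = np.median(resid) + 1e-9   # robust scale for the residuals
    bad = resid > threshold * scale
    return np.where(bad, u_med, u), np.where(bad, v_med, v), bad
```

A single spurious vector in an otherwise smooth field is detected and smoothed out, while the surrounding vectors pass through unchanged.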
2014-01-01
Background The Health Information Technology for Economic and Clinical Health (HITECH) Act subsidizes implementation by hospitals of electronic health records with computerized provider order entry (CPOE), which may reduce patient injuries caused by medication errors (preventable adverse drug events, pADEs). Effects on pADEs have not been rigorously quantified, and effects on medication errors have been variable. The objectives of this analysis were to assess the effectiveness of CPOE at reducing pADEs in hospital-related settings, and examine reasons for heterogeneous effects on medication errors. Methods Articles were identified using MEDLINE, Cochrane Library, Econlit, web-based databases, and bibliographies of previous systematic reviews (September 2013). Eligible studies compared CPOE with paper-order entry in acute care hospitals, and examined diverse pADEs or medication errors. Studies on children or with limited event-detection methods were excluded. Two investigators extracted data on events and factors potentially associated with effectiveness. We used random effects models to pool data. Results Sixteen studies addressing medication errors met pooling criteria; six also addressed pADEs. Thirteen studies used pre-post designs. Compared with paper-order entry, CPOE was associated with half as many pADEs (pooled risk ratio (RR) = 0.47, 95% CI 0.31 to 0.71) and medication errors (RR = 0.46, 95% CI 0.35 to 0.60). Regarding reasons for heterogeneous effects on medication errors, five intervention factors and two contextual factors were sufficiently reported to support subgroup analyses or meta-regression. Differences between commercial versus homegrown systems, presence and sophistication of clinical decision support, hospital-wide versus limited implementation, and US versus non-US studies were not significant, nor was timing of publication. Higher baseline rates of medication errors predicted greater reductions (P < 0.001). 
Other context and implementation variables were seldom reported. Conclusions In hospital-related settings, implementing CPOE is associated with a greater than 50% decline in pADEs, although the studies used weak designs. Decreases in medication errors are similar and robust to variations in important aspects of intervention design and context. This suggests that CPOE implementation, as subsidized under the HITECH Act, may benefit public health. More detailed reporting of the context and process of implementation could shed light on factors associated with greater effectiveness. PMID:24894078
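For illustration, the pooling of risk ratios under a random effects model can be sketched with the textbook DerSimonian-Laird estimator; this is the standard method for such analyses, not necessarily the authors' exact implementation.

```python
import math

def pool_risk_ratios(log_rr, se):
    """DerSimonian-Laird random-effects pooling of per-study log risk ratios.

    log_rr : list of per-study log risk ratios
    se     : list of their standard errors
    Returns the pooled RR and its 95% confidence interval.
    """
    w = [1.0 / s**2 for s in se]                    # fixed-effect weights
    sw = sum(w)
    mean_fe = sum(wi * y for wi, y in zip(w, log_rr)) / sw
    q = sum(wi * (y - mean_fe) ** 2 for wi, y in zip(w, log_rr))
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)    # between-study variance
    w_re = [1.0 / (s**2 + tau2) for s in se]
    mean_re = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return (math.exp(mean_re),
            (math.exp(mean_re - 1.96 * se_re),
             math.exp(mean_re + 1.96 * se_re)))
```

When the studies agree exactly, the between-study variance collapses to zero and the pooled estimate reduces to the fixed-effect (inverse-variance) result.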
Berndl, K; von Cranach, M; Grüsser, O J
1986-01-01
The perception and recognition of faces, mimic expression and gestures were investigated in normal subjects and schizophrenic patients by means of a movie test described in a previous report (Berndl et al. 1986). The error scores were compared with results from a semi-quantitative evaluation of psychopathological symptoms and with some data from the case histories. The overall error scores found in the three groups of schizophrenic patients (paranoic, hebephrenic, schizo-affective) were significantly increased (7-fold) over those of normals. No significant difference in the distribution of the error scores in the three different patient groups was found. In 10 different sub-tests following the movie the deficiencies found in the schizophrenic patients were analysed in detail. The error score for the averbal test was on average higher in paranoic patients than in the two other groups of patients, while the opposite was true for the error scores found in the verbal tests. Age and sex had some impact on the test results. In normals, female subjects were somewhat better than male. In schizophrenic patients the reverse was true. Thus female patients were more affected by the disease than male patients with respect to the task performance. The correlation between duration of the disease and error score was small; less than 10% of the error scores could be attributed to factors related to the duration of illness. Evaluation of psychopathological symptoms indicated that the stronger the schizophrenic defect, the higher the error score, but again this relationship was responsible for not more than 10% of the errors. The estimated degree of acute psychosis and overall sum of psychopathological abnormalities as scored in a semi-quantitative exploration did not correlate with the error score, but with each other. Similarly, treatment with psychopharmaceuticals, previous misuse of drugs or of alcohol had practically no effect on the outcome of the test data. 
The analysis of performance and test data of schizophrenic patients indicated that our findings are most likely not due to a "non-specific" impairment of cognitive function in schizophrenia, but point to a fairly selective defect in elementary cognitive visual functions necessary for averbal social communication. Some possible explanations of the data are discussed in relation to neuropsychological and neurophysiological findings on "face-specific" cortical areas located in the primate temporal lobe.
Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy
Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.
1998-01-01
We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
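A minimal sketch of the grid-search form of such a Bayesian location scheme: each candidate source location is scored by a Gaussian likelihood of the observed slowness vector. The forward model `predict_slowness` is a hypothetical stand-in for ray tracing through the layered velocity model.

```python
import numpy as np

def grid_posterior(candidates, predict_slowness, observed, cov):
    """Normalized posterior over a grid of candidate source locations,
    assuming Gaussian errors on the observed slowness vector.

    candidates       : iterable of candidate source coordinates
    predict_slowness : forward model mapping coordinates to a slowness
                       vector (hypothetical stand-in for ray tracing)
    observed         : observed slowness vector from array analysis
    cov              : covariance matrix of the expected slowness errors
    """
    inv = np.linalg.inv(cov)
    loglik = []
    for xyz in candidates:
        r = predict_slowness(xyz) - observed   # expected minus observed
        loglik.append(-0.5 * float(r @ inv @ r))
    loglik = np.array(loglik)
    post = np.exp(loglik - loglik.max())       # stabilized exponentiation
    return post / post.sum()
```

The candidate that minimizes the slowness misfit receives essentially all of the posterior mass when the errors are small, which is the grid-search analogue of minimizing the error between expected and observed slowness vectors.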
Visual symptoms associated with refractive errors among Thangka artists of Kathmandu valley.
Dhungel, Deepa; Shrestha, Gauri Shankar
2017-12-21
Prolonged near work, especially among people with uncorrected refractive error, is considered a potential source of visual symptoms. The present study aims to determine the visual symptoms among Thangka artists and their association with refractive errors. In a descriptive cross-sectional study, 242 (46.1%) of 525 Thangka artists examined, aged between 16 and 39 years, were enrolled from six Thangka painting schools: 112 participants with significant refractive errors and 130 fully emmetropic participants. The visual symptoms were assessed using a structured questionnaire consisting of nine items, each scored on a 0 to 6 scale. The eye examination included detailed anterior and posterior segment examination, objective and subjective refraction, and assessment of heterophoria, vergence and accommodation. Symptoms were presented as percentages and medians. Variation in the distribution of participants and symptoms was analysed using the Kruskal-Wallis test, and correlation with the Pearson correlation coefficient. A significance level of 0.05 was applied for a 95% confidence interval. The majority of participants (65.1%) in the refractive error group (REG) were above the age of 30 years, with a male predominance (61.6%), compared to the normal cohort group (NCG), in which the majority (72.3%) were below 30 years of age and female (51.5%). Overall, visual symptoms are common among Thangka artists. However, blurred vision (p = 0.003) and dry eye (p = 0.004) are more frequent in the REG than the NCG. Females have slightly more symptoms than males. Most of the symptoms, such as sore/aching eyes (p = 0.003), dryness (p = 0.005) and blurred vision (p = 0.02), are significantly associated with astigmatism. Thangka artists present with a significant proportion of refractive error and visual symptoms, especially among females.
The most commonly reported symptoms are blurred vision, dry eye and watering of the eye. The visual symptoms are more correlated with astigmatism.
NASA Astrophysics Data System (ADS)
Lamouroux, Julien; Testut, Charles-Emmanuel; Lellouche, Jean-Michel; Perruche, Coralie; Paul, Julien
2017-04-01
The operational production of data-assimilated biogeochemical state of the ocean is one of the challenging core projects of the Copernicus Marine Environment Monitoring Service. In that framework - and with the April 2018 CMEMS V4 release as a target - Mercator Ocean is in charge of improving the realism of its global ¼° BIOMER coupled physical-biogeochemical (NEMO/PISCES) simulations, analyses and re-analyses, and to develop an effective capacity to routinely estimate the biogeochemical state of the ocean, through the implementation of biogeochemical data assimilation. Primary objectives are to enhance the time representation of the seasonal cycle in the real time and reanalysis systems, and to provide a better control of the production in the equatorial regions. The assimilation of BGC data will rely on a simplified version of the SEEK filter, where the error statistics do not evolve with the model dynamics. The associated forecast error covariances are based on the statistics of a collection of 3D ocean state anomalies. The anomalies are computed from a multi-year numerical experiment (free run without assimilation) with respect to a running mean in order to estimate the 7-day scale error on the ocean state at a given period of the year. These forecast error covariances rely thus on a fixed-basis seasonally variable ensemble of anomalies. This methodology, which is currently implemented in the "blue" component of the CMEMS operational forecast system, is now under adaptation to be applied to the biogeochemical part of the operational system. Regarding observations - and as a first step - the system shall rely on the CMEMS GlobColour Global Ocean surface chlorophyll concentration products, delivered in NRT. The objective of this poster is to provide a detailed overview of the implementation of the aforementioned data assimilation methodology in the CMEMS BIOMER forecasting system. 
Focus shall be put on (1) the assessment of the capabilities of this data assimilation methodology to provide satisfying statistics of the model variability errors (through space-time analysis of dedicated representers of satellite surface Chla observations), (2) the dedicated features of the data assimilation configuration that have been implemented so far (e.g. log-transformation of the analysis state, multivariate Chlorophyll-Nutrient control vector, etc.) and (3) the assessment of the performances of this future operational data assimilation configuration.
Geometric Accuracy Analysis of WorldDEM in Relation to AW3D30, SRTM and ASTER GDEM2
NASA Astrophysics Data System (ADS)
Bayburt, S.; Kurtak, A. B.; Büyüksalih, G.; Jacobsen, K.
2017-05-01
In a project area close to Istanbul, the quality of WorldDEM, AW3D30, the SRTM DSM and ASTER GDEM2 has been analyzed in relation to a reference aerial LiDAR DEM and to each other. The random and systematic height errors have been separated. The absolute offsets of all height models in X, Y and Z are within expectations. These shifts were taken into account in advance to obtain a satisfactory estimate of the random error component. All height models are influenced by tilts of different sizes. In addition, systematic deformations can be seen that do not influence the standard deviation much. The WorldDEM delivery includes a height error map, based on the interferometric phase errors, together with the number and location of coverages from different orbits. A dependency of the height accuracy on the height error map information and on the number of coverages can be seen, but it is smaller than expected. WorldDEM is more accurate than the other investigated height models, and with its 10 m point spacing it includes more morphologic detail, visible in contour lines. The morphologic details are close to those of the LiDAR digital surface model (DSM). As usual, a dependency of the accuracy on terrain slope can be seen. In forest areas, the canopy definition of the InSAR X- and C-band height models, as well as of the height models based on optical satellite images, is not the same as the height definition by LiDAR. In addition, the interferometric phase uncertainty over forest areas is larger. Both effects lead to lower height accuracy in forest areas, also visible in the height error map.
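Separating the systematic component (offset and tilts) from the random height error, as done in this study, is commonly a least-squares plane fit to the height differences against the reference DEM. A minimal sketch, not the authors' software, with invented coordinates and coefficients:

```python
import numpy as np

def separate_systematic(x, y, dz):
    """Fit bias + tilt_x*x + tilt_y*y to height differences dz (model minus
    reference); the residual standard deviation estimates the random error."""
    A = np.column_stack([np.ones_like(x), x, y])
    coeff, *_ = np.linalg.lstsq(A, dz, rcond=None)
    resid = dz - A @ coeff
    return coeff[0], coeff[1], coeff[2], resid.std()
```

After removing the fitted plane, the residual standard deviation is the quantity that can fairly be compared between height models, which is why the shifts are respected before estimating the random component.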
Space shuttle post-entry and landing analysis. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Crawford, B. S.; Duiven, E. M.
1973-01-01
Four candidate navigation systems for the space shuttle orbiter approach and landing phase are evaluated in detail. These include three conventional navaid systems and a single-station one-way Doppler system. In each case, a Kalman filter is assumed to be mechanized in the onboard computer, blending the navaid data with IMU and altimeter data. Filter state dimensions ranging from 6 to 24 are involved in the candidate systems. Comprehensive truth models with state dimensions ranging from 63 to 82 are formulated and used to generate detailed error budgets and sensitivity curves illustrating the effect of variations in the size of individual error sources on touchdown accuracy. The projected overall performance of each system is shown in the form of time histories of position and velocity error components.
Power considerations for λ inflation factor in meta-analyses of genome-wide association studies.
Georgiopoulos, Georgios; Evangelou, Evangelos
2016-05-19
The genomic control (GC) approach is extensively used to control false positive signals due to population stratification in genome-wide association studies (GWAS). However, GC affects the statistical power of GWAS, and the loss of power depends on the magnitude of the inflation factor (λ) used for GC. We simulated meta-analyses of different GWAS. Minor allele frequency (MAF) ranged from 0·001 to 0·5, and λ was sampled under two scenarios: (i) a random scenario (an empirically derived distribution of real λ values) and (ii) a selected scenario obtained by modifying the simulation parameters. Adjustment for λ was considered under single correction (within-study corrected standard errors) and double correction (an additionally λ-corrected summary estimate). MAF was a pivotal determinant of the observed power. In the random λ scenario, double correction induced a symmetric power reduction in comparison with single correction; the extent of the loss depended on both λ (notably for values above 1·2) and MAF (notably above 5%). Our results provide a quick but detailed index for power considerations in future meta-analyses of GWAS, enabling a more flexible design from the early steps, based on the number of studies accumulated in different groups and the λ values observed in the single studies.
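The GC machinery discussed here is simple enough to sketch. The following is an illustrative sketch, not the authors' simulation code; the study data are invented. λ is estimated as the ratio of the median observed 1-df chi-square statistic to its null median; single correction inflates within-study standard errors by √λ before the inverse-variance meta-analysis.

```python
import numpy as np

CHI2_1DF_MEDIAN = 0.4549364  # median of the chi-square(1) distribution

def inflation_factor(chi2_stats):
    """lambda = median observed chi-square / expected median under the null."""
    return np.median(chi2_stats) / CHI2_1DF_MEDIAN

def single_correction(beta, se, lam):
    """Within-study GC: inflate standard errors by sqrt(lambda).
    lambda < 1 is conventionally not corrected."""
    return beta, se * np.sqrt(max(lam, 1.0))

def meta_fixed_effect(betas, ses):
    """Inverse-variance fixed-effect summary estimate and its standard error.
    Double correction would apply a further lambda to this summary estimate."""
    w = 1.0 / np.square(ses)
    return np.sum(w * betas) / np.sum(w), np.sqrt(1.0 / np.sum(w))
```

Because the corrected standard errors grow with √λ, each unit of inflation directly widens the confidence intervals, which is the mechanism behind the power loss the abstract quantifies.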
Transfer path analysis: Current practice, trade-offs and consideration of damping
NASA Astrophysics Data System (ADS)
Oktav, Akın; Yılmaz, Çetin; Anlaş, Günay
2017-02-01
Current practice in experimental transfer path analysis is discussed in the context of trade-offs between accuracy and time cost. An overview of methods proposing solutions for structure-borne noise is given, and the assumptions, drawbacks and advantages of each method are stated theoretically. The applicability of the methods is also investigated, taking engine-induced structure-borne noise in an automobile as a reference problem. For this particular problem, the sources of measurement error, the processing operations that affect the results, and the physical obstacles faced in application are analysed. While an operational measurement is common to all the stated methods, they differ in whether the source must be removed and whether an external excitation is needed. Depending on the chosen method, promised outcomes such as independent characterisation of the source or information about the mounts also differ. Although many aspects of the problem are reported in the literature, damping and its effects are not considered. The damping effect is embedded in the measured complex frequency response functions and needs to be analysed in the post-processing step. The effects of damping, their causes, and methods to analyse them are discussed in detail. In this regard, a new procedure that increases the accuracy of the results is also proposed.
ERIC Educational Resources Information Center
Tulis, Maria; Steuer, Gabriele; Dresel, Markus
2018-01-01
Research on learning from errors gives reason to assume that errors provide a high potential to facilitate deep learning if students are willing and able to take these learning opportunities. The first aim of this study was to analyse whether beliefs about errors as learning opportunities can be theoretically and empirically distinguished from…
Wienke, B R; O'Leary, T R
2008-05-01
Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed-gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed-gas risks, the USS Perry deep rebreather (RB) exploration dive, a world-record open-circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed-gas diving, in both the recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized, and the parameters that can be used to estimate profile risk are tallied. To fit the data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing of (real) mixed-gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance-reduction technique and an additional check on the canonical approach to estimating diving risk; the method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep-stop data. Supercomputing resources are requisite to connect model and data in this application.
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
Counting OCR errors in typeset text
NASA Astrophysics Data System (ADS)
Sandberg, Jonathan S.
1995-03-01
Object recognition accuracy is frequently a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures matter: OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error counting results reveals that they are not reproducible as published, and that they are not strictly comparable, owing to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Other published results present error accounting methods based on string matching algorithms, such as dynamic programming using Levenshtein (edit) distance, but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
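The edit-distance accounting the paper examines can be sketched in its simplest form: count errors as the unit-cost Levenshtein distance between ground truth and OCR output. This is one of the very accounting choices under discussion, not the paper's own method; real implementations differ in weights and suspect-marker handling, which is exactly the comparability problem raised above.

```python
def levenshtein(truth, ocr):
    """Unit-cost edit distance: each substitution, insertion, or deletion
    counts as one error (dynamic programming, two rolling rows)."""
    prev = list(range(len(ocr) + 1))
    for i, t in enumerate(truth, 1):
        curr = [i]
        for j, o in enumerate(ocr, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (t != o)))  # substitution
        prev = curr
    return prev[-1]

def ocr_accuracy(truth, ocr):
    """Accuracy = 1 - errors / length of the ground-truth text."""
    return 1.0 - levenshtein(truth, ocr) / max(len(truth), 1)
```

Changing the per-operation weights in the `min(...)` expression changes the error count for the same pair of texts, which illustrates why results computed under unspecified weights are not strictly comparable.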
ERIC Educational Resources Information Center
Jeptarus, Kipsamo E.; Ngene, Patrick K.
2016-01-01
The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…
Zhu, Ling-Ling; Lv, Na; Zhou, Quan
2016-12-01
We read, with great interest, the study by Baldwin and Rodriguez (2016), which described the role of the verification nurse and detailed the verification process in identifying errors related to chemotherapy orders. We strongly agree with their findings that a verification nurse, collaborating closely with the prescribing physician, pharmacist, and treating nurse, can better identify errors and maintain safety during chemotherapy administration.
A circadian rhythm in skill-based errors in aviation maintenance.
Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A
2010-07-01
In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. 
The frequency of errors was adjusted for the estimated proportion of workers present at work each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent-minded" errors involving failures to execute action plans as intended.
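The exposure adjustment described above reduces to dividing the raw error count for each hour by the estimated fraction of the workforce on duty at that hour. A toy sketch with invented numbers, not the study's data:

```python
def adjusted_rates(error_counts, staffing_fraction):
    """Exposure-adjusted error rate per hour: raw count divided by the
    estimated share of workers present that hour (0 < f <= 1)."""
    return [c / f for c, f in zip(error_counts, staffing_fraction)]

# Hypothetical example: 2 errors at 03:00 with 10% of staff on duty yield a
# higher adjusted rate than 10 errors at 10:00 with full staffing.
```

Without this adjustment, the small night-shift workforce would mask the early-morning peak: few workers produce few absolute errors even when each worker's risk is elevated.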
Atmospheric refraction effects on baseline error in satellite laser ranging systems
NASA Technical Reports Server (NTRS)
Im, K. E.; Gardner, C. S.
1982-01-01
Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.
Shulman, Rob; Singer, Mervyn; Goldstone, John; Bellingan, Geoff
2005-10-05
The study aimed to compare the impact of computerised physician order entry (CPOE) without decision support with hand-written prescribing (HWP) on the frequency, type and outcome of medication errors (MEs) in the intensive care unit. Details of MEs were collected before, and at several time points after, the change from HWP to CPOE. The study was conducted in a London teaching hospital's 22-bed general ICU. The sampling periods were 28 weeks before and 2, 10, 25 and 37 weeks after the introduction of CPOE. The unit pharmacist prospectively recorded details of MEs and the total number of drugs prescribed daily during the data collection periods, in the course of his routine chart review. The total proportion of MEs was significantly lower with CPOE (117 errors from 2429 prescriptions, 4.8%) than with HWP (69 errors from 1036 prescriptions, 6.7%) (p < 0.04). The proportion of errors reduced with time following the introduction of CPOE (p < 0.001). Two errors with CPOE led to patient harm requiring an increase in length of stay and, if administered, three prescriptions with CPOE could potentially have led to permanent harm or death. Differences in the types of error between systems were noted. There was a reduction in major/moderate patient outcomes with CPOE when non-intercepted and intercepted errors were combined (p = 0.01). The mean baseline APACHE II score did not differ significantly between the HWP and the CPOE periods (19.4 versus 20.0, respectively, p = 0.71). Introduction of CPOE was associated with a reduction in the proportion of MEs and an improvement in the overall patient outcome score (if intercepted errors were included). Moderate and major errors, however, remain a significant concern with CPOE.
Developing Performance Estimates for High Precision Astrometry with TMT
NASA Astrophysics Data System (ADS)
Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana
2013-12-01
Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences and data analysis techniques. This paper summarizes our efforts to develop a top-down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into five categories: reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors, and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, and what the time scales of interest are. Thus, it is not possible to develop a single error budget that applies to all science cases, and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no showstoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.
Teaching Statistics Online Using "Excel"
ERIC Educational Resources Information Center
Jerome, Lawrence
2011-01-01
As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
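The histogram technique discussed in this abstract can, in its simplest one-parameter form, be sketched directly: the optimal estimator is the conditional mean E[q|φ], approximated bin-wise, and the irreducible error is the mean squared deviation of q from it. This is a hedged one-dimensional illustration, not the authors' code; the paper's point is precisely that this histogram approach degrades for models with multiple input parameters.

```python
import numpy as np

def irreducible_error_histogram(phi, q, bins=32):
    """1-D optimal estimator analysis: approximate E[q|phi] by per-bin means
    of q, then return the mean squared deviation of q from that estimator."""
    edges = np.linspace(phi.min(), phi.max(), bins + 1)
    idx = np.clip(np.digitize(phi, edges) - 1, 0, bins - 1)
    cond_mean = np.array([q[idx == b].mean() if np.any(idx == b) else 0.0
                          for b in range(bins)])
    return np.mean((q - cond_mean[idx]) ** 2)
```

For a deterministic relation q = f(φ), the irreducible error tends to zero as the bins are refined; variability in q that no function of φ can capture remains as irreducible error. In higher dimensions the number of bins grows exponentially and sparsely populated bins add the spurious contribution the paper describes.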
Extraction of Data from a Hospital Information System to Perform Process Mining.
Neira, Ricardo Alfredo Quintano; de Vries, Gert-Jan; Caffarel, Jennifer; Stretton, Erin
2017-01-01
The aim of this work is to share our experience of extracting relevant data from a hospital information system in preparation for a research study using process mining techniques. The steps performed were: research definition, mapping the normative processes, identification of the table and field names in the database, and extraction of the data. We then offer lessons learned during the data extraction phase. Any errors made in the extraction phase will propagate into subsequent analyses. Thus, it is essential to take the time needed and to devote sufficient attention to detail in all activities, with the goal of ensuring high quality of the extracted data. We hope this work will help other researchers plan and execute data extraction for process mining research studies.
How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?
West, Brady T; Sakshaug, Joseph W; Aurelien, Guy Alain S
2016-01-01
Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.
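One of the most common analytic errors the article targets is ignoring the sample design when computing estimates. A toy sketch, with an invented population rather than SESTAT data, of how unequal selection probabilities bias an unweighted estimate while a design-weighted one recovers the population mean:

```python
import numpy as np

def weighted_mean(y, w):
    """Design-based estimate of the population mean: each weight w[i] is the
    number of population units that sampled unit i represents."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    return np.sum(w * y) / np.sum(w)

# Hypothetical design: stratum A has 900 units with mean 1 and contributes 10
# sampled units (weight 90 each); stratum B has 100 units with mean 3 and is
# oversampled with 10 units (weight 10 each). The naive sample mean is 2.0,
# while the weighted estimate recovers the true mean (900*1 + 100*3)/1000 = 1.2.
```

A complete design-based analysis would also use replication or linearization variance estimators so that the standard errors reflect stratification and clustering, which is the second failure mode the article documents.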
Error and Error Mitigation in Low-Coverage Genome Assemblies
Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam
2011-01-01
The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
The ASSET intercomparison of stratosphere and lower mesosphere humidity analyses
NASA Astrophysics Data System (ADS)
Thornton, H. E.; Jackson, D. R.; Bekki, S.; Bormann, N.; Errera, Q.; Geer, A. J.; Lahoz, W. A.; Rharmili, S.
2009-02-01
This paper presents results from the first detailed intercomparison of stratosphere-lower mesosphere water vapour analyses; it builds on earlier results from the EU funded framework V "Assimilation of ENVISAT Data" (ASSET) project. Stratospheric water vapour plays an important role in many key atmospheric processes and therefore an improved understanding of its daily variability is desirable. With the availability of high resolution, good quality Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) water vapour profiles, the ability of four different atmospheric models to assimilate these data is tested. MIPAS data have been assimilated over September 2003 into the models of the European Centre for Medium Range Weather Forecasts (ECMWF), the Belgian Institute for Space and Aeronomy (BIRA-IASB), the French Service d'Aéronomie (SA-IPSL) and the UK Met Office. The resultant middle atmosphere humidity analyses are compared against independent satellite data from the Halogen Occultation Experiment (HALOE), the Polar Ozone and Aerosol Measurement (POAM III) and the Stratospheric Aerosol and Gas Experiment (SAGE II). The MIPAS water vapour profiles are generally well assimilated in the ECMWF, BIRA-IASB and SA systems, producing stratosphere-mesosphere water vapour fields where the main features compare favourably with the independent observations. However, the models are less capable of assimilating the MIPAS data where water vapour values are locally extreme or in regions of strong humidity gradients, such as the southern hemisphere lower stratosphere polar vortex. Differences in the analyses can be attributed to the choice of humidity control variable, how the background error covariance matrix is generated, the model resolution and its complexity, the degree of quality control of the observations and the use of observations near the model boundaries. 
Due to the poor performance of the Met Office analyses, the results are not included in the intercomparison but are discussed separately. The Met Office results highlight the pitfalls of humidity assimilation and provide lessons for developers of stratospheric humidity assimilation systems. In particular, they underline the importance of the background error covariances in generating a realistic troposphere-to-mesosphere water vapour analysis.
NASA Technical Reports Server (NTRS)
Tubbs, Eldred F.
1986-01-01
A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.
Articulation in schoolchildren and adults with neurofibromatosis type 1.
Cosyns, Marjan; Mortier, Geert; Janssens, Sandra; Bogaert, Famke; D'Hondt, Stephanie; Van Borsel, John
2012-01-01
Several authors have mentioned the occurrence of articulation problems in the neurofibromatosis type 1 (NF1) population. However, few studies have undertaken a detailed analysis of the articulation skills of NF1 patients, especially in schoolchildren and adults. Therefore, the aim of the present study was to examine in depth the articulation skills of NF1 schoolchildren and adults, both phonetically and phonologically. Speech samples were collected from 43 Flemish NF1 patients (14 children and 29 adults), ranging in age between 7 and 53 years, using a standardized speech test in which all Flemish single speech sounds and most clusters occur in all their permissible syllable positions. Analyses concentrated on consonants only and included a phonetic inventory, a phonetic analysis, and a phonological analysis. Phonetic inventories were incomplete in 16.28% (7/43) of participants, with totally correct realizations of the sibilants /ʃ/ and/or /ʒ/ missing. Phonetic analysis revealed that distortions were the predominant phonetic error type. Sigmatismus stridens, multiple ad- or interdentality, and, in children, rhotacismus non vibrans were frequently observed. From a phonological perspective, the most common error types were substitution and syllable structure errors. In particular, devoicing, cluster simplification, and, in children, deletion of the word-final consonant were observed. Further, significantly more men than women presented with an incomplete phonetic inventory, and girls tended to display more articulation errors than boys. Additionally, children exhibited significantly more articulation errors than adults, suggesting that although the articulation skills of NF1 patients evolve positively with age, articulation problems do not resolve completely from childhood to adulthood. As such, the articulation errors made by NF1 adults may be regarded as residual articulation disorders.
It can be concluded that the speech of NF1 patients is characterized by mild articulation disorders at an age where this is no longer expected. Readers will be able to describe neurofibromatosis type 1 (NF1) and explain the articulation errors displayed by schoolchildren and adults with this genetic syndrome. © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration
2017-07-01
We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.
Kaltenthaler, Eva; Carroll, Christopher; Hill-McManus, Daniel; Scope, Alison; Holmes, Michael; Rice, Stephen; Rose, Micah; Tappenden, Paul; Woolacott, Nerys
2016-04-01
As part of the National Institute for Health and Care Excellence (NICE) single technology appraisal (STA) process, independent Evidence Review Groups (ERGs) critically appraise the company submission. During the critical appraisal process the ERG may undertake analyses to explore uncertainties around the company's model and their implications for decision-making. The ERG reports are a central component of the evidence considered by the NICE Technology Appraisal Committees (ACs) in their deliberations. The aim of this research was to develop an understanding of the number and type of exploratory analyses undertaken by the ERGs within the STA process and to understand how these analyses are used by the NICE ACs in their decision-making. The 100 most recently completed STAs with published guidance were selected for inclusion in the analysis. The documents considered were ERG reports, clarification letters, the first appraisal consultation document and the final appraisal determination. Over 400 documents were assessed in this study. The categories of types of exploratory analyses included fixing errors, fixing violations, addressing matters of judgement and the ERG-preferred base case. A content analysis of documents (documentary analysis) was undertaken to identify and extract relevant data, and narrative synthesis was then used to rationalise and present these data. The level and type of detail in ERG reports and clarification letters varied considerably. The vast majority (93%) of ERG reports reported one or more exploratory analyses. The most frequently reported type of analysis in these 93 ERG reports related to the category 'matters of judgement', which was reported in 83 (89%) reports. The category 'ERG base-case/preferred analysis' was reported in 45 (48%) reports, the category 'fixing errors' was reported in 33 (35%) reports and the category 'fixing violations' was reported in 17 (18%) reports. 
The exploratory analyses performed were the result of issues raised by an ERG in its critique of the submitted economic evidence. These analyses had more influence on recommendations earlier in the STA process than later on in the process. The descriptions of analyses undertaken were often highly specific to a particular STA and could be inconsistent across ERG reports and thus difficult to interpret. Evidence Review Groups frequently conduct exploratory analyses to test or improve the economic evaluations submitted by companies as part of the STA process. ERG exploratory analyses often have an influence on the recommendations produced by the ACs. More in-depth analysis is needed to understand how ERGs make decisions regarding which exploratory analyses should be undertaken. More research is also needed to fully understand which types of exploratory analyses are most useful to ACs in their decision-making. The National Institute for Health Research Health Technology Assessment programme.
Introduction to Forward-Error-Correcting Coding
NASA Technical Reports Server (NTRS)
Freeman, Jon C.
1996-01-01
This reference publication introduces forward-error-correcting (FEC) coding and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate key points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
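As a minimal illustration of the kind of calculation such an appendix supports, the Gaussian Q function used in bit-error-rate work can be evaluated via the complementary error function (a sketch, not taken from the publication itself):

```python
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x) = P(Z > x) for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Q(0) is exactly one half; the tail shrinks rapidly for positive arguments.
print(q_function(0.0))  # → 0.5
print(q_function(3.0))  # ≈ 0.00135
```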
SU-F-T-243: Major Risks in Radiotherapy. A Review Based On Risk Analysis Literature
DOE Office of Scientific and Technical Information (OSTI.GOV)
López-Tarjuelo, J; Guasp-Tortajada, M; Iglesias-Montenegro, N
Purpose: We present a literature review of risk analyses in radiotherapy to highlight the most frequently reported risks and to facilitate the spread of this valuable information, so that professionals can be aware of these major threats before performing their own studies. Methods: We considered studies with at least an estimate of the probability of occurrence of an adverse event (O) and its associated severity (S). They cover external beam radiotherapy, brachytherapy, intraoperative radiotherapy, and stereotactic techniques. We selected only the works containing a detailed ranked series of elements or failure modes and focused on the first fully reported quartile. Afterward, we sorted the risk elements according to a regular radiotherapy procedure so that the resulting groups were cited in several works and could be ranked in this way. Results: 29 references published between 2007 and February 2016 were studied. The publication trend has generally been rising. The most frequently employed analysis was failure mode and effects analysis (FMEA). Among the references, we selected 20 works listing 258 ranked risk elements. They were sorted into 31 groups appearing in at least two different works. 11 groups appeared in at least 5 references, and 5 groups did so in 7 or more papers. These last groups of risks were: choosing the wrong set of images or plan for planning or treatment, errors related to contours, errors in patient positioning for treatment, human mistakes when programming treatments, and planning errors. Conclusion: There is a sufficient amount and variety of references for identifying which failure modes or elements should be addressed in a radiotherapy department before attempting a specific analysis. FMEA prevailed, but other studies such as “risk matrix” or “occurrence × severity” analyses can also guide professionals’ efforts. Risks associated with human actions rank very high; therefore, such actions should be automated or at least peer-reviewed.
Supersonic Retropropulsion Experimental Results from the NASA Langley Unitary Plan Wind Tunnel
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Rhode, Matthew N.; Edquist, Karl T.; Player, Charles J.
2011-01-01
A new supersonic retropropulsion experimental effort, intended to provide code validation data, was recently completed in the Langley Research Center Unitary Plan Wind Tunnel Test Section 2 over the Mach number range from 2.4 to 4.6. The experimental model was designed using insights gained from pre-test computations, which were instrumental for sizing and refining the model to minimize tunnel wall interference and internal flow separation concerns. A 5-in diameter 70-deg sphere-cone forebody with a roughly 10-in long cylindrical aftbody was the baseline configuration selected for this study. The forebody was designed to accommodate up to four 4:1 area ratio supersonic nozzles. Primary measurements for this model were a large number of surface pressures on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Preliminary results and observations from the test are presented, while detailed data and uncertainty analyses are ongoing.
Generalized Fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains
NASA Astrophysics Data System (ADS)
Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.
2004-07-01
Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
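As a hedged one-dimensional illustration of the dispersion analysis described above (a sketch in the spirit of Part I, not the paper's two-dimensional results): the second-order central difference applied to pure advection propagates a Fourier mode at the reduced phase speed sin(kh)/(kh), so well-resolved waves travel nearly correctly while marginally resolved waves lag:

```python
import math

def phase_speed_ratio(kh: float) -> float:
    """Ratio of numerical to exact phase speed, c*/c = sin(kh)/(kh), for the
    second-order central-difference semi-discretization of pure advection."""
    return math.sin(kh) / kh

# Small kh (many points per wavelength): almost no phase error.
# kh near pi/2 (four points per wavelength): a substantial lag.
for kh in (0.1, 1.0, math.pi / 2):
    print(f"kh = {kh:.2f}: c*/c = {phase_speed_ratio(kh):.3f}")
```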
Intelligent voltage control strategy for three-phase UPS inverters with output LC filter
NASA Astrophysics Data System (ADS)
Jung, J. W.; Leu, V. Q.; Dang, D. Q.; Do, T. D.; Mwasilu, F.; Choi, H. H.
2015-08-01
This paper presents a supervisory fuzzy neural network control (SFNNC) method for a three-phase inverter of uninterruptible power supplies (UPSs). The proposed voltage controller is comprised of a fuzzy neural network control (FNNC) term and a supervisory control term. The FNNC term is deliberately employed to estimate the uncertain terms, and the supervisory control term is designed based on the sliding mode technique to stabilise the system dynamic errors. To improve the learning capability, the FNNC term incorporates an online parameter training methodology, using the gradient descent method and Lyapunov stability theory. Besides, a linear load current observer that estimates the load currents is used to exclude the load current sensors. The proposed SFNN controller and the observer are robust to the filter inductance variations, and their stability analyses are described in detail. The experimental results obtained on a prototype UPS test bed with a TMS320F28335 DSP are presented to validate the feasibility of the proposed scheme. Verification results demonstrate that the proposed control strategy can achieve smaller steady-state error and lower total harmonic distortion when subjected to nonlinear or unbalanced loads compared to the conventional sliding mode control method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flanagan, A.; Schachter, J. M.; Schissel, D. P.
2003-02-01
A Data Analysis Monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This new system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, CLIPS to implement expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse.
System to monitor data analyses and results of physics data validation between pulses at DIII-D
NASA Astrophysics Data System (ADS)
Flanagan, S.; Schachter, J. M.; Schissel, D. P.
2004-06-01
A data analysis monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, C Language Integrated Production System to implement expert system logic, and displays its results to multiple web clients via Hypertext Markup Language. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse.
NASA Astrophysics Data System (ADS)
Camattari, Riccardo
2017-09-01
Focusing a hard x-ray beam would represent an innovative technique for tumour treatment, since such a beam may deliver a dose to a tumour located at a given depth under the skin, sparing the surrounding healthy cells. A detailed study of a focusing system for hard x-ray aimed at radiotherapy is presented here. Such a focusing system, named Laue lens, exploits x-ray diffraction and consists of a series of crystals disposed as concentric rings capable of concentrating a flux of x-rays towards a focusing point. A feasibility study regarding the positioning tolerances of the crystalline optical elements has been carried out. It is shown that a Laue lens can effectively be used in the context of radiotherapy for tumour treatments provided that the mounting errors are below certain values, which are reachable in the modern micromechanics. An extended survey based on an analytical approach and on simulations is presented for precisely estimating all the contributions of each mounting error, analysing their effect on the focal spot of the Laue lens. Finally, a simulation for evaluating the released dose in a water phantom is shown.
Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197
Bedini, José Luis; Wallace, Jane F.; Pardo, Scott; Petruschke, Thorsten
2015-01-01
Background: Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients’ health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Methods: Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. Results: All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to reference values than the other 2 BGMS. Insulin dosing errors were also lower for the Contour Next USB than for the other systems. Conclusions: All BGMS fulfilled the ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. PMID:26445813
Bayesian Analysis of Silica Exposure and Lung Cancer Using Human and Animal Studies.
Bartell, Scott M; Hamra, Ghassan Badri; Steenland, Kyle
2017-03-01
Bayesian methods can be used to incorporate external information into epidemiologic exposure-response analyses of silica and lung cancer. We used data from a pooled mortality analysis of silica and lung cancer (n = 65,980), using untransformed and log-transformed cumulative exposure. Animal data came from chronic silica inhalation studies using rats. We conducted Bayesian analyses with informative priors based on the animal data and different cross-species extrapolation factors. We also conducted analyses with exposure measurement error corrections in the absence of a gold standard, assuming Berkson-type error that increased with increasing exposure. The pooled animal data exposure-response coefficient was markedly higher (log exposure) or lower (untransformed exposure) than the coefficient for the pooled human data. With 10-fold uncertainty, the animal prior had little effect on results for pooled analyses and only modest effects in some individual studies. One-fold uncertainty produced markedly different results for both pooled and individual studies. Measurement error correction had little effect in pooled analyses using log exposure. Using untransformed exposure, measurement error correction caused a 5% decrease in the exposure-response coefficient for the pooled analysis and marked changes in some individual studies. The animal prior had more impact for smaller human studies and for one-fold versus three- or 10-fold uncertainty. Adjustment for Berkson error using Bayesian methods had little effect on the exposure-response coefficient when exposure was log transformed or when the sample size was large. See video abstract at http://links.lww.com/EDE/B160.
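The influence of prior width reported above can be illustrated with a conjugate normal-normal update, a simplified stand-in for the paper's models; all numbers below are hypothetical:

```python
def posterior_normal(prior_mean, prior_sd, like_mean, like_sd):
    """Conjugate normal-normal update: combine a prior (e.g. one derived
    from animal data) with the human-data likelihood by precision weighting."""
    w_prior = 1.0 / prior_sd**2
    w_like = 1.0 / like_sd**2
    mean = (w_prior * prior_mean + w_like * like_mean) / (w_prior + w_like)
    sd = (w_prior + w_like) ** -0.5
    return mean, sd

# A wide (10-fold-uncertainty-like) prior barely moves the estimate;
# a narrow (1-fold-like) prior pulls it strongly toward the prior mean.
print(posterior_normal(0.5, 2.0, 0.2, 0.05))   # mean stays near 0.2
print(posterior_normal(0.5, 0.05, 0.2, 0.05))  # mean pulled to 0.35
```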
Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly
2018-01-01
Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess (1) the magnitude of error in several current generation land cover maps, and (2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, and then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g., at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher-resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g., with forest as adjacent cover, carbon map error was 200%-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). 
These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land cover map users. © 2017 John Wiley & Sons Ltd.
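The bias and MAE metrics quantified in this study can be sketched as pixel-wise comparisons of mapped against reference cropland fractions; the values below are hypothetical, not the South African data:

```python
def bias_and_mae(mapped: list[float], reference: list[float]):
    """Pixel-wise bias and mean absolute error (MAE) of mapped cropland
    fractions against reference fractions, both as percentages."""
    n = len(mapped)
    bias = 100.0 * sum(m - r for m, r in zip(mapped, reference)) / n
    mae = 100.0 * sum(abs(m - r) for m, r in zip(mapped, reference)) / n
    return bias, mae

# Hypothetical grid-cell cropland fractions for three cells.
b, e = bias_and_mae([0.2, 0.5, 0.1], [0.4, 0.5, 0.3])
print(b, e)  # bias ≈ -13.3 (underestimate), MAE ≈ 13.3
```

Note that bias can cancel across cells while MAE cannot, which is why the paper reports both.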
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, E. M. C.; Reu, P. L.
2017-11-28
“Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. We present that there are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. In conclusion, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.
Toward Error Analysis of Large-Scale Forest Carbon Budgets
Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...
Fusing metabolomics data sets with heterogeneous measurement errors
Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.
2018-01-01
Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for the measurement error heterogeneity: transformation of the raw data, weighted filtering before modelling, and a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. In conclusion, the filtering and modelling approaches, which both estimate a model of the measurement error, did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable due to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490
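One hedged sketch of the weighting idea behind the filtering and modelling approaches is an inverse-variance weighted fusion of the same metabolite measured on two platforms; this illustrates down-weighting the noisier platform, not the paper's actual procedure:

```python
def fuse_measurements(values, error_sds):
    """Inverse-variance weighted fusion of one quantity measured on
    platforms with heterogeneous measurement error standard deviations."""
    weights = [1.0 / s**2 for s in error_sds]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# Hypothetical: platform B is twice as noisy, so it contributes less
# to the fused value, pulling the result toward platform A's reading.
print(fuse_measurements([10.0, 14.0], [1.0, 2.0]))  # → 10.8
```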
Error protection capability of space shuttle data bus designs
NASA Technical Reports Server (NTRS)
Proch, G. E.
1974-01-01
The role of error protection in assuring the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
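For a linear block code with a known weight distribution, the probability of an undetected word error on a binary symmetric channel has a standard closed form; the sketch below is a generic illustration and does not reproduce the MEI/MIA circuit analysis:

```python
def p_undetected(n: int, weight_dist: dict[int, int], p: float) -> float:
    """Probability of an undetected word error for a linear block code of
    length n on a binary symmetric channel with bit-error rate p.
    weight_dist maps nonzero codeword weight -> number of such codewords;
    an error goes undetected exactly when it equals a nonzero codeword."""
    return sum(a * p**w * (1.0 - p) ** (n - w) for w, a in weight_dist.items())

# Example: the (3,1) repetition code used purely for error detection has a
# single nonzero codeword of weight 3, so only a triple bit flip slips by.
print(p_undetected(3, {3: 1}, 1e-3))  # ≈ 1e-9
```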
VizieR Online Data Catalog: 5 Galactic GC proper motions from Gaia DR1 (Watkins+, 2017)
NASA Astrophysics Data System (ADS)
Watkins, L. L.; van der Marel, R. P.
2017-11-01
We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories. (4 data files).
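The weighted-average step can be sketched as a one-dimensional inverse-variance mean; the paper additionally propagates the full covariances between astrometric parameters, which this toy version omits, and the per-star values below are made up.

```python
def weighted_mean(values, sigmas):
    # inverse-variance weighted mean and the error of the weighted mean
    weights = [1.0 / s**2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, (1.0 / wsum) ** 0.5

# hypothetical per-star PM measurements (mas/yr) and their uncertainties
pm, pm_err = weighted_mean([5.2, 5.6, 5.1, 5.4, 5.3],
                           [0.3, 0.5, 0.4, 0.3, 0.6])
```

Note that the combined error is smaller than the best individual star's error, which is why even five TGAS members per cluster yield PM errors competitive with existing measurements.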
Goldrick, Matthew; Keshet, Joseph; Gustafson, Erin; Heller, Jordana; Needle, Jeremy
2016-04-01
Traces of the cognitive mechanisms underlying speaking can be found within subtle variations in how we pronounce sounds. While speech errors have traditionally been seen as categorical substitutions of one sound for another, acoustic/articulatory analyses show they partially reflect the intended sound. When "pig" is mispronounced as "big," the resulting /b/ sound differs from correct productions of "big," moving towards the intended "pig" and revealing the role of graded sound representations in speech production. Investigating the origins of such phenomena requires detailed estimation of speech sound distributions; this has been hampered by reliance on subjective, labor-intensive manual annotation. Computational methods can address these issues by providing objective, automatic measurements. We develop a novel high-precision computational approach, based on a set of machine learning algorithms, for measurement of elicited speech. The algorithms are trained on existing manually labeled data to detect and locate linguistically relevant acoustic properties with high accuracy. Our approach is robust, is designed to handle mis-productions, and overall matches the performance of expert coders. It allows us to analyze a very large dataset of speech errors (containing far more errors than the total in the existing literature), illuminating properties of speech sound distributions previously impossible to reliably observe. We argue that this provides novel evidence that two sources both contribute to deviations in speech errors: planning processes specifying the targets of articulation and articulatory processes specifying the motor movements that execute this plan. These findings illustrate how a much richer picture of speech provides an opportunity to gain novel insights into language processing. Copyright © 2016 Elsevier B.V. All rights reserved.
Tycho-Gaia Astrometric Solution Parallaxes and Proper Motions for Five Galactic Globular Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, Laura L.; Van der Marel, Roeland P., E-mail: lwatkins@stsci.edu
2017-04-20
We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories.
Keers, R N; Williams, S D; Vattakatuchery, J J; Brown, P; Miller, J; Prescott, L; Ashcroft, D M
2015-12-01
When compared to general hospitals, relatively little is known about the quality and safety of discharge prescriptions from specialist mental health settings. We aimed to investigate the quality and safety of discharge prescriptions written at mental health hospitals. This study was undertaken on acute adult and later life inpatient units at three National Health Service (NHS) mental health trusts. Trained pharmacy teams prospectively reviewed all newly written discharge prescriptions over a 6-week period, recording the number of prescribing errors, clerical errors and errors involving lack of communication about medicines stopped during hospital admission. All prescribing errors were reviewed and validated by a multidisciplinary panel. Main outcome measures were the prevalence (95% CI) of prescribing errors, clerical errors and errors involving a lack of details about medicines stopped. Risk factors for prescribing and clerical errors were examined via logistic regression and results presented as odds ratios (OR) with corresponding 95% CI. Of 274 discharge prescriptions, 259 contained a total of 1456 individually prescribed items. One in five [20·8% (95%CI 15·9-25·8%)] eligible discharge prescriptions and one in twenty [5·1% (95%CI 4·0-6·2%)] prescribed or omitted items were affected by at least one prescribing error. One or more clerical errors were found in 71·9% (95%CI 66·5-77·3%) of discharge prescriptions, and more than two-thirds [68·8% (95%CI 56·6-78·8%)] of eligible discharge prescriptions erroneously lacked information on medicines discontinued during hospital admission. 
Logistic regression analyses revealed that middle-grade [whole discharge prescription level OR 3·28 (3·03-3·56)] and senior [whole discharge OR 1·43 (1·04-1·96)] prescribers as well as electronic discharge prescription pro formas [whole discharge OR 2·43 (2·08-2·83)] were all associated with significantly higher risks of prescribing errors than junior prescribers and handwritten discharges, respectively. Similar findings were reported at the individually prescribed item level. Middle-grade prescribers were also more likely to make both non-psychotropic and psychotropic prescribing errors than their junior colleagues [individual item OR 4·24 (2·14-8·40) and OR 1·70 (1·16-2·48), respectively]. Discharge prescriptions issued by mental health NHS hospitals are affected by high levels of prescribing, clerical and communication errors. Important targets for intervention have been identified to improve medication safety problems at care transfer. © 2015 John Wiley & Sons Ltd.
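For readers less familiar with the effect measures reported above, an odds ratio and its Wald confidence interval can be computed from a 2x2 table of counts. The counts in this sketch are hypothetical and are not taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a, b: errors / no-errors with the exposure (e.g. middle-grade prescriber)
    # c, d: errors / no-errors without the exposure (e.g. junior prescriber)
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts for illustration only
or_, lo_ci, hi_ci = odds_ratio_ci(30, 70, 12, 88)
```

An interval that excludes 1 (as here) corresponds to the "significantly higher risk" statements in the abstract; the study itself fits logistic regressions rather than raw 2x2 tables.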
A revised 5 minute gravimetric geoid and associated errors for the North Atlantic calibration area
NASA Technical Reports Server (NTRS)
Mader, G. L.
1979-01-01
A revised 5 minute gravimetric geoid and its errors were computed for the North Atlantic calibration area using GEM-8 potential coefficients and the latest gravity data available from the Defense Mapping Agency. This effort was prompted by a number of inconsistencies and small errors found in previous calculations of this geoid. The computational method and constants used are given in detail to serve as a reference for future work.
The Effects of Observation Errors on the Attack Vulnerability of Complex Networks
2012-11-01
more detail, to construct a true network we select a topology (Erdős-Rényi (Erdős & Rényi, 1959), scale-free (Barabási & Albert, 1999), small world…Efficiency of Scale-Free Networks: Error and Attack Tolerance. Physica A, Volume 320, pp. 622-642. 6. Erdős, P. & Rényi, A., 1959. On Random Graphs, I
Error Patterns in Young German Children's "Wh"-Questions
ERIC Educational Resources Information Center
Schmerse, Daniel; Lieven, Elena; Tomasello, Michael
2013-01-01
In this article we report two studies: a detailed longitudinal analysis of errors in "wh"-questions from six German-learning children (age 2;0-3;0) and an analysis of the prosodic characteristics of "wh"-questions in German child-directed speech. The results of the first study demonstrate that German-learning children…
Space shuttle entry and landing navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Crawford, B. S.
1974-01-01
An aided-inertial navigation system for the entry phase of a Space Shuttle mission, which uses a Kalman filter to mix IMU data with data derived from external navigation aids, is evaluated. A drag pseudo-measurement used during radio blackout is treated as an additional external aid. A comprehensive truth model with 101 states is formulated and used to generate detailed error budgets at several significant time points: end-of-blackout, start of final approach, over the runway threshold, and touchdown. Sensitivity curves illustrating the effect of variations in the size of individual error sources on navigation accuracy are presented. The sensitivity of the navigation system performance to filter modifications is analyzed. The projected overall performance is shown in the form of time histories of position and velocity error components. The detailed results are summarized and interpreted, and suggestions are made concerning possible software improvements.
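The core of such an aided-inertial scheme is the Kalman measurement update, which blends the drifting inertial estimate with an external-aid measurement in proportion to their variances. A scalar sketch with illustrative values (not the 101-state truth model of the study):

```python
def kalman_update(x, P, z, R):
    # scalar measurement update: blend inertial estimate x (variance P)
    # with an external-aid measurement z (variance R)
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P  # updated estimate and variance

x, P = 100.0, 25.0   # inertial position estimate and its variance
z, R = 104.0, 16.0   # e.g. a drag pseudo-measurement during blackout
x_new, P_new = kalman_update(x, P, z, R)
```

The updated variance is smaller than either input variance, which is why even a coarse drag pseudo-measurement bounds the inertial drift during blackout.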
Recognizing and managing errors of cognitive underspecification.
Duthie, Elizabeth A
2014-03-01
James Reason describes cognitive underspecification as incomplete communication that creates a knowledge gap. Errors occur when an information mismatch occurs in bridging that gap with a resulting lack of shared mental models during the communication process. There is a paucity of studies in health care examining this cognitive error and the role it plays in patient harm. The goal of the following case analyses is to facilitate accurate recognition, identify how it contributes to patient harm, and suggest appropriate management strategies. Reason's human error theory is applied in case analyses of errors of cognitive underspecification. Sidney Dekker's theory of human incident investigation is applied to event investigation to facilitate identification of this little recognized error. Contributory factors leading to errors of cognitive underspecification include workload demands, interruptions, inexperienced practitioners, and lack of a shared mental model. Detecting errors of cognitive underspecification relies on blame-free listening and timely incident investigation. Strategies for interception include two-way interactive communication, standardization of communication processes, and technological support to ensure timely access to documented clinical information. Although errors of cognitive underspecification arise at the sharp end with the care provider, effective management is dependent upon system redesign that mitigates the latent contributory factors. Cognitive underspecification is ubiquitous whenever communication occurs. Accurate identification is essential if effective system redesign is to occur.
Monte Carlo errors with less errors
NASA Astrophysics Data System (ADS)
Wolff, Ulli; Alpha Collaboration
2004-01-01
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replica), allowing one to judge their consistency.
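The idea of summing the autocorrelation function to obtain an effective integrated autocorrelation time can be sketched as follows. This is a crude Python rendition with a naive summation window, not the paper's Matlab implementation or its automatic windowing procedure.

```python
import random

random.seed(0)

def autocorr_error(data, w_max=50):
    # error of the mean via the summed autocorrelation function:
    # sigma^2 = 2 * tau_int * gamma(0) / N, with tau_int = 1/2 + sum_t rho(t)
    n = len(data)
    mean = sum(data) / n
    dev = [x - mean for x in data]
    gamma0 = sum(d * d for d in dev) / n
    tau_int = 0.5
    for t in range(1, w_max):
        g = sum(dev[i] * dev[i + t] for i in range(n - t)) / (n - t)
        rho = g / gamma0
        if rho < 0:  # crude window: stop summing at the first negative estimate
            break
        tau_int += rho
    return (2.0 * tau_int * gamma0 / n) ** 0.5, tau_int

# AR(1) chain x' = a*x + noise has rho(t) = a^t, so tau_int = 1/2 + a/(1-a)
a, x, chain = 0.8, 0.0, []
for _ in range(100_000):
    x = a * x + random.gauss(0.0, 1.0)
    chain.append(x)
err, tau = autocorr_error(chain)  # tau should be near 0.5 + 0.8/0.2 = 4.5
```

For this chain the naive error estimate (which ignores autocorrelation) understates the true uncertainty by roughly sqrt(2 * tau_int), about a factor of three, which is exactly what the summed-autocorrelation estimate corrects.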
NASA Technical Reports Server (NTRS)
O'Bryan, Thomas C; Danforth, Edward C B; Johnston, J Ford
1955-01-01
The magnitude and variation of the static-pressure error for various distances ahead of sharp-nose bodies and open-nose air inlets and for a distance of 1 chord ahead of the wing tip of a swept wing are defined by a combination of experiment and theory. The mechanism of the error is discussed in some detail to show the contributing factors that make up the error. The information presented provides a useful means for choosing a proper location for measurement of static pressure for most purposes.
Criticality of Adaptive Control Dynamics
NASA Astrophysics Data System (ADS)
Patzelt, Felix; Pawelzik, Klaus
2011-12-01
We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions in between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.
Detailed Uncertainty Analysis for Ares I Ascent Aerodynamics Wind Tunnel Database
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Hanke, Jeremy L.; Walker, Eric L.; Houlden, Heather P.
2008-01-01
A detailed uncertainty analysis for the Ares I ascent aero 6-DOF wind tunnel database is described. While the database itself is determined using only the test results for the latest configuration, the data used for the uncertainty analysis comes from four tests on two different configurations at the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. Four major error sources are considered: (1) systematic errors from the balance calibration curve fits and model + balance installation, (2) run-to-run repeatability, (3) boundary-layer transition fixing, and (4) tunnel-to-tunnel reproducibility.
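When the four error sources listed above are treated as independent, they are conventionally combined in quadrature (root-sum-square). A minimal sketch; the magnitudes below are placeholders for illustration, not the actual database values.

```python
def combined_uncertainty(sources):
    # independent error sources combine in quadrature (root-sum-square)
    return sum(s**2 for s in sources) ** 0.5

# placeholder magnitudes for: (1) balance calibration / installation,
# (2) run-to-run repeatability, (3) transition fixing,
# (4) tunnel-to-tunnel reproducibility
u = combined_uncertainty([0.010, 0.004, 0.003, 0.006])
```

The combined value sits between the largest single source and the arithmetic sum, so identifying and shrinking the dominant source (here, the calibration/installation term) gives the biggest reduction.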
Classification and reduction of pilot error
NASA Technical Reports Server (NTRS)
Rogers, W. H.; Logan, A. L.; Boley, G. D.
1989-01-01
Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate the addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationships among a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
Explaining Errors in Children's Questions
ERIC Educational Resources Information Center
Rowland, Caroline F.
2007-01-01
The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…
USDA-ARS?s Scientific Manuscript database
Agronomic and Environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data are therefore subject to error. This error is of three types,...
NASA Astrophysics Data System (ADS)
Seo, Seung Beom
Although water is one of the most essential natural resources, human activities have been exerting pressure on water resources. In order to reduce these stresses, two key issues threatening water resources sustainability were investigated in this study: the interaction between surface water and groundwater resources, and the impacts of groundwater withdrawal on streamflow depletion. First, a systematic decomposition procedure was proposed for quantifying the errors arising from various sources in the model chain when projecting changes in hydrologic attributes using near-term climate change projections. Apart from the changes left unexplained by GCMs, the process of customizing GCM projections to the watershed scale through a model chain of spatial downscaling, temporal disaggregation and hydrologic modelling also introduces errors, thereby limiting the ability to explain the observed changes in hydrologic variability. Towards this, we first propose metrics for quantifying the errors arising from different steps in the model chain in explaining the observed changes in hydrologic variables (streamflow, groundwater). The proposed metrics are then evaluated using detailed retrospective analyses projecting the changes in streamflow and groundwater attributes in four target basins that span diverse hydroclimatic regimes across the US Sunbelt. Our analyses focused on quantifying the dominant sources of errors in projecting the changes in eight hydrologic variables over the four target basins between the periods 1956-1980 and 1981-2005, using the Penn State Integrated Hydrologic Model (PIHM): mean and variability of seasonal streamflow, mean and variability of 3-day peak seasonal streamflow, mean and variability of 7-day low seasonal streamflow, and mean and standard deviation of groundwater depth. Retrospective analyses show that small/humid (large/arid) basins show increased (reduced) uncertainty in projecting the changes in hydrologic attributes. 
Further, changes in error due to GCMs primarily account for the unexplained changes in mean and variability of seasonal streamflow. On the other hand, the changes in error due to temporal disaggregation and the hydrologic model account for the inability to explain the observed changes in mean and variability of seasonal extremes. Thus, the proposed metrics provide insight into how the error in explaining the observed changes propagates through the model chain under different hydroclimatic regimes. To understand the interaction between surface water and groundwater resources, transient pumping impacts on streamflow and groundwater level were analyzed by imposing short-term pumping scenarios under historic drought conditions. Since surface water and groundwater are fully coupled and integrated systems, increased groundwater withdrawal during drought may reduce baseflow into the stream and prolong both systems' recovery from drought. Towards this, we proposed an uncertainty framework to understand the resiliency of groundwater and surface water systems using a fully-coupled hydrologic model under transient pumping. Using this framework, we quantified the restoration time of surface water and groundwater systems and also estimated the changes in the state variables after pumping. Groundwater pumping impacts over the watershed were also analyzed under different pumping volumes and different potential climate scenarios. Our analyses show that groundwater restoration time is more sensitive to changes in pumping volumes than to changes in climate. After the cessation of pumping, streamflow recovers quickly in comparison to groundwater. Pumping impacts on other state variables are also discussed. Given that surface water and groundwater are inter-connected, optimal management of both resources should be considered to improve watershed resiliency under drought. 
Subsequently, conjunctive use of surface water and groundwater has been considered as an effective approach to mitigate water shortage problems that are primarily caused by drought. It is found that appropriate use of groundwater withdrawal was able to reduce water scarcity in surface water resources under drought conditions. In addition, a recovery time constraint was embedded in the management model so that the trade-off between minimizing water scarcity and maximizing sustainability of groundwater was successfully addressed.
Learning from Errors at Work: A Replication Study in Elder Care Nursing
ERIC Educational Resources Information Center
Leicher, Veronika; Mulder, Regina H.; Bauer, Johannes
2013-01-01
Learning from errors is an important way of learning at work. In this article, we analyse conditions under which elder care nurses use errors as a starting point for the engagement in social learning activities (ESLA) in the form of joint reflection with colleagues on potential causes of errors and ways to prevent them in future. The goal of our…
Fisseni, Gregor; Pentzek, Michael; Abholz, Heinz-Harald
2008-02-01
GPs' recollections about their 'most serious errors in treatment' and about the consequences for themselves: does it make a difference who (else) contributed to the error, or to its discovery or disclosure? Anonymous questionnaire study concerning the 'three most serious errors in your career as a GP'. The participating doctors were given an operational definition of 'serious error'. They applied a special recall technique, using patient-induced associations to bring to mind former 'serious errors'. The recall method and the semi-structured 25-item questionnaire used were developed and piloted by the authors. The items were analysed quantitatively and by qualitative content analysis. General practices in the North Rhine region in Germany: 32 GPs anonymously reported 75 'most serious errors'. In more than half of the cases analysed, other people contributed considerably to the GPs' serious errors. Most of the errors were discovered and disclosed to the patient by doctors: either by the GPs themselves, or by colleagues. Many GPs suffered loss of reputation and loss of patients. However, the number of patients staying with their GP clearly exceeded the number leaving their GP, depending on who else contributed to the error, who discovered it and who disclosed it to the patient. The majority of patients still trusted their GP after a serious error, especially if the GP was not the only one who contributed to the error and if the GP played an active role in the discovery and disclosure of the error.
Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane
The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients are derived from the derived equations and numerical results. The matrix of sensitivity coefficients seems diagonally dominant. However, it is not always satisfied in a detailed structure. The detailed structure of the matrix and the characteristics of coefficients are given. By using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty of nuclear data and of the change of one-group cross-section input caused by fuel design changes through the neutron spectrum are investigated. These calculations show that the sensitivity coefficient is useful when evaluating error or uncertainty of nuclide importance caused by the cross-section data error or uncertainty and when checking effectiveness of fuel cell or core design change for improving neutron economy.
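A relative sensitivity coefficient of the form S = (sigma/I)(dI/dsigma) can be estimated numerically by finite differences, which is a useful cross-check on perturbation-theory values. The toy importance model below is invented for illustration and is not the paper's derivation.

```python
def sensitivity(f, x, rel=1e-6):
    # relative sensitivity coefficient S = (x / f(x)) * df/dx,
    # estimated with a central finite difference
    h = rel * x
    df = (f(x + h) - f(x - h)) / (2 * h)
    return x * df / f(x)

# toy "importance" model: importance falls as the one-group capture
# cross section grows (illustrative form only)
importance = lambda sig_c: 2.5 / (1.0 + sig_c)

S = sensitivity(importance, 0.5)  # analytically S = -x/(1+x) = -1/3 here
```

A coefficient of -1/3 means a 1% error in the cross section propagates to roughly a 0.33% error of opposite sign in the importance, which is the kind of error-propagation statement the paper's coefficients support.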
Space-Borne Laser Altimeter Geolocation Error Analysis
NASA Astrophysics Data System (ADS)
Wang, Y.; Fang, J.; Ai, Y.
2018-05-01
This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESat satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed through simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and a design index for each error source is put forward to satisfy the accuracy requirement for the laser control points.
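First-order error propagation of this kind can be sketched for a single horizontal spot coordinate x = R sin(theta), where R is the range and theta the off-nadir pointing angle. The altitude, pointing and ranging errors below are assumed for illustration and are not the paper's design values.

```python
import math

def geolocation_error(R, theta, sig_R, sig_theta, sig_pos):
    # first-order propagation for the horizontal spot coordinate
    # x = R*sin(theta): dx/dR = sin(theta), dx/dtheta = R*cos(theta);
    # the platform position error adds in quadrature
    dx_dR = math.sin(theta)
    dx_dth = R * math.cos(theta)
    return math.sqrt((dx_dR * sig_R) ** 2
                     + (dx_dth * sig_theta) ** 2
                     + sig_pos ** 2)

# assumed ICESat-like numbers: 600 km range, 0.3 deg off-nadir,
# 10 cm range error, 1.5 arcsec pointing error, 5 cm position error
err = geolocation_error(600e3, math.radians(0.3),
                        0.10, math.radians(1.5 / 3600), 0.05)
```

With these magnitudes the pointing term (roughly R * sig_theta, a few metres) dominates the horizontal budget, illustrating why the different error sources influence the geolocation accuracy differently in different directions.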
NASA Technical Reports Server (NTRS)
Fiske, David R.
2004-01-01
In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.
Application of Ensemble Kalman Filter in Power System State Tracking and Sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yulan; Huang, Zhenyu; Zhou, Ning
2012-05-01
The Ensemble Kalman Filter (EnKF) is proposed to track the dynamic states of generators. The algorithm of the EnKF and its application to generator state tracking are presented in detail. The accuracy and sensitivity of the method are analyzed with respect to initial state errors, measurement noise, unknown fault locations, time steps and parameter errors. It is demonstrated through simulation studies that even with some errors in the parameters, the developed EnKF can effectively track generator dynamic states using disturbance data.
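A minimal scalar ensemble Kalman filter with perturbed observations conveys the idea: forecast every member through the dynamics, estimate the forecast covariance from the ensemble spread, then nudge each member toward a perturbed copy of the measurement. This is a generic textbook sketch, not the authors' generator model.

```python
import random

random.seed(42)

def enkf_step(ensemble, z, obs_sd, forecast):
    # forecast each member, then update with per-member perturbed observations
    fc = [forecast(x) for x in ensemble]
    m = sum(fc) / len(fc)
    P = sum((x - m) ** 2 for x in fc) / (len(fc) - 1)  # ensemble variance
    K = P / (P + obs_sd ** 2)                          # Kalman gain
    return [x + K * (z + random.gauss(0.0, obs_sd) - x) for x in fc]

# track a decaying scalar state x_{k+1} = 0.9 x_k from noisy measurements,
# starting from a deliberately poor initial ensemble
truth, ens = 10.0, [random.gauss(0.0, 5.0) for _ in range(100)]
for _ in range(30):
    truth *= 0.9
    z = truth + random.gauss(0.0, 0.5)
    ens = enkf_step(ens, z, 0.5, lambda x: 0.9 * x)
est = sum(ens) / len(ens)  # ensemble mean is the state estimate
```

The ensemble spread contracts as measurements arrive and the mean locks onto the true state, which mirrors the paper's demonstration that the EnKF tracks generator states from disturbance data.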
NASA Technical Reports Server (NTRS)
Favaregh, Amber L.; Houlden, Heather P.; Pinier, Jeremy T.
2016-01-01
A detailed description of the uncertainty quantification process for the Space Launch System Block 1 vehicle configuration liftoff/transition and ascent 6-Degree-of-Freedom (DOF) aerodynamic databases is presented. These databases were constructed from wind tunnel test data acquired in the NASA Langley Research Center 14- by 22-Foot Subsonic Wind Tunnel and the Boeing Polysonic Wind Tunnel in St. Louis, MO, respectively. The major sources of error for these databases were experimental error and database modeling errors.
Interim analyses in 2 x 2 crossover trials.
Cook, R J
1995-09-01
A method is presented for performing interim analyses in long-term 2 x 2 crossover trials with serial patient entry. The analyses are based on a linear statistic that combines data from individuals observed for one treatment period with data from individuals observed for both periods. The coefficients in this linear combination can be chosen quite arbitrarily, but we focus on variance-based weights to maximize power for tests regarding direct treatment effects. The type I error rate of this procedure is controlled by utilizing the joint distribution of the linear statistics over analysis stages. Methods for performing power and sample size calculations are indicated. A two-stage sequential design involving simultaneous patient entry and a single between-period interim analysis is considered in detail. The power and average number of measurements required for this design are compared to those of the usual crossover trial. The results indicate that, while there is minimal loss in power relative to the usual crossover design in the absence of differential carry-over effects, the proposed design can have substantially greater power when differential carry-over effects are present. The two-stage crossover design can also lead to more economical studies in terms of the expected number of measurements required, due to the potential for early stopping. Attention is directed toward normally distributed responses.
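The variance-based weighting of the two data strata can be sketched as an inverse-variance combination of per-stratum effect estimates. The effect estimates and variances below are invented for illustration; the paper controls the type I error via the joint distribution of the stage statistics, which this sketch does not cover.

```python
def combine_strata(est1, var1, est2, var2):
    # inverse-variance weights maximize power for the direct treatment effect
    w1, w2 = 1.0 / var1, 1.0 / var2
    est = (w1 * est1 + w2 * est2) / (w1 + w2)
    return est, 1.0 / (w1 + w2)  # combined estimate and its variance

# stratum 1: patients observed for one period only (noisier estimate);
# stratum 2: patients observed for both periods (within-subject precision)
effect, var = combine_strata(1.8, 0.40, 2.1, 0.10)
```

The combined variance is smaller than that of either stratum alone, which is the efficiency gain from pooling single-period and two-period patients at an interim look.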
Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen
Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, such as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.
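The graph-search step of directed relation graph with error propagation (DRGEP) can be sketched as a best-path search: a species' overall importance is the maximum, over all paths from a target species, of the product of direct interaction coefficients along the path, and species falling below a threshold are removed. This is a schematic rendition of that standard definition; the tiny graph and coefficients are invented.

```python
import heapq

def drgep_keep(graph, targets, threshold):
    # overall importance R(s) = max over paths from a target of the product
    # of edge (direct interaction) coefficients; Dijkstra-style best search
    best = {t: 1.0 for t in targets}
    heap = [(-1.0, t) for t in targets]
    while heap:
        neg_v, s = heapq.heappop(heap)
        v = -neg_v
        if v < best.get(s, 0.0):
            continue  # stale heap entry
        for nbr, r in graph.get(s, {}).items():
            nv = v * r
            if nv > best.get(nbr, 0.0):
                best[nbr] = nv
                heapq.heappush(heap, (-nv, nbr))
    return {s for s, v in best.items() if v >= threshold}

# fuel F relates to intermediates A, B and product C with these coefficients
graph = {'F': {'A': 0.9, 'B': 0.3}, 'A': {'C': 0.5}, 'B': {'C': 0.9}}
kept = drgep_keep(graph, {'F'}, 0.4)  # B's importance (0.3) falls below 0.4
```

Note that C survives through the stronger F-A-C path (0.9 * 0.5 = 0.45) even though the F-B-C path is weak; error propagation along paths is what distinguishes DRGEP from the plain directed relation graph method.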
Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels
Niemeyer, Kyle E.; Sung, Chih-Jen
2014-11-01
Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, suchmore » as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.« less
Ideas for a pattern-oriented approach towards a VERA analysis ensemble
NASA Astrophysics Data System (ADS)
Gorgas, T.; Dorninger, M.
2010-09-01
For many applications in meteorology, and especially for verification purposes, it is important to have information about the uncertainties of observation and analysis data. A high quality of these "reference data" is an absolute necessity, as the uncertainties are reflected in verification measures. The VERA (Vienna Enhanced Resolution Analysis) scheme includes a sophisticated quality control tool which corrects observational data and provides an estimate of the observation uncertainty. It is crucial for meteorologically and physically reliable analysis fields. VERA is based on a variational principle and does not need any first-guess fields. It is therefore NWP-model independent and can also be used as an unbiased reference for real-time model verification. For downscaling purposes VERA uses a priori knowledge of small-scale physical processes over complex terrain, the so-called "fingerprint technique", which transfers information from data-rich to data-sparse regions. The enhanced Joint D-PHASE and COPS data set forms the database for the analysis ensemble study. For the WWRP projects D-PHASE and COPS, a joint activity has been started to collect GTS and non-GTS data from the national and regional meteorological services in Central Europe for 2007. Data from more than 11,000 stations are available for high-resolution analyses. The use of random numbers as perturbations for ensemble experiments is a common approach in meteorology. In most implementations, as for NWP-model ensemble systems, the focus lies on error growth and propagation on the spatial and temporal scale. When defining errors in analysis fields we have to consider the fact that analyses are not time dependent and that no perturbation method aimed at temporal evolution is possible.
Furthermore, the method applied should respect the two major sources of analysis errors: observation errors and analysis (interpolation) errors. With the concept of an analysis ensemble we hope to get a more detailed view of both sources of analysis errors. For the computation of the VERA ensemble members, a sample of Gaussian random perturbations is produced for each station and parameter. The spread of the perturbations is based on the correction proposals of the VERA QC scheme, which provides "natural" limits for the ensemble. In order to put more emphasis on the weather situation, we aim to integrate the main synoptic field structures as weighting factors for the perturbations. Two well-established approaches are used to define these main field structures: principal component analysis and a 2D discrete wavelet transform. The results of tests concerning the implementation of this pattern-supported analysis ensemble system, and a comparison of the different approaches, are given in the presentation.
Bayesian models for comparative analysis integrating phylogenetic uncertainty.
de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P
2012-06-28
Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. 
We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.
Bayesian models for comparative analysis integrating phylogenetic uncertainty
2012-01-01
Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. 
We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602
Paddick, Stella-Maria; Mkenda, Sarah; Mbowe, Godfrey; Kisoli, Aloyce; Gray, William K; Dotchin, Catherine L; Ternent, Laura; Ogunniyi, Adesola; Kissima, John; Olakehinde, Olaide; Mushi, Declare; Walker, Richard W
2017-06-01
In the above article (Paddick, 2017), the corresponding author's details were listed incorrectly. The correct details are: contact number +44 191 293 2709 and email address William.gray@nhct.nhs.uk. The original article has been updated with the correct contact details. The publishers apologise for any inconvenience and confusion this error has caused.
Spatial abstraction for autonomous robot navigation.
Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon
2015-09-01
Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.
NASA Astrophysics Data System (ADS)
Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.
2017-12-01
Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements, to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. First, we find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done. For instance, using published estimates of external reproducibility, we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation, we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect given the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and can lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
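The effect of replicate count on the margin of error can be sketched in a few lines. This is an illustrative calculation only, assuming a hypothetical external reproducibility of 0.030‰ (1 SD); a normal quantile is used for simplicity, so the small-n margins are slightly understated relative to a t-based interval.

```python
import math
from statistics import NormalDist

def margin_of_error(sd, n, cl=0.95):
    """Half-width of the confidence interval for the mean of n replicates,
    given the single-measurement reproducibility sd (1 SD)."""
    z = NormalDist().inv_cdf(0.5 + cl / 2)
    return z * sd / math.sqrt(n)

# Assumed reproducibility of 0.030 per mil (hypothetical, for illustration).
for n in (3, 10, 30):
    print(n, round(margin_of_error(0.030, n), 4))
```

The margin shrinks with the square root of the replicate count, which is why moving from three to thirty replicates roughly triples the precision of the mean.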
A two-factor error model for quantitative steganalysis
NASA Astrophysics Data System (ADS)
Böhme, Rainer; Ker, Andrew D.
2006-02-01
Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
Dealing with systematic laser scanner errors due to misalignment at area-based deformation analyses
NASA Astrophysics Data System (ADS)
Holst, Christoph; Medić, Tomislav; Kuhlmann, Heiner
2018-04-01
The ability to acquire rapid, dense and high-quality 3D data has made terrestrial laser scanners (TLS) a desirable instrument for tasks demanding high geometrical accuracy, such as geodetic deformation analyses. However, TLS measurements are influenced by systematic errors due to internal misalignments of the instrument. The resulting errors in the point cloud might exceed the magnitude of random errors. Hence, it is important to ensure that the deformation analysis is not biased by these influences. In this study, we propose and evaluate several strategies for reducing the effect of TLS misalignments on deformation analyses. The strategies are based on bundled in-situ self-calibration and on the exploitation of two-face measurements. The strategies are verified by analyzing the deformation of the main reflector of the Onsala Space Observatory radio telescope. It is demonstrated that both two-face measurements and in-situ calibration of the laser scanner in a bundle adjustment improve the results of deformation analysis. The best solution is obtained by a combination of both strategies.
BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...
NASA Astrophysics Data System (ADS)
He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo
2010-11-01
For measuring the transmittance of large-aperture optical systems, a novel sub-aperture scanning machine with double-rotating arms (SSMDA) was designed to obtain a sub-aperture beam spot. Full-aperture transmittance measurements of an optical system can be achieved by applying sub-aperture beam-spot scanning technology. A mathematical model of the SSMDA based on a homogeneous coordinate transformation matrix is established to develop a detailed methodology for analyzing the beam-spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) systematic length errors and (2) systematic rotational errors. With the systematic errors of the parameters given beforehand, the computed scanning errors lie between -0.007 and 0.028 mm for scanning radii no larger than 400.000 mm. The results provide a theoretical and data basis for research on the transmission characteristics of large optical systems.
Spelling in adolescents with dyslexia: errors and modes of assessment.
Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc
2014-01-01
In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia. © Hammill Institute on Disabilities 2012.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
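The parameter bias described above can be illustrated with a minimal Monte Carlo sketch. This is a generic errors-in-variables simulation, not the USGS model: adding measurement error to the rainfall input attenuates the fitted rainfall-runoff slope by roughly var(x) / (var(x) + var(error)).

```python
import random

random.seed(42)

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

n = 20000
true_slope = 0.8
rain = [random.gauss(50, 10) for _ in range(n)]           # true rainfall, sd 10
runoff = [true_slope * r + random.gauss(0, 2) for r in rain]
noisy_rain = [r + random.gauss(0, 10) for r in rain]      # measurement error, sd 10

# Expected attenuation factor: 10^2 / (10^2 + 10^2) = 0.5
print(ols_slope(rain, runoff))        # close to the true slope, 0.8
print(ols_slope(noisy_rain, runoff))  # biased toward zero, near 0.4
```

The least squares fit to the noisy input is consistent for the *attenuated* slope, which is the bias mechanism the abstract refers to.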
Dark Energy Survey Year 1 Results: Multi-Probe Methodology and Simulated Likelihood Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krause, E.; et al.
We present the methodology for and detail the implementation of the Dark Energy Survey (DES) 3x2pt Year 1 (Y1) analysis, which combines configuration-space two-point statistics from three different cosmological probes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data from the first year of DES observations. We have developed two independent modeling pipelines and describe the code validation process. We derive expressions for analytical real-space multi-probe covariances, and describe their validation with numerical simulations. We stress-test the inference pipelines in simulated likelihood analyses that vary 6-7 cosmology parameters plus 20 nuisance parameters and precisely resemble the analysis to be presented in the DES 3x2pt analysis paper, using a variety of simulated input data vectors with varying assumptions. We find that any disagreement between pipelines leads to changes in assigned likelihood $\Delta \chi^2 \le 0.045$ with respect to the statistical error of the DES Y1 data vector. We also find that angular binning and survey mask do not impact our analytic covariance at a significant level. We determine lower bounds on scales used for analysis of galaxy clustering (8 Mpc $h^{-1}$) and galaxy-galaxy lensing (12 Mpc $h^{-1}$) such that the impact of modeling uncertainties in the non-linear regime is well below statistical errors, and show that our analysis choices are robust against a variety of systematics. These tests demonstrate that we have a robust analysis pipeline that yields unbiased cosmological parameter inferences for the flagship 3x2pt DES Y1 analysis. We emphasize that the level of independent code development and subsequent code comparison as demonstrated in this paper is necessary to produce credible constraints from increasingly complex multi-probe analyses of current data.
Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi
2018-02-14
This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for non-contact measurement of AC or DC, as it is low-cost and light-weight, and has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the position of the current-carrying conductor, including un-centeredness and un-perpendicularity, has not been analyzed in detail until now. In this paper, with the aim of minimizing measurement error, a theoretical analysis based on vector inner and exterior products is proposed. In the presented mathematical model of relative error, the un-center offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative error caused by the position of the current-carrying conductor is compared between four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of circular arrays of magnetic sensors for current measurement in practical situations.
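The effect of conductor un-centeredness can be sketched numerically. This is a simplified 2-D model with an ideal infinite straight wire and hypothetical dimensions (no un-perpendicularity), not the paper's exact model: sampling the tangential field at N equally spaced points approximates Ampère's law, and the off-center error shrinks rapidly as the sensor count grows.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def estimated_current(i_true, n_sensors, radius, offset):
    """Approximate Ampere's law with n tangential field samples on a circle.
    The conductor is an infinite straight wire, offset from the array center
    along the x-axis."""
    total = 0.0
    for k in range(n_sensors):
        theta = 2 * math.pi * k / n_sensors
        sx, sy = radius * math.cos(theta), radius * math.sin(theta)
        rx, ry = sx - offset, sy            # vector from wire to sensor
        r2 = rx * rx + ry * ry
        bx = MU0 * i_true / (2 * math.pi) * (-ry / r2)
        by = MU0 * i_true / (2 * math.pi) * (rx / r2)
        tx, ty = -math.sin(theta), math.cos(theta)   # tangential unit vector
        total += bx * tx + by * ty
    return total * (2 * math.pi * radius / n_sensors) / MU0

# Hypothetical geometry: 5 cm array radius, 2 cm conductor offset.
for n in (4, 8):
    est = estimated_current(1.0, n, radius=0.05, offset=0.02)
    print(n, abs(est - 1.0))   # relative error shrinks as sensor count grows
```

For a centered wire the discrete sum is exact; for an offset wire the leading error term scales like (offset/radius)^N, which is why eight sensors outperform four so dramatically in this toy model.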
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
Cohen, Michael X
2015-09-01
The purpose of this paper is to compare the effects of different spatial transformations applied to the same scalp-recorded EEG data. The spatial transformations applied are two referencing schemes (average and linked earlobes), the surface Laplacian, and beamforming (a distributed source localization procedure). EEG data were collected during a speeded reaction time task that provided a comparison of activity between error and correct responses. Analyses focused on time-frequency power, frequency band-specific inter-electrode connectivity, and within-subject cross-trial correlations between EEG activity and reaction time. Time-frequency power analyses showed similar patterns of midfrontal delta-theta power for errors compared to correct responses across all spatial transformations. Beamforming additionally revealed error-related anterior and lateral prefrontal beta-band activity. Within-subject brain-behavior correlations showed similar patterns of results across the spatial transformations, with the correlations being the weakest after beamforming. The most striking difference among the spatial transformations was seen in connectivity analyses: linked earlobe reference produced weak inter-site connectivity that was attributable to volume conduction (zero phase lag), while the average reference and Laplacian produced more interpretable connectivity results. Beamforming did not reveal any significant condition modulations of connectivity. Overall, these analyses show that some findings are robust to spatial transformations, while other findings, particularly those involving cross-trial analyses or connectivity, are more sensitive and may depend on the use of appropriate spatial transformations. Copyright © 2014 Elsevier B.V. All rights reserved.
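The two referencing schemes compared above can be sketched in a few lines of NumPy. The data are synthetic and the earlobe channel indices are hypothetical, purely for illustration: average referencing subtracts the instantaneous mean across all channels, while linked-earlobe referencing subtracts the mean of two designated channels.

```python
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 1000))   # channels x samples, arbitrary units

# Average reference: subtract the mean across all channels at each sample.
avg_ref = eeg - eeg.mean(axis=0, keepdims=True)

# Linked-earlobe reference: subtract the mean of two (assumed) earlobe channels.
left_ear, right_ear = 30, 31            # hypothetical channel indices
ear_ref = eeg - (eeg[left_ear] + eeg[right_ear]) / 2

# After average referencing, channels sum to zero at every sample.
print(np.abs(avg_ref.sum(axis=0)).max())   # effectively zero
```

The zero-sum constraint of the average reference is one reason its connectivity estimates differ from the linked-earlobe scheme: any reference signal common to all channels is removed rather than shared across sites.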
Huang, Chengqiang; Yang, Youchang; Wu, Bo; Yu, Weize
2018-06-01
The sub-pixel arrangement of the RGBG panel differs from that of images in RGB format, so an algorithm that converts RGB to RGBG is needed to display an RGB image on an RGBG panel. However, in published studies of this conversion, the information loss is still large even though color fringing artifacts are weakened. In this paper, an RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error (EDMSE) is proposed. The main points of innovation include the following: (1) edge detection is first proposed to distinguish image details with serious color fringing artifacts from image details that are prone to be lost in the process of RGB-RGBG conversion; (2) for image details with serious color fringing artifacts, a weighting factor of 0.5 is applied to weaken the color fringing artifacts; and (3) for image details that are prone to be lost in the process of RGB-RGBG conversion, a special mechanism to minimize the square error is proposed. The experiment shows that color fringing artifacts are slightly reduced by EDMSE, and the MSE values of the processed image are 19.6% and 7% smaller than those of images processed by the direct assignment and weighting-factor algorithms, respectively. The proposed algorithm is implemented on a field-programmable gate array to enable image display on the RGBG panel.
Characterizing water surface elevation under different flow conditions for the upcoming SWOT mission
NASA Astrophysics Data System (ADS)
Domeneghetti, A.; Schumann, G. J.-P.; Frasson, R. P. M.; Wei, R.; Pavelsky, T. M.; Castellarin, A.; Brath, A.; Durand, M. T.
2018-06-01
The Surface Water and Ocean Topography (SWOT) satellite mission, scheduled for launch in 2021, will deliver two-dimensional observations of water surface heights for lakes, rivers wider than 100 m, and oceans. Even though the scientific literature has highlighted several fields of application for the expected products, detailed simulations of the SWOT radar performance for a realistic river scenario have not yet been presented. Understanding the error of the most fundamental "raw" SWOT hydrology product is important in order to have a greater awareness of the strengths and limits of the forthcoming satellite observations. This study focuses on a reach (∼140 km in length) of the middle-lower portion of the Po River, in Northern Italy, and, to date, represents one of the few real-case analyses of the spatial patterns in water surface elevation accuracy expected from SWOT. The river stretch is characterized by a main channel varying from 100 to 500 m in width and a large floodplain (up to 5 km) delimited by a system of major embankments. The simulation of the water surface along the Po River for different flow conditions (high, low and mean annual flows) is performed with inputs from a quasi-2D model implemented using detailed topographic and bathymetric information (LiDAR, 2 m resolution). By employing a simulator that mimics many SWOT satellite sensor characteristics and generates proxies of the remotely sensed hydrometric data, this study characterizes the spatial observations potentially provided by SWOT. We evaluate SWOT performance under different hydraulic conditions and assess possible effects of river embankments, river width, river topography and distance from the satellite ground track.
Despite analyzing errors from the raw radar pixel cloud, which receives minimal processing, the present study highlights the promising potential of this Ka-band interferometer for measuring water surface elevations, with mean elevation errors of 0.1 cm and 21 cm for high and low flows, respectively. Results of the study characterize the expected performance of the upcoming SWOT mission and provide additional insights into potential applications of SWOT observations.
NASA Technical Reports Server (NTRS)
Maynard, O. E.; Brown, W. C.; Edwards, A.; Haley, J. T.; Meltz, G.; Howell, J. M.; Nathan, A.
1975-01-01
Introduction, organization, analyses, conclusions, and recommendations for each of the spaceborne subsystems are presented. Environmental effects and propagation analyses are presented, with appendices covering radio wave diffraction by random ionospheric irregularities, self-focusing plasma instabilities, and ohmic heating of the D-region. Analyses of dc-to-rf conversion subsystems and system considerations for both the amplitron and the klystron are included, with appendices for the klystron covering cavity circuit calculations, output power of the solenoid-focused klystron, the thermal control system, and confined-flow focusing of a relativistic beam. The photovoltaic power source characteristics are discussed as they apply to interfacing with the power distribution flow paths, magnetic field interaction, dc-to-rf converter protection, and power distribution, including estimates for the power budget, weights, and costs. Analyses for the transmitting antenna consider the aperture illumination and size, with associated efficiencies and ground power distributions. Analyses of subarray types and dimensions, attitude error, flatness, phase error, subarray layout, frequency tolerance, attenuation, waveguide dimensional tolerances, and mechanical (including thermal) considerations are included. Implications associated with transportation, assembly and packaging, and attitude control and alignment are discussed. The phase front control subsystem, including both ground-based pilot-signal-driven adaptive and ground-command approaches with their associated phase errors, is analyzed.
Fisher, Moria E; Huang, Felix C; Wright, Zachary A; Patton, James L
2014-01-01
Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaption and possibly reveal clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state-space, used to quantify reaching errors. We hypothesized that the domain with the lowest amount of variability would lead to a predictive model of reaching error with the highest accuracy. Here we showed that errors represented in a time domain demonstrate the least variance and allow for the highest predictive model of reaching errors. These predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation.
Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment
ERIC Educational Resources Information Center
Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc
2014-01-01
In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…
A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.
2007-01-01
Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work addresses two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators, using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.
Waffle mode error in the AEOS adaptive optics point-spread function
NASA Astrophysics Data System (ADS)
Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.
2003-02-01
Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade, and have the potential to revolutionize the kinds of science done with 4-5m class ground-based telescopes. However, given sufficiently detailed study and analysis, existing AO systems can be improved beyond their original specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance are already in existence. However, in terms of AO system performance improvements, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems, and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band, and show that the magnitude of the 'waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error, and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially-illuminated WFS subapertures.
Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B
2013-09-01
In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE. © 2013.
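The kind of sensitivity simulation described (its actual design is not given in the abstract) can be sketched as a Monte Carlo exercise. All rates, sample sizes, and the intention-to-treat analysis choice below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sensitivity simulation in the spirit described: randomly give
# a small fraction of participants the wrong drug and check how much the
# observed event-rate difference between arms moves.
n_per_arm = 9000
p_event_apixaban, p_event_warfarin = 0.0127, 0.0160  # illustrative event rates

def simulate(crossover_rate):
    treat = np.repeat([0, 1], n_per_arm)  # 0 = apixaban, 1 = warfarin
    # A fraction of participants actually receive the other drug.
    flipped = rng.random(2 * n_per_arm) < crossover_rate
    received = np.where(flipped, 1 - treat, treat)
    p = np.where(received == 0, p_event_apixaban, p_event_warfarin)
    events = rng.random(2 * n_per_arm) < p
    # Analyze by assigned arm (intention-to-treat).
    return events[treat == 1].mean() - events[treat == 0].mean()

diff_clean = np.mean([simulate(0.0) for _ in range(200)])
diff_error = np.mean([simulate(0.0004) for _ in range(200)])  # 0.04% error
print(f"rate difference, no dispensing error:    {diff_clean:.4f}")
print(f"rate difference, 0.04% dispensing error: {diff_error:.4f}")
```

At the 0.04% treatment-error rate reported, the simulated effect on the between-arm difference is negligible, consistent with the abstract's conclusion.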
How to minimize perceptual error and maximize expertise in medical imaging
NASA Astrophysics Data System (ADS)
Kundel, Harold L.
2007-03-01
Visual perception is such an intimate part of human experience that we assume that it is entirely accurate. Yet, perception accounts for about half of the errors made by radiologists using adequate imaging technology. The true incidence of errors that directly affect patient well-being is not known but it is probably at the lower end of the reported values of 3 to 25%. Errors in screening for lung and breast cancer are somewhat better characterized than errors in routine diagnosis. About 25% of cancers actually recorded on the images are missed and cancer is falsely reported in about 5% of normal people. Radiologists must strive to decrease error not only because of the potential impact on patient care but also because substantial variation among observers undermines confidence in the reliability of imaging diagnosis. Observer variation also has a major impact on technology evaluation because the variation between observers is frequently greater than the difference in the technologies being evaluated. This has become particularly important in the evaluation of computer aided diagnosis (CAD). Understanding the basic principles that govern the perception of medical images can provide a rational basis for making recommendations for minimizing perceptual error. It is convenient to organize thinking about perceptual error into five steps. 1) The initial acquisition of the image by the eye-brain (contrast and detail perception). 2) The organization of the retinal image into logical components to produce a literal perception (bottom-up, global, holistic). 3) Conversion of the literal perception into a preferred perception by resolving ambiguities in the literal perception (top-down, simulation, synthesis). 4) Selective visual scanning to acquire details that update the preferred perception. 5) Application of decision criteria to the preferred perception. The five steps are illustrated with examples from radiology with suggestions for minimizing error. The role of perceptual learning in the development of expertise is also considered.
Error Analysis of Present Simple Tense in the Interlanguage of Adult Arab English Language Learners
ERIC Educational Resources Information Center
Muftah, Muneera; Rafik-Galea, Shameem
2013-01-01
The present study analyses errors on present simple tense among adult Arab English language learners. It focuses on the error on 3sg "-s" (the third person singular present tense agreement morpheme "-s"). The learners are undergraduate adult Arabic speakers learning English as a foreign language. The study gathered data from…
Implementation of Concept Mapping to Novices: Reasons for Errors, a Matter of Technique or Content?
ERIC Educational Resources Information Center
Conradty, Catherine; Bogner, Franz X.
2010-01-01
Concept mapping is discussed as a means to promote meaningful learning and in particular progress in reading comprehension skills. Its increasing implementation necessitates the acquisition of adequate knowledge about frequent errors in order to make available an effective introduction to the new learning method. To analyse causes of errors, 283…
Errors in Science and Their Treatment in Teaching Science
ERIC Educational Resources Information Center
Kipnis, Nahum
2011-01-01
This paper analyses the real origin and nature of scientific errors against claims of science critics, by examining a number of examples from the history of electricity and optics. This analysis leads to a conclusion that errors are a natural and unavoidable part of scientific process. If made available to students, through their science teachers,…
Partitioning error components for accuracy-assessment of near-neighbor methods of imputation
Albert R. Stage; Nicholas L. Crookston
2007-01-01
Imputation is applied for two quite different purposes: to supply missing data to complete a data set for subsequent modeling analyses or to estimate subpopulation totals. Error properties of the imputed values have different effects in these two contexts. We partition errors of imputation derived from similar observation units as arising from three sources:...
A survey of community members' perceptions of medical errors in Oman
Al-Mandhari, Ahmed S; Al-Shafaee, Mohammed A; Al-Azri, Mohammed H; Al-Zakwani, Ibrahim S; Khan, Mushtaq; Al-Waily, Ahmed M; Rizvi, Syed
2008-01-01
Background: Errors have been the concern of providers and consumers of health care services. However, consumers' perception of medical errors in developing countries is rarely explored. The aim of this study is to assess community members' perceptions of medical errors and to analyse the factors affecting this perception in one Middle Eastern country, Oman. Methods: Face-to-face interviews were conducted with heads of 212 households in two villages in the North Al-Batinah region of Oman, selected for their close proximity to Sultan Qaboos University (SQU), Muscat, Oman. Participants' perceived knowledge of medical errors was assessed. Responses were coded and categorised. Analyses were performed using Pearson's χ2 test, Fisher's exact test, and a multivariate logistic regression model wherever appropriate. Results: Seventy-eight percent (n = 165) of participants believed they knew what was meant by medical errors. Of these, 34% and 26.5% related medical errors to wrong medications or diagnoses, respectively. Understanding of medical errors was correlated inversely with age and positively with family income. Multivariate logistic regression revealed that a one-year increase in age was associated with a 4% reduction in perceived knowledge of medical errors (CI: 1% to 7%; p = 0.045). The study found that 49% of those who believed they knew the meaning of medical errors had experienced such errors. The most common consequence of the errors was severe pain (45%). Of the 165 informed participants, 49% felt that an uncaring health care professional was the main cause of medical errors. Younger participants were able to list more possible causes of medical errors than were older subjects (incident rate ratio 0.98; p < 0.001). Conclusion: The majority of participants believed they knew the meaning of medical errors. Younger participants were more likely to be aware of such errors and could list one or more causes. PMID:18664245
Gagnon, Bernadine; Miozzo, Michele
2017-01-01
Purpose: This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method: Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to those of singleton consonants. Changes in accuracy over the course of the study were also compared. Results: Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 produced errors consistent with a motoric locus. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions: The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044
Topping, David J.; Wright, Scott A.
2016-05-04
It is commonly recognized that suspended-sediment concentrations in rivers can change rapidly in time and independently of water discharge during important sediment‑transporting events (for example, during floods); thus, suspended-sediment measurements at closely spaced time intervals are necessary to characterize suspended‑sediment loads. Because the manual collection of sufficient numbers of suspended-sediment samples required to characterize this variability is often time and cost prohibitive, several “surrogate” techniques have been developed for in situ measurements of properties related to suspended-sediment characteristics (for example, turbidity, laser-diffraction, acoustics). Herein, we present a new physically based method for the simultaneous measurement of suspended-silt-and-clay concentration, suspended-sand concentration, and suspended‑sand median grain size in rivers, using multi‑frequency arrays of single-frequency side‑looking acoustic-Doppler profilers. The method is strongly grounded in the extensive scientific literature on the incoherent scattering of sound by random suspensions of small particles. In particular, the method takes advantage of theory that relates acoustic frequency, acoustic attenuation, acoustic backscatter, suspended-sediment concentration, and suspended-sediment grain-size distribution. We develop the theory and methods, and demonstrate the application of the method at six study sites on the Colorado River and Rio Grande, where large numbers of suspended-sediment samples have been collected concurrently with acoustic attenuation and backscatter measurements over many years. The method produces acoustical measurements of suspended-silt-and-clay and suspended-sand concentration (in units of mg/L), and acoustical measurements of suspended-sand median grain size (in units of mm) that are generally in good to excellent agreement with concurrent physical measurements of these quantities in the river cross sections at these sites. 
In addition, detailed, step-by-step procedures are presented for the general river application of the method. Quantification of errors in sediment-transport measurements made using this acoustical method is essential if the measurements are to be used effectively, for example, to evaluate uncertainty in long-term sediment loads and budgets. Several types of error analyses are presented to evaluate (1) the stability of acoustical calibrations over time, (2) the effect of neglecting backscatter from silt and clay, (3) the bias arising from changes in sand grain size, (4) the time-varying error in the method, and (5) the influence of nonrandom processes on error. Results indicate that (1) acoustical calibrations can be stable for long durations (multiple years), (2) neglecting backscatter from silt and clay can result in unacceptably high bias, (3) two frequencies are likely required to obtain sand-concentration measurements that are unbiased by changes in grain size, depending on site-specific conditions and acoustic frequency, (4) relative errors in silt-and-clay- and sand-concentration measurements decrease substantially as concentration increases, and (5) nonrandom errors may arise from slow changes in the spatial structure of suspended sediment that affect the relations between concentration in the acoustically ensonified part of the cross section and concentration in the entire river cross section. Taken together, the error analyses indicate that the two-frequency method produces unbiased measurements of suspended-silt-and-clay and sand concentration, with errors that are similar to, or larger than, those associated with conventional sampling methods.
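A minimal sketch of the two-quantity inversion logic described above, with entirely hypothetical calibration coefficients (the real method uses site-specific, frequency-dependent calibrations grounded in scattering theory):

```python
import numpy as np

# Sketch of the two-step logic: (1) acoustic attenuation scales roughly
# linearly with silt-and-clay concentration, and (2) range- and
# attenuation-corrected backscatter scales with log10 of sand concentration.
# Both coefficients below are invented for illustration.
K_ATTEN = 0.0024               # dB/m per (mg/L) of silt and clay, hypothetical
B_SLOPE, B_INTER = 10.0, 40.0  # dB per decade of sand mg/L, hypothetical

def silt_clay_from_attenuation(atten_db_per_m):
    return atten_db_per_m / K_ATTEN

def sand_from_backscatter(backscatter_db):
    return 10 ** ((backscatter_db - B_INTER) / B_SLOPE)

atten = 0.12        # measured sediment attenuation, dB/m
backscatter = 55.0  # corrected backscatter, dB
print(f"silt and clay: {silt_clay_from_attenuation(atten):.0f} mg/L")
print(f"sand:          {sand_from_backscatter(backscatter):.0f} mg/L")
```

The second frequency in the real method serves to separate grain-size effects from concentration effects, which this single-frequency toy omits.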
Forward scattering in two-beam laser interferometry
NASA Astrophysics Data System (ADS)
Mana, G.; Massa, E.; Sasso, C. P.
2018-04-01
A fractional error as large as 25 pm mm-1 at the zero optical-path difference has been observed in an optical interferometer measuring the displacement of an x-ray interferometer used to determine the lattice parameter of silicon. Detailed investigations have brought to light that the error was caused by light forward-scattered from the beam feeding the interferometer. This paper reports on the impact of forward-scattered light on the accuracy of two-beam optical interferometry applied to length metrology, and supplies a model capable of explaining the observed error.
Technical approaches for measurement of human errors
NASA Technical Reports Server (NTRS)
Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.
1980-01-01
Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part or full mission simulation are emphasized. Procedure-, system performance-, and human operator-centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.
Defining robustness protocols: a method to include and evaluate robustness in clinical plans
NASA Astrophysics Data System (ADS)
McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.
2015-04-01
We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to develop protocols aiding plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement was shown to be preferential when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve plan robustness to set-up and range uncertainties.
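The error-bar dose distribution concept can be sketched as a per-voxel spread across recomputed error scenarios. The dose grid, scenario perturbations, tolerance, and structure indices below are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of an error-bar dose distribution (ebDD), assuming one nominal dose
# grid plus doses recomputed for a set of range/set-up error scenarios.
n_voxels, n_scenarios = 1000, 9
nominal = rng.uniform(0.0, 2.0, n_voxels)  # Gy per fraction, illustrative
# Each scenario perturbs the nominal dose; magnitudes are invented.
scenarios = nominal + rng.normal(0.0, 0.05, (n_scenarios, n_voxels))

# Per-voxel error bar: half the spread across all scenarios.
ebdd = 0.5 * (scenarios.max(axis=0) - scenarios.min(axis=0))

# A simple robustness metric for a structure: the fraction of its voxels
# whose error bar exceeds a tolerance (here a hypothetical 0.1 Gy).
structure = slice(0, 200)  # hypothetical voxel indices of an organ at risk
frac_not_robust = (ebdd[structure] > 0.1).mean()
print(f"voxels exceeding tolerance: {100 * frac_not_robust:.1f}%")
```

Thresholding such a metric per structure is one way a site-specific robustness protocol could flag plans needing a different beam arrangement.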
Britt, H; Miller, G C; Steven, I D; Howarth, G C; Nicholson, P A; Bhasale, A L; Norton, K J
1997-04-01
The prediction and subsequent prevention of errors, which are an integral element of human behaviour, require an understanding of their cause. The incident monitoring technique was developed in the study of aviation errors in the Second World War and has been applied more recently in the field of anaesthetics. This pilot study represents one of the first attempts to apply the incident monitoring technique in the general practice environment. A total of 297 GPs across Australia anonymously reported details of unintended events which harmed or could have harmed the patient. Reports were contemporaneously recorded on prepared forms which allowed a free-text description of the incident, and structured responses for contributing and mitigating factors, immediate and long-term outcomes, additional costs, etc. The first 500 reports were analysed using both qualitative and quantitative methods, and a brief overview of results is presented. The methodological issues arising in the application of this technique to such a large, widely spread profession, in which episodes of care are not necessarily confined to a single consultation, are discussed. This study demonstrated that the incident monitoring technique can be successfully applied in general practice and that the resulting information can facilitate the identification of common factors contributing to such events and allow the development of preventive interventions.
Zhang, Jingyi; Li, Bin; Chen, Yumin; Chen, Meijie; Fang, Tao; Liu, Yongfeng
2018-06-11
This paper proposes a regression model using the Eigenvector Spatial Filtering (ESF) method to estimate ground PM2.5 concentrations. Covariates are derived from remotely sensed data including aerosol optical depth, normalized difference vegetation index, surface temperature, air pressure, relative humidity, height of the planetary boundary layer, and a digital elevation model. In addition, cultural variables such as factory densities and road densities are also used in the model. With the Yangtze River Delta region as the study area, we constructed ESF-based Regression (ESFR) models at different time scales, using data for the period between December 2015 and November 2016. We found that the ESFR models effectively filtered spatial autocorrelation in the OLS residuals and resulted in increases in the goodness-of-fit metrics as well as reductions in residual standard errors and cross-validation errors, compared to the classic OLS models. The annual ESFR model explained 70% of the variability in PM2.5 concentrations, 16.7% more than the non-spatial OLS model. With the ESFR models, we performed detailed analyses of the spatial and temporal distributions of PM2.5 concentrations in the study area. The model predictions are lower than ground observations but match the general trend. The experiment shows that ESFR provides a promising approach to PM2.5 analysis and prediction.
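A minimal sketch of the eigenvector spatial filtering idea, assuming Griffith-style double-centering of a binary spatial weights matrix; all data, the distance threshold, and the choice of five eigenvectors are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(7)

# ESF sketch: eigenvectors of the doubly-centered spatial weights matrix are
# added as covariates so the regression residuals are (approximately) free
# of spatial autocorrelation.
n = 100
coords = rng.uniform(0, 10, (n, 2))

# Binary contiguity-style weights from a distance threshold (hypothetical).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = ((d < 2.0) & (d > 0)).astype(float)

# Double-center: M W M with M = I - 11'/n.
M = np.eye(n) - np.ones((n, n)) / n
eigvals, eigvecs = np.linalg.eigh(M @ W @ M)
order = np.argsort(eigvals)[::-1]
selected = eigvecs[:, order[:5]]  # keep the leading eigenvectors

# Augment an ordinary least-squares design with the selected eigenvectors.
x = rng.normal(size=n)  # e.g. a single remotely sensed covariate
y = 2.0 + 0.8 * x + selected @ rng.normal(0, 3, 5) + rng.normal(0, 0.5, n)
X_ols = np.column_stack([np.ones(n), x])
X_esf = np.column_stack([X_ols, selected])

def r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"R^2, OLS: {r2(X_ols, y):.3f}")
print(f"R^2, ESF: {r2(X_esf, y):.3f}")
```

The gain in R^2 here mirrors the reported improvement of the ESFR model over the non-spatial OLS model, though the synthetic data makes the effect exact by construction.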
Supersonic Retro-Propulsion Experimental Design for Computational Fluid Dynamics Model Validation
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Laws, Christopher T.; Kleb, W. L.; Rhode, Matthew N.; Spells, Courtney; McCrea, Andrew C.; Truble, Kerry A.; Schauerhamer, Daniel G.; Oberkampf, William L.
2011-01-01
The development of supersonic retro-propulsion, an enabling technology for heavy payload exploration missions to Mars, is the primary focus for the present paper. A new experimental model, intended to provide computational fluid dynamics model validation data, was recently designed for the Langley Research Center Unitary Plan Wind Tunnel Test Section 2. Pre-test computations were instrumental in sizing and refining the model, over the Mach number range of 2.4 to 4.6, such that tunnel blockage and internal flow separation issues would be minimized. A 5-in diameter 70-deg sphere-cone forebody, which accommodates up to four 4:1 area ratio nozzles, followed by a 10-in long cylindrical aftbody was developed for this study based on the computational results. The model was designed to allow for a large number of surface pressure measurements on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Some preliminary results and observations from the test are presented, although detailed analyses of the data and uncertainties are still ongoing.
Comparison of Test and Finite Element Analysis for Two Full-Scale Helicopter Crash Tests
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.
2011-01-01
Finite element analyses have been performed for two full-scale crash tests of an MD-500 helicopter. The first crash test was conducted to evaluate the performance of a composite deployable energy absorber under combined flight loads. In the second crash test, the energy absorber was removed to establish the baseline loads. The use of an energy absorbing device reduced the impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to analytical results. Details of the full-scale crash tests and development of the system-integrated finite element model are briefly described along with direct comparisons of acceleration magnitudes and durations for the first full-scale crash test. Because load levels were significantly different between tests, models developed for the purposes of predicting the overall system response with external energy absorbers were not adequate under more severe conditions seen in the second crash test. Relative error comparisons were inadequate to guide model calibration. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used for the second full-scale crash test. The calibrated parameter set reduced 2-norm prediction error by 51% but did not improve impact shape orthogonality.
Statistical Data Editing in Scientific Articles.
Habibzadeh, Farrokh
2017-07-01
Scientific journals are important scholarly forums for sharing research findings. Editors have important roles in safeguarding standards of scientific publication and should be familiar with correct presentation of results, among other core competencies. Editors do not have access to the raw data and should thus rely on clues in the submitted manuscripts. To identify probable errors, they should look for inconsistencies in presented results. Common statistical problems that can be picked up by a knowledgeable manuscript editor are discussed in this article. Manuscripts should contain a detailed section on the statistical analyses of the data. Numbers should be reported with appropriate precision. The standard error of the mean (SEM) should not be reported as an index of data dispersion. Mean (standard deviation [SD]) and median (interquartile range [IQR]) should be used for the description of normally and non-normally distributed data, respectively. If possible, it is better to report the 95% confidence interval (CI) for statistics, at least for main outcome variables. P values should be presented, and interpreted with caution, if there is a hypothesis. To advance the knowledge and skills of their members, associations of journal editors would do well to develop training courses on basic statistics and research methodology for non-experts. This would in turn improve research reporting and safeguard the body of scientific evidence. © 2017 The Korean Academy of Medical Sciences.
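Several of these reporting recommendations can be made concrete in a few lines. The data below are simulated, and the normal-approximation CI is only one of several acceptable constructions:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(mean=1.0, sigma=0.6, size=200)  # illustrative skewed data

mean, sd = x.mean(), x.std(ddof=1)
sem = sd / np.sqrt(x.size)          # SEM measures precision of the mean,
                                    # NOT the dispersion of the data
median = np.median(x)
q1, q3 = np.percentile(x, [25, 75])  # IQR: preferred for non-normal data
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem  # approx. 95% CI

print(f"mean (SD):      {mean:.2f} ({sd:.2f})")
print(f"median (IQR):   {median:.2f} ({q1:.2f}-{q3:.2f})")
print(f"95% CI of mean: {ci_low:.2f} to {ci_high:.2f}")
```

Because the SEM shrinks with sample size while the SD does not, reporting "mean ± SEM" makes data look far less variable than they are, which is exactly the error the article warns editors to catch.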
NASA Astrophysics Data System (ADS)
Li, H. J.; Wei, F. S.; Feng, X. S.; Xie, Y. Q.
2008-09-01
This paper investigates methods to improve the predictions of Shock Arrival Time (SAT) of the original Shock Propagation Model (SPM). According to the classical blast wave theory adopted in the SPM, the shock propagation speed is determined by the total energy of the original explosion together with the background solar wind speed. Noting that there exists an intrinsic limit to the transit times computed by the SPM for a specified ambient solar wind, we present a statistical analysis of the forecasting capability of the SPM using this intrinsic property. Two facts about the SPM are found: (1) the error in shock energy estimation is not the only cause of the prediction errors, so we should not expect the accuracy of the SPM to be improved drastically by an exact shock energy input; and (2) there are systematic differences in prediction results both for strong shocks propagating into a slow ambient solar wind and for weak shocks propagating into a fast medium. The statistical analyses illuminate the physical details of shock propagation and thus clearly point out directions for future improvement of the SPM. A simple modification is presented here, which shows that there is room for improvement and thus that the original SPM is worthy of further development.
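The classical blast-wave scaling behind SPM-style transit-time estimates can be sketched with a Sedov-Taylor similarity solution. The similarity constant, ambient density, and shock energy below are illustrative placeholders, not the SPM's actual formulation (which also folds in the background solar wind speed):

```python
# Sedov-Taylor similarity scaling: R(t) = k * (E/rho)**(1/5) * t**(2/5),
# inverted here for the arrival time at 1 AU. All inputs are hypothetical.

AU = 1.496e11   # m
k = 1.15        # dimensionless similarity constant (gamma-dependent)
rho = 1.0e-20   # effective ambient density, kg/m^3 (hypothetical)
E = 1.0e25      # shock energy, J (hypothetical)

# Invert R(t_arrival) = 1 AU for t_arrival.
t_arrival = (AU / (k * (E / rho) ** 0.2)) ** 2.5
print(f"shock arrival time: {t_arrival / 3600:.1f} hours")
```

The weak one-fifth-power dependence on E is why, as the abstract argues, even an exact shock energy input cannot drastically improve SAT accuracy on its own.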
Reduced change blindness suggests enhanced attention to detail in individuals with autism.
Smith, Hayley; Milne, Elizabeth
2009-03-01
The phenomenon of change blindness illustrates that a limited number of items within the visual scene are attended to at any one time. It has been suggested that individuals with autism focus attention on less contextually relevant aspects of the visual scene, show superior perceptual discrimination and notice details which are often ignored by typical observers. In this study we investigated change blindness in autism by asking participants to detect continuity errors deliberately introduced into a short film. Whether the continuity errors involved central/marginal or social/non-social aspects of the visual scene was varied. Thirty adolescent participants, 15 with autistic spectrum disorder (ASD) and 15 typically developing (TD) controls, participated. The participants with ASD detected significantly more errors than the TD participants. Both groups identified more errors involving central rather than marginal aspects of the scene, although this effect was larger in the TD participants. There was no difference in the number of social or non-social errors detected by either group of participants. In line with previous data suggesting an abnormally broad attentional spotlight and enhanced perceptual function in individuals with ASD, the results of this study suggest enhanced awareness of the visual scene in ASD. The results of this study could reflect superior top-down control of visual search in autism, enhanced perceptual function, or inefficient filtering of visual information in ASD.
Over-Distribution in Source Memory
Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.
2012-01-01
Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494
Developing a model for the adequate description of electronic communication in hospitals.
Saboor, Samrend; Ammenwerth, Elske
2011-01-01
Adequate information and communication technology (ICT) can help to improve communication in hospitals. Changes to the ICT infrastructure of hospitals must be planned carefully. In order to support comprehensive planning, we presented a classification of 81 common errors of electronic communication at the MIE 2008 congress. Our objective now was to develop a data model that defines specific requirements for an adequate description of electronic communication processes. We first applied the method of explicating qualitative content analysis to the error categorization in order to determine the essential process details. After this, we applied the method of subsuming qualitative content analysis to the results of the first step. The result is a data model for the adequate description of electronic communication, comprising 61 entities and 91 relationships. The data model comprises and organizes all details that are necessary for the detection of the respective errors. It can either be used to extend the capabilities of existing modeling methods or as a basis for the development of a new approach.
Detailed Uncertainty Analysis of the ZEM-3 Measurement System
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
The measurement of the Seebeck coefficient and electrical resistivity is critical to the investigation of all thermoelectric systems. It follows that the measurement uncertainty must be well understood in order to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates error in the electrical resistivity measurement resulting from sample geometry tolerance, probe geometry tolerance, statistical error, and multimeter uncertainty. The uncertainty on the Seebeck coefficient includes probe wire correction factors, statistical error, multimeter uncertainty, and most importantly the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows the phenomenon to be quantified and provides an estimate of the uncertainty on the Seebeck coefficient. The thermoelectric power factor was found to have an uncertainty of 9-14% at high temperature and 9% near room temperature.
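Uncertainty analyses of this kind typically combine independent relative error sources in quadrature. A minimal sketch for the resistivity term, where rho = R*A/L: the function name and the sample tolerances below are illustrative assumptions, not values from the ZEM-3 analysis.

```python
import math

def resistivity_uncertainty(rel_R, rel_A, rel_L):
    """Relative uncertainty of rho = R*A/L, combining independent
    relative uncertainties in quadrature (first-order propagation)."""
    return math.sqrt(rel_R**2 + rel_A**2 + rel_L**2)

# Illustrative tolerances (assumed): 1% multimeter reading,
# 2% cross-sectional area, 1.5% probe spacing.
u = resistivity_uncertainty(0.01, 0.02, 0.015)
```

Because the sources add in quadrature, the largest single tolerance dominates the combined figure.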
Delusions and prediction error: clarifying the roles of behavioural and brain responses
Corlett, Philip Robert; Fletcher, Paul Charles
2015-01-01
Griffiths and colleagues provided a clear and thoughtful review of the prediction error model of delusion formation [Cognitive Neuropsychiatry, 2014 April 4 (Epub ahead of print)]. As well as reviewing the central ideas and concluding that the existing evidence base is broadly supportive of the model, they provide a detailed critique of some of the experiments that we have performed to study it. Though they conclude that the shortcomings that they identify in these experiments do not fundamentally challenge the prediction error model, we nevertheless respond to these criticisms. We begin by providing a more detailed outline of the model itself as there are certain important aspects of it that were not covered in their review. We then respond to their specific criticisms of the empirical evidence. We defend the neuroimaging contrasts that we used to explore this model of psychosis arguing that, while any single contrast entails some ambiguity, our assumptions have been justified by our extensive background work before and since. PMID:25559871
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed. A database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric, independent of the parameters defining the view space, is computed for each element at each level of detail. The multiresolution database and associated strict error metrics are then processed in real time to produce real-time frame representations. View parameters for a view volume, comprising a view location and field of view, are selected. Using the view parameters, the strict error metric is converted to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. First elements that are at least partially within the view volume are selected from the initial representation data set. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. Unless the number of elements in the queue meets or exceeds a predetermined limit, or the largest view-dependent error metric is less than or equal to a selected upper bound, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting is continued until this determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then output as reduced-resolution view space data representing the terrain features.
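The force-split loop can be sketched with a priority queue keyed on the view-dependent error. This is an illustrative toy, not the patented implementation; the function names and the halving split rule are assumptions.

```python
import heapq

def refine(elements, max_elements, error_bound, split):
    """Greedy refinement sketch: repeatedly force-split the element with
    the largest view-dependent error until the element budget is reached
    or every remaining error is within the bound.
    `elements` is a list of (error, element_id) pairs; `split` maps one
    such pair to a list of child (error, element_id) pairs."""
    heap = [(-err, eid) for err, eid in elements]  # max-heap via negation
    heapq.heapify(heap)
    while heap:
        neg_err, eid = heap[0]
        if len(heap) >= max_elements or -neg_err <= error_bound:
            break  # the stopping determination is positive
        heapq.heappop(heap)
        for child_err, child_id in split((-neg_err, eid)):
            heapq.heappush(heap, (-child_err, child_id))
    return sorted((-e, i) for e, i in heap)

# Toy split rule: each split halves the parent's error.
halve = lambda elem: [(elem[0] / 2, elem[1] + "0"), (elem[0] / 2, elem[1] + "1")]
out = refine([(8.0, "a")], max_elements=4, error_bound=1.0, split=halve)
```

Starting from one element with error 8.0 and a budget of 4 elements, the loop splits three times and stops at four elements of error 2.0 each.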
Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences
NASA Astrophysics Data System (ADS)
Wierzbicki, Damian
2017-12-01
UAV photogrammetry is currently characterized by largely automated and efficient data processing. Imaging from low altitude is increasingly important in applications such as city mapping, corridor mapping, road and pipeline inspection, and the mapping of large areas, e.g. forests. In addition, high-resolution video (HD and larger) is increasingly used for low-altitude imaging: on the one hand it delivers many details and characteristics of ground surface features, on the other it presents new challenges in data processing. Determination of the elements of exterior orientation therefore plays a substantial role in the detail of digital terrain models and in artefact-free orthophoto generation. In parallel, research is conducted on the quality of images acquired from UAVs and on the quality of products such as orthophotos. Despite the rapid development of UAV photogrammetry, it is still necessary to perform automatic aerial triangulation (AAT) on the basis of GPS/INS observations and ground control points. During a low-altitude photogrammetric flight, the approximate elements of exterior orientation registered by the UAV are burdened with shift/bias errors. In this article, methods for determining the shift/bias error are presented. Two solutions are applied in the process of the digital aerial triangulation. In the first method, the shift/bias error was determined together with the drift/bias error, the elements of exterior orientation and the coordinates of ground control points. In the second method, the shift/bias error was determined together with the elements of exterior orientation and the coordinates of ground control points, with the drift/bias error set equal to 0. When the two methods were compared, the difference in the shift/bias error was more than ±0.01 m for all terrain coordinates XYZ.
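Under the simplifying assumption that the shift/bias is a constant offset between GPS/INS-registered positions and reference positions (no drift term), its least-squares estimate reduces to the per-axis mean difference. The function and coordinates below are an illustrative sketch, not the paper's bundle-adjustment formulation.

```python
def estimate_shift(observed, reference):
    """Least-squares estimate of a constant shift between observed
    GPS/INS positions and reference positions: the per-axis mean of
    the coordinate differences (X, Y, Z triples)."""
    n = len(observed)
    return tuple(
        sum(o[k] - r[k] for o, r in zip(observed, reference)) / n
        for k in range(3)
    )

# Synthetic check with an assumed constant offset of (0.2, 0.1, 0.3) m.
obs = [(100.2, 50.1, 10.3), (101.2, 51.1, 11.3), (102.2, 52.1, 12.3)]
ref = [(100.0, 50.0, 10.0), (101.0, 51.0, 11.0), (102.0, 52.0, 12.0)]
shift = estimate_shift(obs, ref)
```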
From Treating Childhood Malnutrition to Public Health Nutrition.
James, W Philip T
2018-01-01
This analysis sets out an overview of an IUNS presentation of a European clinician's assessment of the challenges of coping with immediate critical clinical problems and how to use a metabolic and mechanistic understanding of disease when developing nutritional policies. Critically ill malnourished children prove very sensitive to both mineral and general nutritional overload, but after careful metabolic control they can cope with a high-quality, energy-rich diet provided their initial lactase deficiency and intestinal atrophy are taken into account. Detailed intestinal perfusion studies also showed that gastroenteritis can be combatted by multiple frequent glucose/saline feeds, which has saved millions of lives. However, persisting pancreatic islet cell damage may explain our findings of pandemic rates of adult diabetes in Asia, the Middle East and Mexico and perhaps elsewhere including Africa and Latin America. These handicaps, together with the magnitude of epigenetic changes, emphasized the importance of a whole life course approach to nutritional policy making. Whole body calorimetric analyses of energy requirements allowed a complete revision of estimates for world food needs, and detailed clinical experience showed the value of redefining stunting and wasting in childhood and the value of BMI for classifying appropriate adult weights, underweight and obesity. Lithium tracer studies of dietary salt sources should also dictate priorities in population salt-reduction strategies. Metabolic and clinical studies combined with meticulous measures of population dietary intakes now suggest the need for far more radical steps to lower the dietary goals for both free sugars and total dietary fat, unencumbered by flawed cohort studies that neglect not only dietary errors but also the intrinsic inter-individual differences in metabolic responses to most nutrients.
Key Messages: Detailed clinical and metabolic analyses of physiological responses, combined with rigorous dietary assessment and preferably biomarkers of mechanistic pathways, should underpin a new approach not only to clinical care but also to the development of more radical nutritional policies. © 2018 S. Karger AG, Basel.
A method for the automated processing and analysis of images of ULVWF-platelet strings.
Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V
2013-01-01
We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings, providing data such as the number and length of strings, the number of platelets per string and the distance between consecutive platelets. The software reduces analysis time and, more importantly, removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
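Once platelet positions along a string have been extracted from an image, the per-string measurements mentioned above (platelet count, string length, inter-platelet distances) are straightforward. A minimal sketch; the function name and the coordinates are illustrative assumptions, not the paper's software.

```python
def string_metrics(platelet_positions):
    """Per-string metrics for ULVWF-platelet string analysis:
    platelet count, end-to-end string length, and the gap between
    consecutive platelets (positions along the string, e.g. in um)."""
    pos = sorted(platelet_positions)
    gaps = [b - a for a, b in zip(pos, pos[1:])]
    return {"platelets": len(pos),
            "length": pos[-1] - pos[0] if pos else 0.0,
            "gaps": gaps}

# Four platelets detected along one string (assumed coordinates).
m = string_metrics([0.0, 4.5, 9.0, 15.0])
```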
Pilkington, Emma; Keidel, James; Kendrick, Luke T.; Saddy, James D.; Sage, Karen; Robson, Holly
2017-01-01
This study examined patterns of neologistic and perseverative errors during word repetition in fluent jargon aphasia. The principal hypotheses accounting for jargon production indicate that poor activation of a target stimulus leads to weakly activated target phoneme segments, which are outcompeted at the phonological encoding level. Voxel-based lesion-symptom mapping studies of word repetition errors suggest a breakdown in the translation from auditory-phonological analysis to motor activation. Behavioral analyses of repetition data were used to analyse the target relatedness (Phonological Overlap Index; POI) of neologistic errors and patterns of perseveration in 25 individuals with jargon aphasia. Lesion-symptom analyses explored the relationship between neurological damage and jargon repetition in a group of 38 participants with aphasia. Behavioral results showed that neologisms produced by 23 jargon individuals contained greater degrees of target lexico-phonological information than predicted by chance, and that neologistic and perseverative production were closely associated. A significant relationship between jargon production and lesions to temporoparietal regions was identified. Region-of-interest regression analyses suggested that damage to the posterior superior temporal gyrus and superior temporal sulcus in combination was most predictive of a jargon aphasia profile. Taken together, these results suggest that poor phonological encoding, secondary to impairment in sensory-motor integration, alongside impairments in self-monitoring, results in jargon repetition. Insights for clinical management and future directions are discussed. PMID:28522967
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.
2003-01-01
We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.
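The idea of jointly fitting a gain and pixelized sky temperatures can be illustrated with a toy model d_t = g·(D_t + s_p(t)), where D_t is the known dipole signal: the model is linear in g and c_p = g·s_p, so a miniature version has a closed-form solution. This sketch is not the WMAP pipeline's iterative algorithm; all names and numbers are assumptions.

```python
def solve(a, b):
    """Solve a small linear system a x = b by Gauss-Jordan elimination
    with partial pivoting (pure-Python stand-in for a solver)."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_gain_and_sky(d, dipole, pix, npix):
    """Joint least-squares fit of gain g and per-pixel temperatures s_p
    to time-ordered data d_t = g*(dipole_t + s_{p(t)}), via the linear
    reparameterization c_p = g*s_p and normal equations."""
    nunk = 1 + npix
    ata = [[0.0] * nunk for _ in range(nunk)]
    atb = [0.0] * nunk
    for dt, Dt, p in zip(d, dipole, pix):
        row = [Dt] + [1.0 if q == p else 0.0 for q in range(npix)]
        for i in range(nunk):
            atb[i] += row[i] * dt
            for j in range(nunk):
                ata[i][j] += row[i] * row[j]
    x = solve(ata, atb)
    g = x[0]
    return g, [c / g for c in x[1:]]

# Synthetic noise-free check with assumed truth g=2, s=[0.5, -0.3].
dipole = [1.0, -1.0, 0.5, -0.5, 0.8, -0.8, 0.2, -0.2]
pix = [0, 0, 0, 0, 1, 1, 1, 1]
d = [2.0 * (Dt + [0.5, -0.3][p]) for Dt, p in zip(dipole, pix)]
g, sky = fit_gain_and_sky(d, dipole, pix, npix=2)
```

Because each pixel is observed at several distinct dipole values, the gain/sky degeneracy is broken and the fit recovers the truth exactly on noise-free data.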
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duncan, M. G.
The suitability of several temperature measurement schemes for an irradiation creep experiment is examined. It is found that the specimen resistance can be used to measure and control the sample temperature if compensated for resistance drift due to radiation and annealing effects. A modified Kelvin bridge is presented that allows compensation for resistance drift by periodically checking the sample resistance at a controlled ambient temperature. A new phase-insensitive method for detecting the bridge error signals is presented. The phase-insensitive detector is formed by averaging the magnitude of two bridge voltages. Although this method is substantially less sensitive to stray reactances in the bridge than conventional phase-sensitive detectors, it is sensitive to gain stability and linearity of the rectifier circuits. Accuracy limitations of rectifier circuits are examined both theoretically and experimentally in great detail. Both hand analyses and computer simulations of rectifier errors are presented. Finally, the design of a temperature control system based on sample resistance measurement is presented. The prototype is shown to control a 316 stainless steel sample to within a 0.15 °C short-term (10 s) and a 0.03 °C long-term (10 min) standard deviation at temperatures between 150 and 700 °C. The phase-insensitive detector typically contributes less than 10 ppm peak resistance measurement error (0.04 °C at 700 °C for 316 stainless steel or 0.005 °C at 150 °C for zirconium).
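The principle behind the phase-insensitive detector, that the full-wave-rectified average of a sinusoid depends only on its amplitude (it equals 2/pi times the amplitude) and not on its phase, can be checked numerically. The function below is an illustrative sketch, not the bridge circuit itself.

```python
import math

def rectified_mean(amplitude, phase, n=10000):
    """Full-wave-rectified average of a sampled sinusoid over one full
    period: approximately (2/pi)*amplitude, independent of phase --
    the principle behind averaging rectified bridge voltage magnitudes."""
    return sum(abs(amplitude * math.sin(2 * math.pi * k / n + phase))
               for k in range(n)) / n

a = rectified_mean(1.0, 0.0)
b = rectified_mean(1.0, 1.234)  # same amplitude, arbitrary phase
```

Both values agree with 2/pi to within the sampling error, which is why the detector output is insensitive to the phase shifts introduced by stray reactances.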
Quantifying Biomass and Bare Earth Changes from the Hayman Fire Using Multi-temporal Lidar
NASA Astrophysics Data System (ADS)
Stoker, J. M.; Kaufmann, M. R.; Greenlee, S. K.
2007-12-01
Small-footprint multiple-return lidar data collected in the Cheesman Lake property prior to the 2002 Hayman fire in Colorado provided an excellent opportunity to evaluate lidar as a tool to predict and analyze fire effects on both soil erosion and overstory structure. Re-measuring this area and applying change detection techniques allowed for analyses at a high level of detail. Our primary objectives focused on the use of change detection techniques using multi-temporal lidar data to: (1) evaluate the effectiveness of change detection to identify and quantify areas of erosion or deposition caused by post-fire rain events and rehab activities; (2) identify and quantify areas of biomass loss or forest structure change due to the Hayman fire; and (3) examine effects of pre-fire fuels and vegetation structure derived from lidar data on patterns of burn severity. While we were successful in identifying areas where changes occurred, the original error bounds on the variation in actual elevations made it difficult, if not misleading, to quantify volumes of material changed on a per-pixel basis. In order to minimize these variations in the two datasets, we investigated several correction and co-registration methodologies. The lessons learned from this project highlight the need for a high level of flight planning and understanding of errors in a lidar dataset in order to correctly estimate and report quantities of vertical change. Directly measuring vertical change using only lidar without ancillary information can produce errors that could make quantifications confusing, especially in areas with steep slopes.
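The per-pixel caveat above can be made concrete: when two DEMs are differenced, their vertical errors add in quadrature, so apparent changes smaller than roughly twice the propagated uncertainty are not reliably distinguishable from survey error. A minimal sketch; the 2-sigma threshold and the sample values are illustrative assumptions.

```python
import math

def elevation_change(z_after, z_before, sigma_after, sigma_before):
    """Per-pixel vertical change between two lidar DEMs, the propagated
    1-sigma uncertainty of the difference, and whether the change
    exceeds an (assumed) 2-sigma significance threshold."""
    dz = z_after - z_before
    sigma = math.sqrt(sigma_after**2 + sigma_before**2)
    return dz, sigma, abs(dz) > 2 * sigma

# A 0.30 m apparent loss with 0.15 m vertical error in each survey:
dz, sigma, significant = elevation_change(101.10, 101.40, 0.15, 0.15)
```

Here the 0.30 m apparent change is smaller than the 2-sigma threshold of about 0.42 m, so it cannot be attributed to real erosion.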
The Deference Due the Oracle: Computerized Text Analysis in a Basic Writing Class.
ERIC Educational Resources Information Center
Otte, George
1989-01-01
Describes how a computerized text analysis program can help students discover error patterns in their writing, and notes how students' responses to analyses can reduce errors and improve their writing. (MM)
Solar Cycle Variability and Surface Differential Rotation from Ca II K-line Time Series Data
NASA Astrophysics Data System (ADS)
Scargle, Jeffrey D.; Keil, Stephen L.; Worden, Simon P.
2013-07-01
Analysis of over 36 yr of time series data from the NSO/AFRL/Sac Peak K-line monitoring program elucidates 5 components of the variation of the 7 measured chromospheric parameters: (a) the solar cycle (period ~ 11 yr), (b) quasi-periodic variations (periods ~ 100 days), (c) a broadband stochastic process (wide range of periods), (d) rotational modulation, and (e) random observational errors, independent of (a)-(d). Correlation and power spectrum analyses elucidate periodic and aperiodic variation of these parameters. Time-frequency analysis illuminates periodic and quasi-periodic signals, details of frequency modulation due to differential rotation, and in particular elucidates the rather complex harmonic structure (a) and (b) at timescales in the range ~0.1-10 yr. These results using only full-disk data suggest that similar analyses will be useful for detecting and characterizing differential rotation in stars from stellar light curves such as those being produced by NASA's Kepler observatory. Component (c) consists of variations over a range of timescales, in the manner of a 1/f random process with a power-law slope index that varies in a systematic way. A time-dependent Wilson-Bappu effect appears to be present in the solar cycle variations (a), but not in the more rapid variations of the stochastic process (c). Component (d) characterizes differential rotation of the active regions. Component (e) is of course not characteristic of solar variability, but the fact that the observational errors are quite small greatly facilitates the analysis of the other components. The data analyzed in this paper can be found at the National Solar Observatory Web site http://nsosp.nso.edu/cak_mon/, or by file transfer protocol at ftp://ftp.nso.edu/idl/cak.parameters.
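The power-spectrum component of such an analysis can be illustrated with a minimal DFT periodogram that locates the strongest periodicity in an evenly sampled series. This toy function, applied to a synthetic sinusoid with an assumed 11-sample period, stands in for the far richer correlation and time-frequency analyses described above.

```python
import cmath
import math

def dominant_period(x, dt=1.0):
    """Return the period of the strongest Fourier component of an
    evenly sampled, mean-subtracted series (toy DFT periodogram)."""
    n = len(x)
    mean = sum(x) / n
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2 + 1):
        c = sum((x[t] - mean) * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        p = abs(c) ** 2
        if p > best_p:
            best_k, best_p = k, p
    return n * dt / best_k

# Synthetic series: period of 11 samples over 110 samples.
series = [math.sin(2 * math.pi * t / 11) for t in range(110)]
period = dominant_period(series)
```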
NASA Astrophysics Data System (ADS)
Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong
2017-07-01
This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of Hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques, the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. The ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance for both EnVar schemes. The sensitivity of analyses and forecasts to the two ensemble generation techniques is investigated in our current study. It is found that the EnVar system is rather stable with different ensemble update techniques in terms of its skill in improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations tend to include slightly less organized spatial structures than those in the ETKF-EnVar, whose perturbations are constructed more dynamically. Detailed diagnostics reveal that both EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location with the hurricane-specific error covariance. On average, the analysis and forecast from the ETKF-EnVar have slightly smaller errors than those from the EDA-EnVar in terms of track, intensity, and precipitation forecasts. Moreover, the ETKF-EnVar yields better forecasts when verified against conventional observations.
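At the heart of any EnVar scheme is a hybrid background-error covariance: a weighted blend of a static (climatological) covariance and the flow-dependent ensemble covariance. The sketch below illustrates only this blend, with assumed 0.5/0.5 weights and toy 2x2 matrices; it is not the WRF hybrid system.

```python
def hybrid_covariance(B_static, B_ensemble, beta_s, beta_e):
    """Hybrid background-error covariance: a weighted sum of the static
    and flow-dependent ensemble covariances (weights are tuning
    parameters, often constrained so beta_s + beta_e = 1)."""
    n = len(B_static)
    return [[beta_s * B_static[i][j] + beta_e * B_ensemble[i][j]
             for j in range(n)] for i in range(n)]

B_s = [[1.0, 0.0], [0.0, 1.0]]   # assumed static covariance
B_e = [[0.5, 0.2], [0.2, 0.5]]   # assumed ensemble covariance
B_h = hybrid_covariance(B_s, B_e, 0.5, 0.5)
```

The ensemble term contributes the off-diagonal (flow-dependent) correlations that a purely static covariance lacks, which is what lets the analysis adjust features such as the hurricane position.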
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.; Keil, Stephen L.; Worden, Simon P.
2014-01-01
Analysis of more than 36 years of time series of seven parameters measured in the NSO/AFRL/Sac Peak K-line monitoring program elucidates five components of the variation: (1) the solar cycle (period approx. 11 years), (2) quasi-periodic variations (periods approx. 100 days), (3) a broadband stochastic process (wide range of periods), (4) rotational modulation, and (5) random observational errors. Correlation and power spectrum analyses elucidate periodic and aperiodic variation of the chromospheric parameters. Time-frequency analysis illuminates periodic and quasi-periodic signals, details of frequency modulation due to differential rotation, and in particular elucidates the rather complex harmonic structure of (1) and (2) at time scales in the range approx. 0.1-10 years. These results using only full-disk data further suggest that similar analyses will be useful for detecting and characterizing differential rotation in stars from stellar light curves such as those being produced by NASA's Kepler observatory. Component (3) consists of variations over a range of timescales, in the manner of a 1/f random noise process. A time-dependent Wilson-Bappu effect appears to be present in the solar cycle variations (1), but not in the stochastic process (3). Component (4) characterizes differential rotation of the active regions, and (5) is of course not characteristic of solar variability, but the fact that the observational errors are quite small greatly facilitates the analysis of the other components. The recent data suggest that the current cycle is starting late and may be relatively weak. The data analyzed in this paper can be found at the National Solar Observatory web site http://nsosp.nso.edu/cak_mon/, or by file transfer protocol at ftp://ftp.nso.edu/idl/cak.parameters.
TPX: Contractor preliminary design review. Volume 3, Design and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-06-30
Several models have been formed for investigating the maximum electromagnetic loading and magnetic field levels associated with the Tokamak Physics eXperiment (TPX) superconducting Poloidal Field (PF) coils. The analyses have been performed to support the design of the fourteen individual hoop coils forming the PF system. The coils have been sub-divided into three coil systems consisting of the central solenoid (CS), PF5 coils, and the larger radius PF6 and PF7 coils. Various electromagnetic analyses have been performed to determine the electromagnetic loadings that the coils will experience during normal operating conditions, plasma disruptions, and fault conditions. The loadings are presented as net body forces acting on individual coils, spatial variations throughout the coil cross section, and force variations along the path of the conductor due to interactions with the TF coils. Three refined electromagnetic models of the PF coil system that include a turn-by-turn description of the fields and forces during a worst-case event are presented in this report. A global model including both the TF and PF systems was formed to obtain the force variations along the path of the PF conductors resulting from interactions with the TF currents. In addition to spatial variations, the loadings are further subdivided into time-varying and steady components so that structural fatigue issues can be addressed by designers and analysts. Other electromagnetic design issues, such as the impact of the detailed coil designs on field errors, are addressed in this report. Coil features that are analyzed include radial transitions via short jogs vs. spiral-type windings and the effects of layer-to-layer rotations (i.e., clocking) on the field errors.
NASA Astrophysics Data System (ADS)
Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin
2015-05-01
The linear time-invariant assumption for the determination of acoustic source characteristics, the source strength and the source impedance, in the frequency domain has proved reasonable in the design of exhaust systems. Different methods have been proposed for their identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator of the accuracy of the results. It was found that for a certain load length, the load resistance peaks at the frequencies where the load length equals an odd multiple of a quarter wavelength, and the error in source impedance identification is then at its maximum. Therefore, load impedances in the frequency ranges around these odd quarter-wavelength frequencies should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
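A minimal multi-load identification can be sketched from the source model p_l = p_s·Z_l/(Z_s + Z_l), which after rearrangement (p_l·Z_s - Z_l·p_s = -p_l·Z_l) is linear in the unknowns (Z_s, p_s) and can be solved by least squares over several loads. The load and source values below are assumed for a synthetic check; this is a sketch of the general approach, not the paper's error-estimation function.

```python
def identify_source(loads, pressures):
    """Least-squares identification of source impedance Z_s and source
    strength p_s from per-load pressures, via the linear form
    p_l*Z_s - Z_l*p_s = -p_l*Z_l of p_l = p_s*Z_l/(Z_s + Z_l).
    All quantities are complex; solves the 2x2 normal equations."""
    a11 = a12 = a22 = v1 = v2 = 0j
    for zl, pl in zip(loads, pressures):
        r1, r2, b = pl, -zl, -pl * zl      # one row of A x = b
        a11 += r1.conjugate() * r1
        a12 += r1.conjugate() * r2
        a22 += r2.conjugate() * r2
        v1 += r1.conjugate() * b
        v2 += r2.conjugate() * b
    a21 = a12.conjugate()
    det = a11 * a22 - a12 * a21
    zs = (v1 * a22 - a12 * v2) / det       # Cramer's rule on A^H A x = A^H b
    ps = (a11 * v2 - a21 * v1) / det
    return zs, ps

# Synthetic four-load check with assumed truth Z_s = 2+1j, p_s = 3.
Zs_true, ps_true = 2 + 1j, 3 + 0j
loads = [1 + 0j, 2j, 1 + 1j, 3 + 0j]
pressures = [ps_true * zl / (Zs_true + zl) for zl in loads]
zs, ps = identify_source(loads, pressures)
```

With noise-free pressures and four distinct loads, the least-squares solution recovers the assumed source exactly; the paper's analysis concerns how measurement error in this step grows near the quarter-wavelength resonances.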
Ballesteros Peña, Sendoa
2013-04-01
To estimate the frequency of therapeutic errors and to evaluate the diagnostic accuracy of automated external defibrillators in the recognition of shockable rhythms. A retrospective descriptive study. Nine basic life support units from Biscay (Spain). A total of 201 patients with cardiac arrest were included, from 2006 to 2011. The suitability of treatment (shock or no shock) after each analysis was assessed, and errors were identified. The sensitivity, specificity and predictive values with 95% confidence intervals were then calculated. A total of 811 electrocardiographic rhythm analyses were obtained, of which 120 (14.1%), from 30 patients, corresponded to shockable rhythms. Sensitivity and specificity for appropriate automated external defibrillator management of a shockable rhythm were 85% (95% CI, 77.5% to 90.3%) and 100% (95% CI, 99.4% to 100%), respectively. Positive and negative predictive values were 100% (95% CI, 96.4% to 100%) and 97.5% (95% CI, 96% to 98.4%), respectively. There were 18 (2.2%; 95% CI, 1.3% to 3.5%) errors associated with defibrillator management, all relating to cases of shockable rhythms that were not shocked. One error was operator dependent, 6 were defibrillator dependent (caused by interaction with pacemakers), and 11 were unclassified. Automated external defibrillators have a very high specificity and moderately high sensitivity. There are few operator-dependent errors. Implanted pacemakers interfere with defibrillator analyses. Copyright © 2012 Elsevier España, S.L. All rights reserved.
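The reported accuracy figures follow from a 2x2 confusion table that can be reconstructed from the abstract: 120 shockable analyses of which 18 were not shocked (so 102 true positives), and 691 non-shockable analyses with no inappropriate shocks. The function is an illustrative sketch of the standard definitions.

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table
    (shock advised vs. shockable rhythm present)."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

# Counts reconstructed from the abstract's figures.
acc = diagnostic_accuracy(tp=102, fn=18, tn=691, fp=0)
```

This reproduces the abstract's point estimates: sensitivity 85%, specificity 100%, PPV 100% and NPV 97.5% (691/709).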
A decade of individual participant data meta-analyses: A review of current practice.
Simmonds, Mark; Stewart, Gavin; Stewart, Lesley
2015-11-01
Individual participant data (IPD) systematic reviews and meta-analyses are often considered to be the gold standard for meta-analysis. In the ten years since the first review of the methodology and reporting practice of IPD reviews was published, much has changed in the field. This paper investigates current reporting and statistical practice in IPD systematic reviews. A systematic review was performed to identify systematic reviews that collected and analysed IPD. Data were extracted from each included publication on a variety of issues related to the reporting of the IPD review process and the statistical methods used. There has been considerable growth in the use of "one-stage" methods to perform IPD meta-analyses. The majority of reviews consider at least one covariate other than the primary intervention, either using subgroup analysis or including covariates in one-stage regression models. Random-effects analyses, however, are not often used. Reporting of review methods was often limited, with few reviews presenting a risk-of-bias assessment. Details on issues specific to the use of IPD were little reported, including how IPD were obtained; how data were managed and checked for consistency and errors; and for how many studies and participants IPD were sought and obtained. While the last ten years have seen substantial changes in how IPD meta-analyses are performed, there remains considerable scope for improving the quality of reporting, both for the process of IPD systematic reviews and for the statistical methods employed in them. It is to be hoped that the publication of the PRISMA-IPD guidelines specific to IPD reviews will improve reporting in this area. Copyright © 2015 Elsevier Inc. All rights reserved.
A stellar tracking reference system
NASA Technical Reports Server (NTRS)
Klestadt, B.
1971-01-01
A stellar attitude reference system concept for satellites was studied which promises to permit continuous precision pointing of payloads with accuracies of 0.001 degree without the use of gyroscopes. It is accomplished with the use of a single, clustered star tracker assembly mounted on a non-orthogonal, two gimbal mechanism, driven so as to unwind satellite orbital and orbit precession rates. A set of eight stars was found which assures the presence of an adequate inertial reference on a continuous basis in an arbitrary orbit. Acquisition and operational considerations were investigated and inherent reference redundancy/reliability was established. Preliminary designs for the gimbal mechanism, its servo drive, and the star tracker cluster with its associated signal processing were developed for a baseline sun-synchronous, noon-midnight orbit. The functions required of the onboard computer were determined and the equations to be solved were found. In addition detailed error analyses were carried out, based on structural, thermal and other operational considerations.
Data inversion algorithm development for the Halogen Occultation Experiment
NASA Technical Reports Server (NTRS)
Gordley, Larry L.; Mlynczak, Martin G.
1986-01-01
The successful retrieval of atmospheric parameters from radiometric measurements requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. A considerable amount of time was therefore spent on instrument characterization, in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time constants and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for the Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations is necessarily complex and must also be fast. A scheme for meeting these requirements was defined, and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.
Cartographic quality of ERTS-1 images
NASA Technical Reports Server (NTRS)
Welch, R. I.
1973-01-01
Analyses of simulated and operational ERTS images have provided initial estimates of resolution, ground resolution, detectability thresholds and other measures of image quality of interest to earth scientists and cartographers. Based on these values, including an approximate ground resolution of 250 meters for both RBV and MSS systems, the ERTS-1 images appear suited to the production and/or revision of planimetric and photo maps of 1:500,000 scale and smaller for which map accuracy standards are compatible with the imaged detail. Thematic mapping, although less constrained by map accuracy standards, will be influenced by measurement thresholds and errors which have yet to be accurately determined for ERTS images. This study also indicates the desirability of establishing a quantitative relationship between image quality values and map products which will permit both engineers and cartographers/earth scientists to contribute to the design requirements of future satellite imaging systems.
Smart spectroscopy sensors: II. Narrow-band laser systems
NASA Astrophysics Data System (ADS)
Matharoo, Inderdeep; Peshko, Igor
2013-03-01
This paper describes the principles of operation of a miniature multifunctional optical sensory system based on laser technology and spectroscopic principles of analysis. The operation of the system as a remote oxygen sensor has been demonstrated. The multi-component alarm sensor has been designed to recognise gases and to measure gas concentration (O2, CO2, CO, CH4, N2O, C2H2, HI, OH radicals and H2O vapour, including semi-heavy water), temperature, pressure, humidity, and background radiation from the environment. Besides gas sensing, the same diode lasers are used for range-finding and to provide sensor self-calibration. The complete system operates as an inhomogeneous sensory network: the laser sensors are capable of using information received from environmental sensors for improving accuracy and reliability of gas concentration measurement. The sources of measurement errors associated with hardware and algorithms of operation and data processing have been analysed in detail.
NASA Astrophysics Data System (ADS)
Manojlović, Stojadin M.; Barbarić, Žarko P.; Mitrović, Srđan T.
2015-06-01
A new tracking design for laser systems with different arrangements of a quadrant photodetector, based on the principle of active disturbance rejection control, is suggested. Detailed models of the quadrant photodetector with standard add-subtract, difference-over-sum and diagonal-difference-over-sum algorithms for the displacement signals are included in the control loop. Target motion, photodetector non-linearity, parameter perturbations and exterior disturbances are treated as a total disturbance. Active disturbance rejection controllers with linear extended state observers for total disturbance estimation and rejection are designed. The proposed methods are analysed in the frequency domain to quantify their stability characteristics and disturbance rejection performance. It is shown through simulations that tracking errors are effectively compensated, keeping the laser spot near the centre of the quadrant photodetector, where the mentioned algorithms have the highest sensitivity; this enables tracking of manoeuvring targets with high accuracy.
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
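The syndrome idea underlying the decoding technique above can be illustrated on a toy code: re-encode the received information bits and compare with the received parity; a decoder then searches an error trellis driven by the syndrome rather than the full code trellis. This is a hedged sketch using a hypothetical systematic rate-1/2 code, not the paper's rate-3/4 Wyner-Ash code:

```python
def conv_parity(bits, taps=(1, 1, 1)):
    """Parity stream of a toy systematic rate-1/2 convolutional encoder:
    each parity bit is the mod-2 sum of the current and two previous
    information bits (selected by `taps`)."""
    s0 = s1 = 0
    parity = []
    for b in bits:
        parity.append((taps[0] * b + taps[1] * s0 + taps[2] * s1) % 2)
        s0, s1 = b, s0
    return parity

def syndrome(info_rx, parity_rx, taps=(1, 1, 1)):
    """Re-encode the received information bits and XOR with the received
    parity; a nonzero syndrome flags transmission errors."""
    return [a ^ b for a, b in zip(conv_parity(info_rx, taps), parity_rx)]
```

For an error-free word the syndrome is all zeros; flipping a received bit yields a nonzero syndrome pattern localized near the error, which is what makes the error-trellis search so much smaller than Viterbi decoding over the code trellis.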
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Analysis on the dynamic error for optoelectronic scanning coordinate measurement network
NASA Astrophysics Data System (ADS)
Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie
2018-01-01
Large-scale dynamic three-dimensional coordinate measurement is in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in the joining of large components, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capacity, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts the application. The workshop measurement and positioning system is a representative system which can, in theory, realize dynamic measurement. In this paper we conduct a deep investigation of dynamic error sources and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, simulations of the dynamic error are carried out. The dynamic error is quantified, and rules of volatility and periodicity have been found. Dynamic error characteristics are shown in detail. The research results lay a foundation for further accuracy improvement.
A new method for the analysis of fire spread modeling errors
Francis M. Fujioka
2002-01-01
Fire spread models have a long history, and their use will continue to grow as they evolve from a research tool to an operational tool. This paper describes a new method to analyse two-dimensional fire spread modeling errors, particularly to quantify the uncertainties of fire spread predictions. Measures of error are defined from the respective spread distances of...
ERIC Educational Resources Information Center
Weinzierl, Christiane; Kerkhoff, Georg; van Eimeren, Lucia; Keller, Ingo; Stenneken, Prisca
2012-01-01
Unilateral spatial neglect frequently involves a lateralised reading disorder, neglect dyslexia (ND). Reading of single words in ND is characterised by left-sided omissions and substitutions of letters. However, it is unclear whether the distribution of error types and positions within a word shows a unique pattern of ND when directly compared to…
Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M
2013-11-01
Underlying systems factors have been seen to be crucial contributors to the occurrence of medication errors. By understanding the causes of these errors, the most appropriate interventions can be designed and implemented to minimise their occurrence. This study aimed to systematically review and appraise empirical evidence relating to the causes of medication administration errors (MAEs) in hospital settings. Nine electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, ASSIA, PsycINFO, British Nursing Index, CINAHL, Health Management Information Consortium and Social Science Citations Index) were searched between 1985 and May 2013. Inclusion and exclusion criteria were applied to identify eligible publications through title analysis followed by abstract and then full text examination. English language publications reporting empirical data on causes of MAEs were included. Reference lists of included articles and relevant review papers were hand searched for additional studies. Studies were excluded if they did not report data on specific MAEs, used accounts from individuals not directly involved in the MAE concerned or were presented as conference abstracts with insufficient detail. A total of 54 unique studies were included. Causes of MAEs were categorised according to Reason's model of accident causation. Studies were assessed to determine relevance to the research question and how likely the results were to reflect the potential underlying causes of MAEs based on the method(s) used. Slips and lapses were the most commonly reported unsafe acts, followed by knowledge-based mistakes and deliberate violations. 
Error-provoking conditions influencing administration errors included inadequate written communication (prescriptions, documentation, transcription), problems with medicines supply and storage (pharmacy dispensing errors and ward stock management), high perceived workload, problems with ward-based equipment (access, functionality), patient factors (availability, acuity), staff health status (fatigue, stress) and interruptions/distractions during drug administration. Few studies sought to determine the causes of intravenous MAEs. A number of latent pathway conditions were less well explored, including local working culture and high-level managerial decisions. Causes were often described superficially; this may be related to the use of quantitative surveys and observation methods in many studies, limited use of established error causation frameworks to analyse data and a predominant focus on issues other than the causes of MAEs among studies. As only English language publications were included, some relevant studies may have been missed. Limited evidence from studies included in this systematic review suggests that MAEs are influenced by multiple systems factors, but if and how these arise and interconnect to lead to errors remains to be fully determined. Further research with a theoretical focus is needed to investigate the MAE causation pathway, with an emphasis on ensuring interventions designed to minimise MAEs target recognised underlying causes of errors to maximise their impact.
Fault-Tolerant Signal Processing Architectures with Distributed Error Control.
1985-01-01
Zm, Revisited," Information and Control, Vol. 37, pp. 100-104, 1978. 13. J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications. ... However, the newer results concerning applications of real codes are still in the publication process. Hence, two very detailed appendices are included to ... significant entities to be protected. While the distributed finite field approach afforded adequate protection, its applicability was restricted and
A Study on Multi-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. In numerical weather prediction there are background errors at different scales, and interactions among them. However, the influence of these errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances influenced by multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Information on errors whose scales are larger or smaller than the given ones is then introduced, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics, each influenced by error information at different scales, reveals that the background error variances are enhanced, particularly at large scales and higher levels, when information on larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when information on smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information on larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when information on larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations.
The three covariances obtained above are each used in a data assimilation and model forecast system, and analysis-forecast cycles are conducted for a period of one month. Comparison of both the analyses and the forecasts from this system shows that the trends in analysis increments, as information on different scale errors is introduced, are consistent with the trends in the variances and correlations of the background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are significantly enhanced at large scales at lower levels, moistening the southern part of the analyses. This humidification helps correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales, owing to the amplification (diminution) of the intensity and area of precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
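The NMC method referred to above estimates background error covariances from differences between pairs of forecasts valid at the same time (for example, 48 h minus 24 h forecasts). A minimal synthetic sketch, where the lead times, state dimension, and noise model are illustrative assumptions rather than the article's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
nstate, nsamples = 50, 200   # hypothetical state size and number of forecast pairs

# Pairs of forecasts valid at the same time; here synthetic, with the
# longer-lead forecast differing from the shorter one by random "error" only.
f24 = rng.normal(size=(nsamples, nstate))
f48 = f24 + 0.5 * rng.normal(size=(nsamples, nstate))

d = f48 - f24                  # forecast differences, proxy for background error
d -= d.mean(axis=0)            # remove the mean difference
B = d.T @ d / (nsamples - 1)   # sample background error covariance estimate
```

The resulting matrix B is symmetric, and its diagonal reflects the assumed error variance (0.25 here), which is the quantity the article examines at different scales.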
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-10-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we are applying a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. 
Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
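The auto-/cross-covariance approach described can be sketched numerically: the flux is the cross-covariance of wind and concentration at the (prescribed) time lag, and the random error due to noise can be estimated from the scatter of the cross-covariance far from its peak. A synthetic sketch; the series length, noise level, and lag window are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 36000                                     # e.g. 1 h of 10 Hz data
w = rng.normal(size=n)                        # synthetic vertical wind
c = 0.3 * w + rng.normal(scale=2.0, size=n)   # noisy scalar concentration

def cross_cov(w, c, lag):
    """Cross-covariance of w and c at an integer sample lag (lag >= 0)."""
    wd, cd = w - w.mean(), c - c.mean()
    return float(np.mean(wd[: len(wd) - lag] * cd[lag:]))

flux = cross_cov(w, c, 0)                     # flux at a prescribed zero lag
# Random-error estimate from the scatter of the covariance far from the peak:
tail = np.array([cross_cov(w, c, k) for k in range(2000, 2500)])
sigma_noise = float(tail.std())
```

Searching for the covariance maximum instead of prescribing the lag would, with this much analyser noise, systematically bias the flux high, which is the effect the paper quantifies.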
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-03-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here we apply a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. 
Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
Identification of factors associated with diagnostic error in primary care.
Minué, Sergio; Bermúdez-Tamayo, Clara; Fernández, Alberto; Martín-Martín, José Jesús; Benítez, Vivian; Melguizo, Miguel; Caro, Araceli; Orgaz, María José; Prados, Miguel Angel; Díaz, José Enrique; Montoro, Rafael
2014-05-12
Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians' perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. 
This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process.
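The sample-size calculation described follows the standard normal-approximation formula for estimating a proportion. A minimal sketch; note that with the abstract's stated 20% error rate and 3% sampling error the plain formula gives 683, so the reported 384 presumably reflects additional assumptions or corrections not given in the abstract:

```python
from math import ceil

def sample_size(p, margin, z=1.96):
    """n = z^2 * p * (1 - p) / margin^2: episodes needed to estimate a
    proportion p to within +/- margin at ~95% confidence (normal
    approximation, no finite-population or design corrections)."""
    return ceil(z**2 * p * (1 - p) / margin**2)

# e.g. sample_size(0.2, 0.03) -> 683; sample_size(0.2, 0.04) -> 385
```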
Identification of factors associated with diagnostic error in primary care
2014-01-01
Background Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason’s taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Methods Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician’s initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians’ perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. 
Discussion This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process. PMID:24884984
Yoshikawa, Munemitsu; Yamashiro, Kenji; Miyake, Masahiro; Oishi, Maho; Akagi-Kurashige, Yumiko; Kumagai, Kyoko; Nakata, Isao; Nakanishi, Hideo; Oishi, Akio; Gotoh, Norimoto; Yamada, Ryo; Matsuda, Fumihiko; Yoshimura, Nagahisa
2014-10-21
We investigated the association between refractive error in a Japanese population and myopia-related genes identified in two recent large-scale genome-wide association studies. Single-nucleotide polymorphisms (SNPs) in 51 genes that were reported by the Consortium for Refractive Error and Myopia and/or the 23andMe database were genotyped in 3712 healthy Japanese volunteers from the Nagahama Study using HumanHap610K Quad, HumanOmni2.5M, and/or HumanExome Arrays. To evaluate the association between refractive error and recently identified myopia-related genes, we used three approaches to perform quantitative trait locus analyses of mean refractive error in both eyes of the participants: per-SNP, gene-based top-SNP, and gene-based all-SNP analyses. Association plots of successfully replicated genes also were investigated. In our per-SNP analysis, eight myopia gene associations were replicated successfully: GJD2, RASGRF1, BICC1, KCNQ5, CD55, CYP26A1, LRRC4C, and B4GALNT2. Seven additional gene associations were replicated in our gene-based analyses: GRIA4, BMP2, QKI, BMP4, SFRP1, SH3GL2, and EHBP1L1. The signal strength of the reported SNPs and their tagging SNPs increased after considering different linkage disequilibrium patterns across ethnicities. Although two previous studies suggested strong associations between PRSS56, LAMA2, TOX, and RDH5 and myopia, we could not replicate these results. Our results confirmed the significance of the myopia-related genes reported previously and suggested that gene-based replication analyses are more effective than per-SNP analyses. Our comparison with two previous studies suggested that BMP3 SNPs cause myopia primarily in Caucasian populations, while they may exhibit protective effects in Asian populations. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Multi-GNSS signal-in-space range error assessment - Methodology and results
NASA Astrophysics Data System (ADS)
Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André
2018-06-01
The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
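The SISRE combination of orbit and clock errors discussed above is commonly computed with constellation-specific weight factors that account for the geometry of the service volume. A minimal sketch; the GPS-like weights below are illustrative assumptions, and other constellations use different values:

```python
import math

def sisre(dr, da, dc, dclk, w_r=0.98, w_ac_sq=1.0 / 49.0):
    """Global-average signal-in-space range error (m) from radial (dr),
    along-track (da), cross-track (dc) orbit errors and the clock error
    (dclk), all in metres. The radial weight w_r and along/cross weight
    w_ac_sq are GPS-like values assumed for illustration."""
    return math.sqrt((w_r * dr - dclk) ** 2 + w_ac_sq * (da**2 + dc**2))
```

With these weights a 1 m radial error alone maps to 0.98 m of range error, while along-track and cross-track errors are strongly de-weighted, which is why clock and radial orbit quality dominate the constellation averages quoted above.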
Improved parallel image reconstruction using feature refinement.
Cheng, Jing; Jia, Sen; Ying, Leslie; Liu, Yuanyuan; Wang, Shanshan; Zhu, Yanjie; Li, Ye; Zou, Chao; Liu, Xin; Liang, Dong
2018-07-01
The aim of this study was to develop a novel feature-refinement MR reconstruction method for highly undersampled multichannel acquisitions that improves image quality and preserves more detailed information. The feature refinement technique, which uses a feature descriptor to pick up useful features from the residual image discarded by sparsity constraints, is applied to preserve the details of the image in compressed sensing and parallel imaging in MRI (CS-pMRI). A texture descriptor and a structure descriptor, recognizing different types of features, are combined to form the feature descriptor. Feasibility of the feature refinement was validated using three different multicoil reconstruction methods on in vivo data. Experimental results show that reconstruction methods with feature refinement improve the quality of the reconstructed image and restore image details more accurately than the original methods, which is also verified by lower values of the root mean square error and high frequency error norm. A simple and effective way to preserve more useful detailed information in CS-pMRI is proposed. This technique can effectively improve the reconstruction quality and has superior performance in terms of detail preservation compared with the original version without feature refinement. Magn Reson Med 80:211-223, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.; ...
2018-02-16
We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.
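The spatial-representativeness check described above can be sketched numerically: average the hourly screen-temperature bias into a 24-hour diurnal cycle at each location, then correlate the cycles. The arrays below are synthetic illustrations, not CAUSES data:

```python
import numpy as np

def diurnal_cycle(hourly_bias):
    """Average each hour of day over all days -> 24-element mean cycle."""
    return hourly_bias.reshape(-1, 24).mean(axis=0)

# Five days of hourly bias (K) at SGP and at another grid point; both are
# toy series sharing the same afternoon-peaked shape with different scales.
hours = np.arange(24)
sgp   = np.tile(2.0 + 1.5 * np.sin(2 * np.pi * (hours - 14) / 24), 5)
point = np.tile(1.0 + 0.8 * np.sin(2 * np.pi * (hours - 14) / 24), 5)

# Correlation of the two diurnal cycles; values near 1 indicate that the
# error at this point evolves in step with the error at SGP.
r = np.corrcoef(diurnal_cycle(sgp), diurnal_cycle(point))[0, 1]
```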
NASA Astrophysics Data System (ADS)
Morcrette, C. J.; Van Weverberg, K.; Ma, H.-Y.; Ahlgrimm, M.; Bazile, E.; Berg, L. K.; Cheng, A.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Lee, W.-S.; Liu, Y.; Mellul, L.; Merryfield, W. J.; Qian, Y.; Roehrig, R.; Wang, Y.-C.; Xie, S.; Xu, K.-M.; Zhang, C.; Klein, S.; Petch, J.
2018-03-01
Solar and Magnetic Attitude Determination for Small Spacecraft
NASA Technical Reports Server (NTRS)
Woodham, Kurt; Blackman, Kathie; Sanneman, Paul
1997-01-01
During the Phase B development of the NASA New Millennium Program (NMP) Earth Orbiter-1 (EO-1) spacecraft, detailed analyses were performed for on-board attitude determination using the Sun and the Earth's magnetic field. This work utilized the TRMM 'Contingency Mode' as a starting point but concentrated on implementation for a small spacecraft without a high performance mechanical gyro package. The analyses and simulations performed demonstrate a geographic dependence due to diurnal variations in the Earth magnetic field with respect to the Sun synchronous, nearly polar orbit. Sensitivity to uncompensated residual magnetic fields of the spacecraft and field modeling errors is shown to be the most significant obstacle for maximizing performance. Performance has been evaluated with a number of inertial reference units and various mounting orientations for the two-axis Fine Sun Sensors. Attitude determination accuracy using the six state Kalman Filter executing at 2 Hz is approximately 0.2 deg, 3-sigma, per axis. Although EO-1 was subsequently driven to a stellar-based attitude determination system as a result of tighter pointing requirements, solar/magnetic attitude determination is demonstrated to be applicable to a range of small spacecraft with medium precision pointing requirements.
Alignment methods: strategies, challenges, benchmarking, and comparative overview.
Löytynoja, Ari
2012-01-01
Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is, errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected, and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error, but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution, but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and are unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods, and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.
Hette Tronquart, Nicolas; Mazeas, Laurent; Reuilly-Manenti, Liana; Zahm, Amandine; Belliard, Jérôme
2012-07-30
Dorsal white muscle is the standard tissue analysed in fish trophic studies using stable isotope analyses. However, sampling white muscle often implies the sacrifice of fish. Thus, we examined whether the non-lethal sampling of fin tissue can substitute for muscle sampling in food web studies. Analysing muscle and fin δ(15)N and δ(13)C values of 466 European freshwater fish (14 species) with an elemental analyser coupled with an isotope ratio mass spectrometer, we compared the isotope values of the two tissues. Correlations between fin and muscle isotope ratios were examined for all fish together and specifically for 12 species. We further proposed four methods of assessing muscle from fin isotope ratios and estimated the errors made using these muscle surrogates. Despite significant differences between the isotope values of the two tissues, fin and muscle isotopic signals are strongly correlated. Muscle values estimated from raw fin isotope ratios (1st method) carry an error of ca. 1‰ for both isotopes. In comparison, specific (2nd method) or general (3rd method) correlations provide meaningful corrections of fin isotope ratios (errors <0.6‰). On the other hand, relationships established for Australian tropical fish (4th method) give only poor muscle estimates (errors >0.8‰). There is little chance that a global model can be created. However, the 2nd and 3rd methods of estimating muscle values from fin isotope ratios should provide an acceptable level of error for studies of European freshwater food webs. We thus recommend that future studies use fin tissue as a non-lethal surrogate for muscle. Copyright © 2012 John Wiley & Sons, Ltd.
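The surrogate methods compare roughly as in this sketch, where the paired δ15N values are invented for illustration and the "general correlation" is a simple least-squares fit standing in for the paper's fitted relationships:

```python
import numpy as np

# Hypothetical paired fin/muscle d15N values (per mil) for a few fish.
fin    = np.array([8.1, 9.4, 10.2, 11.0, 12.3])
muscle = np.array([9.0, 10.1, 11.1, 11.8, 13.2])

# 1st method: use raw fin values directly as muscle surrogates.
err_raw = np.mean(np.abs(fin - muscle))

# 3rd-method analogue: a general linear correction fitted to the pairs,
# then applied to predict muscle values from fin values.
slope, intercept = np.polyfit(fin, muscle, 1)
predicted = slope * fin + intercept
err_corrected = np.mean(np.abs(predicted - muscle))
```

With a systematic fin-muscle offset, the fitted correction removes most of the raw-surrogate error, mirroring the ~1‰ versus <0.6‰ comparison reported above.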
Fottrell, Edward; Byass, Peter; Berhane, Yemane
2008-03-25
As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. 
The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
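The error-injection experiment can be sketched as follows; the field names and corruption rules are hypothetical simplifications of the programmes described above:

```python
import random

def perturb(records, fields, error_rate, seed=0):
    """Return a copy of `records` with roughly `error_rate` of the chosen
    fields randomly corrupted (sketch of a Butajira-style sensitivity
    experiment; fields and corruption rules are illustrative)."""
    rng = random.Random(seed)
    corrupted = []
    for rec in records:
        rec = dict(rec)  # do not mutate the original record
        for f in fields:
            if rng.random() < error_rate:
                if f == "sex":
                    rec[f] = "M" if rec[f] == "F" else "F"
                elif f == "age":
                    rec[f] = max(0, rec[f] + rng.randint(-10, 10))
        corrupted.append(rec)
    return corrupted

# Toy surveillance dataset; compare analyses on `records` vs `noisy`.
records = [{"sex": "M" if i % 2 else "F", "age": 30 + i % 40} for i in range(1000)]
noisy = perturb(records, ["sex", "age"], error_rate=0.2)
```

Re-running the population pyramids and regression models on the perturbed copies, and comparing against the untouched "gold standard", reproduces the logic of the robustness test.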
Simulating a transmon implementation of the surface code, Part I
NASA Astrophysics Data System (ADS)
Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo
Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
Measurements of the toroidal torque balance of error field penetration locked modes
Shiraki, Daisuke; Paz-Soldan, Carlos; Hanson, Jeremy M.; ...
2015-01-05
Here, detailed measurements from the DIII-D tokamak of the toroidal dynamics of error field penetration locked modes under the influence of slowly evolving external fields enable study of the toroidal torques on the mode, including its interaction with the intrinsic error field. The error field in these low density Ohmic discharges is well known based on the mode penetration threshold, allowing resonant and non-resonant torque effects to be distinguished. These m/n = 2/1 locked modes are found to be well described by a toroidal torque balance between the resonant interaction with n = 1 error fields, and a viscous torque in the electron diamagnetic drift direction which is observed to scale as the square of the perturbed field due to the island. Fitting to this empirical torque balance allows a time-resolved measurement of the intrinsic error field of the device, providing evidence for a time-dependent error field in DIII-D due to ramping of the Ohmic coil current.
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect the damage caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and that the error is then detected in a random amount of time after its occurrence. As a remedy for this problem, a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between the occurrence of an error and the moment of its detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
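The flavour of such a model can be illustrated with a Monte-Carlo sketch, assuming Poisson fault arrivals and exponentially distributed error latency; these are modelling assumptions for illustration, not the paper's exact formulation:

```python
import random

def p_unreliable(t_run, fault_rate, mean_latency, trials=20000, seed=1):
    """Estimate the probability that a run of length t_run finishes with a
    latent (still undetected) error.

    Assumptions (illustrative): the first fault arrives after an
    exponential time with rate `fault_rate`, and its error latency is
    exponential with mean `mean_latency`.
    """
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        t_fault = rng.expovariate(fault_rate)
        if t_fault < t_run:                      # a fault occurred mid-run
            latency = rng.expovariate(1.0 / mean_latency)
            if t_fault + latency > t_run:        # still undetected at the end
                bad += 1
    return bad / trials

# Long latency: nearly every in-run fault escapes detection.
p_slow = p_unreliable(1.0, 2.0, 1e6)
# Near-instant detection: almost no latent errors survive to the end.
p_fast = p_unreliable(1.0, 2.0, 1e-6)
```

The contrast between `p_slow` and `p_fast` is the point of the latency measure: the same fault process yields very different probabilities of an unreliable result depending on the detection mechanism.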
A Monte-Carlo Bayesian framework for urban rainfall error modelling
NASA Astrophysics Data System (ADS)
Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian
2016-04-01
Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology, including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records, rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial-temporal resolution). Moreover, rainfall error models have mostly been developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records alone through urban drainage models, and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models, originally developed for large scales, have been tested at urban scales [2] and have been shown to fail to capture small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations.
In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). This model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using, as a case study, a detailed rainfall and flow dataset and a carefully verified urban drainage model of a small (~9 km2) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.
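The ensemble idea can be sketched as follows, replacing the full Bayesian error model with a zero-mean Gaussian residual of spatially uniform standard deviation; this is a deliberate simplification for illustration, and all numbers are toy values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Merged (radar + gauge) rainfall field on a tiny 2x2 grid, mm/h.
merged = np.array([[1.2, 0.8],
                   [0.5, 2.0]])

# Residual-error model: zero-mean Gaussian, sigma in mm/h (assumption).
sigma = 0.3

# Each ensemble member = merged field + one sample of the residual error.
n_members = 100
ensemble = merged + rng.normal(0.0, sigma, size=(n_members,) + merged.shape)
ensemble = np.clip(ensemble, 0.0, None)  # rainfall cannot be negative
```

Feeding each member through the drainage model yields an ensemble of runoff simulations whose spread bounds the measured flow peaks, as described above.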
SBION: A Program for Analyses of Salt-Bridges from Multiple Structure Files.
Gupta, Parth Sarthi Sen; Mondal, Sudipta; Mondal, Buddhadev; Islam, Rifat Nawaz Ul; Banerjee, Shyamashree; Bandyopadhyay, Amal K
2014-01-01
Salt-bridges and network salt-bridges are specific electrostatic interactions that contribute to the overall stability of proteins. In the hierarchical protein folding model, these interactions play a crucial role in the nucleation process. The advent and growth of the protein structure database and its availability in the public domain have created an urgent need for context-dependent rapid analysis of salt-bridges. While such analyses of a single protein are cumbersome and time-consuming, batch analyses need efficient software for a rapid topological scan of a large number of proteins, extracting details on (i) the fraction of salt-bridge residues (acidic and basic), (ii) chain-specific intra-molecular salt-bridges, (iii) inter-molecular salt-bridges (protein-protein interactions) in all possible binary combinations, (iv) network salt-bridges, and (v) the secondary structure distribution of salt-bridge residues. To the best of our knowledge, such efficient software is not available in the public domain. At this juncture, we have developed a program, SBION, which can perform all the above-mentioned computations for any number of proteins with any number of chains at any given ion-pair distance. It is highly efficient, fast, error-free and user friendly. SBION possesses potential for applications in the field of structural and comparative bioinformatics studies. SBION is freely available for non-commercial/academic institutions on formal request to the corresponding author (akbanerjee@biotech.buruniv.ac.in).
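The core distance scan behind such an analysis can be sketched as follows, assuming charged-group atoms have already been extracted from a structure file. Residue and atom naming loosely follows PDB conventions, the 4 Å cutoff is a common but adjustable choice, and SBION's actual criteria may differ:

```python
import math

# Side-chain atom-name prefixes of the charged groups (simplified).
ACIDIC = {"ASP": "OD", "GLU": "OE"}
BASIC  = {"LYS": "NZ", "ARG": "NH", "HIS": "NE"}

def salt_bridges(atoms, cutoff=4.0):
    """Return (acidic_atom, basic_atom, distance) triples within `cutoff` A.

    `atoms` is a list of (residue_name, chain_id, atom_name, (x, y, z)).
    """
    acid = [a for a in atoms if a[0] in ACIDIC and a[2].startswith(ACIDIC[a[0]])]
    base = [a for a in atoms if a[0] in BASIC and a[2].startswith(BASIC[a[0]])]
    pairs = []
    for a in acid:
        for b in base:
            d = math.dist(a[3], b[3])
            if d <= cutoff:
                pairs.append((a, b, round(d, 2)))
    return pairs

# Toy coordinates: the ASP-LYS pair is within 4 A, the GLU is too far away.
atoms = [("ASP", "A", "OD1", (0.0, 0.0, 0.0)),
         ("LYS", "A", "NZ",  (3.0, 0.0, 0.0)),
         ("GLU", "A", "OE1", (10.0, 0.0, 0.0))]
bridges = salt_bridges(atoms)
```

Grouping the resulting pairs by chain and by shared residues would then yield the intra-/inter-molecular and network classifications listed above.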
Research on effects of phase error in phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Wang, Hongjun; Wang, Zhao; Zhao, Hong; Tian, Ailing; Liu, Bingcai
2007-12-01
In phase-shifting interferometry, the phase-shifting error introduced by the phase shifter is the main factor that directly affects the measurement accuracy of the interferometer. In this paper, the sources and types of phase-shifting error are introduced, and methods to eliminate these errors are reviewed. Based on the theory of phase-shifting interferometry, the effects of phase-shifting error are analyzed in detail. A liquid crystal display (LCD) used as a phase shifter has the advantage that the phase shift can be controlled digitally, without any mechanical moving or rotating elements. The phase shift in the measuring system is induced by changing the coded image displayed on the LCD. The LCD's phase modulation characteristic is analyzed theoretically and tested experimentally. Based on the Fourier transform, a model of the phase error arising from the LCD is established for four-step phase-shifting interferometry, and the error range is obtained. To reduce this error, a new error compensation algorithm is put forward: the error is estimated by processing the interferograms, the interferograms are compensated, and the measurement results are obtained from the compensated four-step phase-shifting interferograms. Theoretical analysis and simulation results demonstrate the feasibility of this approach to improve measurement accuracy.
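For reference, the standard four-step algorithm recovers the wrapped phase from four interferograms taken with nominal π/2 phase steps; a minimal sketch (the phase-shifter errors studied above perturb these nominal steps):

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four frames I_k = A + B*cos(phi + (k-1)*pi/2).

    With ideal pi/2 steps, I1 - I3 = 2*B*cos(phi) and I4 - I2 = 2*B*sin(phi),
    so phi = atan2(I4 - I2, I1 - I3).
    """
    return math.atan2(i4 - i2, i1 - i3)
```

A phase-shifter error makes the actual steps deviate from π/2, which is exactly the error source whose model and compensation the paper develops.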
Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students
ERIC Educational Resources Information Center
Muzangwa, Jonatan; Chifamba, Peter
2012-01-01
This paper is going to analyse errors and misconceptions in an undergraduate course in Calculus. The study will be based on a group of 10 BEd. Mathematics students at Great Zimbabwe University. Data is gathered through the use of two exercises on Calculus 1 & 2. The analysis of the results from the tests showed that a majority of the errors were due…
Digital Mirror Device Application in Reduction of Wave-front Phase Errors
Zhang, Yaping; Liu, Yan; Wang, Shuxue
2009-01-01
In order to correct the image distortion created by the mixing/shear layer, creative and effective correction methods are necessary. First, a method combining adaptive optics (AO) correction with a digital micro-mirror device (DMD) is presented. Second, the performance of an AO system using the Phase Diverse Speckle (PDS) principle is characterized in detail. Through combining the DMD method with PDS, a significant reduction in wavefront phase error is achieved in simulations and experiments. This kind of complex correction principle can be used to recover degraded images caused by unforeseen error sources. PMID:22574016
Study on optical 3D angular deformations measurement
NASA Astrophysics Data System (ADS)
Gao, Yang; Wang, Xingshu; Huang, Zongsheng; Yang, Jinliang
2013-12-01
3D angular deformations are inevitable when ships are sailing, due to changes in the environmental temperature and external stresses. The measurement of 3D angular deformations is one of the most critical and difficult issues in the naval and shipbuilding industries worldwide. In this paper, we propose an optical method to measure 3D ship angular deformations and discuss the measurement errors in detail. Theoretical analysis shows that the measured errors of the pitching and yawing deformations are induced by the installation errors of the image aperture, and the measured error of the rolling deformation depends on the subpixel location algorithm used in image processing. It indicates that the measurement errors of the optical method proposed in this paper are at the magnitude of angular seconds when elaborate installation and precise image processing technology are both performed.
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-04
A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide is proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring laser beam drift is proposed and used to compensate for the errors produced by beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.
Advanced Information Processing System - Fault detection and error handling
NASA Technical Reports Server (NTRS)
Lala, J. H.
1985-01-01
The Advanced Information Processing System (AIPS) is designed to provide a fault tolerant and damage tolerant data processing architecture for a broad range of aerospace vehicles, including tactical and transport aircraft, and manned and autonomous spacecraft. A proof-of-concept (POC) system is now in the detailed design and fabrication phase. This paper gives an overview of a preliminary fault detection and error handling philosophy in AIPS.
NASA Technical Reports Server (NTRS)
Meyers, A. P.; Davidson, W. A.; Gortowski, R. C.
1973-01-01
Detailed drawings of the five year tape transport are presented. Analytical tools used in the various analyses are described. These analyses include: tape guidance, tape stress over crowned rollers, tape pack stress program, response (computer) program, and control system electronics description.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions.
The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article. (author)
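The error-propagation step of DRGEP can be sketched as a graph search: a species' overall importance to a target is the maximum, over all paths in the relation graph, of the product of direct interaction coefficients along the path, and species below a threshold are candidates for removal. The coefficients below are toy values, not from a real mechanism:

```python
def drgep_importance(direct, target):
    """Overall importance coefficients relative to `target`.

    direct[a][b] is the direct interaction coefficient of species b for
    species a (between 0 and 1). Importance propagates multiplicatively
    along graph paths; each species keeps its best (maximum) value.
    """
    best = {target: 1.0}
    frontier = [target]
    while frontier:
        a = frontier.pop()
        for b, r in direct.get(a, {}).items():
            cand = best[a] * r
            if cand > best.get(b, 0.0):
                best[b] = cand
                frontier.append(b)  # re-expand: importance improved
    return best

# Toy relation graph rooted at the target species "fuel".
direct = {"fuel": {"O2": 0.9, "rad": 0.5},
          "rad":  {"minor": 0.4},
          "O2":   {"minor": 0.1}}
imp = drgep_importance(direct, "fuel")

# Species falling below the error-controlled threshold would be removed;
# the surviving set is the DRGEP skeletal candidate (before the DRGASA pass).
skeletal = {s for s, v in imp.items() if v >= 0.3}
```

In DRGEPSA this pruning is followed by sensitivity analysis on the remaining borderline species, which is what removes the species DRGEP alone cannot identify.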
Computed tomography of x-ray images using neural networks
NASA Astrophysics Data System (ADS)
Allred, Lloyd G.; Jones, Martin H.; Sheats, Matthew J.; Davis, Anthony W.
2000-03-01
Traditional CT reconstruction is done using the technique of Filtered Backprojection (FB). While this technique is widely employed in industrial and medical applications, it is not generally understood that FB has a fundamental flaw: the Gibbs phenomenon states that any Fourier reconstruction will produce errors in the vicinity of all discontinuities, and that the error will equal 28 percent of the discontinuity. A number of years back, one of the authors proposed a biological perception model whereby biological neural networks perceive 3D images from stereo vision. The perception model posits an internal hard-wired neural network which emulates the external physical process. A process is repeated whereby erroneous unknown internal values are used to generate an emulated signal which is compared to external sensed data, generating an error signal. Feedback from the error signal is then used to update the erroneous internal values. The process is repeated until the error signal no longer decreases. It was soon realized that the same method could be used to obtain CT from x-rays without having to do Fourier transforms. Neural networks have the additional potential for handling non-linearities and missing data. The technique has been applied to coral images collected at the Los Alamos high-energy x-ray facility. The initial images show considerable promise, in some instances showing more detail than the FB images obtained from the same data. Although routine production using this new method would require a massively parallel computer, the method shows promise, especially where refined detail is required.
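The feedback loop described above amounts to iteratively backprojecting the measurement mismatch onto the internal values, as in this toy linear sketch; the system matrix, step size, and iteration count are illustrative assumptions, not the authors' network:

```python
import numpy as np

# Emulated measurement process: projections b = A @ x_true for a tiny
# 3-pixel "image" (toy stand-in for the x-ray projection geometry).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([3.0, 5.0, 4.0])          # "sensed" external data

x = np.zeros(3)                        # erroneous initial internal values
step = 0.3
for _ in range(500):
    err = b - A @ x                    # error signal: sensed minus emulated
    x += step * A.T @ err / A.shape[0] # feedback update of internal values
```

For this consistent toy system the loop converges to the exact pixel values without any Fourier transform, which is the essence of the alternative to filtered backprojection sketched in the abstract.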
Qualitative and quantitative assessment of Illumina's forensic STR and SNP kits on MiSeq FGx™.
Sharma, Vishakha; Chow, Hoi Yan; Siegel, Donald; Wurmbach, Elisa
2017-01-01
Massively parallel sequencing (MPS) is a powerful tool transforming DNA analysis in multiple fields ranging from medicine, to environmental science, to evolutionary biology. In forensic applications, MPS offers the ability to significantly increase the discriminatory power of human identification as well as aid in mixture deconvolution. However, before the benefits of any new technology can be employed, its quality, consistency, sensitivity, and specificity must be rigorously evaluated in order to gain a detailed understanding of the technique, including sources of error, error rates, and other restrictions/limitations. This extensive study assessed the performance of Illumina's MiSeq FGx MPS system and ForenSeq™ kit in nine experimental runs including 314 reaction samples. In-depth data analysis evaluated the consequences of different assay conditions on test results. Variables included: sample numbers per run, targets per run, DNA input per sample, and replications. Results are presented as heat maps revealing patterns for each locus. Data analysis focused on read numbers (allele coverage), drop-outs, drop-ins, and sequence analysis. The study revealed that loci with high read numbers performed better and resulted in fewer drop-outs and well balanced heterozygous alleles. Several loci were prone to drop-outs, which led to falsely typed homozygotes and therefore to genotype errors. Sequence analysis of allele drop-in typically revealed a single nucleotide change (deletion, insertion, or substitution). Analyses of sequences, no template controls, and spurious alleles suggest no contamination during library preparation, pooling, and sequencing, but indicate that sequencing or PCR errors may have occurred due to DNA polymerase infidelities.
Finally, we found utilizing Illumina's FGx System at recommended conditions does not guarantee 100% outcomes for all samples tested, including the positive control, and required manual editing due to low read numbers and/or allele drop-in. These findings are important for progressing towards implementation of MPS in forensic DNA testing.
Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.
2009-01-01
A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1-2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r² > 0.98), but poor estimates of the short axes (r² = 0.68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied resulting in total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge and object-detection results, it is noted that the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r² of 0.69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r² = 0.97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure mean grain size and sorting properties of coarse sediments. © 2009 John Wiley & Sons, Ltd.
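The core of the autocorrelation method can be sketched as follows. This is a simplified illustration, not the calibrated procedure of the paper: it computes the mean spatial autocorrelation of image intensity at a set of pixel lags; coarser sediment decorrelates more slowly, and a site-specific linear calibration would then map the decay to mean grain size.

```python
import numpy as np

def autocorr_at_lags(img, lags):
    """Mean spatial autocorrelation of image intensity at given pixel lags.
    Coarser grains decorrelate more slowly, so r(lag) decays more gently."""
    z = (img - img.mean()) / img.std()       # standardize intensities
    cols = z.shape[1]
    # correlate each pixel with its horizontally lagged neighbour
    return [float((z[:, :cols - lag] * z[:, lag:]).mean()) for lag in lags]
```

On synthetic imagery, a blocky "coarse" texture keeps a high correlation at lag 2 while pure pixel noise ("fine" texture) drops to near zero, which is the contrast the method exploits.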
The role of blood vessels in high-resolution volume conductor head modeling of EEG.
Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T
2016-03-01
Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10⁶ nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed computation times to be reduced substantially, and quantified the importance of the blood vessel compartment by computing the forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as the insula or the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach magnitudes similar to those previously reported for neglecting white matter anisotropy, the CSF or the dura, structures which are generally considered important components of realistic EEG head models.
Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses particularly when high accuracies in brain areas with dense vasculature are required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Sommer, S A; Geissler, R; Stampfl, U; Wolf, M B; Radeleff, B A; Richter, G M; Kauczor, H-U; Pereira, P L; Sommer, C M
2016-04-01
On February 26th, 2013, the patient law became effective in Germany. The lawmakers' goal was a more authoritative case law on liability for malpractice and improved enforcement of patients' rights. The following article contains several examples detailing the legal situation. By no means should these discourage those who treat patients; rather, they should be sensitized to the various aspects of this increasingly important field of law. To identify relevant sources, research according to judicial standards was conducted, including a first and second selection. The goal was the identification of jurisdiction, literature and other analyses that deal with liability for malpractice and patient law within the field of interventional radiology, with particular focus on transarterial chemoembolization of the liver and related procedures. In summary, 89 different sources were included and analyzed. The individual who treats a patient is liable for an error in treatment if it causes injury to life, the body or the patient's health. Independent of errors in treatment, the individual providing medical care is liable for mistakes made in the context of obtaining informed consent. The prerequisite is the presence of an error made when obtaining informed consent and its causality for the patient's consent to the treatment. Without effective consent the treatment is considered illegal, whether it was free of treatment error or not. The new patient law does not materially change the German law on liability for malpractice. •On February 26th, 2013, the new patient law came into effect. Materially, there was no fundamental remodeling of the German liability for medical malpractice. •Regarding a physician's liability for medical malpractice, two different elements of an offence come into consideration: liability for malpractice and liability for errors made during medical consultation in the process of obtaining informed consent.
•Forensic practice shows that patients frequently assert both claims concurrently. © Georg Thieme Verlag KG Stuttgart · New York.
Chang, Wen-Pin; Davies, Patricia L; Gavin, William J
2010-10-01
Recent studies have investigated the relationship between psychological symptoms and personality traits and error monitoring measured by error-related negativity (ERN) and error positivity (Pe) event-related potential (ERP) components, yet there remains a paucity of studies examining the collective simultaneous effects of psychological symptoms and personality traits on error monitoring. The present study, therefore, examined whether measures of hyperactivity-impulsivity, depression, anxiety and antisocial personality characteristics could collectively account for significant interindividual variability of both ERN and Pe amplitudes in 29 healthy adults with no known disorders, ages 18-30 years. The bivariate zero-order correlation analyses found that only the anxiety measure was significantly related to both ERN and Pe amplitudes. However, multiple regression analyses that included all four characteristic measures while controlling for number of segments in the ERP average revealed that both depression and antisocial personality characteristics were significant predictors of the ERN amplitudes, whereas antisocial personality was the only significant predictor of the Pe amplitude. These findings suggest that psychological symptoms and personality traits are associated with individual variations in error monitoring in healthy adults, and future studies should consider these variables when comparing group differences in error monitoring between adults with and without disabilities. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction
Laehnemann, David; Borkhardt, Arndt
2016-01-01
Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance on which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159
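One widely used idea in this family of error correctors is the k-mer spectrum: k-mers introduced by sequencing errors are rare, while true k-mers recur across overlapping reads. The sketch below is a generic illustration of that principle, not the procedure of any specific tool surveyed in the review; the threshold and helper names are our assumptions.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Tally every k-mer across all reads (the k-mer spectrum)."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def flag_error_positions(read, counts, k, min_count=2):
    """Positions covered by no 'solid' (frequent) k-mer are likely errors."""
    solid = [False] * len(read)
    for i in range(len(read) - k + 1):
        if counts[read[i:i + k]] >= min_count:
            for j in range(i, i + k):
                solid[j] = True
    return [j for j, s in enumerate(solid) if not s]
```

With five copies of a true read and one read carrying a single substitution, only the tail positions touched exclusively by the erroneous k-mers are flagged.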
Error analysis and correction of discrete solutions from finite element codes
NASA Technical Reports Server (NTRS)
Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.
1984-01-01
Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.
Geolocation error tracking of ZY-3 three line cameras
NASA Astrophysics Data System (ADS)
Pan, Hongbo
2017-01-01
The high-accuracy geolocation of high-resolution satellite images (HRSIs) is a key issue for mapping and integrating multi-temporal, multi-sensor images. In this manuscript, we propose a new geometric frame for analysing the geometric error of a stereo HRSI, in which the geolocation error can be divided into three parts: the epipolar direction, cross base direction, and height direction. With this frame, we proved that the height error of three line cameras (TLCs) is independent of nadir images, and that the terrain effect has a limited impact on the geolocation errors. For ZY-3 error sources, the drift error in both the pitch and roll angle and its influence on the geolocation accuracy are analysed. Epipolar and common tie-point constraints are proposed to study the bundle adjustment of HRSIs. Epipolar constraints explain that the relative orientation can reduce the number of compensation parameters in the cross base direction and have a limited impact on the height accuracy. The common tie points adjust the pitch-angle errors to be consistent with each other for TLCs. Therefore, free-net bundle adjustment of a single strip cannot significantly improve the geolocation accuracy. Furthermore, the epipolar and common tie-point constraints cause the error to propagate into the adjacent strip when multiple strips are involved in the bundle adjustment, which results in the same attitude uncertainty throughout the whole block. Two adjacent strips (Orbit 305 and Orbit 381, covering 7 and 12 standard scenes, respectively) and 308 ground control points (GCPs) were used for the experiments. The experiments validate the aforementioned theory. The planimetric and height root mean square errors were 2.09 and 1.28 m, respectively, when two GCPs were placed at the beginning and end of the block.
DOT National Transportation Integrated Search
2012-10-22
Recent accidents in commuter rail operations and analyses of rule violations have highlighted the need for : better understanding of the contributory role of distraction and attentional errors. Distracted driving has : thoroughly been studied in rece...
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. 
The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
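The two quantities at the centre of this review are easy to compute directly. The sketch below, using a normal approximation to the two-sided two-sample t test (an approximation for illustration, not the authors' procedure), reproduces two familiar facts: around 14 independent tests at α=.05 push the experiment-wise Type I error rate above .5, and n=64 per group yields roughly 80% power for a medium effect (d=0.5).

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def experimentwise_alpha(n_tests, alpha=0.05):
    """Probability of at least one Type I error across independent tests."""
    return 1.0 - (1.0 - alpha) ** n_tests

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided, alpha=.05 two-sample test for
    standardized effect size d (normal approximation to the t test)."""
    z_crit = 1.959963984540054           # two-sided 5% critical value
    ncp = d * sqrt(n_per_group / 2.0)    # noncentrality parameter
    return (1.0 - normal_cdf(z_crit - ncp)) + normal_cdf(-z_crit - ncp)
```

For example, `experimentwise_alpha(14)` is about .51, close to the median rate of .54 reported above, and `power_two_sample(0.5, 64)` is about .81.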
Lawlor, Debbie A; Peters, Tim J; Howe, Laura D; Noble, Sian M; Kipping, Ruth R; Jago, Russell
2013-07-24
The Active For Life Year 5 (AFLY5) randomised controlled trial protocol was published in this journal in 2011. It provided a summary analysis plan. This publication is an update of that protocol and provides a detailed analysis plan. This update provides a detailed analysis plan of the effectiveness and cost-effectiveness of the AFLY5 intervention. The plan includes details of how variables will be quality control checked and the criteria used to define derived variables. Details of four key analyses are provided: (a) effectiveness analysis 1 (the effect of the AFLY5 intervention on primary and secondary outcomes at the end of the school year in which the intervention is delivered); (b) mediation analyses (secondary analyses examining the extent to which any effects of the intervention are mediated via self-efficacy, parental support and knowledge, through which the intervention is theoretically believed to act); (c) effectiveness analysis 2 (the effect of the AFLY5 intervention on primary and secondary outcomes 12 months after the end of the intervention) and (d) cost effectiveness analysis (the cost-effectiveness of the AFLY5 intervention). The details include how the intention to treat and per-protocol analyses were defined and planned sensitivity analyses for dealing with missing data. A set of dummy tables are provided in Additional file 1. This detailed analysis plan was written prior to any analyst having access to any data and was approved by the AFLY5 Trial Steering Committee. Its publication will ensure that analyses are in accordance with an a priori plan related to the trial objectives and not driven by knowledge of the data. ISRCTN50133740.
Clarification of terminology in medication errors: definitions and classification.
Ferner, Robin E; Aronson, Jeffrey K
2006-01-01
We have previously described and analysed some terms that are used in drug safety and have proposed definitions. Here we discuss and define terms that are used in the field of medication errors, particularly terms that are sometimes misunderstood or misused. We also discuss the classification of medication errors. A medication error is a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient. Errors can be classified according to whether they are mistakes, slips, or lapses. Mistakes are errors in the planning of an action. They can be knowledge based or rule based. Slips and lapses are errors in carrying out an action - a slip through an erroneous performance and a lapse through an erroneous memory. Classification of medication errors is important because the probabilities of errors of different classes are different, as are the potential remedies.
ERIC Educational Resources Information Center
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich
2011-01-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
NASA Technical Reports Server (NTRS)
1984-01-01
Rectifications of multispectral scanner and thematic mapper data sets for full and subscene areas, analyses of planimetric errors, assessments of the number and distribution of ground control points required to minimize errors, and factors contributing to error residual are examined. Other investigations include the generation of three dimensional terrain models and the effects of spatial resolution on digital classification accuracies.
ERIC Educational Resources Information Center
Py, Bernard
A progress report is presented of a study which applies a system of generative grammar to error analysis. The objective of the study was to reconstruct the grammar of students' interlanguage, using a systematic analysis of errors. (Interlanguage refers to the linguistic competence of a student who possesses a relatively systematic body of rules,…
Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horoshko, D B
2007-12-31
The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (fifth seminar in memory of d.n. klyshko)
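The combination analysed here, BB84 sifting followed by a repetition protocol, can be simulated in a few lines. The sketch below is a classical toy model for intuition only, not a security analysis: basis sifting keeps the matching-basis positions, and 3-bit repetition with majority voting reduces a 5% channel error rate to roughly 0.7%.

```python
import random

def sift(alice_bits, alice_bases, bob_bases):
    """BB84 sifting: keep only positions where Bob chose Alice's basis."""
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]

def repetition_decode(noisy_triples):
    """Repetition-code error correction: majority vote over 3 copies."""
    return [1 if sum(t) >= 2 else 0 for t in noisy_triples]

random.seed(1)
n, p = 20000, 0.05                     # key length, channel flip probability
key = [random.randint(0, 1) for _ in range(n)]
triples = [[bit ^ (random.random() < p) for _ in range(3)] for bit in key]
residual = sum(d != k for d, k in zip(repetition_decode(triples), key)) / n
# residual ~ 3p^2(1-p) + p^3 ~ 0.007, down from the raw 5% error rate
```

The eavesdropping result in the abstract concerns exactly this trade-off: the publicly known structure of the repetition step leaks additional information to the eavesdropper.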
Study on the calibration and optimization of double theodolites baseline
NASA Astrophysics Data System (ADS)
Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao
2018-01-01
Because the baseline of a double-theodolite measurement system serves as the benchmark of the system's scale and affects its accuracy, this paper puts forward a method for the calibration and optimization of the double-theodolite baseline. The double theodolites measure the known length of a reference ruler, and the baseline formula is then inverted. Based on the error propagation law, the analyses show that the baseline error function is an important index of the accuracy of the system, and that the reference ruler's position, posture and other factors have an impact on the baseline error. An optimization model is established with the baseline error function as the objective function, optimizing the position and posture of the reference ruler. The simulation results show that the height of the reference ruler has no effect on the baseline error; the effect of posture is not uniform; and the baseline error is smallest when the reference ruler is placed at x=500 mm and y=1000 mm in the measurement space. The experimental results are consistent with the theoretical analyses in the measurement space. The study of reference-ruler placement presented in this paper thus provides a reference for improving the accuracy of double-theodolite measurement systems.
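The first-order error propagation underlying such a baseline error function can be sketched numerically. The 2D geometry below (stations on the x-axis, interior angles measured from the baseline) and all helper names are our illustrative assumptions, not the paper's setup: the measured ruler length scales linearly with the baseline, so the baseline follows by inversion, and angle errors propagate via numerical derivatives.

```python
import math

def target_xy(B, alpha, beta):
    """2D intersection: stations at (0,0) and (B,0); alpha, beta are the
    interior angles the two sight rays make with the baseline."""
    r1 = B * math.sin(beta) / math.sin(alpha + beta)
    return r1 * math.cos(alpha), r1 * math.sin(alpha)

def baseline_from_ruler(l_known, angles):
    """Invert the baseline formula: the computed ruler length scales
    linearly with B, so B = l_known / (length computed with B = 1)."""
    (a1, b1), (a2, b2) = angles
    x1, y1 = target_xy(1.0, a1, b1)
    x2, y2 = target_xy(1.0, a2, b2)
    return l_known / math.hypot(x2 - x1, y2 - y1)

def baseline_sigma(l_known, angles, sigma_angle):
    """Propagate independent angle errors to the baseline by numerical
    differentiation (first-order error propagation law)."""
    h = 1e-6
    B0 = baseline_from_ruler(l_known, angles)
    flat = [a for pair in angles for a in pair]
    var = 0.0
    for i in range(4):
        pert = flat[:]
        pert[i] += h
        Bp = baseline_from_ruler(l_known, [(pert[0], pert[1]),
                                           (pert[2], pert[3])])
        var += ((Bp - B0) / h * sigma_angle) ** 2
    return math.sqrt(var)
```

Evaluating `baseline_sigma` over candidate ruler positions and postures is exactly the kind of objective the paper's optimization model minimizes.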
Guan, W; Meng, X F; Dong, X M
2014-12-01
Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error use only one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce constant acceleration and vibration simultaneously, and the rectification error due to the composite accelerations was tested. First, we deduced the expression for the rectification error from the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with the rectification error model. A detailed experimental procedure and the testing results are then described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of the vibration. The experimental results showed the distinctive characteristics of the rectification error caused by the composite accelerations. A linear relation between the constant acceleration and the rectification error was demonstrated. The experimental procedure and results presented here can be referenced for the investigation of the characteristics of accelerometers with multiple inputs.
Devitt, Aleea L.; Tippett, Lynette; Schacter, Daniel L.; Addis, Donna Rose
2016-01-01
Because of its reconstructive nature, autobiographical memory (AM) is subject to a range of distortions. One distortion involves the erroneous incorporation of features from one episodic memory into another, forming what are known as memory conjunction errors. Healthy aging has been associated with an enhanced susceptibility to conjunction errors for laboratory stimuli, yet it is unclear whether these findings translate to the autobiographical domain. We investigated the impact of aging on vulnerability to AM conjunction errors, and explored potential cognitive processes underlying the formation of these errors. An imagination recombination paradigm was used to elicit AM conjunction errors in young and older adults. Participants also completed a battery of neuropsychological tests targeting relational memory and inhibition ability. Consistent with findings using laboratory stimuli, older adults were more susceptible to AM conjunction errors than younger adults. However, older adults were not differentially vulnerable to the inflating effects of imagination. Individual variation in AM conjunction error vulnerability was attributable to inhibitory capacity. An inability to suppress the cumulative familiarity of individual AM details appears to contribute to the heightened formation of AM conjunction errors with age. PMID:27929343
Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology
NASA Astrophysics Data System (ADS)
Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya
2017-09-01
Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Currently, there are many review articles introducing thermal error research on CNC machine tools, but these mainly focus on thermal issues in small and medium-sized CNC machine tools and seldom introduce thermal error monitoring technologies. This paper gives an overview of the research on the thermal error of CNC machine tools and emphasizes the study of thermal error of the heavy-duty CNC machine tool in three areas: the causes of thermal error of heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology called the "fiber Bragg grating (FBG) distributed sensing technology" for heavy-duty CNC machine tools is introduced in detail. This technology forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills a gap in this kind of review article, to guide the development of this industry field and open up new areas of research on the heavy-duty CNC machine tool thermal error.
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.; Hsu, John Y.
1986-01-01
The use of a decentralized approach to failure detection and isolation for use in restructurable control systems is examined. This work has produced: (1) A method for evaluating fundamental limits to FDI performance; (2) Application using flight recorded data; (3) A working control element FDI system with maximal sensitivity to critical control element failures; (4) Extensive testing on realistic simulations; and (5) A detailed design methodology involving parameter optimization (with respect to model uncertainties) and sensitivity analyses. This project has concentrated on detection and isolation of generic control element failures since these failures frequently lead to emergency conditions and since knowledge of remaining control authority is essential for control system redesign. The failures are generic in the sense that no temporal failure signature information was assumed. Thus, various forms of functional failures are treated in a unified fashion. Such a treatment results in a robust FDI system (i.e., one that covers all failure modes) but sacrifices some performance when detailed failure signature information is known, useful, and employed properly. It was assumed throughout that all sensors are validated (i.e., contain only in-spec errors) and that only the first failure of a single control element needs to be detected and isolated. The FDI system which has been developed will handle a class of multiple failures.
NASA Technical Reports Server (NTRS)
Kia, T.; Longuski, J. M.
1984-01-01
Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Radziukynas, V.; Klementavičius, A.
2016-04-01
The paper analyses the performance results of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The influence of additional input variables on load forecasting errors is also investigated. The interplay of hourly load and wind power forecasting errors is evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).
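The two accuracy measures named above are standard. A minimal sketch with illustrative hourly load values (not Latvian system data):

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error in the units of the series (e.g. MW)."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(a - f)))

def mape(actual, forecast):
    """Mean absolute percentage error; actual values must be nonzero."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((a - f) / a)) * 100.0)

# Illustrative hourly loads (MW) vs. forecasts
load = [900.0, 950.0, 1000.0]
pred = [880.0, 960.0, 1010.0]
print(round(mae(load, pred), 3), round(mape(load, pred), 3))
```

MAPE is scale-free, which is why it is the usual choice when comparing load forecasts against wind power forecasts of very different magnitudes.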
NASA Technical Reports Server (NTRS)
Carson, William; Lindemuth, Kathleen; Mich, John; White, K. Preston; Parker, Peter A.
2009-01-01
Probabilistic engineering design enhances safety and reduces costs by incorporating risk assessment directly into the design process. In this paper, we assess the format of the quantitative metrics for the vehicle which will replace the Space Shuttle, the Ares I rocket. Specifically, we address the metrics for in-flight measurement error in the vector position of the motor nozzle, dictated by limits on guidance, navigation, and control systems. Analyses include the propagation of error from measured to derived parameters, the time-series of dwell points for the duty cycle during static tests, and commanded versus achieved yaw angle during tests. Based on these analyses, we recommend a probabilistic template for specifying the maximum error in angular displacement and radial offset for the nozzle-position vector. Criteria for evaluating individual tests and risky decisions also are developed.
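The propagation of error from measured to derived parameters mentioned above can be sketched with first-order (delta-method) propagation from measured nozzle-position components to the derived radial offset; the numbers and function name are illustrative assumptions, not Ares I values:

```python
import math

def radial_offset_sigma(x, y, sx, sy):
    """First-order propagation of independent measurement errors (sx, sy)
    in nozzle-position components (x, y) to the derived radial offset
    r = sqrt(x^2 + y^2), using dr/dx = x/r and dr/dy = y/r."""
    r = math.hypot(x, y)
    sr = math.sqrt((x / r) ** 2 * sx ** 2 + (y / r) ** 2 * sy ** 2)
    return r, sr

# Illustrative measured components with equal 0.1-unit standard errors
r, sr = radial_offset_sigma(3.0, 4.0, 0.1, 0.1)
print(r, round(sr, 4))
```

With equal component errors the radial-offset error equals the component error, but in general it depends on the direction of the offset, which is one reason a probabilistic template beats a single fixed bound.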
Moisture Forecast Bias Correction in GEOS DAS
NASA Technical Reports Server (NTRS)
Dee, D.
1999-01-01
Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated into an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
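A sequential bias estimator of this general kind can be sketched as an exponential filter on observed-minus-forecast residuals; the gain and residual values below are illustrative assumptions, not the GEOS DAS implementation:

```python
def update_bias(bias, obs_minus_forecast, gain=0.1):
    """One step of a simple sequential bias estimator: nudge the running
    bias estimate toward the latest observed-minus-forecast residual.
    The random component averages out; the systematic component remains."""
    return bias + gain * (obs_minus_forecast - bias)

bias = 0.0
# Rawinsonde-minus-forecast residuals with a persistent +0.5 component
for resid in [0.6, 0.4, 0.5, 0.5, 0.6, 0.4]:
    bias = update_bias(bias, resid)
print(round(bias, 3))
```

Subtracting the accumulated bias estimate from subsequent forecasts is what removes the systematic component while leaving the random component to the usual statistical analysis.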
An evaluation of satellite-derived humidity and its relationship to convective development
NASA Technical Reports Server (NTRS)
Fuelberg, Henry E.
1993-01-01
An aircraft prototype of the High-Resolution Interferometer Sounder (HIS) was flown over Tennessee and northern Alabama during summer 1986. The HIS temperature and dewpoint soundings were examined on two flight days to determine their error characteristics and utility in mesoscale analyses. Random errors were calculated from structure functions while total errors were obtained by pairing the HIS soundings with radiosonde-derived profiles. Random temperature errors were found to be less than 1 C at most levels, but random dewpoint errors ranged from 1 to 5 C. Total errors of both parameters were considerably greater, with dewpoint errors especially large on the day having a pronounced subsidence inversion. Cumulus cloud cover on 15 June limited HIS mesoscale analyses on that day. Previously undetected clouds were found in many HIS fields of view, and these probably produced the low-level horizontal temperature and dewpoint variations observed in the retrievals. HIS dewpoints at 300 mb indicated a strong moisture gradient that was confirmed by GOES 6.7-micron imagery. HIS mesoscale analyses on 19 June revealed a tongue of humid air stretching across the study area. The moist region was confirmed by radiosonde data and imagery from the Multispectral Atmospheric Mapping Sensor (MAMS). Convective temperatures derived from HIS retrievals helped explain the cloud formation that occurred after the HIS overflights. Crude estimates of Bowen ratio were obtained from HIS data using a mixing-line approach. Values indicated that areas of large sensible heat flux were the areas of first cloud development. These locations were also suggested by GOES visible and infrared imagery. The HIS retrievals indicated that areas of thunderstorm formation were regions of greatest instability. Local landscape variability and atmospheric temperature and humidity fluctuations were found to be important factors in producing the cumulus clouds on 19 June. 
HIS soundings were capable of detecting some of this variability. The authors were impressed by HIS's performance on the two study days.
Distributed consensus for discrete-time heterogeneous multi-agent systems
NASA Astrophysics Data System (ADS)
Zhao, Huanyu; Fei, Shumin
2018-06-01
This paper studies the consensus problem for a class of discrete-time heterogeneous multi-agent systems. Two kinds of consensus algorithms will be considered. The heterogeneous multi-agent systems considered are converted into equivalent error systems by a model transformation. Then we analyse the consensus problem of the original systems by analysing the stability problem of the error systems. Some sufficient conditions for consensus of heterogeneous multi-agent systems are obtained by applying algebraic graph theory and matrix theory. Simulation examples are presented to show the usefulness of the results.
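A minimal sketch of a discrete-time consensus iteration of the kind studied (homogeneous first-order agents, shown here only to illustrate the update law; the weight matrix is an assumed example, not from the paper):

```python
import numpy as np

# Row-stochastic weight matrix for a 3-agent path graph (illustrative)
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

x = np.array([1.0, 5.0, 9.0])   # initial agent states
for _ in range(200):            # iterate x_{k+1} = W x_k
    x = W @ x
print(np.round(x, 6))           # all agents converge to a common value
```

The error-system view in the abstract corresponds to analysing e_k = x_k minus the consensus value: consensus holds exactly when the error system is asymptotically stable, i.e. when all non-unit eigenvalues of W lie strictly inside the unit circle.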
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1983-01-01
New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.
Marine Corps Body Composition Program: The Flawed Measurement System
2006-02-07
fitness expert and writer for ABC Bodybuilding, an error of 3% in a body fat evaluation is extreme and methods that have this margin of error should not...most other methods. In fact, bodybuilders use a seven to nine point skin fold measurement weekly during their training to monitor body fat...19.95 and recommended and endorsed by “Body-For-Life” and the World Natural Bodybuilding Federation. The caliper comes with detailed instructions
Dynamic assertion testing of flight control software
NASA Technical Reports Server (NTRS)
Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.
1985-01-01
An experiment in using assertions to dynamically test fault tolerant flight software is described. The experiment showed that 87% of typical errors introduced into the program would be detected by assertions. Detailed analysis of the test data showed that the number of assertions needed to detect those errors could be reduced to a minimal set. The analysis also revealed that the most effective assertions tested program parameters that provided greater indirect (collateral) testing of other parameters.
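Executable assertions of the kind tested can be sketched as range or rate checks on program parameters; the function, limits, and seeded error below are hypothetical, not from the experiment:

```python
def check_altitude_command(alt_cmd, alt_prev, max_rate=50.0):
    """Executable assertion on a flight-control output: the commanded
    altitude must stay within a plausible rate-of-change envelope."""
    assert abs(alt_cmd - alt_prev) <= max_rate, "assertion fired: rate limit"
    return alt_cmd

# Normal update passes the assertion
check_altitude_command(1020.0, 1000.0)

# A seeded error (e.g. a sign flip producing a wild jump) is caught
try:
    check_altitude_command(-1020.0, 1000.0)
    caught = False
except AssertionError:
    caught = True
print(caught)
```

The collateral-testing effect noted in the abstract arises because a single assertion like this one also constrains every upstream parameter that feeds the checked value.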
Error Analysis and Performance Data from an Automated Azimuth Measuring System,
1981-02-17
microprocessors, tape drives, input and output hardware... A detailed error analysis of the system and methods to improve performance and accuracy are presented. Discussion includes selected... a dual-axis tiltmeter mounted on the azimuth gimbal of each ALS, and six tiltmeters arranged on an optical... velocity air flowing through tubes along the optical paths to each target. Temperature sensors are located in each...
NASA Technical Reports Server (NTRS)
Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce
1985-01-01
An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.
Addressee Errors in ATC Communications: The Call Sign Problem
NASA Technical Reports Server (NTRS)
Monan, W. P.
1983-01-01
Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximate four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail, and several associated hazard-mitigating measures that might be taken are considered.
NASA Technical Reports Server (NTRS)
Colombo, O. L.
1984-01-01
The nature of the orbit error and its effect on the sea surface heights calculated with satellite altimetry are explained. The elementary concepts of celestial mechanics required to follow a general discussion of the problem are included. Errors in the orbits of satellites with precisely repeating ground tracks (SEASAT, TOPEX, ERS-1 and POSEIDON, among past and future altimeter satellites) are considered in detail. The theoretical conclusions are illustrated with the numerical results of computer simulations. The nature of the errors in this type of orbit is such that they can be filtered out by using height differences along repeating (overlapping) passes. This makes such orbits particularly valuable for the study and monitoring of changes in the sea surface, such as tides. Elements of tidal theory are presented, showing how these principles can be combined with those pertinent to the orbit error to make direct maps of the tides using altimetry.
Online Deviation Detection for Medical Processes
Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.
2014-01-01
Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1973-01-01
The derivation of an approximate error characteristic equation describing the transient system error response is given, along with a procedure for selecting adaptive gain parameters so as to relate to the transient error response. A detailed example of the application and implementation of these methods for a space shuttle type vehicle is included. An extension of the characteristic equation technique is used to provide an estimate of the magnitude of the maximum system error and an estimate of the time of occurrence of this maximum after a plant parameter disturbance. Techniques for relaxing certain stability requirements and the conditions under which this can be done and still guarantee asymptotic stability of the system error are discussed. Such conditions are possible because the Lyapunov methods used in the stability derivation allow for overconstraining a problem in the process of insuring stability.
Database Design to Ensure Anonymous Study of Medical Errors: A Report from the ASIPS collaborative
Pace, Wilson D.; Staton, Elizabeth W.; Higgins, Gregory S.; Main, Deborah S.; West, David R.; Harris, Daniel M.
2003-01-01
Medical error reporting systems are important information sources for designing strategies to improve the safety of health care. Applied Strategies for Improving Patient Safety (ASIPS) is a multi-institutional, practice-based research project that collects and analyzes data on primary care medical errors and develops interventions to reduce error. The voluntary ASIPS Patient Safety Reporting System captures anonymous and confidential reports of medical errors. Confidential reports, which are quickly de-identified, provide better detail than do anonymous reports; however, concerns exist about the confidentiality of those reports should the database be subject to legal discovery or other security breaches. Standard database elements, for example, serial ID numbers, date/time stamps, and backups, could enable an outsider to link an ASIPS report to a specific medical error. The authors present the design and implementation of a database and administrative system that reduce this risk, facilitate research, and maintain near anonymity of the events, practices, and clinicians. PMID:12925548
The Effect of Interruptions on Part 121 Air Carrier Operations
NASA Technical Reports Server (NTRS)
Damos, Diane L.
1998-01-01
The primary purpose of this study was to determine the relative priorities of various events and activities by examining the probability that a given activity was interrupted by a given event. The analysis will begin by providing frequency of interruption data by crew position (captain versus first officer) and event type. Any differences in the pattern of interruptions between the first officers and the captains will be explored and interpreted in terms of standard operating procedures. Subsequent data analyses will focus on comparing the frequency of interruptions for different types of activities and for the same activities under normal versus emergency conditions. Briefings and checklists will receive particular attention. The frequency with which specific activities are interrupted under multiple- versus single-task conditions also will be examined; because the majority of multiple-task data were obtained under laboratory conditions, LOFT-type tapes offer a unique opportunity to examine concurrent task performance under 'real-world' conditions. A second purpose of this study is to examine the effects of the interruptions on performance. More specifically, when possible, the time to resume specific activities will be compared to determine if pilots are slower to resume certain types of activities. Errors in resumption or failures to resume specific activities will be noted and any patterns in these errors will be identified. Again, particular attention will be given to the effects of interruptions on the completion of checklists and briefings. Other types of errors and missed events (i.e., the crew should have responded to the event but did not) will be examined. Any methodology using interruptions to examine task prioritization must be able to identify when an interruption has occurred and describe the ongoing activities that were interrupted. Both of these methodological problems are discussed in detail in the following section.
Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.
2004-07-01
This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the effects of grid-induced anisotropy observed in the non-dimensional phase speed, and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
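The flavour of this semi-discrete Fourier analysis can be shown for the simplest case, not one of the paper's schemes specifically: for the second-order centered difference advection operator the modified wavenumber is k*h = sin(kh), so the non-dimensional phase speed ratio is sin(kh)/(kh), a lagging phase error that worsens toward the grid Nyquist limit.

```python
import numpy as np

kh = np.linspace(0.01, np.pi, 200)   # non-dimensional wavenumber k*h
# Second-order centered difference: modified wavenumber satisfies
# (k* h) = sin(kh), so the semi-discrete phase speed ratio is
#   c*/c = sin(kh) / (kh)  <= 1   (phase lag, no artificial diffusivity)
phase_ratio = np.sin(kh) / kh
print(round(float(phase_ratio[0]), 6), round(float(phase_ratio[-1]), 6))
```

Well-resolved waves (small kh) propagate at nearly the exact speed, while the shortest resolvable wave (kh = pi) does not propagate at all; upwind-type schemes trade some of this dispersive error for artificial diffusivity, which is exactly the decomposition the paper's symmetric/skew-symmetric splitting makes explicit.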
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Nehrkorn, Thomas; Wofsy, Steven C.; Matross, Daniel; Gerbig, Christoph; Lin, John C.; Freitas, Saulo; Longo, Marcos; Andrews, Arlyn E.; Peters, Wouter
2007-01-01
This paper evaluates simulations of atmospheric CO2 measured in 2004 at continental surface and airborne receptors, intended to test the capability to use data with high temporal and spatial resolution for analyses of carbon sources and sinks at regional and continental scales. The simulations were performed using the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by the Weather Research and Forecasting (WRF) model, and linked to surface fluxes from the satellite-driven Vegetation Photosynthesis and Respiration Model (VPRM). The simulations provide detailed representations of hourly CO2 tower data and reproduce the shapes of airborne vertical profiles with high fidelity. WRF meteorology gives superior model performance compared with standard meteorological products, and the impact of including WRF convective mass fluxes in the STILT trajectory calculations is significant in individual cases. Important biases in the simulation are associated with the nighttime CO2 build-up and subsequent morning transition to convective conditions, and with errors in the advected lateral boundary condition. Comparison of STILT simulations driven by the WRF model against those driven by the Brazilian variant of the Regional Atmospheric Modeling System (BRAMS) shows that model-to-model differences are smaller than between an individual transport model and observations, pointing to systematic errors in the simulated transport. Future developments in the WRF model's data assimilation capabilities, basic research into the fundamental aspects of trajectory calculations, and intercomparison studies involving other transport models, are possible venues for reducing these errors. Overall, the STILT/WRF/VPRM framework offers a powerful tool for continental and regional scale carbon flux estimates.
NASA Astrophysics Data System (ADS)
Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.
2017-12-01
The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst case and the statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance as needed for the operation of the ITER machine.
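The classical expression referred to above is the textbook Rogowski relation: for an ideal winding the flux linkage depends only on the enclosed current, independent of the winding path. A minimal sketch with illustrative coil parameters (not the CER's actual geometry):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def rogowski_flux_linkage(turn_density, turn_area, current):
    """Classical ideal-Rogowski relation: flux linkage
    Lambda = mu0 * n * A * I_enclosed, with n turns per unit length and
    turn cross-section A, independent of the closed winding path."""
    return MU0 * turn_density * turn_area * current

# Illustrative coil: 1000 turns/m, 1 cm^2 turn area, 15 MA enclosed current
lam = rogowski_flux_linkage(1000.0, 1e-4, 15e6)
print(round(lam, 4))   # flux linkage in webers
```

The paper's generalized expression keeps this form but corrects for winding non-ideality (joints, geometry deviations), which is what introduces sensitivity to external fields that the ideal relation cancels exactly.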
Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed
2018-04-01
The function of a sewage treatment plant is to treat the sewage to acceptable standards before it is discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R²) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also show that the SVM model's frequency of errors above 10% or below −10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow and NAR fitted well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.
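The initial assessment measures above are easy to make concrete. A minimal sketch with illustrative flow values (not the study's data):

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error between observed and predicted inflow."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def relative_error(obs, pred):
    """Per-observation relative error in percent (the RE measure)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return (pred - obs) / obs * 100.0

flow_obs = [100.0, 120.0, 80.0, 150.0]
flow_pred = [95.0, 125.0, 82.0, 140.0]
print(round(rmse(flow_obs, flow_pred), 3), round(r_squared(flow_obs, flow_pred), 3))
```

Counting how often `relative_error` exceeds +10% or falls below −10% reproduces the frequency-of-errors comparison used to rank the SVM and NAR models.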
Development and testing of a new ray-tracing approach to GNSS carrier-phase multipath modelling
NASA Astrophysics Data System (ADS)
Lau, Lawrence; Cross, Paul
2007-11-01
Multipath is one of the most important error sources in Global Navigation Satellite System (GNSS) carrier-phase-based precise relative positioning. Its theoretical maximum is a quarter of the carrier wavelength (about 4.8 cm for the Global Positioning System (GPS) L1 carrier) and, although it rarely reaches this size, it must clearly be mitigated if millimetre-accuracy positioning is to be achieved. In most static applications, this may be accomplished by averaging over a sufficiently long period of observation, but in kinematic applications, a modelling approach must be used. This paper is concerned with one such approach: the use of ray-tracing to reconstruct the error and therefore remove it. In order to apply such an approach, it is necessary to have a detailed understanding of the signal transmitted from the satellite, the reflection process, the antenna characteristics and the way that the reflected and direct signal are processed within the receiver. This paper reviews all of these and introduces a formal ray-tracing method for multipath estimation based on precise knowledge of the satellite reflector antenna geometry and of the reflector material and antenna characteristics. It is validated experimentally using GPS signals reflected from metal, water and a brick building, and is shown to be able to model most of the main multipath characteristics. The method will have important practical applications for correcting for multipath in well-constrained environments (such as at base stations for local area GPS networks, at International GNSS Service (IGS) reference stations, and on spacecraft), and it can be used to simulate realistic multipath errors for various performance analyses in high-precision positioning.
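The quarter-wavelength bound stated above follows from the standard single-reflection carrier-phase multipath model; a sketch under that assumption (the formula is the usual textbook one, not the paper's full ray-tracing method):

```python
import numpy as np

L1 = 0.1903  # GPS L1 wavelength in metres (c / 1575.42 MHz)

def carrier_multipath_error(alpha, psi):
    """Carrier-phase multipath error (metres) for a single reflected ray
    with relative amplitude alpha (< 1) and relative phase psi (radians):
    delta_phi = atan2(alpha*sin(psi), 1 + alpha*cos(psi))."""
    dphi = np.arctan2(alpha * np.sin(psi), 1.0 + alpha * np.cos(psi))
    return dphi / (2.0 * np.pi) * L1

psi = np.linspace(0.0, 2.0 * np.pi, 10001)
worst = np.max(np.abs(carrier_multipath_error(0.99, psi)))
print(round(float(worst), 4))   # approaches, never reaches, L1 / 4
```

As alpha approaches 1 the worst-case error approaches L1/4 (about 4.8 cm), which is why the error rarely reaches its theoretical maximum: real reflections are attenuated by the reflector material and antenna gain pattern, exactly the quantities the ray-tracing model estimates.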
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. 
Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
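The bias in covariate effect estimates described above is the classical attenuation from covariate measurement error, which a small simulation makes concrete; the sample size, variances, and slope are illustrative assumptions, not from the manuscript:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
x = rng.normal(0.0, 1.0, n)            # true covariate
y = beta * x + rng.normal(0.0, 0.5, n) # outcome
x_err = x + rng.normal(0.0, 1.0, n)    # error-prone recorded covariate

# OLS slope on the error-prone covariate is attenuated toward zero:
# E[slope] = beta * var(x) / (var(x) + var(error)) = 2.0 * 1 / (1 + 1)
slope = np.cov(x_err, y)[0, 1] / np.var(x_err, ddof=1)
print(round(float(slope), 2))
```

An audit that measures the true covariate for a subsample estimates the attenuation factor var(x)/(var(x) + var(error)), which is what the moment-based and multiple imputation corrections exploit.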
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)
2002-01-01
One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationship between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
How to Create Automatically Graded Spreadsheets for Statistics Courses
ERIC Educational Resources Information Center
LoSchiavo, Frank M.
2016-01-01
Instructors often use spreadsheet software (e.g., Microsoft Excel) in their statistics courses so that students can gain experience conducting computerized analyses. Unfortunately, students tend to make several predictable errors when programming spreadsheets. Without immediate feedback, programming errors are likely to go undetected, and as a…
QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES
The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...
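The moment calculation the method assumes can be sketched directly: temporal moments of a breakthrough curve integrated with the trapezoidal rule, on an illustrative synthetic curve (not data from the study):

```python
import numpy as np

def temporal_moments(t, c):
    """Zeroth moment (area) and first normalized moment (mean arrival
    time) of a breakthrough curve, via the trapezoidal rule."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    dt = np.diff(t)
    m0 = float(np.sum((c[1:] + c[:-1]) / 2.0 * dt))
    tc = t * c
    m1 = float(np.sum((tc[1:] + tc[:-1]) / 2.0 * dt)) / m0
    return m0, m1

# Symmetric triangular BTC centred at t = 5 (illustrative data)
t = [0.0, 2.5, 5.0, 7.5, 10.0]
c = [0.0, 0.5, 1.0, 0.5, 0.0]
m0, m1 = temporal_moments(t, c)
print(m0, m1)
```

Because each concentration measurement enters the trapezoidal sums linearly, random measurement errors propagate into the moments through known weights, which is what makes the uncertainty analysis tractable.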
NASA Astrophysics Data System (ADS)
Tret'yakov, Evgeniy V.; Shuvalov, Vladimir V.; Shutov, I. V.
2002-11-01
An approximate algorithm is tested for solving the problem of diffusion optical tomography in experiments on the visualisation of details of the inner structure of strongly scattering model objects containing scattering and semitransparent inclusions, as well as absorbing inclusions located inside other optical inhomogeneities. The stability of the algorithm to errors is demonstrated, which allows its use for a rapid (2–3 min) image reconstruction of the details of objects with a complicated inner structure.
Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.
Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M
2018-01-01
Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate ( p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.
Purpora, Christina; Blegen, Mary A; Stotts, Nancy A
2015-01-01
To test hypotheses from a model of horizontal violence and the quality and safety of patient care: that horizontal violence (negative behavior among peers) is inversely related to peer relations and quality of care, and positively related to errors and adverse events. Additionally, the associations between horizontal violence, peer relations, quality of care, errors and adverse events, and nurse and work characteristics were determined. A random sample (n = 175) of hospital staff Registered Nurses working in California participated via survey. Bivariate and multivariate analyses tested the study hypotheses. The hypotheses were supported: horizontal violence was inversely related to peer relations and quality of care, and positively related to errors and adverse events. Including peer relations in the analyses altered the relationship between horizontal violence and quality of care, but not between horizontal violence and errors and adverse events. Nurse and hospital characteristics were not related to the other variables. Clinical area contributed significantly to predicting quality of care, errors, and adverse events, but not peer relationships. Horizontal violence affects peer relationships and the quality and safety of patient care as perceived by participating nurses. Supportive peer relationships are important to mitigate the impact of horizontal violence on quality of care.
Retention-error patterns in complex alphanumeric serial-recall tasks.
Mathy, Fabien; Varré, Jean-Stéphane
2013-01-01
We propose a new method based on an algorithm usually dedicated to DNA sequence alignment in order to both reliably score short-term memory performance on immediate serial-recall tasks and analyse retention-error patterns. There can be considerable confusion on how performance on immediate serial list recall tasks is scored, especially when the to-be-remembered items are sampled with replacement. We discuss the utility of sequence-alignment algorithms to compare the stimuli to the participants' responses. The idea is that deletion, substitution, translocation, and insertion errors, which are typical in DNA, are also typical putative errors in short-term memory (respectively omission, confusion, permutation, and intrusion errors). We analyse four data sets in which alphanumeric lists included a few (or many) repetitions. After examining the method on two simple data sets, we show that sequence alignment offers 1) a compelling method for measuring capacity in terms of chunks when many regularities are introduced in the material (third data set) and 2) a reliable estimator of individual differences in short-term memory capacity. This study illustrates the difficulty of arriving at a good measure of short-term memory performance, and also attempts to characterise the primary factors underpinning remembering and forgetting.
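The mapping the authors draw between alignment operations and memory errors can be illustrated with a minimal edit-distance alignment. This sketch uses a plain Levenshtein backtrace, so unlike the paper's DNA-alignment approach it does not model transpositions (permutation errors); the recall trial shown is hypothetical.

```python
def align_ops(target, response):
    """Levenshtein alignment with an operation backtrace. In the memory-error
    reading above: substitutions ~ confusions, deletions ~ omissions,
    insertions ~ intrusions (permutations are not modelled in this sketch)."""
    n, m = len(target), len(response)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if target[i - 1] == response[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion (omission)
                          d[i][j - 1] + 1,         # insertion (intrusion)
                          d[i - 1][j - 1] + cost)  # match / substitution
    # Backtrace to count operation types
    i, j = n, m
    ops = {"sub": 0, "del": 0, "ins": 0}
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (target[i - 1] != response[j - 1]):
            if target[i - 1] != response[j - 1]:
                ops["sub"] += 1
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops["del"] += 1
            i -= 1
        else:
            ops["ins"] += 1
            j -= 1
    return d[n][m], ops

# Hypothetical trial: list "7B4K9" presented, "7B9" recalled (two omissions)
dist, ops = align_ops("7B4K9", "7B9")
```

The edit distance itself gives a graded recall score even when items are sampled with replacement, which is the scoring ambiguity the abstract raises.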
Liquid Medication Dosing Errors in Children: Role of Provider Counseling Strategies
Yin, H. Shonna; Dreyer, Benard P.; Moreira, Hannah A.; van Schaick, Linda; Rodriguez, Luis; Boettger, Susanne; Mendelsohn, Alan L.
2014-01-01
Objective To examine the degree to which recommended provider counseling strategies, including advanced communication techniques and dosing instrument provision, are associated with reductions in parent liquid medication dosing errors. Methods Cross-sectional analysis of baseline data on provider communication and dosing instrument provision from a study of a health literacy intervention to reduce medication errors. Parents whose children (<9 years) were seen in two urban public hospital pediatric emergency departments (EDs) and were prescribed daily dose liquid medications self-reported whether they received counseling about their child’s medication, including advanced strategies (teachback, drawings/pictures, demonstration, showback) and receipt of a dosing instrument. Primary dependent variable: observed dosing error (>20% deviation from prescribed). Multivariate logistic regression analyses performed, controlling for: parent age, language, country, ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease status; site. Results Of 287 parents, 41.1% made dosing errors. Advanced counseling and instrument provision in the ED were reported by 33.1% and 19.2%, respectively; 15.0% reported both. Advanced counseling and instrument provision in the ED were associated with decreased errors (30.5 vs. 46.4%, p=0.01; 21.8 vs. 45.7%, p=0.001). In adjusted analyses, ED advanced counseling in combination with instrument provision was associated with a decreased odds of error compared to receiving neither (AOR 0.3; 95% CI 0.1–0.7); advanced counseling alone and instrument alone were not significantly associated with odds of error. Conclusion Provider use of advanced counseling strategies and dosing instrument provision may be especially effective in reducing errors when used together. PMID:24767779
Type I and Type II error concerns in fMRI research: re-balancing the scale
Cunningham, William A.
2009-01-01
Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
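The joint intensity and cluster-extent threshold the authors recommend (e.g. P < 0.005 with a 10-voxel extent) can be sketched as below, assuming scipy.ndimage for connected-component labeling with its default faces-only connectivity; the toy p-map is not fMRI data.

```python
import numpy as np
from scipy import ndimage

def joint_threshold(p_map, p_thresh=0.005, k_extent=10):
    """Combined intensity + cluster-extent threshold: keep voxels with
    p < p_thresh that lie in a connected cluster of >= k_extent voxels
    (faces-only connectivity, the ndimage.label default)."""
    supra = p_map < p_thresh
    labels, n = ndimage.label(supra)
    keep = np.zeros_like(supra)
    for lab in range(1, n + 1):
        cluster = labels == lab
        if cluster.sum() >= k_extent:
            keep |= cluster
    return keep

# Toy 2-D p-map: one 12-voxel cluster (survives) and one 3-voxel cluster (pruned)
p = np.ones((10, 10))
p[1:4, 1:5] = 0.001
p[7, 1:4] = 0.001
mask = joint_threshold(p, p_thresh=0.005, k_extent=10)
```

The extent requirement is what lets the voxelwise threshold stay lenient: isolated supra-threshold voxels (likely Type I errors) are pruned while contiguous effects survive.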
Liquid medication dosing errors in children: role of provider counseling strategies.
Yin, H Shonna; Dreyer, Benard P; Moreira, Hannah A; van Schaick, Linda; Rodriguez, Luis; Boettger, Susanne; Mendelsohn, Alan L
2014-01-01
To examine the degree to which recommended provider counseling strategies, including advanced communication techniques and dosing instrument provision, are associated with reductions in parent liquid medication dosing errors. Cross-sectional analysis of baseline data on provider communication and dosing instrument provision from a study of a health literacy intervention to reduce medication errors. Parents whose children (<9 years) were seen in 2 urban public hospital pediatric emergency departments (EDs) and were prescribed daily dose liquid medications self-reported whether they received counseling about their child's medication, including advanced strategies (teachback, drawings/pictures, demonstration, showback) and receipt of a dosing instrument. The primary dependent variable was observed dosing error (>20% deviation from prescribed). Multivariate logistic regression analyses were performed, controlling for parent age, language, country, ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease status; and site. Of 287 parents, 41.1% made dosing errors. Advanced counseling and instrument provision in the ED were reported by 33.1% and 19.2%, respectively; 15.0% reported both. Advanced counseling and instrument provision in the ED were associated with decreased errors (30.5 vs. 46.4%, P = .01; 21.8 vs. 45.7%, P = .001). In adjusted analyses, ED advanced counseling in combination with instrument provision was associated with a decreased odds of error compared to receiving neither (adjusted odds ratio 0.3; 95% confidence interval 0.1-0.7); advanced counseling alone and instrument alone were not significantly associated with odds of error. Provider use of advanced counseling strategies and dosing instrument provision may be especially effective in reducing errors when used together. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Issues with data and analyses: Errors, underlying themes, and potential solutions
Allison, David B.
2018-01-01
Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079
Continuous slope-area discharge records in Maricopa County, Arizona, 2004–2012
Wiele, Stephen M.; Heaton, John W.; Bunch, Claire E.; Gardner, David E.; Smith, Christopher F.
2015-12-29
Analyses of sources of errors and the impact stage data errors have on calculated discharge time series are considered, along with issues in data reduction. Steeper, longer stream reaches are generally less sensitive to measurement error. Other issues considered are pressure transducer drawdown, capture of flood peaks with discrete stage data, selection of stage record for development of rating curves, and minimum stages for the calculation of discharge.
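Slope-area discharge computation is conventionally based on Manning's equation; the sketch below shows the core relation with hypothetical channel values (the report's actual reach geometry and roughness are not reproduced here).

```python
def manning_discharge(area, hyd_radius, slope, n):
    """Manning's equation in SI units: Q = (1/n) * A * R**(2/3) * S**(1/2),
    with A the flow area (m^2), R the hydraulic radius (m), S the energy
    slope (dimensionless), and n the Manning roughness coefficient."""
    return (1.0 / n) * area * hyd_radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical reach: illustrative values, not data from the report
Q = manning_discharge(area=20.0, hyd_radius=1.5, slope=0.002, n=0.035)
```

Because stage errors enter through both the computed area and the water-surface slope, the same stage error produces a proportionally smaller slope error on steeper, longer reaches, consistent with the sensitivity noted in the abstract.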
NASA Technical Reports Server (NTRS)
Frantz, Brian D.; Ivancic, William D.
2001-01-01
Asynchronous Transfer Mode (ATM) Quality of Service (QoS) experiments using the Transmission Control Protocol/Internet Protocol (TCP/IP) were performed for various link delays. The link delay was set to emulate a Wide Area Network (WAN) and a Satellite Link. The purpose of these experiments was to evaluate the ATM QoS requirements for applications that utilize advanced TCP/IP protocols implemented with large windows and Selective ACKnowledgements (SACK). The effects of cell error, cell loss, and random bit errors on throughput were reported. The detailed test plan and test results are presented herein.
Wei, Duo; Bodenreider, Olivier
2015-01-01
Objectives To investigate errors identified in SNOMED CT by human reviewers with help from the Abstraction Network methodology and examine why they had escaped detection by the Description Logic (DL) classifier. Case study: two examples of errors are presented in detail (one missing IS-A relation and one duplicate concept). After correction, SNOMED CT is reclassified to ensure that no new inconsistency was introduced. Conclusions DL-based auditing techniques built in terminology development environments ensure the logical consistency of the terminology. However, complementary approaches are needed for identifying and addressing other types of errors. PMID:20841848
Wei, Duo; Bodenreider, Olivier
2010-01-01
To investigate errors identified in SNOMED CT by human reviewers with help from the Abstraction Network methodology and examine why they had escaped detection by the Description Logic (DL) classifier. Case study: two examples of errors are presented in detail (one missing IS-A relation and one duplicate concept). After correction, SNOMED CT is reclassified to ensure that no new inconsistency was introduced. DL-based auditing techniques built in terminology development environments ensure the logical consistency of the terminology. However, complementary approaches are needed for identifying and addressing other types of errors.
Derivation of error sources for experimentally derived heliostat shapes
NASA Astrophysics Data System (ADS)
Cumpston, Jeff; Coventry, Joe
2017-06-01
Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.
FERMI large area telescope observations of the vela-x pulsar wind nebula
Abdo, A. A.; Ackermann, M.; Ajello, M.; ...
2010-03-18
Here, we report on gamma-ray observations in the off-pulse window of the Vela pulsar PSR B0833–45 using 11 months of survey data from the Fermi Large Area Telescope (LAT). This pulsar is located in the 8° diameter Vela supernova remnant, which contains several regions of non-thermal emission detected in the radio, X-ray, and gamma-ray bands. The gamma-ray emission detected by the LAT lies within one of these regions, the 2° × 3° area south of the pulsar known as Vela-X. The LAT flux is significantly spatially extended, with a best-fit radius of 0°.88 ± 0°.12 for an assumed radially symmetric uniform disk. The 200 MeV to 20 GeV LAT spectrum of this source is well described by a power law with a spectral index of 2.41 ± 0.09 ± 0.15 and integral flux above 100 MeV of (4.73 ± 0.63 ± 1.32) × 10⁻⁷ cm⁻² s⁻¹. The first errors represent the statistical error on the fit parameters, while the second ones are the systematic uncertainties. Detailed morphological and spectral analyses give strong constraints on the energetics and magnetic field of the pulsar wind nebula system and favor a scenario with two distinct electron populations.
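The integral flux quoted above follows from the power-law spectral model. The sketch below gives the closed-form integral and cross-checks it numerically; the index is taken from the abstract, but the normalization K is hypothetical, so the absolute flux is illustrative only.

```python
from scipy.integrate import quad

def integral_flux(K, gamma, e_min):
    """Photon flux above e_min for a power-law spectrum dN/dE = K * E**(-gamma),
    gamma > 1: F(>e_min) = K * e_min**(1 - gamma) / (gamma - 1)."""
    return K * e_min ** (1.0 - gamma) / (gamma - 1.0)

# Index from the abstract; hypothetical normalization K, energies in GeV
K, gamma = 1e-7, 2.41
analytic = integral_flux(K, gamma, 0.1)                       # above 0.1 GeV
numeric, _ = quad(lambda e: K * e ** (-gamma), 0.1, float("inf"))
```

The agreement between the analytic and numerical values simply confirms the closed form; the paper's quoted statistical and systematic errors would propagate through K and gamma.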
Detection of the pulsar wind nebula HESS J1825-137 with the Fermi Large Area Telescope
Grondin, M. -H.; Funk, S.; Lemoine-Goumard, M.; ...
2011-08-10
Here, we announce the discovery of 1-100 GeV gamma-ray emission from the archetypal TeV pulsar wind nebula (PWN) HESS J1825–137 using 20 months of survey data from the Fermi-Large Area Telescope (LAT). The gamma-ray emission detected by the LAT is significantly spatially extended, with a best-fit rms extension of σ = 0°.56 ± 0°.07 for an assumed Gaussian model. The 1-100 GeV LAT spectrum of this source is well described by a power law with a spectral index of 1.38 ± 0.12 ± 0.16 and an integral flux above 1 GeV of (6.50 ± 0.21 ± 3.90) × 10⁻⁹ cm⁻² s⁻¹. The first errors represent the statistical errors on the fit parameters, while the second ones are the systematic uncertainties. Detailed morphological and spectral analyses bring new constraints on the energetics and magnetic field of the PWN system. As a result, the spatial extent and hard spectrum of the GeV emission are consistent with the picture of an inverse Compton origin of the GeV-TeV emission in a cooling-limited nebula powered by the pulsar PSR J1826–1334.
Evaluating the technique of using inhalation device in COPD and bronchial asthma patients.
Arora, Piyush; Kumar, Lokender; Vohra, Vikram; Sarin, Rohit; Jaiswal, Anand; Puri, M M; Rathee, Deepti; Chakraborty, Pitambar
2014-07-01
In asthma management, poor handling of inhalation devices and wrong inhalation technique are associated with decreased medication delivery and poor disease control. The key to overcoming these drawbacks is to make patients familiar with the correct use and performance of these medical devices. The objective of this study was to evaluate and analyse the inhalation technique of patients with COPD and bronchial asthma. A total of 300 patients with BA or COPD using different types of inhalation devices were included in this observational study. Data were captured using a proforma and were analysed using SPSS version 15.0. Of the 300 enrolled patients, 247 (82.3%) made at least one error. Errors were most frequent among subjects using MDIs (94.3%), followed by DPIs (82.3%) and MDIs with spacers (78%), while nebulizer users made the fewest errors (70%) (p = 0.005). Illiterate patients showed a 95.2% error rate, while post-graduates and professionals showed 33.3%; this difference was statistically significant (p < 0.001). Self-educated patients committed errors in 100% of cases, while those trained by a doctor made errors in 56.3%. The majority of patients made errors while using their inhalation device. Proper education of patients on correct usage may not only improve control of disease symptoms but might also allow dose reduction in the long term. Copyright © 2014 Elsevier Ltd. All rights reserved.
Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, J.; Whitmore, J.; Blair, N.
2014-08-01
This report expands upon a previous work by the same authors, published in the 40th IEEE Photovoltaic Specialists Conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial-scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed compared to quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% for all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.
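The annual-error and hourly-RMSE metrics used in the validation can be sketched as below. The normalization conventions here (percent of measured annual energy, RMSE relative to the mean measured value) are common choices and may differ from the report's exact definitions; the four-hour series is a toy example.

```python
import numpy as np

def annual_error_pct(modeled, measured):
    """Annual energy error as a percentage of measured annual energy."""
    return 100.0 * (modeled.sum() - measured.sum()) / measured.sum()

def hourly_rmse_pct(modeled, measured):
    """Hourly RMSE normalized by the mean measured value (one common
    convention; the report's exact normalization may differ)."""
    rmse = np.sqrt(np.mean((modeled - measured) ** 2))
    return 100.0 * rmse / measured.mean()

# Toy hourly series, not data from the study
measured = np.array([0.0, 2.0, 5.0, 3.0])
modeled = np.array([0.0, 2.2, 4.8, 3.1])
annual = annual_error_pct(modeled, measured)
rmse_p = hourly_rmse_pct(modeled, measured)
```

Note how hourly errors of opposite sign largely cancel in the annual total, which is why the hourly RMSE bound and the annual error bound are reported separately.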
Managing human fallibility in critical aerospace situations
NASA Astrophysics Data System (ADS)
Tew, Larry
2014-11-01
Human fallibility is pervasive in the aerospace industry, with over 50% of incidents attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis, and invariably schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human-fallibility errors were achieved after adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to acknowledge human fallibility and take a proactive approach to incorporating the steps needed to manage and minimize error.
The study of CD side to side error in line/space pattern caused by post-exposure bake effect
NASA Astrophysics Data System (ADS)
Huang, Jin; Guo, Eric; Ge, Haiming; Lu, Max; Wu, Yijun; Tian, Mingjing; Yan, Shichuan; Wang, Ran
2016-10-01
In semiconductor manufacturing, as design rules have shrunk, the ITRS roadmap requires increasingly tight critical dimension (CD) control. CD uniformity is one of the parameters necessary to assure good performance and reliable functionality of any integrated circuit (IC) [1][2], and at advanced technology nodes it is a challenge to control CD uniformity well. Studies of CD uniformity achieved by tuning the post-exposure bake (PEB) and develop processes have made significant progress [3], but CD side-to-side errors in some line/space patterns are still found in practical application, and these errors have approached the uniformity tolerance. Detailed analysis showed that, even across several developer types, the CD side-to-side error had no significant relationship to the develop process. In addition, it is impossible to correct the CD side-to-side error by electron-beam correction, as the error does not appear in all line/space pattern masks. In this paper the root cause of the CD side-to-side error is analyzed, and the PEB module process is optimized as the main factor for improving the CD side-to-side error.
Multiplicity Control in Structural Equation Modeling
ERIC Educational Resources Information Center
Cribbie, Robert A.
2007-01-01
Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
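A standard false discovery rate controlling procedure of the kind evaluated above is Benjamini-Hochberg; a minimal sketch with hypothetical p-values, contrasted with the familywise (Bonferroni) cutoff:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up FDR procedure: with p-values sorted
    ascending, reject the k smallest, where k is the largest i such that
    p_(i) <= (i / m) * q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = int(below.nonzero()[0].max()) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Hypothetical p-values for eight model parameters
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
reject = benjamini_hochberg(pvals, q=0.05)
# Bonferroni (familywise control) uses the single cutoff q / m instead
bonferroni = np.asarray(pvals) <= 0.05 / len(pvals)
```

With these p-values the step-up thresholds let BH reject two hypotheses while Bonferroni rejects only one, illustrating the power difference the abstract's comparison turns on.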
A Study on Generic Representation of Skeletal Remains Replication of Prehistoric Burial
NASA Astrophysics Data System (ADS)
Shao, C.-W.; Chiu, H.-L.; Chang, S.-K.
2015-08-01
Generic representation of skeletal remains from burials involves three parties: physical anthropologists, replication technicians, and promotional educators. Because archaeological excavation is irreversible and disruptive, detailed documentation and replication technologies are needed for many purposes. Unearthed bones must go through a reverse-engineering procedure of 3D scanning, digital model superimposition, rapid prototyping, and mould making, and the integrated errors generated in the presentation of colours and textures are important issues for replicated skeletal remains, balancing the professional decisions of physical anthropologists, the subjective judgments of makers, and the expectations of viewers. This study presents several cases and examines current issues in display and replication technologies for human skeletal remains from prehistoric burials. It documents in detail the colour changes of human skeletons over time as a reference for reproduction. The tolerance errors of quantification and the required technical qualifications are determined by the precision of 3D scanning and the specifications of the rapid prototyping machine, and the mould-making process should follow the professional requirements of physical anthropological study. Additionally, a colorimeter is adopted to record and analyse the "colour change" of the human skeletal remains from wet to dry condition. The "colour change" is then used to evaluate the "real" surface texture and colour presentation of the remains, and to limit artistic license in their reproduction. The "Lingdao man No. 1", a well-preserved burial of the early Neolithic period (8300 B.P.) excavated from the Liangdao-Daowei site, Matsu, Taiwan, served as the replication object for this study.
In this study, we examined the reproduction procedures step by step to ensure that the surface texture and colour of the replica match those of the real human skeletal remains as discovered. The "colour change" of the skeleton documented and quantified in this study could serve as a reference for future study and for educational exhibition of human skeletal remain reproductions.
Leaf Morphology, Taxonomy and Geometric Morphometrics: A Simplified Protocol for Beginners
Viscosi, Vincenzo; Cardini, Andrea
2011-01-01
Taxonomy relies greatly on morphology to discriminate groups. Computerized geometric morphometric methods for quantitative shape analysis measure, test and visualize differences in form in a highly effective, reproducible, accurate and statistically powerful way. Plant leaves are commonly used in taxonomic analyses and are particularly suitable to landmark based geometric morphometrics. However, botanists do not yet seem to have taken advantage of this set of methods in their studies as much as zoologists have done. Using free software and an example dataset from two geographical populations of sessile oak leaves, we describe in detailed but simple terms how to: a) compute size and shape variables using Procrustes methods; b) test measurement error and the main levels of variation (population and trees) using a hierarchical design; c) estimate the accuracy of group discrimination; d) repeat this estimate after controlling for the effect of size differences on shape (i.e., allometry). Measurement error was completely negligible; individual variation in leaf morphology was large and differences between trees were generally bigger than within trees; differences between the two geographic populations were small in both size and shape; despite a weak allometric trend, controlling for the effect of size on shape slightly increased discrimination accuracy. Procrustes based methods for the analysis of landmarks were highly efficient in measuring the hierarchical structure of differences in leaves and in revealing very small-scale variation. In taxonomy and many other fields of botany and biology, the application of geometric morphometrics contributes to increase scientific rigour in the description of important aspects of the phenotypic dimension of biodiversity.
Easy to follow but detailed step by step example studies can promote a more extensive use of these numerical methods, as they provide an introduction to the discipline which, for many biologists, is less intimidating than the often inaccessible specialist literature. PMID:21991324
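Step (a) above, Procrustes superimposition of landmark configurations, can be sketched with SciPy's procrustes helper. The two "leaves" here are hypothetical landmark sets that differ only by rotation, scale, and translation, so the residual shape disparity should be essentially zero.

```python
import numpy as np
from scipy.spatial import procrustes

# Five hypothetical 2-D landmarks for "leaf A"
leaf_a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.5, 1.5], [0.0, 1.0]])

# "Leaf B": the same configuration rotated by 0.3 rad, scaled 2x, and
# translated, i.e. different in size and orientation but not in shape
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
leaf_b = 2.0 * leaf_a @ R.T + np.array([3.0, -1.0])

# Procrustes superimposition removes translation, scale, and rotation;
# the remaining disparity measures pure shape difference
m1, m2, disparity = procrustes(leaf_a, leaf_b)
```

In a real analysis the residual disparity between specimens is exactly what carries the taxonomic signal once size and orientation have been factored out.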
Evaluation of an in-practice wet-chemistry analyzer using canine and feline serum samples.
Irvine, Katherine L; Burt, Kay; Papasouliotis, Kostas
2016-01-01
A wet-chemistry biochemical analyzer was assessed for in-practice veterinary use. Its small size may mean a cost-effective method for low-throughput in-house biochemical analyses for first-opinion practice. The objectives of our study were to determine imprecision, total observed error, and acceptability of the analyzer for measurement of common canine and feline serum analytes, and to compare clinical sample results to those from a commercial reference analyzer. Imprecision was determined by within- and between-run repeatability for canine and feline pooled samples, and manufacturer-supplied quality control material (QCM). Total observed error (TEobs) was determined for pooled samples and QCM. Performance was assessed for canine and feline pooled samples by sigma metric determination. Agreement and errors between the in-practice and reference analyzers were determined for canine and feline clinical samples by Bland-Altman and Deming regression analyses. Within- and between-run precision was high for most analytes, and TEobs(%) was mostly lower than total allowable error. Performance based on sigma metrics was good (σ > 4) for many analytes and marginal (σ > 3) for most of the remainder. Correlation between the analyzers was very high for most canine analytes and high for most feline analytes. Between-analyzer bias was generally attributed to high constant error. The in-practice analyzer showed good overall performance, with only calcium and phosphate analyses identified as significantly problematic. Agreement for most analytes was insufficient for transposition of reference intervals, and we recommend that in-practice-specific reference intervals be established in the laboratory. © 2015 The Author(s).
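The sigma metrics used above to grade analyzer performance are conventionally computed as sigma = (TEa − |bias|) / CV. A minimal sketch follows; the values are hypothetical, as the study's per-analyte TEa, bias, and CV are not reproduced here.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric for an analyte: sigma = (TEa - |bias|) / CV, with total
    allowable error (TEa), bias, and imprecision (CV) all in percent.
    In the abstract's terms, sigma > 4 reads as good and > 3 as marginal."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical analyte: 10% allowable error, 2% bias, 2% CV
sigma = sigma_metric(tea_pct=10.0, bias_pct=2.0, cv_pct=2.0)
```

The metric expresses how many standard deviations of imprecision fit inside the error budget after bias is spent, which is why high constant error between analyzers degrades it directly.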
Inhibitory saccadic dysfunction is associated with cerebellar injury in multiple sclerosis.
Kolbe, Scott C; Kilpatrick, Trevor J; Mitchell, Peter J; White, Owen; Egan, Gary F; Fielding, Joanne
2014-05-01
Cognitive dysfunction is common in patients with multiple sclerosis (MS). Saccadic eye movement paradigms such as antisaccades (AS) can sensitively interrogate cognitive function, in particular, the executive and attentional processes of response selection and inhibition. Although we have previously demonstrated significant deficits in the generation of AS in MS patients, the neuropathological changes underlying these deficits were not elucidated. In this study, 24 patients with relapsing-remitting MS underwent testing using an AS paradigm. Rank correlation and multiple regression analyses were subsequently used to determine whether AS errors in these patients were associated with: (i) neurological and radiological abnormalities, as measured by standard clinical techniques, (ii) cognitive dysfunction, and (iii) regionally specific cerebral white and gray-matter damage. Although AS error rates in MS patients did not correlate with clinical disability (using the Expanded Disability Status Score), T2 lesion load or brain parenchymal fraction, AS error rate did correlate with performance on the Paced Auditory Serial Addition Task and the Symbol Digit Modalities Test, neuropsychological tests commonly used in MS. Further, voxel-wise regression analyses revealed associations between AS errors and reduced fractional anisotropy throughout most of the cerebellum, and increased mean diffusivity in the cerebellar vermis. Region-wise regression analyses confirmed that AS errors also correlated with gray-matter atrophy in the cerebellum right VI subregion. These results support the use of the AS paradigm as a marker for cognitive dysfunction in MS and implicate structural and microstructural changes to the cerebellum as a contributing mechanism for AS deficits in these patients. Copyright © 2013 Wiley Periodicals, Inc.
Analysis of error-correction constraints in an optical disk.
Roberts, J D; Ryley, A; Jones, D M; Burke, D
1996-07-10
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
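The dependence of burst-error correction on interleaving described above can be sketched in miniature. This is an illustrative toy (not the paper's CD-ROM simulator): a block interleaver spreads a contiguous burst of damaged symbols across many codewords, so each codeword sees only a few symbol errors and stays within the correction capability of a short Reed Solomon code. All sizes are hypothetical.

```python
# Toy block interleaver: write row-by-row into a depth x width matrix,
# read column-by-column; deinterleave inverts this.

def interleave(data, depth):
    width = len(data) // depth
    return [data[r * width + c] for c in range(width) for r in range(depth)]

def deinterleave(data, depth):
    width = len(data) // depth
    return [data[c * depth + r] for r in range(depth) for c in range(width)]

def errors_per_codeword(n_symbols, depth, burst_start, burst_len):
    """Corrupt a contiguous burst in the interleaved stream, then count
    how many symbol errors land in each deinterleaved codeword (row)."""
    width = n_symbols // depth
    stream = interleave(list(range(n_symbols)), depth)
    for i in range(burst_start, burst_start + burst_len):
        stream[i] = -1                       # damaged symbol
    received = deinterleave(stream, depth)
    counts = [0] * depth
    for idx, sym in enumerate(received):
        if sym == -1:
            counts[idx // width] += 1        # codeword = row index
    return counts

counts = errors_per_codeword(n_symbols=256, depth=16, burst_start=40, burst_len=16)
print(max(counts))   # a 16-symbol burst leaves at most 1 error per codeword
```

With 16-way interleaving, a burst of 16 consecutive symbols is dispersed so that no single codeword receives more than one error, which is why burst position within a sector, rather than burst length alone, governs whether correction succeeds.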
Analysis of error-correction constraints in an optical disk
NASA Astrophysics Data System (ADS)
Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David
1996-07-01
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
Further reply to remarks of R Cross on ‘A comparative study of two types of ball-on-ball collision’
NASA Astrophysics Data System (ADS)
White, Colin
2018-03-01
In this letter, I show that a perceived approximation error in my first letter, White (2017 Phys. Ed. 53 016502), concerning my explanation of the dynamic motion of two interacting Newton's cradle balls, proves to be insignificant. Although the second- and third-order errors described are shown to be minimal, they do provide an opportunity to discuss the precise and more intricate ball interactions in finer detail.
Dynamic Wireless Network Based on Open Physical Layer
2011-02-18
would yield the error-exponent optimal solutions. We solved this problem, and the detailed work is reported in [?]. It turns out that when Rényi …is, during the communication session. A natural set of metrics of interest is the family of Rényi divergences. With a parameter α that can be tuned, the Rényi entropy of a given distribution corresponds to the Shannon entropy at α = 1, and to the probability of detection error at α = ∞. This gives a
NASA Astrophysics Data System (ADS)
Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.
2018-04-01
Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer when cloud is abundant. For NLW, owing to the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has only a weak influence on the Rn error, even though the algorithm is highly sensitive to it. To improve Rn quality, the CF and LST errors in northern China and the AT error in southern China should be reduced.
FREIGHT CONTAINER LIFTING STANDARD
DOE Office of Scientific and Technical Information (OSTI.GOV)
POWERS DJ; SCOTT MA; MACKEY TC
2010-01-13
This standard details the correct methods of lifting and handling Series 1 freight containers following ISO-3874 and ISO-1496. The changes within RPP-40736 will allow better reading comprehension, as well as correcting editorial errors.
Ocean data assimilation using optimal interpolation with a quasi-geostrophic model
NASA Technical Reports Server (NTRS)
Rienecker, Michele M.; Miller, Robert N.
1991-01-01
A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.
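The sensitivity to the assumed error-covariance decorrelation scale can be illustrated with a minimal optimal-interpolation sketch (not the paper's system; grid, observation values and scales are hypothetical). The background error covariance B uses a Gaussian shape whose decorrelation scale L is the tunable parameter such assimilation experiments vary.

```python
import numpy as np

def oi_analysis(xb, grid, obs, obs_loc, L, sigma_b=1.0, sigma_o=0.2):
    """OI update: x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)."""
    # Gaussian background error covariance with decorrelation scale L
    B = sigma_b**2 * np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / L)**2)
    H = np.zeros((len(obs), len(grid)))
    for i, loc in enumerate(obs_loc):
        H[i, np.argmin(np.abs(grid - loc))] = 1.0   # nearest-point operator
    R = sigma_o**2 * np.eye(len(obs))
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
    return xb + K @ (obs - H @ xb)

grid = np.linspace(0.0, 150.0, 31)                  # km, hypothetical domain
xb = np.zeros_like(grid)                            # background stream function
obs = np.array([1.0, -0.5])
xa = oi_analysis(xb, grid, obs, obs_loc=np.array([40.0, 110.0]), L=30.0)
```

Because sigma_o is much smaller than sigma_b, the analysis moves most of the way toward each observation at its location, with the correction decaying over roughly the scale L away from it; shortening L produces rougher analyses, which is the sensitivity the hindcast intercomparison probes.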
Astrometric "Core-shifts" at the Highest Frequencies
NASA Technical Reports Server (NTRS)
Rioja, Maria; Dodson, Richard
2010-01-01
We discuss the application of a new VLBI astrometric method named "Source/Frequency Phase Referencing" to measurements of "core-shifts" in radio sources used for geodetic observations. We detail the reasons that astrometric observations of core-shifts have become critical in the era of VLBI2010. We detail how this new method allows the problem to be addressed at the highest frequencies and outline its superior compensation of tropospheric errors.
Officer Career Development: Reactions of Two Unrestricted Line Communities to Detailers
1987-08-01
self-esteem scale (Rosenberg, 1979) (Cronbach alpha = .82). Evaluation of Job History (Box 2): 1. "What is your evaluation of the following… rating scales, which are vulnerable to "leniency error" (Kerlinger, 1965). That is, constituents may have evaluated detailers more favorably than they… communication in bargaining. Human Communication Research, 8, 262-280. Rosenberg, M. (1979). Conceiving the Self. New York: Basic Books. Turnbull, A. A
Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.
2004-01-01
Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of error can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.
National suicide rates a century after Durkheim: do we know enough to estimate error?
Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W
2010-06-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.
Reliability and coverage analysis of non-repairable fault-tolerant memory systems
NASA Technical Reports Server (NTRS)
Cox, G. W.; Carroll, B. D.
1976-01-01
A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
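The comparative point above, that sparing with good fault coverage can outperform massive replication, can be sketched with standard closed-form reliability expressions (these are textbook formulas, not the paper's state-space models; the failure rate, mission time and coverage value are hypothetical).

```python
import math

def r_simplex(lam, t):
    """Reliability of one unit with constant failure rate lam."""
    return math.exp(-lam * t)

def r_tmr(lam, t):
    """Triple modular redundancy with a perfect voter: at least 2 of 3 work."""
    r = r_simplex(lam, t)
    return 3 * r**2 - 2 * r**3

def r_spare(lam, t, c):
    """One unit plus a cold spare; switchover succeeds with coverage c."""
    r = r_simplex(lam, t)
    # P(no failure) + c * P(one failure, successfully covered by the spare)
    return r + c * lam * t * r

lam, t = 1e-4, 1000.0     # hypothetical failure rate (per hour) and mission time
print(r_simplex(lam, t), r_tmr(lam, t), r_spare(lam, t, c=0.95))
```

With lam*t = 0.1 and coverage c = 0.95, the covered-spare design already beats TMR (about 0.991 versus 0.975), echoing the abstract's finding that, for sufficiently reliable subunits, modularized sparing with error control can exceed the gain from massive replication at lower cost.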
Detailed puncture analyses of tank cars: analysis of different impactor threats and impact conditions.
DOT National Transportation Integrated Search
2013-03-01
There has been significant research in recent years to analyze and improve the impact behavior and puncture resistance of railroad tank cars. Much of this research has been performed using detailed nonlinear finite element analyses supported by full ...
Online Error Reporting for Managing Quality Control Within Radiology.
Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L
2016-06-01
Information technology systems within health care, such as the picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of communicating quality discrepancies found in an examination directly, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from use of online error-reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month study period, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25%. Within the exam protocol category, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44%]), while imaging protocol errors were the highest subtype error for the computed tomography modality (35 errors [18%]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation error categories, respectively, for nearly all of the modalities. An error rate of less than 1% could signify a system with very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error-reporting system could also affect the reporting rate.
Simulation of wave propagation in three-dimensional random media
NASA Astrophysics Data System (ADS)
Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1995-04-01
Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Rodgers, E. B.
1977-01-01
An advanced Man-Interactive image and data processing system (AOIPS) was developed to extract basic meteorological parameters from satellite data and to perform further analyses. The errors in the satellite derived cloud wind fields for tropical cyclones are investigated. The propagation of these errors through the AOIPS system and their effects on the analysis of horizontal divergence and relative vorticity are evaluated.
Using CO2:CO Correlations to Improve Inverse Analyses of Carbon Fluxes
NASA Technical Reports Server (NTRS)
Palmer, Paul I.; Suntharalingam, Parvadha; Jones, Dylan B. A.; Jacob, Daniel J.; Streets, David G.; Fu, Qingyan; Vay, Stephanie A.; Sachse, Glen W.
2006-01-01
Observed correlations between atmospheric concentrations of CO2 and CO represent potentially powerful information for improving CO2 surface flux estimates through coupled CO2-CO inverse analyses. We explore the value of these correlations in improving estimates of regional CO2 fluxes in east Asia by using aircraft observations of CO2 and CO from the TRACE-P campaign over the NW Pacific in March 2001. Our inverse model uses regional CO2 and CO surface fluxes as the state vector, separating biospheric and combustion contributions to CO2. CO2-CO error correlation coefficients are included in the inversion as off-diagonal entries in the a priori and observation error covariance matrices. We derive error correlations in a priori combustion source estimates of CO2 and CO by propagating error estimates of fuel consumption rates and emission factors. However, we find that these correlations are weak because CO source uncertainties are mostly determined by emission factors. Observed correlations between atmospheric CO2 and CO concentrations imply corresponding error correlations in the chemical transport model used as the forward model for the inversion. These error correlations in excess of 0.7, as derived from the TRACE-P data, enable a coupled CO2-CO inversion to achieve significant improvement over a CO2-only inversion for quantifying regional fluxes of CO2.
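The role of the off-diagonal covariance entries can be shown with a deliberately tiny Bayesian inversion (an illustrative toy, not the TRACE-P inversion; the state, observation operator and all numbers are hypothetical). The off-diagonal term of the observation error covariance Se carries the CO2-CO transport-error correlation rho.

```python
import numpy as np

def map_estimate(xa, Sa, K, y, Se):
    """MAP estimate: x = xa + (Sa^-1 + K^T Se^-1 K)^-1 K^T Se^-1 (y - K xa).
    Returns the estimate and its posterior covariance."""
    Spost = np.linalg.inv(np.linalg.inv(Sa) + K.T @ np.linalg.inv(Se) @ K)
    return xa + Spost @ K.T @ np.linalg.inv(Se) @ (y - K @ xa), Spost

# State: [CO2 combustion flux, CO flux]; one CO2 and one CO observation.
xa = np.array([1.0, 1.0])              # a priori fluxes (arbitrary units)
Sa = np.diag([0.5**2, 0.5**2])         # uncorrelated a priori errors
K = np.eye(2)                          # trivial observation operator
y = np.array([1.4, 1.2])
sig = np.array([0.3, 0.3])             # observation error std devs

for rho in (0.0, 0.7):
    Se = np.array([[sig[0]**2, rho * sig[0] * sig[1]],
                   [rho * sig[0] * sig[1], sig[1]**2]])
    xhat, Spost = map_estimate(xa, Sa, K, y, Se)
    print(rho, xhat, np.sqrt(np.diag(Spost)))
```

Running the loop shows that with rho = 0.7 the posterior flux uncertainties are smaller than in the uncorrelated (rho = 0) case: the correlated observation errors let each species' measurement partially constrain the other's error, which is the mechanism by which the coupled CO2-CO inversion improves on a CO2-only inversion.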
Safety Guided Design Based on Stamp/STPA for Manned Vehicle in Concept Design Phase
NASA Astrophysics Data System (ADS)
Ujiie, Ryo; Katahira, Masafumi; Miyamoto, Yuko; Umeda, Hiroki; Leveson, Nancy; Hoshino, Nobuyuki
2013-09-01
In manned vehicles such as the Soyuz and the Space Shuttle, the crew and the computer system cooperate to return safely to Earth. While computers increase the functionality of a system, they also increase the complexity of the interaction between the controllers (human and computer) and the target dynamics. In some cases this complexity can produce a serious accident. To prevent such losses, traditional hazard analyses such as FTA have been applied during system development; however, because they focus on detailed component failures, they can only be used after a detailed design exists. As a result, it is more difficult to eliminate hazard causes early in the process, when doing so is most feasible. STAMP/STPA is a new hazard analysis that can be applied from the early development phase, with the analysis being refined as more detailed decisions are made. In essence, the analysis and design decisions are intertwined and go hand in hand. We have applied STAMP/STPA to the concept design of a new JAXA manned vehicle and tried safety-guided design of the vehicle. This trial showed that STAMP/STPA can be accepted easily by system engineers and that the design was made more sophisticated from a safety viewpoint. The results also show that the consequences of human errors on system safety can be analysed in the early development phase and the system designed to prevent them. Finally, the paper discusses an effective way to harmonize this safety-guided design approach with the systems engineering process, based on the experience gained in this project.
Medical error and related factors during internship and residency.
Ahmadipour, Habibeh; Nahid, Mortazavi
2015-01-01
It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed using SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. For residents, the most common error was misdiagnosis; for interns, it was errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.
[Failure modes and effects analysis in the prescription, validation and dispensing process].
Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T
2012-01-01
To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients, a work group analysed all of the stages of the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes that could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try to stop errors from developing. The Hazard Score was calculated, and failure modes with a score ≥ 8 were chosen; those with a Severity Index of 4 were selected regardless of their Hazard Score. Corrective measures and an implementation plan were proposed. A flow diagram describing the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventative measure and strategy to achieve it. The failure modes chosen were: prescription on the nurse's form, progress notes or treatment order (paper); prescription to an incorrect patient; transcription error by nursing staff or pharmacist; and error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur, and their causes. It has allowed us to analyse the effects on the safety of the process, and to establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.
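The selection rule described above can be sketched directly: Hazard Score = severity × probability, and a failure mode is retained for corrective action if its score is ≥ 8 or its severity equals 4 regardless of score. The example data below are illustrative placeholders, not the study's worksheet values.

```python
def select_failure_modes(modes, score_cut=8, severity_cut=4):
    """Keep modes with Hazard Score >= score_cut, or severity == severity_cut
    regardless of Hazard Score, per the FMEA selection rule."""
    selected = []
    for name, severity, probability in modes:
        hazard_score = severity * probability
        if hazard_score >= score_cut or severity == severity_cut:
            selected.append((name, hazard_score))
    return selected

modes = [
    ("Prescription to incorrect patient",    4, 1),  # severe but rare
    ("Transcription error by nursing staff", 3, 3),  # moderate, frequent
    ("Error preparing the trolley",          2, 3),  # low severity
]
print(select_failure_modes(modes))
```

The first mode is kept despite its low Hazard Score (4) because its severity is 4; the second is kept on score (9); the third (score 6) falls below both cutoffs and is excluded.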
NASA Astrophysics Data System (ADS)
Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin
2018-07-01
This paper introduces an error-compensation model for our measurement method for measuring five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in matrix form using homogeneous coordinate transformation theory. The influences of installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are suppressed by more than 50% after compensation. Repeatability experiments for the five degrees of freedom of motion error, and comparison experiments for two degrees of freedom of motion error, of an indexing table were performed with our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error εz and the tilt motion error around the Y axis εy are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δy and δz, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis εx is 3.8″.
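The homogeneous-coordinate framework the abstract names can be sketched as follows (the general first-order form only, not the paper's full compensation model): small angular errors εx, εy, εz and translational errors δx, δy, δz enter as a 4×4 error transform composed with the ideal rotation. The numeric values below merely reuse the orders of magnitude reported above and are otherwise hypothetical.

```python
import numpy as np

def error_transform(ex, ey, ez, dx, dy, dz):
    """First-order homogeneous transform for small motion errors
    (angles in rad, translations in mm; small-angle approximation)."""
    return np.array([[1.0, -ez,  ey, dx],
                     [ ez, 1.0, -ex, dy],
                     [-ey,  ex, 1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

def rotz(theta):
    """Ideal rotation of the rotary axis about Z."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

arcsec = np.pi / (180 * 3600)
# Error magnitudes on the order of the reported repeatability values:
T_err = error_transform(3.8 * arcsec, 4.4 * arcsec, 1.2 * arcsec,
                        0.0, 1.3e-3, 0.6e-3)
T_actual = T_err @ rotz(np.pi / 2)          # actual pose = error x ideal

point = np.array([100.0, 0.0, 0.0, 1.0])    # a point 100 mm from the axis
deviation = (T_actual - rotz(np.pi / 2)) @ point
```

Evaluating the deviation at a point off the axis shows how the angular errors are amplified by the lever arm (here a few µm at 100 mm), which is what the compensation model corrects for.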
A system and method for online high-resolution mapping of gastric slow-wave activity.
Bull, Simon H; O'Grady, Gregory; Du, Peng; Cheng, Leo K
2014-11-01
High-resolution (HR) mapping employs multielectrode arrays to achieve spatially detailed analyses of propagating bioelectrical events. A major current limitation is that spatial analyses must be performed "off-line" (after experiments), compromising timely recording feedback and restricting experimental interventions. These problems motivated development of a system and method for "online" HR mapping. HR gastric recordings were acquired and streamed to a novel software client. Algorithms were devised to filter data, identify slow-wave events, eliminate corrupt channels, and cluster activation events. A graphical user interface animated data and plotted electrograms and maps. Results were compared against off-line methods. The online system analyzed 256-channel serosal recordings with no unexpected system terminations and a mean delay of 18 s. Activation time marking sensitivity was 0.92; positive predictive value was 0.93. Abnormal slow-wave patterns including conduction blocks, ectopic pacemaking, and colliding wave fronts were reliably identified. Compared with traditional analysis methods, online mapping gave comparable results: equivalent coverage of 90% of electrodes, average RMS errors of less than 1 s, and a correlation coefficient of 0.99 between activation maps. Accurate slow-wave mapping was achieved in near real-time, enabling monitoring of recording quality and experimental interventions targeted to dysrhythmic onset. This work also advances the translation of HR mapping toward real-time clinical application.
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are comparable in size to those of a stochastic physics ensemble.
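The step of "integrating the model over a short time interval at different resolutions" can be seen in miniature with Richardson-style error estimation (an illustrative toy on a scalar ODE, not the paper's algorithm): comparing a coarse and a fine run of a first-order scheme yields an estimate of the coarse run's discretization error without knowing the exact solution.

```python
import math

def integrate(x0, dt, t_end):
    """Forward Euler for dx/dt = -x from t=0 to t_end with step dt."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x += dt * (-x)
        t += dt
    return x

coarse = integrate(1.0, 0.1, 1.0)      # low-resolution run
fine = integrate(1.0, 0.05, 1.0)       # same interval, doubled resolution
# Forward Euler is first order: halving dt roughly halves the error,
# so the coarse-run error is approximately 2 * (coarse - fine).
estimated_error = 2.0 * (coarse - fine)
true_error = coarse - math.exp(-1.0)
print(estimated_error, true_error)
```

Here the estimate agrees with the true error to well under a thousandth; the paper's contribution is to lift this kind of resolution-comparison information into a random-process model whose weighted realizations give a posterior error ensemble for a full geophysical model.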
A description of medication errors reported by pharmacists in a neonatal intensive care unit.
Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila
2017-02-01
Background: Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective: The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar based on a standard electronic system reported by pharmacists. Setting: Neonatal intensive care unit, Doha, Qatar. Method: This was a retrospective cross-sectional study on medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure: Data collected included patient information, and incident details including error category, medications involved, and follow-up completed. Results: A total of 201 NICU pharmacist-reported medication errors were submitted during the study period. None of the reported errors reached the patient or caused harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process, with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports, with the anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion: Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU, as these were responsible for the greatest numbers of medication errors.
Guidebook for analysis of tether applications
NASA Technical Reports Server (NTRS)
Carroll, J. A.
1985-01-01
This guidebook is intended as a tool to facilitate initial analyses of proposed tether applications in space. The guiding philosophy is that a brief analysis of all the common problem areas is far more useful than a detailed study in any one area. Such analyses can minimize the waste of resources on elegant but fatally flawed concepts, and can identify the areas where more effort is needed on concepts which do survive the initial analyses. The simplified formulas, approximations, and analytical tools included should be used only for preliminary analyses. For detailed analyses, the references with each topic and in the bibliography may be useful.
Computer calculated dose in paediatric prescribing.
Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C
2005-01-01
Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined as to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. 
The error rate in the children's emergency department was 15.7%, for outpatients it was 21.5%, and for discharge medication it was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6%, compared with the traditional prescription error rate of 28.2%. Logistic regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were the seniority and paediatric training of the prescriber and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be addressed concurrently to achieve maximum benefit.
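The error classification used in the study (underdose, overdose, no frequency specified, excessive total daily dose) can be sketched as a simple weight-based range check. All numeric limits below are hypothetical placeholders for the "agreed values" the study references; they are NOT clinical dosing guidance.

```python
def classify_prescription(dose_mg, weight_kg, doses_per_day,
                          low_mg_per_kg, high_mg_per_kg, max_daily_mg_per_kg):
    """Return 'ok' or the medication-error category, mirroring the
    error definitions in the study."""
    if doses_per_day is None:
        return "no frequency specified"
    per_dose = dose_mg / weight_kg
    if per_dose < low_mg_per_kg:
        return "underdose"
    if per_dose > high_mg_per_kg:
        return "overdose"
    if per_dose * doses_per_day > max_daily_mg_per_kg:
        return "excessive total daily dose"
    return "ok"

# Hypothetical agreed range: 10-15 mg/kg/dose, 60 mg/kg/day maximum.
print(classify_prescription(120, 10, 4, 10, 15, 60))   # 12 mg/kg x4
print(classify_prescription(50, 10, 4, 10, 15, 60))    # 5 mg/kg -> underdose
```

A computer-calculated-dose workflow effectively runs this check before the prescription is issued, which is why it removes most underdose and overdose errors while leaving prescriber-dependent factors (seniority, training) untouched.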
MISR: protection from ourselves
NASA Technical Reports Server (NTRS)
Nolan, T.; Varanasi, P.
2004-01-01
This paper outlines lessons learned by the Instrument Operations Team of the Multi-angle Imaging SpectroRadiometer (MISR) on the NASA/JPL Terra mission. It tells the story of "MISR: protection from ourselves" and describes, in detail, how the MISR instrument survived operator errors.
The Frame Constraint on Experimentally Elicited Speech Errors in Japanese.
Saito, Akie; Inoue, Tomoyoshi
2017-06-01
The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which operates separately from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages, and there are few studies examining the effect in experimentally elicited speech errors and in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibit the syllable position effect. In Japanese, the sub-syllabic unit known as "mora" is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.
iGen: An automated generator of simplified models with provable error bounds.
NASA Astrophysics Data System (ADS)
Tang, D.; Dobbie, S.
2009-04-01
Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error, and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led to work, currently underway, to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
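iGen's symbolic derivation is far beyond a snippet, but the core contract, a fast surrogate plus a formal error bound, can be illustrated in miniature. Here the "slow model" is exp(x), the "fast model" is its degree-2 Taylor polynomial, and the Lagrange remainder supplies the analytic bound; everything here is illustrative, not iGen's actual machinery.

```python
import math

def slow_model(x):
    """Stand-in for an expensive, high-resolution computation."""
    return math.exp(x)

def fast_model(x):
    """Simplified surrogate: degree-2 Taylor polynomial of exp."""
    return 1.0 + x + 0.5 * x * x

# brute-force error check over the domain of interest, x in [-0.1, 0.1]
xs = [-0.1 + 0.2 * i / 1000 for i in range(1001)]
max_err = max(abs(slow_model(x) - fast_model(x)) for x in xs)

# analytic (Lagrange remainder) bound: e^0.1 * |x|^3 / 3!
analytic_bound = math.exp(0.1) * 0.1**3 / 6
print(f"observed max error {max_err:.2e} <= bound {analytic_bound:.2e}")
```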
Flexible Retrieval: When True Inferences Produce False Memories
Carpenter, Alexis C.; Schacter, Daniel L.
2016-01-01
Episodic memory involves flexible retrieval processes that allow us to link together distinct episodes, make novel inferences across overlapping events, and recombine elements of past experiences when imagining future events. However, the same flexible retrieval and recombination processes that underpin these adaptive functions may also leave memory prone to error or distortion, such as source misattributions in which details of one event are mistakenly attributed to another related event. To determine whether the same recombination-related retrieval mechanism supports both successful inference and source memory errors, we developed a modified version of an associative inference paradigm in which participants encoded everyday scenes comprised of people, objects, and other contextual details. These scenes contained overlapping elements (AB, BC) that could later be linked to support novel inferential retrieval regarding elements that had not appeared together previously (AC). Our critical experimental manipulation concerned whether contextual details were probed before or after the associative inference test, thereby allowing us to assess whether a) false memories increased for successful versus unsuccessful inferences, and b) any such effects were specific to after as compared to before participants received the inference test. In each of four experiments that used variants of this paradigm, participants were more susceptible to false memories for contextual details after successful than unsuccessful inferential retrieval, but only when contextual details were probed after the associative inference test. These results suggest that the retrieval-mediated recombination mechanism that underlies associative inference also contributes to source misattributions that result from combining elements of distinct episodes. PMID:27918169
Patient safety: honoring advanced directives.
Tice, Martha A
2007-02-01
Healthcare providers typically think of patient safety in the context of preventing iatrogenic injury. Prevention of falls and medication or treatment errors is the typical focus of adverse event analyses. If healthcare providers are committed to honoring the wishes of patients, then perhaps failures to honor advanced directives should be viewed as reportable medical errors.
Addressing Misconceptions in Geometry through Written Error Analyses
ERIC Educational Resources Information Center
Kembitzky, Kimberle A.
2009-01-01
This study examined the improvement of students' comprehension of geometric concepts through analytical writing about their own misconceptions using a reflective tool called an ERNIe (acronym for ERror aNalysIs). The purpose of this study was to determine whether the ERNIe process could be used to correct geometric misconceptions, as well as how…
One way Doppler extractor. Volume 1: Vernier technique
NASA Technical Reports Server (NTRS)
Blasco, R. W.; Klein, S.; Nossen, E. J.; Starner, E. R.; Yanosov, J. A.
1974-01-01
A feasibility analysis, trade-offs, and implementation for a One Way Doppler Extraction system are discussed. A Doppler error analysis shows that quantization error is a primary source of Doppler measurement error. Several competing extraction techniques are compared and a Vernier technique is developed which obtains high Doppler resolution with low speed logic. Parameter trade-offs and sensitivities for the Vernier technique are analyzed, leading to a hardware design configuration. A detailed design, operation, and performance evaluation of the resulting breadboard model is presented which verifies the theoretical performance predictions. Performance tests have verified that the breadboard is capable of extracting Doppler, on an S-band signal, to an accuracy of less than 0.02 Hertz for a one second averaging period. This corresponds to a range rate error of no more than 3 millimeters per second.
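The closing accuracy claim is easy to verify: for one-way Doppler, a frequency error Δf maps to a range-rate error v = c·Δf/f. A nominal 2.2 GHz S-band carrier is assumed below (the abstract does not state the exact frequency).

```python
# Convert the breadboard's demonstrated Doppler accuracy into a
# range-rate error. The carrier frequency is an ASSUMED nominal value.

C = 299_792_458.0   # speed of light, m/s
f_carrier = 2.2e9   # assumed S-band carrier frequency, Hz
df = 0.02           # demonstrated Doppler extraction accuracy, Hz

v_err = C * df / f_carrier            # one-way Doppler: v = c * df / f
print(f"range-rate error: {v_err * 1e3:.2f} mm/s")   # ~2.7 mm/s, under 3 mm/s
```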
Geodetic positioning using a global positioning system of satellites
NASA Technical Reports Server (NTRS)
Fell, P. J.
1980-01-01
Geodetic positioning using range, integrated Doppler, and interferometric observations from a constellation of twenty-four Global Positioning System satellites is analyzed. A summary of the proposals for geodetic positioning and baseline determination is given which includes a description of measurement techniques and comments on rank deficiency and error sources. An analysis of variance comparison of range, Doppler, and interferometric time delay to determine their relative geometric strength for baseline determination is included. An analytic examination of the effect of a priori constraints on positioning using simultaneous observations from two stations is presented. Dynamic point positioning and baseline determination using range and Doppler are examined in detail. Models for the error sources influencing dynamic positioning are developed. Included is a discussion of atomic clock stability, and range and Doppler observation error statistics based on random correlated atomic clock error are derived.
Dialysis Facility Safety: Processes and Opportunities.
Garrick, Renee; Morey, Rishikesh
2015-01-01
Unintentional human errors are the source of most safety breaches in complex, high-risk environments. The environment of dialysis care is extremely complex. Dialysis patients have unique and changing physiology, and the processes required for their routine care involve numerous open-ended interfaces between providers and an assortment of technologically advanced equipment. Communication errors, both within the dialysis facility and during care transitions, and lapses in compliance with policies and procedures are frequent areas of safety risk. Some events, such as air emboli and needle dislodgments, occur infrequently but are serious risks. Other adverse events include medication errors, patient falls, catheter and access-related infections, access infiltrations and prolonged bleeding. A robust safety system should evaluate how multiple, sequential errors might align to cause harm. Systems of care can be improved by sharing the results of root cause analyses and "good catches." Failure mode and effects analyses can be used to proactively identify and mitigate areas of highest risk, and methods drawn from cognitive psychology, simulation training, and human factors engineering can be used to advance facility safety. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Byrnes, D. V.; Carney, P. C.; Underwood, J. W.; Vogt, E. D.
1974-01-01
Development, test, conversion, and documentation of computer software for the mission analysis of missions to halo orbits about libration points in the earth-sun system is reported. The software, consisting of two programs called NOMNAL and ERRAN, is part of the Space Trajectories Error Analysis Programs (STEAP). The program NOMNAL targets a transfer trajectory from Earth on a given launch date to a specified halo orbit on a required arrival date. Either impulsive or finite thrust insertion maneuvers into halo orbit are permitted by the program. The transfer trajectory is consistent with a realistic launch profile input by the user. The second program ERRAN conducts error analyses of the targeted transfer trajectory. Measurements including range, Doppler, star-planet angles, and apparent planet diameter are processed in a Kalman-Schmidt filter to determine the trajectory knowledge uncertainty. Execution errors at injection, midcourse correction, and orbit insertion maneuvers are analyzed along with the navigation uncertainty to determine trajectory control uncertainties and fuel-sizing requirements. The program is also capable of generalized covariance analyses.
Patient safety education to change medical students' attitudes and sense of responsibility.
Roh, Hyerin; Park, Seok Ju; Kim, Taekjoong
2015-01-01
This study examined changes in the perceptions and attitudes as well as the sense of individual and collective responsibility in medical students after they received patient safety education. A three-day patient safety curriculum was implemented for third-year medical students shortly before entering their clerkship. Before and after training, we administered a questionnaire, which was analysed quantitatively. Additionally, we asked students to answer questions about their expected behaviours in response to two case vignettes. Their answers were analysed qualitatively. There was improvement in students' concepts of patient safety after training. Before training, they showed good comprehension of the inevitability of error, but most students blamed individuals for errors and expressed a strong sense of individual responsibility. After training, students increasingly attributed errors to system dysfunction and reported more self-confidence in speaking up about colleagues' errors. However, due to the hierarchical culture, students still described difficulties communicating with senior doctors. Patient safety education effectively shifted students' attitudes towards systems-based thinking and increased their sense of collective responsibility. Strategies for improving superior-subordinate communication within a hierarchical culture should be added to the patient safety curriculum.
Verhoeven, Virginie J M; Hysi, Pirro G; Wojciechowski, Robert; Fan, Qiao; Guggenheim, Jeremy A; Höhn, René; MacGregor, Stuart; Hewitt, Alex W; Nag, Abhishek; Cheng, Ching-Yu; Yonova-Doing, Ekaterina; Zhou, Xin; Ikram, M Kamran; Buitendijk, Gabriëlle H S; McMahon, George; Kemp, John P; Pourcain, Beate St; Simpson, Claire L; Mäkelä, Kari-Matti; Lehtimäki, Terho; Kähönen, Mika; Paterson, Andrew D; Hosseini, S Mohsen; Wong, Hoi Suen; Xu, Liang; Jonas, Jost B; Pärssinen, Olavi; Wedenoja, Juho; Yip, Shea Ping; Ho, Daniel W H; Pang, Chi Pui; Chen, Li Jia; Burdon, Kathryn P; Craig, Jamie E; Klein, Barbara E K; Klein, Ronald; Haller, Toomas; Metspalu, Andres; Khor, Chiea-Chuen; Tai, E-Shyong; Aung, Tin; Vithana, Eranga; Tay, Wan-Ting; Barathi, Veluchamy A; Chen, Peng; Li, Ruoying; Liao, Jiemin; Zheng, Yingfeng; Ong, Rick T; Döring, Angela; Evans, David M; Timpson, Nicholas J; Verkerk, Annemieke J M H; Meitinger, Thomas; Raitakari, Olli; Hawthorne, Felicia; Spector, Tim D; Karssen, Lennart C; Pirastu, Mario; Murgia, Federico; Ang, Wei; Mishra, Aniket; Montgomery, Grant W; Pennell, Craig E; Cumberland, Phillippa M; Cotlarciuc, Ioana; Mitchell, Paul; Wang, Jie Jin; Schache, Maria; Janmahasatian, Sarayut; Janmahasathian, Sarayut; Igo, Robert P; Lass, Jonathan H; Chew, Emily; Iyengar, Sudha K; Gorgels, Theo G M F; Rudan, Igor; Hayward, Caroline; Wright, Alan F; Polasek, Ozren; Vatavuk, Zoran; Wilson, James F; Fleck, Brian; Zeller, Tanja; Mirshahi, Alireza; Müller, Christian; Uitterlinden, André G; Rivadeneira, Fernando; Vingerling, Johannes R; Hofman, Albert; Oostra, Ben A; Amin, Najaf; Bergen, Arthur A B; Teo, Yik-Ying; Rahi, Jugnoo S; Vitart, Veronique; Williams, Cathy; Baird, Paul N; Wong, Tien-Yin; Oexle, Konrad; Pfeiffer, Norbert; Mackey, David A; Young, Terri L; van Duijn, Cornelia M; Saw, Seang-Mei; Bailey-Wilson, Joan E; Stambolian, Dwight; Klaver, Caroline C; Hammond, Christopher J
2013-03-01
Refractive error is the most common eye disorder worldwide and is a prominent cause of blindness. Myopia affects over 30% of Western populations and up to 80% of Asians. The CREAM consortium conducted genome-wide meta-analyses, including 37,382 individuals from 27 studies of European ancestry and 8,376 from 5 Asian cohorts. We identified 16 new loci for refractive error in individuals of European ancestry, of which 8 were shared with Asians. Combined analysis identified 8 additional associated loci. The new loci include candidate genes with functions in neurotransmission (GRIA4), ion transport (KCNQ5), retinoic acid metabolism (RDH5), extracellular matrix remodeling (LAMA2 and BMP2) and eye development (SIX6 and PRSS56). We also confirmed previously reported associations with GJD2 and RASGRF1. Risk score analysis using associated SNPs showed a tenfold increased risk of myopia for individuals carrying the highest genetic load. Our results, based on a large meta-analysis across independent multiancestry studies, considerably advance understanding of the mechanisms involved in refractive error and myopia.
Uncertainty in Operational Atmospheric Analyses and Re-Analyses
NASA Astrophysics Data System (ADS)
Langland, R.; Maue, R. N.
2016-12-01
This talk will describe uncertainty in atmospheric analyses of wind and temperature produced by operational forecast models and in re-analysis products. Because the "true" atmospheric state cannot be precisely quantified, there is necessarily error in every atmospheric analysis, and this error can be estimated by computing differences (variance and bias) between analysis products produced at various centers (e.g., ECMWF, NCEP, U.S. Navy, etc.) that use independent data assimilation procedures, somewhat different sets of atmospheric observations, and forecast models with different resolutions, dynamical equations, and physical parameterizations. These estimates of analysis uncertainty provide a useful proxy for actual analysis error. For this study, we use a unique multi-year and multi-model data archive developed at NRL-Monterey. It will be shown that current uncertainty in atmospheric analyses is closely correlated with the geographic distribution of assimilated in-situ atmospheric observations, especially those provided by high-accuracy radiosonde and commercial aircraft observations. The lowest atmospheric analysis uncertainty is found over North America, Europe and Eastern Asia, which have the largest numbers of radiosonde and commercial aircraft observations. Analysis uncertainty is substantially larger (by factors of two to three) in most of the Southern hemisphere, the North Pacific ocean, and under-developed nations of Africa and South America where there are few radiosonde or commercial aircraft data. It appears that in regions where atmospheric analyses depend primarily on satellite radiance observations, analysis uncertainty of both temperature and wind remains relatively high compared to values found over North America and Europe.
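The inter-center comparison described above reduces to computing the mean and spread of difference fields. A minimal sketch on synthetic fields (not NRL's archive; the bias and noise levels are invented):

```python
import numpy as np

# Proxy for analysis uncertainty: given the same field from two
# independent centers, the mean difference estimates relative bias and
# the difference standard deviation estimates random analysis error.

rng = np.random.default_rng(0)
truth = 15.0 + rng.standard_normal((90, 180))            # synthetic "true" field
center_a = truth + 0.2 + 0.5 * rng.standard_normal(truth.shape)
center_b = truth - 0.1 + 0.7 * rng.standard_normal(truth.shape)

diff = center_a - center_b
bias = diff.mean()
spread = diff.std(ddof=1)
print(f"bias   (A - B): {bias:+.2f} K")
print(f"spread (A - B): {spread:.2f} K")   # ~ sqrt(0.5^2 + 0.7^2) = 0.86 K
```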
Determining the Magnetic Properties of 1 kg Mass Standards
Davis, Richard S.
1995-01-01
Magnetic interactions may lead to errors in precision mass metrology. An analytical description of such magnetic errors is presented in which the roles of both the volume magnetic susceptibility and permanent magnetization are discussed. The same formalism is then used to describe in detail the calibration and operation of a susceptometer developed at the Bureau International des Poids et Mesures (BIPM). The device has been optimized for the determination of the magnetic properties of 1 kg mass standards. PMID:29151735
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Brenner, Anita C.; Major, Judith A.; Martin, Thomas V.; Bindschadler, Robert A.
1990-01-01
The data-processing methods and ice data products derived from Seasat radar altimeter measurements over the Greenland ice sheet and surrounding sea ice are documented. The corrections derived and applied to the Seasat radar altimeter data over ice are described in detail, including the editing and retracking algorithm to correct for height errors caused by lags in the automatic range tracking circuit. The methods for radial adjustment of the orbits and estimation of the slope-induced errors are given.
NASA Technical Reports Server (NTRS)
Bennett, A.
1973-01-01
A guidance algorithm that provides precise rendezvous in the deterministic case while requiring only relative state information is developed. A navigation scheme employing only onboard relative measurements is built around a Kalman filter set in measurement coordinates. The overall guidance and navigation procedure is evaluated in the face of measurement errors by a detailed numerical simulation. Results indicate that onboard guidance and navigation for the terminal phase of rendezvous is possible with reasonable limits on measurement errors.
Implementation and simulations of the sphere solution in FAST
NASA Astrophysics Data System (ADS)
Murgolo, F. P.; Schirone, M. G.; Lattanzi, M.; Bernacca, P. L.
1989-06-01
The details of the implementation of the sphere solution software in the Fundamental Astronomy by Space Techniques (FAST) consortium are described. Simulation results for realistic data sets, both with and without grid-step errors, are given. Expected errors on the astrometric parameters of the primary stars and the precision of the reference great circle zero points are provided as a function of mission duration. The design matrix, the diagrams of the context processor, and the processors' experimental results are given.
Local projection stabilization for linearized Brinkman-Forchheimer-Darcy equation
NASA Astrophysics Data System (ADS)
Skrzypacz, Piotr
2017-09-01
The Local Projection Stabilization (LPS) is presented for the linearized Brinkman-Forchheimer-Darcy equation with high Reynolds numbers. The considered equation can be used to model porous medium flows in chemical reactors of packed bed type. The detailed finite element analysis is presented for the case of nonconstant porosity. The enriched variant of LPS is based on equal order interpolation for the velocity and pressure. Optimal error bounds for the velocity and pressure are justified numerically.
The Coast Artillery Journal. Volume 68, Number 6, June 1928
1928-06-01
text book on the Turkish Army, and two guide books. The maps available were not up to date and contained few details. Writing in his diary under date...The moon shone faintly through the clouds and at 1:00 A.M. the ships stopped, waiting for the moon to set. While lying here, all six men-of-war...against the firing table value of 63 yards. The probable error of the probable error was then determined in the same manner that we ordinarily compute the
Experimental Characterization of Hysteresis in a Revolute Joint for Precision Deployable Structures
NASA Technical Reports Server (NTRS)
Lake, Mark S.; Fung, Jimmy; Gloss, Kevin; Liechty, Derek S.
1997-01-01
Recent studies of the micro-dynamic behavior of a deployable telescope metering truss have identified instabilities in the equilibrium shape of the truss in response to low-energy dynamic loading. Analyses indicate that these micro-dynamic instabilities arise from stick-slip friction within the truss joints (e.g., hinges and latches). The present study characterizes the low-magnitude quasi-static load cycle response of the precision revolute joints incorporated in the deployable telescope metering truss, and specifically, the hysteretic response of these joints caused by stick-slip friction within the joint. Detailed descriptions are presented of the test setup and data reduction algorithms, including discussions of data-error sources and data-filtering techniques. Test results are presented from thirteen specimens, and the effects of joint preload and manufacturing tolerances are investigated. Using a simplified model of stick-slip friction, a relationship is made between joint load-cycle behavior and micro-dynamic dimensional instabilities in the deployable telescope metering truss.
Analysis of the low-flow characteristics of streams in Louisiana
Lee, Fred N.
1985-01-01
The U.S. Geological Survey, in cooperation with the Louisiana Department of Transportation and Development, Office of Public Works, used geologic maps, soils maps, precipitation data, and low-flow data to define four hydrographic regions in Louisiana having distinct low-flow characteristics. Equations were derived, using regression analyses, to estimate the 7Q2, 7Q10, and 7Q20 flow rates for basically unaltered stream basins smaller than 525 square miles. Independent variables in the equations include drainage area (square miles), mean annual precipitation index (inches), and main channel slope (feet per mile). Average standard errors of regression ranged from +44 to +61 percent. Graphs are given for estimating the 7Q2, 7Q10, and 7Q20 for stream basins for which the drainage area of the most downstream data-collection site is larger than 525 square miles. Detailed examples are given in this report for the use of the equations and graphs.
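Such regional regression equations take the usual log-linear USGS form Q = a·A^b·P^c·S^d. The coefficients below are hypothetical placeholders, not the report's fitted values; the sketch only shows how an equation of this form is evaluated.

```python
# Evaluate a low-flow regression equation of the report's general form.
# Coefficients a, b, c, d are HYPOTHETICAL, for illustration only.

def q7_10(area, precip, slope, a=1.5e-4, b=1.05, c=1.8, d=0.25):
    """Estimate 7Q10 (cfs) as Q = a * A^b * P^c * S^d,
    with A = drainage area (mi^2), P = mean annual precipitation
    index (in), and S = main channel slope (ft/mi)."""
    return a * area**b * precip**c * slope**d

q = q7_10(area=120.0, precip=58.0, slope=4.5)
print(f"estimated 7Q10: {q:.1f} cfs")
```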
Bayesian decoding using unsorted spikes in the rat hippocampus
Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A.
2013-01-01
A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametric, encoding model-free for representing stimuli, and extracts information from all available spikes and their waveform features. We apply the proposed Bayesian decoding algorithm to a position reconstruction task for freely behaving rats based on tetrode recordings of rat hippocampal neuronal activity. Our detailed decoding analyses demonstrate that our approach is efficient and better utilizes the available information in the nonsortable hash than the standard sorting-based decoding algorithm. Our approach can be adapted to an online encoding/decoding framework for applications that require real-time decoding, such as brain-machine interfaces. PMID:24089403
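The clusterless idea, decoding from spike waveform features directly without sorting, can be sketched with a toy one-dimensional kernel density estimate. This illustrates the feature-to-covariate mapping only, not the paper's full point-process model.

```python
import numpy as np

# Toy clusterless decoder: each spike carries a waveform amplitude, and
# decoding evaluates a Gaussian kernel density over the joint
# (amplitude, position) space built from training spikes.

rng = np.random.default_rng(1)

# training data: spikes whose amplitude depends on animal position
train_pos = rng.uniform(0, 100, 2000)
train_amp = 40 + 0.5 * train_pos + rng.standard_normal(2000) * 3

def decode_position(spike_amps, h_pos=5.0, h_amp=3.0):
    """MAP position from unsorted spike amplitudes via a product of KDEs."""
    grid = np.linspace(0, 100, 201)
    log_post = np.zeros_like(grid)
    for amp in spike_amps:
        # joint Gaussian kernel density p(amp, x) evaluated on the grid
        w = np.exp(-0.5 * ((train_amp - amp) / h_amp) ** 2)
        dens = np.array([(w * np.exp(-0.5 * ((train_pos - x) / h_pos) ** 2)).sum()
                         for x in grid])
        log_post += np.log(dens + 1e-12)
    return grid[np.argmax(log_post)]

# decode spikes generated near true position 60 (expected amplitude ~70)
est = decode_position([69.0, 70.5, 71.0])
print(f"decoded position: {est:.1f}")
```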
A general geometric theory of attitude determination from directional sensing
NASA Technical Reports Server (NTRS)
Fang, B. T.
1976-01-01
A general geometric theory of spacecraft attitude determination from external reference direction sensors was presented. Outputs of different sensors are reduced to two kinds of basic directional measurements. Errors in these measurement equations are studied in detail. The partial derivatives of measurements with respect to the spacecraft orbit, the spacecraft attitude, and the error parameters form the basis for all orbit and attitude determination schemes and error analysis programs and are presented in a series of tables. The question of attitude observability is studied with the introduction of a graphical construction which provides a great deal of physical insight. The result is applied to the attitude observability of the IMP-8 spacecraft.
NASA Technical Reports Server (NTRS)
Callantine, Todd J.
2002-01-01
This report describes preliminary research on intelligent agents that make errors. Such agents are crucial to the development of novel agent-based techniques for assessing system safety. The agents extend an agent architecture derived from the Crew Activity Tracking System that has been used as the basis for air traffic controller agents. The report first reviews several error taxonomies. Next, it presents an overview of the air traffic controller agents, then details several mechanisms for causing the agents to err in realistic ways. The report presents a performance assessment of the error-generating agents, and identifies directions for further research. The research was supported by the System-Wide Accident Prevention element of the FAA/NASA Aviation Safety Program.
Error in total ozone measurements arising from aerosol attenuation
NASA Technical Reports Server (NTRS)
Thomas, R. W. L.; Basher, R. E.
1979-01-01
A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.
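The generalized least squares step has the standard weighted normal-equations form b = (X^T W X)^{-1} X^T W y. A sketch with invented numbers, where the design matrix stands in for the ozone and aerosol attenuation coefficients at each wavelength line pair:

```python
import numpy as np

def gls(X, y, W):
    """Weighted least squares: b = (X^T W X)^{-1} X^T W y."""
    XtW = X.T @ W
    return np.linalg.solve(XtW @ X, XtW @ y)

# ILLUSTRATIVE design: 6 "line pairs", 2 unknowns (ozone, aerosol term)
X = np.array([[1.0, 0.2], [1.2, 0.5], [0.8, 0.9],
              [1.5, 0.3], [0.9, 1.2], [1.1, 0.7]])
b_true = np.array([0.30, 0.05])
sigma = np.array([0.01, 0.01, 0.02, 0.02, 0.03, 0.03])  # per-pair noise
rng = np.random.default_rng(2)
y = X @ b_true + rng.standard_normal(6) * sigma
W = np.diag(1.0 / sigma**2)            # inverse-variance weights

b_hat = gls(X, y, W)
print("estimated parameters:", np.round(b_hat, 3))
```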
Sensitivity, optimal scaling and minimum roundoff errors in flexible structure models
NASA Technical Reports Server (NTRS)
Skelton, Robert E.
1987-01-01
Traditional modeling notions presume the existence of a truth model that relates the input to the output, without advanced knowledge of the input. This has led to the evolution of education and research approaches (including the available control and robustness theories) that treat the modeling and control design as separate problems. The paper explores the subtleties of this presumption that the modeling and control problems are separable. A detailed study of the nature of modeling errors is useful to gain insight into the limitations of traditional control and identification points of view. Modeling errors need not be small but simply appropriate for control design. Furthermore, the modeling and control design processes are inevitably iterative in nature.
Emissivity correction for interpreting thermal radiation from a terrestrial surface
NASA Technical Reports Server (NTRS)
Sutherland, R. A.; Bartholic, J. F.; Gerber, J. F.
1979-01-01
A general method of accounting for emissivity in making temperature determinations of graybody surfaces from radiometric data is presented. The method differs from previous treatments in that a simple blackbody calibration and graphical approach is used rather than numerical integrations, which require detailed knowledge of an instrument's spectral characteristics. Also, errors caused by approximating instrumental response with the Stefan-Boltzmann law rather than with an appropriately weighted Planck integral are examined. In the 8-14 micron wavelength interval, it is shown that errors are at most on the order of 3 C for the extremes of the earth's temperature and emissivity. For more practical limits, however, errors are less than 0.5 C.
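The graybody correction amounts to inverting the broadband radiance balance L = eps*sigma*Ts^4 + (1-eps)*sigma*Tsky^4, i.e. the Stefan-Boltzmann approximation whose residual error the abstract quantifies. The numbers below are illustrative, not from the paper.

```python
# Correct an apparent (blackbody-equivalent) radiometric temperature for
# surface emissivity and reflected sky radiation, using the broadband
# Stefan-Boltzmann approximation. Input values are ILLUSTRATIVE.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temp(t_apparent_k, emissivity, t_sky_k):
    """Invert L = eps*sigma*Ts^4 + (1-eps)*sigma*Tsky^4 for Ts."""
    radiance = SIGMA * t_apparent_k**4            # what the radiometer "sees"
    ts4 = (radiance / SIGMA - (1 - emissivity) * t_sky_k**4) / emissivity
    return ts4 ** 0.25

# apparent 300 K, emissivity 0.95, cold sky at 260 K
ts = surface_temp(300.0, 0.95, 260.0)
print(f"corrected surface temperature: {ts:.2f} K")
```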
AFRL-SA-BR-TR-2010-0011: The U.S. Air Force Photorefractive Keratectomy (PRK) Study: Evaluation of Residual Refractive Error and High- and...
2006-07-01
...values for statistical analyses in terms of Snellen equivalent VA (Ref 44) and lines gained vs. lost after PRK. The Snellen VA values shown in the...
Uncertainties in climate data sets
NASA Technical Reports Server (NTRS)
Mcguirk, James P.
1992-01-01
Climate diagnostics are constructed from either analyzed fields or from observational data sets. Those that have been commonly used are normally considered ground truth. However, in most of these collections, errors and uncertainties exist which are generally ignored due to the consistency of usage over time. Examples of uncertainties and errors are described in NMC and ECMWF analyses and in satellite observational sets: OLR, TOVS, and SMMR. It is suggested that these errors can be large, systematic, and not negligible in climate analysis.
Huddle, Thomas S
2010-01-01
The Association of American Medical Colleges (AAMC) is urging academic medical centers to ban pharmaceutical detailing. This policy followed from a consideration of behavioral and neuroeconomics research. I argue that this research did not warrant the conclusions drawn from it. Pharmaceutical detailing carries risks of cognitive error for physicians, as do other forms of information exchange. Physicians may overcome such risks; those determined to do so may ethically engage in pharmaceutical detailing. Whether or not they should do so is a prudential judgment about which reasonable people may disagree. The AAMC's ethical condemnation of detailing is unwarranted and will subvert efforts to maintain a realm of physician discretion in clinical work that is increasingly threatened in our present practice environment.
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2013 CFR
2013-01-01
... measurements, obtaining samples, and performing laboratory analyses for element concentration and isotope... of random error behavior. On a predetermined schedule, the program shall include, as appropriate: (i) Replicate analyses of individual samples; (ii) Analysis of replicate process samples; (iii) Replicate volume...
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2014 CFR
2014-01-01
... measurements, obtaining samples, and performing laboratory analyses for element concentration and isotope... of random error behavior. On a predetermined schedule, the program shall include, as appropriate: (i) Replicate analyses of individual samples; (ii) Analysis of replicate process samples; (iii) Replicate volume...
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2012 CFR
2012-01-01
... measurements, obtaining samples, and performing laboratory analyses for element concentration and isotope... of random error behavior. On a predetermined schedule, the program shall include, as appropriate: (i) Replicate analyses of individual samples; (ii) Analysis of replicate process samples; (iii) Replicate volume...
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. A freestanding JAVA application is available at http://microarray.crump.ucla.edu/focus
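The Boolean conjunction rules that PRIM infers can be illustrated with a minimal sketch. The threshold values and toy inputs below are hypothetical illustrations, not values from the paper.

```python
# Hedged sketch of a PRIM-style conjunction rule: call a transcript changed
# only when BOTH fold-induction and raw fluorescence exceed learned thresholds.
# min_fold and min_raw are assumed values for illustration only.

def conjoint_call(fold_induction, raw_fluorescence,
                  min_fold=1.8, min_raw=200.0):
    """Conjoint call: requires reliable fold change AND adequate raw signal."""
    return fold_induction >= min_fold and raw_fluorescence >= min_raw

# A weakly expressed gene with a large fold change is rejected as unreliable,
# while a moderately induced, well-expressed gene is accepted.
print(conjoint_call(3.0, 50.0))   # False: raw signal too weak
print(conjoint_call(2.0, 500.0))  # True
```

The conjunction is what distinguishes this from a plain fold-change cutoff: it suppresses false calls driven by noise at low fluorescence.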
Angular rate optimal design for the rotary strapdown inertial navigation system.
Yu, Fei; Sun, Qian
2014-04-22
Due to the characteristics of high precision for a long duration, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that as one of the key technologies, the rotating angular rate seriously influences the effectiveness of the error modulating. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform in this paper. The analysis results showed that the velocity error of the RSINS depends on not only the sensor error, but also the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS.
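The error-modulation effect described above can be shown with a toy numerical sketch: rotating the IMU turns a constant sensor bias into a sinusoid whose integral (a velocity-like error) stays bounded at roughly bias/omega, so the peak velocity error shrinks as the rotating angular rate grows. All values are illustrative assumptions, not the paper's Laplace-domain analysis.

```python
import numpy as np

# Toy sketch (assumed values): carousel rotation modulates a constant bias
# into bias*cos(omega*t); integrating gives a bounded, rate-dependent error.
def peak_velocity_error(bias, omega, t_end=600.0, dt=0.01):
    t = np.arange(0.0, t_end, dt)
    modulated = bias * np.cos(omega * t)   # bias as seen in the navigation frame
    v_err = np.cumsum(modulated) * dt      # crude integration to a velocity error
    return np.max(np.abs(v_err))

bias = 1e-3                                # constant accelerometer bias
slow = peak_velocity_error(bias, omega=0.01)
fast = peak_velocity_error(bias, omega=0.1)
print(slow > fast)                         # faster rotation -> smaller peak error
```

Analytically the integral is bias*sin(omega*t)/omega, so the peak error is inversely proportional to the rotation rate, which is the qualitative relationship the paper's optimal-rate design exploits.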
Fault detection and isolation in motion monitoring system.
Kim, Duk-Jin; Suk, Myoung Hoon; Prabhakaran, B
2012-01-01
Pervasive computing has become a very active research field. A watch that traces human movement can record motion boundaries and support the study of social-life patterns through one's localized visiting areas. Pervasive computing also helps patient monitoring: a daily monitoring system supports longitudinal studies of patients, such as monitoring for Alzheimer's disease, Parkinson's disease, or obesity. Due to the nature of the monitoring sensor (an on-body wireless sensor), however, signal noise or faulty-sensor errors can be present at any time. Many research works have addressed these problems, but typically with a large deployment of sensors. In this paper, we present faulty-sensor detection and isolation using only two on-body sensors. We investigate three different types of sensor errors: the SHORT error, the CONSTANT error, and the NOISY SENSOR error (see details in section V). Our experimental results show that the success rate of isolating faulty signals averages over 91.5% on fault type 1, over 92% on fault type 2, and over 99% on fault type 3, with a fault prior of 30% sensor errors.
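The three fault types named above can be sketched as simple signal-injection functions; the injection parameters (spike height, stuck value, noise level) are illustrative assumptions, not the paper's.

```python
import random

# Hedged sketch of the three sensor fault types: SHORT (momentary spike),
# CONSTANT (stuck-at value), NOISY SENSOR (variance jump). Parameters are
# assumed for illustration.

def inject_short(signal, i, spike=10.0):
    s = list(signal)
    s[i] += spike                    # momentary spike at one sample
    return s

def inject_constant(signal, start, value=5.0):
    return signal[:start] + [value] * (len(signal) - start)   # stuck sensor

def inject_noisy(signal, sigma=2.0, seed=0):
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in signal]        # variance jump

clean = [0.0] * 8
print(inject_short(clean, 3))     # spike only at index 3
print(inject_constant(clean, 4))  # stuck at 5.0 from index 4 on
```

Fault detection then amounts to recognizing which of these signatures appears in one sensor's stream but not the other's.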
Human factors in aircraft incidents - Results of a 7-year study (Andre Allard Memorial Lecture)
NASA Technical Reports Server (NTRS)
Billings, C. E.; Reynard, W. D.
1984-01-01
It is pointed out that nearly all fatal aircraft accidents are preventable, and that most such accidents are due to human error. The present discussion is concerned with the results of a seven-year study of the data collected by the NASA Aviation Safety Reporting System (ASRS). The Aviation Safety Reporting System was designed to stimulate as large a flow as possible of information regarding errors and operational problems in the conduct of air operations. It was implemented in April, 1976. In the following 7.5 years, 35,000 reports have been received from pilots, controllers, and the armed forces. Human errors are found in more than 80 percent of these reports. Attention is given to the types of events reported, possible causal factors in incidents, the relationship of incidents and accidents, and sources of error in the data. ASRS reports include sufficient detail to permit authorities to institute changes in the national aviation system designed to minimize the likelihood of human error, and to insulate the system against the effects of errors.
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build intuitively and systematically an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Cohen, Michael R.; Smetzer, Judy L.
2015-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers’ names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters’ wishes as to the level of detail included in publications. PMID:26715797
NASA Technical Reports Server (NTRS)
Yang, Shu-Chih; Rienecker, Michele; Keppenne, Christian
2010-01-01
This study investigates the impact of four different ocean analyses on coupled forecasts of the 2006 El Niño event. Forecasts initialized in June 2006 using ocean analyses from an assimilation that uses flow-dependent background error covariances are compared with those using static error covariances that are not flow dependent. The flow-dependent error covariances reflect the error structures related to the background ENSO instability and are generated by the coupled breeding method. The ocean analyses used in this study result from the assimilation of temperature and salinity, with the salinity data available from Argo floats. Of the analyses, the one using information from the coupled bred vectors (BV) replicates the observed equatorial long wave propagation best and exhibits more warming features leading to the 2006 El Niño event. The forecasts initialized from the BV-based analysis agree best with the observations in terms of the growth of the warm anomaly through two warming phases. This better performance is related to the impact of the salinity analysis on the state evolution in the equatorial thermocline. The early warming is traced back to salinity differences in the upper ocean of the equatorial central Pacific, while the second warming, corresponding to the mature phase, is associated with the effect of the salinity assimilation on the depth of the thermocline in the western equatorial Pacific. The series of forecast experiments conducted here show that the structure of the salinity in the initial conditions is important to the forecasts of the extension of the warm pool and the evolution of the 2006 El Niño event.
Determination of nutritional parameters of yoghurts by FT Raman spectroscopy
NASA Astrophysics Data System (ADS)
Czaja, Tomasz; Baranowska, Maria; Mazurek, Sylwester; Szostak, Roman
2018-05-01
FT-Raman quantitative analysis of nutritional parameters of yoghurts was performed with the help of partial least squares models. The relative standard errors of prediction for fat, lactose and protein determination in the quantified commercial samples equalled 3.9, 3.2 and 3.6%, respectively. Models based on attenuated total reflectance spectra of the liquid yoghurt samples and of dried yoghurt films collected with the single reflection diamond accessory showed relative standard errors of prediction of 1.6-5.0% and 2.7-5.2%, respectively, for the analysed components. Despite a relatively low signal-to-noise ratio in the obtained spectra, Raman spectroscopy, combined with chemometrics, constitutes a fast and powerful tool for macronutrient quantification in yoghurts. Errors obtained with the attenuated total reflectance method were relatively higher than those for Raman spectroscopy, owing to the inhomogeneity of the analysed samples.
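One common definition of the relative standard error of prediction (RSEP, %) used to score calibration models of this kind can be sketched as follows; the toy reference and predicted values are illustrative, not the study's data.

```python
import math

# Hedged sketch: RSEP(%) = 100 * sqrt( sum((pred - ref)^2) / sum(ref^2) ),
# a standard figure of merit for PLS calibration models. Data are invented.

def rsep_percent(reference, predicted):
    num = sum((p - r) ** 2 for r, p in zip(reference, predicted))
    den = sum(r ** 2 for r in reference)
    return 100.0 * math.sqrt(num / den)

ref  = [3.0, 3.2, 2.9, 3.1]   # e.g. fat content, g per 100 g (assumed)
pred = [3.1, 3.1, 3.0, 3.0]
print(rsep_percent(ref, pred))   # ~3.28 %
```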
NASA Astrophysics Data System (ADS)
Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent
2008-07-01
This paper describes the modeling effort undertaken to derive the wavefront error (WFE) budget for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility, laser guide star (LGS), dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The budget describes the expected performance of NFIRAOS at zenith, and has been decomposed into (i) first-order turbulence compensation terms (120 nm on-axis), (ii) opto-mechanical implementation errors (84 nm), (iii) AO component errors and higher-order effects (74 nm) and (iv) tip/tilt (TT) wavefront errors at 50% sky coverage at the galactic pole (61 nm) with natural guide star (NGS) tip/tilt/focus/astigmatism (TTFA) sensing in J band. A contingency of about 66 nm now exists to meet the observatory requirement document (ORD) total on-axis wavefront error of 187 nm, mainly on account of reduced TT errors due to updated windshake modeling and a low read-noise NGS wavefront sensor (WFS) detector. A detailed breakdown of each of these top-level terms is presented, together with a discussion on its evaluation using a mix of high-order zonal and low-order modal Monte Carlo simulations.
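Wavefront error budget terms of this kind combine in quadrature (root-sum-square), and the four top-level terms quoted above reproduce the stated contingency against the 187 nm ORD total:

```python
import math

# The four top-level NFIRAOS terms quoted in the abstract, in nm RMS.
terms = {"first-order turbulence compensation": 120.0,
         "opto-mechanical implementation":       84.0,
         "AO components and higher order":       74.0,
         "tip/tilt at 50% sky coverage":         61.0}

# Independent error terms add in quadrature.
rss = math.sqrt(sum(v ** 2 for v in terms.values()))
contingency = math.sqrt(187.0 ** 2 - rss ** 2)   # against the 187 nm ORD total

print(round(rss))          # ~175 nm from the four terms
print(round(contingency))  # ~66 nm, matching the quoted contingency
```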
Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric
2009-01-01
The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.
A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers
NASA Technical Reports Server (NTRS)
Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen;
2016-01-01
We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
Avoiding Human Error in Mission Operations: Cassini Flight Experience
NASA Technical Reports Server (NTRS)
Burk, Thomas A.
2012-01-01
Operating spacecraft is a never-ending challenge and the risk of human error is ever- present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.
Levin, Bruce; Thompson, John L P; Chakraborty, Bibhas; Levy, Gilberto; MacArthur, Robert; Haley, E Clarke
2011-08-01
TNK-S2B, an innovative, randomized, seamless phase II/III trial of tenecteplase versus rt-PA for acute ischemic stroke, terminated for slow enrollment before regulatory approval of use of phase II patients in phase III. (1) To review the trial design and comprehensive type I error rate simulations and (2) to discuss issues raised during regulatory review, to facilitate future approval of similar designs. In phase II, an early (24-h) outcome and adaptive sequential procedure selected one of three tenecteplase doses for phase III comparison with rt-PA. Decision rules comparing this dose to rt-PA would cause stopping for futility at phase II end, or continuation to phase III. Phase III incorporated two co-primary hypotheses, allowing for a treatment effect at either end of the trichotomized Rankin scale. Assuming no early termination, four interim analyses and one final analysis of 1908 patients provided an experiment-wise type I error rate of <0.05. Over 1,000 distribution scenarios, each involving 40,000 replications, the maximum type I error in phase III was 0.038. Inflation from the dose selection was more than offset by the one-half continuity correction in the test statistics. Inflation from repeated interim analyses was more than offset by the reduction from the clinical stopping rules for futility at the first interim analysis. Design complexity and evolving regulatory requirements lengthened the review process. (1) The design was innovative and efficient. Per protocol, type I error was well controlled for the co-primary phase III hypothesis tests, and experiment-wise. (2a) Time must be allowed for communications with regulatory reviewers from first design stages. (2b) Adequate type I error control must be demonstrated. (2c) Greater clarity is needed on (i) whether this includes demonstration of type I error control if the protocol is violated and (ii) whether simulations of type I error control are acceptable. 
(2d) Regulatory agency concerns that protocols for futility stopping may not be followed may be allayed by submitting interim analysis results to them as these analyses occur.
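The kind of type I error simulation described above can be sketched in miniature: repeated unadjusted interim looks inflate the experiment-wise error above the nominal 0.05, which is why boundary corrections and clinical stopping rules matter for the overall rate. The two-look z-test below is a generic illustration, not the trial's actual design or boundaries.

```python
import random

# Hedged sketch: Monte Carlo type I error under the null for a two-look
# design with a naive (unadjusted) 1.96 boundary at each look.
def one_trial(rng, n_per_look=50, looks=2):
    total, n, rejected = 0.0, 0, False
    for _ in range(looks):
        for _ in range(n_per_look):
            total += rng.gauss(0.0, 1.0)   # null: no treatment effect
            n += 1
        z = total / n ** 0.5
        if abs(z) > 1.96:                  # unadjusted boundary at each look
            rejected = True
    return rejected

rng = random.Random(42)
reps = 5000
rate = sum(one_trial(rng) for _ in range(reps)) / reps
print(rate > 0.05)   # repeated looks inflate error above the nominal 0.05
```

Scaling this idea up to the trial's 40,000-replication runs over many distribution scenarios is how the reported maximum phase III type I error of 0.038 was established.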
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
In order to reach the accuracy of the GRACE baseline predicted earlier from the design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets that differ in pointing performance. Range-rate residuals are then computed from these two datasets, respectively, and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlation between range frequency noise and range-rate residuals is also seen.
Cohen, Michael R; Smetzer, Judy L
2014-07-01
These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.
Ji, Yue; Xu, Mengjie; Li, Xingfei; Wu, Tengfei; Tuo, Weixiao; Wu, Jun; Dong, Jiuzhi
2018-06-13
The magnetohydrodynamic (MHD) angular rate sensor (ARS) with low noise level in ultra-wide bandwidth is developed in lasing and imaging applications, especially the line-of-sight (LOS) system. A modified MHD ARS combined with the Coriolis effect was studied in this paper to expand the sensor’s bandwidth at low frequency (<1 Hz), which is essential for precision LOS pointing and wide-bandwidth LOS jitter suppression. The model and the simulation method were constructed and a comprehensive solving method based on the magnetic and electric interaction methods was proposed. The numerical results on the Coriolis effect and the frequency response of the modified MHD ARS were detailed. In addition, according to the experimental results of the designed sensor consistent with the simulation results, an error analysis of model errors was discussed. Our study provides an error analysis method of MHD ARS combined with the Coriolis effect and offers a framework for future studies to minimize the error.
ISMP Medication Error Report Analysis
Cohen, Michael R.; Smetzer, Judy L.
2017-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:28179735
Effect of phase errors in stepped-frequency radar systems
NASA Astrophysics Data System (ADS)
Vanbrundt, H. E.
1988-04-01
Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduced signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used as an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and signal-to-phase-noise ratio.
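The peak-response loss described above can be shown with a toy model: random phase error across the burst decorrelates the coherent sum that forms the synthesized range profile. The burst length and phase-noise level below are illustrative assumptions, not values from the report.

```python
import cmath, random

# Hedged sketch: a point target's ideal returns across an n-step burst are
# unit phasors; random phase error (sigma_rad, assumed) lowers the coherent
# peak, with expected loss ~ exp(-sigma^2 / 2).
def peak_response(n_steps=64, sigma_rad=0.0, seed=1):
    rng = random.Random(seed)
    returns = [cmath.exp(1j * rng.gauss(0.0, sigma_rad)) for _ in range(n_steps)]
    return abs(sum(returns)) / n_steps     # normalized coherent sum

ideal = peak_response(sigma_rad=0.0)
noisy = peak_response(sigma_rad=0.5)
print(ideal)           # 1.0: perfect coherent integration
print(noisy < ideal)   # phase noise reduces the peak response
```

The same energy lost from the peak reappears in the sidelobes and nulls, which is the mechanism behind the range-dependent signal-to-noise degradation the report analyzes.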
ISMP Medication Error Report Analysis
Cohen, Michael R.; Smetzer, Judy L.
2017-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your in-service training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers’ names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters’ wishes as to the level of detail included in publications. PMID:29276260
ISMP Medication Error Report Analysis
Cohen, Michael R.; Smetzer, Judy L.
2016-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:28057945
ISMP Medication Error Report Analysis
Cohen, Michael R.; Smetzer, Judy L.
2016-01-01
These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:27928183
Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses.
Baldwin, Abigail; Rodriguez, Elizabeth S
2016-02-01
The prevalence of medication errors associated with chemotherapy administration is not precisely known. Little evidence exists concerning the extent or nature of errors; however, some evidence demonstrates that errors are related to prescribing. This article demonstrates how the review of chemotherapy orders by a designated nurse known as a verification nurse (VN) at a National Cancer Institute-designated comprehensive cancer center helps to identify prescribing errors that may prevent chemotherapy administration mistakes and improve patient safety in outpatient infusion units. This article will describe the role of the VN and details of the verification process. To identify benefits of the VN role, a retrospective review and analysis of chemotherapy near-miss events from 2009-2014 was performed. A total of 4,282 events related to chemotherapy were entered into the Reporting to Improve Safety and Quality system. A majority of the events were categorized as near-miss events, or those that, because of chance, did not result in patient injury, and were identified at the point of prescribing.
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
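The attenuation that motivates the correction can be illustrated directly: computing AUC as the Mann-Whitney probability that a case's marker exceeds a control's, then adding measurement error, biases the estimate toward 0.5. The distributions and noise level below are illustrative assumptions, not the paper's method.

```python
import random

# Hedged sketch: AUC = P(case marker > control marker), estimated by pairwise
# comparison; adding assumed normal measurement error attenuates it.
def auc(cases, controls):
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

rng = random.Random(7)
controls = [rng.gauss(0.0, 1.0) for _ in range(1000)]
cases    = [rng.gauss(1.0, 1.0) for _ in range(1000)]

true_auc  = auc(cases, controls)
noisy_auc = auc([x + rng.gauss(0.0, 1.0) for x in cases],
                [x + rng.gauss(0.0, 1.0) for x in controls])
print(noisy_auc < true_auc)   # measurement error biases AUC toward 0.5
```

Correcting for this bias, without assuming the biomarkers are normally distributed, is the contribution of the proposed method.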
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry is the measurement of all parts of the human body surface; the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and data errors are analyzed through error frequency and through the analysis of variance method of mathematical statistics. The accuracy of the measured data and the difficulty of measuring particular parts of the human body are determined, the causes of data errors are studied further, and key points for minimizing errors are summarized. By analysing measured data on the basis of error frequency, this paper provides reference points to promote the development of the garment industry.
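An analysis of variance of the kind mentioned above compares error variation between measured body sites with variation within repeated measurements; a minimal one-way F statistic can be sketched as follows, with invented data rather than the paper's measurements.

```python
# Hedged sketch: one-way ANOVA F statistic comparing measurement errors
# across body sites. The error values (cm) are illustrative assumptions.

def one_way_anova_f(groups):
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Repeated measurement errors (cm) at three hypothetical body sites
waist = [0.4, 0.5, 0.6]
chest = [0.1, 0.2, 0.1]
hip   = [0.3, 0.3, 0.4]
print(one_way_anova_f([waist, chest, hip]) > 1.0)  # site means differ markedly
```

A large F indicates that some body sites are systematically harder to measure than others, which is the kind of conclusion the error-frequency analysis supports.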
Suba, Eric J; Pfeifer, John D; Raab, Stephen S
2007-10-01
Patient identification errors in surgical pathology often involve switches of prostate or breast needle core biopsy specimens among patients. We assessed strategies for decreasing the occurrence of these uncommon and yet potentially catastrophic events. Root cause analyses were performed following 3 cases of patient identification error involving prostate needle core biopsy specimens. Patient identification errors in surgical pathology result from slips and lapses of automatic human action that may occur at numerous steps during pre-laboratory, laboratory and post-laboratory work flow processes. Patient identification errors among prostate needle biopsies may be difficult to entirely prevent through the optimization of work flow processes. A DNA time-out, whereby DNA polymorphic microsatellite analysis is used to confirm patient identification before radiation therapy or radical surgery, may eliminate patient identification errors among needle biopsies.
Polishing and polishing materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, P.C.
1977-11-10
The current approach to optical surface finishing indicates a general dependence on trial and error methods. This paper discusses some of the detailed research that is being done and advances the need for a more scientific approach by the opticians.
Analyzing temozolomide medication errors: potentially fatal.
Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee
2014-10-01
The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent, and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP) and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 (47%) of errors, followed by dispensing errors, which accounted for 13 (29%). Seven reports (16%) were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%) to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.
Configuration study for a 30 GHz monolithic receive array, volume 1
NASA Technical Reports Server (NTRS)
Nester, W. H.; Cleaveland, B.; Edward, B.; Gotkis, S.; Hesserbacker, G.; Loh, J.; Mitchell, B.
1984-01-01
Gregorian, Cassegrain, and single reflector systems were analyzed in configuration studies for communications satellite receive antennas. Parametric design and performance curves were generated. A preliminary design of each reflector/feed system was derived including radiating elements, beam-former network, beamsteering system, and MMIC module architecture. Performance estimates and component requirements were developed for each design. A recommended design was selected for both the scanning beam and the fixed beam case. Detailed design and performance analysis results are presented for the selected Cassegrain configurations. The final design point is characterized in detail and performance measures evaluated in terms of gain, sidelobe level, noise figure, carrier-to-interference ratio, prime power, and beamsteering. The effects of mutual coupling and excitation errors (including phase and amplitude quantization errors) are evaluated. Mechanical assembly drawings are given for the final design point. Thermal design requirements are addressed in the mechanical design.
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Prototype Development of a Geostationary Synthetic Thinned Aperture Radiometer, GeoSTAR
NASA Technical Reports Server (NTRS)
Tanner, Alan B.; Wilson, William J.; Kangaslahti, Pekka P.; Lambrigsten, Bjorn H.; Dinardo, Steven J.; Piepmeier, Jeffrey R.; Ruf, Christopher S.; Rogacki, Steven; Gross, S. M.; Musko, Steve
2004-01-01
Preliminary details of a 2-D synthetic aperture radiometer prototype operating from 50 to 58 GHz will be presented. The instrument is being developed as a laboratory testbed, and the goal of this work is to demonstrate the technologies needed to do atmospheric soundings with high spatial resolution from Geostationary orbit. The concept is to deploy a large sparse aperture Y-array from a geostationary satellite, and to use aperture synthesis to obtain images of the earth without the need for a large mechanically scanned antenna. The laboratory prototype consists of a Y-array of 24 horn antennas, MMIC receivers, and a digital cross-correlation sub-system. System studies are discussed, including an error budget which has been derived from numerical simulations. The error budget defines key requirements, such as null offsets, phase calibration, and antenna pattern knowledge. Details of the instrument design are discussed in the context of these requirements.
A new systematic calibration method of ring laser gyroscope inertial navigation system
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu
2016-10-01
The inertial navigation system (INS) is a core component of both military and civil navigation systems. Before an INS is put into application, it must be calibrated in the laboratory to compensate for repeatability errors caused by manufacturing. Discrete calibration cannot fulfill the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for the ring laser gyroscope inertial navigation system. Error models and equations of the calibrated inertial measurement unit are given. Proper rotation arrangement orders are then described in order to establish linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors calculated from the filtering calibration result with those obtained from the discrete calibration result. The largest position and velocity errors of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and demonstrate its importance for the optimal design and accuracy improvement of calibration of the mechanically dithered ring laser gyroscope inertial navigation system.
Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields
Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne
2015-01-01
Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
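The holography formulation referenced here is based on the Rayleigh integral, which propagates the normal velocity on a source plane to the pressure at a field point. A minimal numerical sketch under simplifying assumptions (a discretized baffled planar source at a single frequency; the element values and medium properties below are invented for illustration, not taken from the study):

```python
import cmath
import math

def rayleigh_pressure(field_pt, src_pts, v_n, dS, k, rho=1000.0, c=1500.0):
    """Discretized Rayleigh integral for a baffled planar source:
    p(r) = (i*omega*rho / (2*pi)) * sum_n v_n * exp(i*k*R_n) / R_n * dS,
    where R_n = |r - r_n| and v_n is the normal velocity of element n."""
    omega = k * c
    total = 0j
    for src, v in zip(src_pts, v_n):
        R = math.dist(field_pt, src)          # distance from element to field point
        total += v * cmath.exp(1j * k * R) / R * dS
    return 1j * omega * rho / (2 * math.pi) * total

# Invented example: a single 1 MHz element in water, observed on axis
k = 2 * math.pi * 1.0e6 / 1500.0
p = rayleigh_pressure((0.0, 0.0, 0.01), [(0.0, 0.0, 0.0)], [1e-3], 1e-6, k)
```

For a single element the magnitude falls off exactly as 1/R, which gives a quick sanity check on any implementation before adding measurement grids and error metrics.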
Topographical gradients of semantics and phonology revealed by temporal lobe stimulation.
Miozzo, Michele; Williams, Alicia C; McKhann, Guy M; Hamberger, Marla J
2017-02-01
Word retrieval is a fundamental component of oral communication, and it is well established that this function is supported by left temporal cortex. Nevertheless, the specific temporal areas mediating word retrieval and the particular linguistic processes these regions support have not been well delineated. Toward this end, we analyzed over 1000 naming errors induced by left temporal cortical stimulation in epilepsy surgery patients. Errors were primarily semantic (lemon → "pear"), phonological (horn → "corn"), non-responses, and delayed responses (correct responses after a delay), and each error type appeared predominantly in a specific region: semantic errors in mid-middle temporal gyrus (TG), phonological errors and delayed responses in middle and posterior superior TG, and non-responses in anterior inferior TG. To the extent that semantic errors, phonological errors and delayed responses reflect disruptions in different processes, our results imply topographical specialization of semantic and phonological processing. Specifically, results revealed an inferior-to-superior gradient, with more superior regions associated with phonological processing. Further, errors were increasingly semantically related to targets toward posterior temporal cortex. We speculate that detailed semantic input is needed to support phonological retrieval, and thus, the specificity of semantic input increases progressively toward posterior temporal regions implicated in phonological processing. Hum Brain Mapp 38:688-703, 2017. © 2016 Wiley Periodicals, Inc.
Teamwork and error in the operating room: analysis of skills and roles.
Catchpole, K; Mishra, A; Handa, A; McCulloch, P
2008-04-01
To analyze the effects of surgical, anesthetic, and nursing teamwork skills on technical outcomes. The value of team skills in reducing adverse events in the operating room is presently receiving considerable attention. Current work has not yet identified in detail how the teamwork and communication skills of surgeons, anesthetists, and nurses affect the course of an operation. Twenty-six laparoscopic cholecystectomies and 22 carotid endarterectomies were studied using direct observation methods. For each operation, teams' skills were scored for the whole team, and for nursing, surgical, and anesthetic subteams on 4 dimensions (leadership and management [LM]; teamwork and cooperation; problem solving and decision making; and situation awareness). Operating time, errors in surgical technique, and other procedural problems and errors were measured as outcome parameters for each operation. The relationships between teamwork scores and these outcome parameters within each operation were examined using analysis of variance and linear regression. Surgical (F(2,42) = 3.32, P = 0.046) and anesthetic (F(2,42) = 3.26, P = 0.048) LM had significant but opposite relationships with operating time in each operation: operating time increased significantly with higher anesthetic but decreased with higher surgical LM scores. Errors in surgical technique had a strong association with surgical situation awareness (F(2,42) = 7.93, P < 0.001) in each operation. Other procedural problems and errors were related to the intraoperative LM skills of the nurses (F(5,1) = 3.96, P = 0.027). Detailed analysis of team interactions and dimensions is feasible and valuable, yielding important insights into relationships between nontechnical skills, technical performance, and operative duration. These results support the concept that interventions designed to improve teamwork and communication may have beneficial effects on technical performance and patient outcome.
Broberg, Per
2013-07-19
One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim results show promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit an increase. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
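The promise criterion discussed here is stated in terms of conditional power at the interim. A minimal sketch of conditional power under the current trend for a normally distributed test statistic, using the standard B-value formulation (this is a generic textbook calculation, not the article's derived criterion; the numeric inputs are invented):

```python
from statistics import NormalDist

def conditional_power(z_t, t, alpha=0.05):
    """Conditional power under the current trend.

    z_t   : interim test statistic
    t     : information fraction at the interim (0 < t < 1)
    alpha : two-sided significance level
    Rejection at the final analysis requires B(1) >= z_crit, where
    B(t) = z_t * sqrt(t) and the estimated drift is theta = z_t / sqrt(t)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    b_t = z_t * t ** 0.5
    theta = z_t / t ** 0.5
    return nd.cdf((b_t + theta * (1 - t) - z_crit) / (1 - t) ** 0.5)

# At half the information with an interim statistic of 2.0:
cp = conditional_power(z_t=2.0, t=0.5)
```

Under this formulation, conditional power exceeds 50% exactly when z_t > z_crit * sqrt(t), which illustrates how a conditional-power condition can be re-expressed as a threshold on the interim test statistic, as the article does for its own criterion.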
Abraham, Raymond J; Griffiths, Lee; Perez, Manuel
2013-03-01
The (1)H spectra of 37 amides in CDCl(3) solvent were analysed and the chemical shifts obtained. The molecular geometries and conformational analysis of these amides were considered in detail. The NMR spectral assignments are of interest, e.g. the assignments of the formamide NH(2) protons reverse in going from CDCl(3) to more polar solvents. The substituent chemical shifts of the amide group in both aliphatic and aromatic amides were analysed using an approach based on neural network data for near (≤3 bonds removed) protons and the electric field, magnetic anisotropy, steric and, for aromatic systems, π effects of the amide group for more distant protons. The electric field is calculated from the partial atomic charges on the N-C═O atoms of the amide group. The magnetic anisotropy of the carbonyl group was reproduced with the asymmetric magnetic anisotropy acting at the midpoint of the carbonyl bond. The values of the anisotropies Δχ(parl) and Δχ(perp) were 10.53 and -23.67 (×10(-6) Å(3)/molecule) for the aliphatic amides and 2.12 and -10.43 (×10(-6) Å(3)/molecule) for the aromatic amides. The nitrogen anisotropy was 7.62 (×10(-6) Å(3)/molecule). These values are compared with previous literature values. The (1)H chemical shifts were calculated from the semi-empirical approach and also by gauge-independent atomic orbital calculations with the density functional theory method and the B3LYP/6-31++G(d,p) basis set. The semi-empirical approach gave good agreement, with a root mean square error of 0.081 ppm for the data set of 280 entries. The gauge-independent atomic orbital approach was generally acceptable, but significant errors (ca. 1 ppm) were found for the NH and CHO protons and also for some other protons. Copyright © 2013 John Wiley & Sons, Ltd.
Cognitive function is associated with risk aversion in community-based older persons.
Boyle, Patricia A; Yu, Lei; Buchman, Aron S; Laibson, David I; Bennett, David A
2011-09-11
Emerging data from younger and middle-aged persons suggest that cognitive ability is negatively associated with risk aversion, but this association has not been studied among older persons who are at high risk of experiencing loss of cognitive function. Using data from 369 community-dwelling older persons without dementia from the Rush Memory and Aging Project, an ongoing longitudinal epidemiologic study of aging, we examined the correlates of risk aversion and tested the hypothesis that cognition is negatively associated with risk aversion. Global cognition and five specific cognitive abilities were measured via detailed cognitive testing, and risk aversion was measured using standard behavioral economics questions in which participants were asked to choose between a certain monetary payment ($15) versus a gamble in which they could gain more than $15 or gain nothing; potential gamble gains ranged from $21.79 to $151.19 with the gain amounts varied randomly over questions. We first examined the bivariate associations of age, education, sex, income and cognition with risk aversion. Next, we examined the associations between cognition and risk aversion via mixed models adjusted for age, sex, education, and income. Finally, we conducted sensitivity analyses to ensure that our results were not driven by persons with preclinical cognitive impairment. In bivariate analyses, sex, education, income and global cognition were associated with risk aversion. However, in a mixed effect model, only sex (estimate = -1.49, standard error (SE) = 0.39, p < 0.001) and global cognitive function (estimate = -1.05, standard error (SE) = 0.34, p < 0.003) were significantly inversely associated with risk aversion. Thus, a lower level of global cognitive function and female sex were associated with greater risk aversion. 
Moreover, performance on four out of the five cognitive domains was negatively related to risk aversion (i.e., semantic memory, episodic memory, working memory, and perceptual speed); performance on visuospatial abilities was not. A lower level of cognitive ability and female sex are associated with greater risk aversion in advanced age.
Linguistic pattern analysis of misspellings of typically developing writers in grades 1-9.
Bahr, Ruth Huntley; Silliman, Elaine R; Berninger, Virginia W; Dow, Michael
2012-12-01
A mixed-methods approach, evaluating triple word-form theory, was used to describe linguistic patterns of misspellings. Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in Grades 1-9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade-level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between Grades 4 and 5. Similar error types were noted across age groups, but the nature of linguistic feature error changed with age. Triple word-form theory was supported. By Grade 1, orthographic errors predominated, and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects nonlinear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling.
Vehicle antenna for the mobile satellite experiment
NASA Technical Reports Server (NTRS)
Peng, Sheng Y.; Chung, H. H.; Leggiere, D.; Foy, W.; Schaffner, G.; Nelson, J.; Pagels, W.; Vayner, M.; Faller, H. L.; Messer, L.
1988-01-01
A low profile, low cost, printed circuit, electronically steered, right-hand circularly polarized phased array antenna system has been developed for the Mobile Satellite Experiment (MSAT-X) Program. The success of this antenna is based upon the development of a crossed-slot element array and detailed trade-off analyses for both the phased array and pointing system design. The optimized system provides higher gain at low elevation angles (20 degrees above the horizon) and broader frequency coverage (approximately 8 1/2 percent bandwidth) than is possible with a patch array. Detailed analysis showed that optimum performance could be achieved with a 19-element array in a triangular lattice geometry with 3.9 inch element spacing. This configuration minimizes grating lobes at large scan angles and improves intersatellite isolation. The array has an aperture 20 inches in diameter and is 0.75 inch thick overall, exclusive of the RF and power connector. The pointing system employs a hybrid approach that operates with both an external rate sensor and an internal error signal as a means of fine tuning beam acquisition and track. The beam is steered electronically via eighteen 3-bit diode phase shifters; a nineteenth phase shifter is not required, as the center element serves only as a reference. Measured patterns and gain show that the array meets the stipulated performance specifications everywhere except at some low elevation angles.
Radar-rain-gauge rainfall estimation for hydrological applications in small catchments
NASA Astrophysics Data System (ADS)
Gabriele, Salvatore; Chiaravalloti, Francesco; Procopio, Antonio
2017-07-01
Accurate evaluation of the spatio-temporal structure of precipitation is a critical step in rainfall-runoff modelling. Particularly in small catchments, the variability of rainfall can lead to poor model results. Large errors in flow evaluation may occur during convective storms, which are responsible for most flash floods in small catchments in the Mediterranean area. During such events, large spatial and temporal variability can be expected, so rain-gauge measurements alone can be insufficient to adequately depict extreme rainfall events. In this work, a double-level information approach, based on rain gauges and weather radar measurements, is used to improve areal rainfall estimates for hydrological applications. To highlight the effect that precipitation fields with different levels of spatial detail have on hydrological modelling, two kinds of spatial rainfall fields were computed for precipitation data collected during 2015: one from rain gauges only and one from their merging with radar information. The differences produced by these two precipitation fields in the computation of areal mean rainfall accumulation were evaluated for 999 basins of the region of Calabria, southern Italy. Moreover, both precipitation fields were used to carry out rainfall-runoff simulations at catchment scale for the main precipitation events that occurred during 2015, and the differences between the scenarios obtained in the two cases were analysed. A representative case study is presented in detail.
NASTRAN maintenance and enhancement experiences
NASA Technical Reports Server (NTRS)
Schmitz, R. P.
1975-01-01
The current capability is described, including isoparametric elements, optimization of grid point sequencing, and an eigenvalue routine. Overlay and coding errors were corrected in the cyclic symmetry, transient response, and differential stiffness rigid formats. Error corrections and program enhancements are discussed, along with developments scheduled for the current year and a brief description of analyses being performed using the program.
Increased Error-Related Negativity (ERN) in Childhood Anxiety Disorders: ERP and Source Localization
ERIC Educational Resources Information Center
Ladouceur, Cecile D.; Dahl, Ronald E.; Birmaher, Boris; Axelson, David A.; Ryan, Neal D.
2006-01-01
Background: In this study we used event-related potentials (ERPs) and source localization analyses to track the time course of neural activity underlying response monitoring in children diagnosed with an anxiety disorder compared to age-matched low-risk normal controls. Methods: High-density ERPs were examined following errors on a flanker task…
Land use surveys by means of automatic interpretation of LANDSAT system data
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Novo, E. M. L. D.; Niero, M.; Foresti, C.
1981-01-01
Analyses for seven land-use classes are presented. The classes are: urban area, industrial area, bare soil, cultivated area, pastureland, reforestation, and natural vegetation. The automatic classification of LANDSAT MSS data using a maximum likelihood algorithm shows a 39% average error of omission and a 3.45% error of commission for the seven classes.
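Omission and commission errors of the kind reported here are conventionally derived from a classification confusion matrix: omission error is the fraction of reference pixels of a class assigned elsewhere, and commission error is the fraction of pixels labelled as a class that actually belong elsewhere. A minimal sketch with an invented 3-class matrix (rows = reference classes, columns = classified labels; the counts are illustrative only):

```python
def omission_commission(conf_mat):
    """Per-class omission and commission error rates from a square
    confusion matrix with rows = reference, columns = classified."""
    n = len(conf_mat)
    col_tot = [sum(conf_mat[r][c] for r in range(n)) for c in range(n)]
    omission, commission = [], []
    for i in range(n):
        row_tot = sum(conf_mat[i])
        omission.append(1 - conf_mat[i][i] / row_tot)       # reference pixels missed
        commission.append(1 - conf_mat[i][i] / col_tot[i])  # pixels wrongly included
    return omission, commission

# Invented example: urban, bare soil, natural vegetation
m = [[80, 15, 5],
     [10, 85, 5],
     [5, 5, 90]]
om, com = omission_commission(m)
```

Averaging the per-class omission errors gives the kind of "average error of omission" figure quoted in the abstract.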
Trial Sequential Analysis in systematic reviews with meta-analysis.
Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian
2017-03-06
Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance.
Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
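The diversity-adjusted required information size referenced in this abstract is conventionally computed as a fixed-sample-size calculation inflated by 1/(1 - D²). A minimal sketch for a continuous outcome, using the standard two-sided fixed-sample formula (the effect size, variance, and diversity values below are invented for illustration):

```python
from math import ceil
from statistics import NormalDist

def required_information_size(delta, sigma2, diversity, alpha=0.05, beta=0.10):
    """Diversity-adjusted required information size (total participants)
    for a meta-analysis of a continuous outcome.

    delta     : minimally important difference in means
    sigma2    : variance of the outcome
    diversity : D^2 heterogeneity measure, 0 <= D^2 < 1
    """
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(1 - beta)
    fixed_n = 4 * (z_a + z_b) ** 2 * sigma2 / delta ** 2  # two-group fixed-sample total
    return ceil(fixed_n / (1 - diversity))                # inflate by 1 / (1 - D^2)

# Invented example: delta = 5, sigma^2 = 100, D^2 = 0.25
ris = required_information_size(delta=5.0, sigma2=100.0, diversity=0.25)
```

The adjustment factor grows quickly with heterogeneity: at D² = 0.5 the required information size doubles relative to a homogeneous meta-analysis.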
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
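Sloppiness is commonly diagnosed from the eigenvalue spectrum of the Fisher information matrix J^T J, whose eigenvalues span many decades when parameter combinations are practically unidentifiable. A minimal sketch (not the authors' hyper-model) for an invented two-exponential model with nearly degenerate decay rates:

```python
import math

def fim_condition(theta1, theta2, times):
    """Condition number of the 2x2 Fisher information matrix J^T J for
    y(t) = exp(-theta1*t) + exp(-theta2*t), where the Jacobian columns
    are dy/dtheta_i = -t * exp(-theta_i * t) evaluated at the given times."""
    a = sum(t * t * math.exp(-2 * theta1 * t) for t in times)
    c = sum(t * t * math.exp(-2 * theta2 * t) for t in times)
    b = sum(t * t * math.exp(-(theta1 + theta2) * t) for t in times)
    # Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    lam_max = (a + c + disc) / 2
    lam_min = (a + c - disc) / 2
    return lam_max / lam_min

times = [0.5 * i for i in range(1, 11)]
cond = fim_condition(1.0, 1.1, times)  # nearly parallel Jacobian columns
```

With decay rates 1.0 and 1.1 the two Jacobian columns are nearly parallel, so the smallest eigenvalue collapses and the condition number is large; the closer the rates, the sloppier the model.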
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems
Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.
2016-01-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060
Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine.
Okafor, Nnaemeka; Payne, Velma L; Chathampally, Yashwant; Miller, Sara; Doshi, Pratik; Singh, Hardeep
2016-04-01
Diagnostic errors are common in the emergency department (ED), but few studies have comprehensively evaluated their types and origins. We analysed incidents reported by ED physicians to determine disease conditions, contributory factors and patient harm associated with ED-related diagnostic errors. Between 1 March 2009 and 31 December 2013, ED physicians reported 509 incidents using a department-specific voluntary incident-reporting system that we implemented at two large academic hospital-affiliated EDs. For this study, we analysed 209 incidents related to diagnosis. A quality assurance team led by an ED physician champion reviewed each incident and interviewed physicians when necessary to confirm the presence/absence of diagnostic error and to determine the contributory factors. We generated descriptive statistics quantifying disease conditions involved, contributory factors and patient harm from errors. Among the 209 incidents, we identified 214 diagnostic errors associated with 65 unique diseases/conditions, including sepsis (9.6%), acute coronary syndrome (9.1%), fractures (8.6%) and vascular injuries (8.6%). Contributory factors included cognitive (n=317), system-related (n=192) and non-remediable (n=106) factors. Cognitive factors included faulty information verification (41.3%) and faulty information processing (30.6%), whereas system factors included high workload (34.4%) and inefficient ED processes (40.1%). Non-remediable factors included atypical presentation (31.3%) and the patients' inability to provide a history (31.3%). Most errors (75%) involved multiple factors. Major harm was associated with 34/209 (16.3%) of reported incidents. Most diagnostic errors in the ED appeared to relate to common disease conditions. While sustaining diagnostic error reporting programmes might be challenging, our analysis reveals the potential value of such systems in identifying targets for improving patient safety in the ED. Published by the BMJ Publishing Group Limited.
Medical error and systems of signaling: conceptual and linguistic definition.
Smorti, Andrea; Cappelli, Francesco; Zarantonello, Roberta; Tani, Franca; Gensini, Gian Franco
2014-09-01
In recent years the issue of patient safety has been the subject of detailed investigation, particularly as a result of increasing attention from patients and the public to the problem of medical error. The purpose of this work is first to define a classification of medical errors, distinguishing two perspectives: personal errors and errors caused by the system. We then briefly review some of the main methods used by healthcare organizations to identify and analyze errors. This discussion establishes that, in order to mount a practical, coordinated and shared response to error, it is necessary to promote an analysis that considers all the elements (human, technological and organizational) that contribute to the occurrence of a critical event. It is therefore essential to create a culture of constructive confrontation that encourages an open and non-punitive debate about the causes that led to an error. In conclusion, we underline that healthcare requires a systems-level discussion that treats error as a source of learning and as a result of the interaction between the individual and the organization. In this way, one can encourage a no-blame discussion of both evident errors and those not immediately identifiable, creating the conditions under which an error is recognized and corrected before it produces negative consequences.
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviews with personnel and study of the plant's procedures, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, the PTW was treated as a goal and the detailed tasks required to achieve it were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify potential human errors and identify the measures required for reducing error probabilities in the PTW system. Suggestions to reduce the likelihood of errors, especially through modifying the performance shaping factors and dependencies among tasks, are provided. PMID:27014485
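The SPAR-H quantification step the abstract relies on can be sketched as follows. This is a hedged illustration of the published method (NUREG/CR-6883): a nominal human error probability (NHEP) is multiplied by performance shaping factor (PSF) multipliers, with an adjustment applied when three or more PSFs are rated negative so the result stays bounded by 1. The PSF multipliers in the example are assumptions, not values from this study.

```python
from math import prod

# SPAR-H nominal human error probabilities by task type
NHEP = {"diagnosis": 0.01, "action": 0.001}

def spar_h_hep(task_type, psf_multipliers):
    """Estimate a human error probability from a nominal HEP and a list
    of PSF multipliers (a sketch of the SPAR-H quantification rule)."""
    nhep = NHEP[task_type]
    composite = prod(psf_multipliers)
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        # SPAR-H adjustment factor: keeps the probability below 1 when
        # several PSFs are simultaneously negative
        return (nhep * composite) / (nhep * (composite - 1) + 1)
    return min(nhep * composite, 1.0)

# Example: a diagnosis task under high stress (x2) with poor procedures (x5)
print(spar_h_hep("diagnosis", [2, 5]))  # 0.01 * 10 = 0.1
```

With three or more negative PSFs the adjusted formula applies instead of the raw product, which is why heavily degraded tasks (such as the flammable gas testing in this study) can approach high error probabilities without exceeding 1.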
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analyses include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable across independent Latin hypercube samples.
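The first three pattern-detection procedures above can be sketched on a synthetic scatterplot (the two-phase flow model itself is not reproduced here; the monotone nonlinear response below is an assumed stand-in):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)        # sampled input variable
y = x**3 + rng.normal(0.0, 0.1, 200)  # nonlinear but monotone response

# (1) linear relationship: ordinary correlation coefficient
r, p_lin = stats.pearsonr(x, y)
# (2) monotonic relationship: rank correlation coefficient
rho, p_mono = stats.spearmanr(x, y)
# (3) trend in central tendency: Kruskal-Wallis across bins of x
bins = np.array_split(np.argsort(x), 5)
h, p_kw = stats.kruskal(*[y[b] for b in bins])

print(f"pearson={r:.2f} spearman={rho:.2f} kruskal p={p_kw:.1e}")
```

Running the battery in order of increasing complexity mirrors the paper's strategy: a relationship missed by the linear test can still be flagged by the rank or binned-central-tendency tests.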
Modelling Complex Fenestration Systems using physical and virtual models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thanachareonkit, Anothai; Scartezzini, Jean-Louis
2010-04-15
Physical or virtual models are commonly used to visualize the conceptual ideas of architects, lighting designers and researchers; they are also employed to assess the daylighting performance of buildings, particularly in cases where Complex Fenestration Systems (CFS) are considered. Recent studies have however revealed a general tendency of physical models to over-estimate this performance, compared to those of real buildings; these discrepancies can be attributed to several reasons. In order to identify the main error sources, a series of comparisons between a real building (a single office room within a test module) and the corresponding physical and virtual models was undertaken. The physical model was placed in outdoor conditions, which were strictly identical to those of the real building, as well as underneath a scanning sky simulator. The virtual model simulations were carried out by way of the Radiance program using the GenSky function; an alternative evaluation method, named Partial Daylight Factor method (PDF method), was also employed with the physical model together with sky luminance distributions acquired by a digital sky scanner during the monitoring of the real building. The overall daylighting performance of physical and virtual models were assessed and compared. The causes of discrepancies between the daylighting performance of the real building and the models were analysed. The main identified sources of errors are the reproduction of building details, the CFS modelling and the mocking-up of the geometrical and photometrical properties. To study the impact of these errors on daylighting performance assessment, computer simulation models created using the Radiance program were also used to carry out a sensitivity analysis of modelling errors. The study of the models showed that large discrepancies can occur in daylighting performance assessment.
In the case of improper mocking-up of the glazing, for instance, relative divergences of 25-40% can be found in different room locations, suggesting that more light is entering than actually monitored in the real building. All these discrepancies can however be reduced by making an effort to carefully mock up the geometry and photometry of the real building. A synthesis is presented in this article which can be used as guidelines for daylighting designers to avoid or estimate errors during CFS daylighting performance assessment. (author)
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. Improving image registration based on RPM is therefore an important issue. Methods In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source point set and the target point set, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM, and they maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated.
Again, our algorithm achieves lower registration errors for the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, both large and small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions Results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors of the forward and the reverse transformations between two images. PMID:25559889
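The inverse-consistency error (ICE) metric at the heart of this evaluation can be sketched with toy transforms: composing the forward and backward maps should return every point to its origin, and the mean round-trip distance quantifies any residual inconsistency. The affine transforms below are illustrative assumptions, not the thin-plate splines used in the paper.

```python
import numpy as np

def ice(forward, backward, points):
    """Mean distance || backward(forward(p)) - p || over a point set."""
    round_trip = backward(forward(points))
    return np.linalg.norm(round_trip - points, axis=1).mean()

# Toy forward map: an affine transform p -> A p + b
A = np.array([[1.1, 0.05], [-0.02, 0.95]])
b = np.array([2.0, -1.0])
forward = lambda p: p @ A.T + b
# An exactly consistent backward map, and a biased one for comparison
exact_backward = lambda p: (p - b) @ np.linalg.inv(A).T
biased_backward = lambda p: (p - b) @ np.linalg.inv(A).T + 0.05

pts = np.random.default_rng(1).uniform(0.0, 10.0, (100, 2))
print(ice(forward, exact_backward, pts))   # ~0: perfectly consistent pair
print(ice(forward, biased_backward, pts))  # nonzero: inconsistency detected
```

Estimating both directions jointly, as the paper proposes, amounts to penalizing this round-trip error during optimization rather than measuring it only after the fact.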
Technology and medication errors: impact in nursing homes.
Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis
2014-01-01
The purpose of this paper is to study a medication distribution technology's (MDT) impact on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed using total errors, medication error type, severity, and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly, resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technology such as electronic prescribing or bar code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.
Risk managers, physicians, and disclosure of harmful medical errors.
Loren, David J; Garbutt, Jane; Dunagan, W Claiborne; Bommarito, Kerry M; Ebers, Alison G; Levinson, Wendy; Waterman, Amy D; Fraser, Victoria J; Summy, Elizabeth A; Gallagher, Thomas H
2010-03-01
Physicians are encouraged to disclose medical errors to patients, which often requires close collaboration between physicians and risk managers. An anonymous national survey of 2,988 healthcare facility-based risk managers was conducted between November 2004 and March 2005, and results were compared with those of a previous survey (conducted between July 2003 and March 2004) of 1,311 medical physicians in Washington and Missouri. Both surveys included an error-disclosure scenario for an obvious and a less obvious error with scripted response options. More risk managers than physicians were aware that an error-reporting system was present at their hospital (81% versus 39%, p < .001) and believed that mechanisms to inform physicians about errors in their hospital were adequate (51% versus 17%, p < .001). More risk managers than physicians strongly agreed that serious errors should be disclosed to patients (70% versus 49%, p < .001). Across both error scenarios, risk managers were more likely than physicians to definitely recommend that the error be disclosed (76% versus 50%, p < .001) and to provide full details about how the error would be prevented in the future (62% versus 51%, p < .001). However, physicians were more likely than risk managers to provide a full apology recognizing the harm caused by the error (39% versus 21%, p < .001). Risk managers have more favorable attitudes about disclosing errors to patients compared with physicians but are less supportive of providing a full apology. These differences may create conflicts between risk managers and physicians regarding disclosure. Health care institutions should promote greater collaboration between these two key participants in disclosure conversations.
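The group comparisons reported above (e.g. 81% of 2,988 risk managers versus 39% of 1,311 physicians aware of an error-reporting system) are the kind of result a two-proportion z-test produces; the abstract does not state which test the authors used, so the standard test below is an assumption for illustration.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Approximate counts reconstructed from the reported percentages
z, p = two_proportion_z(int(0.81 * 2988), 2988, int(0.39 * 1311), 1311)
print(round(z, 1), p < 0.001)
```

With samples this large, even modest percentage differences clear the p < .001 threshold, which is consistent with every comparison in the abstract being reported at that level.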
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-04-11
This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management, covering data sources, managing over-optimization, calibration and seasonality, check-points for case construction and common errors. Section 4 describes in detail the WORLD system, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of WORLD matrix structure. It provides an overview, describes how regional definitions are controlled and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several appendices supplement the main sections.
NASA Technical Reports Server (NTRS)
Solloway, C. B.; Wakeland, W.
1976-01-01
First-order Markov model developed on digital computer for population with specific characteristics. System is user interactive, self-documenting, and does not require user to have complete understanding of underlying model details. Contains thorough error-checking algorithms on input and default capabilities.
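A first-order Markov population model of the kind described can be sketched in a few lines; the abstract gives no model details, so the states and transition rates below are assumptions chosen purely for illustration, and the row-sum check stands in for the input error-checking the abstract mentions.

```python
import numpy as np

# Assumed three-state population model; each row of P gives the
# transition probabilities out of one state per time step.
P = np.array([
    [0.90, 0.08, 0.02],  # state A -> A / B / C
    [0.30, 0.60, 0.10],  # state B
    [0.00, 0.00, 1.00],  # state C (absorbing)
])
# Basic input error-checking: every row must be a probability distribution
assert np.allclose(P.sum(axis=1), 1.0)

pop = np.array([1000.0, 0.0, 0.0])  # initial population by state
for _ in range(10):                 # propagate the chain 10 time steps
    pop = pop @ P
print(pop.round(1))                 # population by state after 10 steps
```

Because the chain is first-order, each step depends only on the current state distribution, which is what makes the update a single vector-matrix product.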
Meiotic Divisions: No Place for Gender Equality.
El Yakoubi, Warif; Wassmann, Katja
2017-01-01
In multicellular organisms the fusion of two gametes with a haploid set of chromosomes leads to the formation of the zygote, the first cell of the embryo. Accurate execution of the meiotic cell division to generate a female and a male gamete is required for the generation of healthy offspring harboring the correct number of chromosomes. Unfortunately, meiosis is error prone, with severe consequences for fertility and, under certain circumstances, the health of the offspring. In humans, female meiosis is extremely error prone. In this chapter we will compare male and female meiosis in humans to illustrate why, and at what frequency, errors occur, and describe how this affects pregnancy outcome and the health of the individual. We will first introduce key notions of cell division in meiosis and how they differ from mitosis, followed by a detailed description of the events that are prone to errors during the meiotic divisions.
Quality Issues of Court Reporters and Transcriptionists for Qualitative Research
Hennink, Monique; Weber, Mary Beth
2015-01-01
Transcription is central to qualitative research, yet few researchers examine the quality of different transcription methods. We described the quality of verbatim transcripts from traditional transcriptionists and court reporters by reviewing 16 transcripts from 8 focus group discussions against four criteria: transcription errors, cost of transcription, time to transcription, and effect on study participants. Transcriptionists made fewer errors and captured colloquial dialogue; their errors were largely influenced by the quality of the recording. Court reporters made more errors, particularly omissions of topical content and contextual detail, and were less able to produce a verbatim transcript; however, the potential immediacy of the transcript was advantageous. In terms of cost, shorter group discussions favored a transcriptionist and longer groups a court reporter. Study participants reported no effect from either method of recording. Understanding the benefits and limitations of each method of transcription can help researchers select an appropriate method for each study. PMID:23512435
Frequent methodological errors in clinical research.
Silva Aycaguer, L C
2018-03-07
Several errors that frequently arise in clinical research are listed, discussed and illustrated. A distinction is drawn between what can be considered an "error" arising from ignorance or neglect and what stems from a lack of integrity on the part of researchers, although it is recognized and documented that establishing which case applies is not easy. The work does not intend to make an exhaustive inventory of such problems, but focuses on those that, while frequent, are usually less evident or less prominent in the various lists of this type of problem that have been published. We have chosen to develop in detail the examples that illustrate the problems identified, rather than presenting a list of errors accompanied by a superficial description of their characteristics. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
Airborne gravimetry, altimetry, and GPS navigation errors
NASA Technical Reports Server (NTRS)
Colombo, Oscar L.
1992-01-01
Proper interpretation of airborne gravimetry and altimetry requires good knowledge of aircraft trajectory. Recent advances in precise navigation with differential GPS have made it possible to measure gravity from the air with accuracies of a few milligals, and to obtain altimeter profiles of terrain or sea surface correct to one decimeter. These developments are opening otherwise inaccessible regions to detailed geophysical mapping. Navigation with GPS presents some problems that grow worse with increasing distance from a fixed receiver: the effect of errors in tropospheric refraction correction, GPS ephemerides, and the coordinates of the fixed receivers. Ionospheric refraction and orbit error complicate ambiguity resolution. Optimal navigation should treat all error sources as unknowns, together with the instantaneous vehicle position. To do so, fast and reliable numerical techniques are needed: efficient and stable Kalman filter-smoother algorithms, together with data compression and, sometimes, the use of simplified dynamics.
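The Kalman filter step underlying the navigation approach described here can be sketched in one dimension. This is a hedged illustration with constant-velocity dynamics and position-only fixes; the matrices and noise levels are assumptions, far simpler than the full GPS error-state models (tropospheric, ephemeris, and receiver-coordinate errors) the abstract calls for.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
H = np.array([[1.0, 0.0]])             # we observe position only
Q = 1e-3 * np.eye(2)                   # process noise covariance
R = np.array([[0.25]])                 # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle for state x = [position, velocity]."""
    # predict forward through the vehicle dynamics
    x = F @ x
    P = F @ P @ F.T + Q
    # update with a noisy position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 1.0]), np.eye(2)
rng = np.random.default_rng(2)
true_pos = 0.0
for _ in range(50):
    true_pos += 1.0 * dt               # truth: steady 1 unit/step motion
    z = np.array([true_pos + rng.normal(0.0, 0.5)])
    x, P = kalman_step(x, P, z)
print(x)  # estimated [position, velocity], close to [50, 1]
```

A fixed-interval smoother, as the abstract suggests, would run a backward pass over these stored estimates to further reduce the error in the reconstructed trajectory.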