Sample records for detailed error analysis

  1. Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas

    ERIC Educational Resources Information Center

    Herzberg, Tina

    2010-01-01

    In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…

  2. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  3. Space shuttle navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.

    1976-01-01

    A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.

  4. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    NASA Technical Reports Server (NTRS)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  5. C-band radar pulse Doppler error: Its discovery, modeling, and elimination

    NASA Technical Reports Server (NTRS)

    Krabill, W. B.; Dempsey, D. J.

    1978-01-01

    The discovery of a C-band radar pulse Doppler error is discussed, and use of the GEOS 3 satellite's coherent transponder to isolate the error source is described. An analysis of the pulse Doppler tracking loop is presented and a mathematical model for the error is developed. Error correction techniques were developed and are described, including implementation details.

  6. Pilot-controller communication errors : an analysis of Aviation Safety Reporting System (ASRS) reports

    DOT National Transportation Integrated Search

    1998-08-01

    The purpose of this study was to identify the factors that contribute to pilot-controller communication errors. Reports submitted to the Aviation Safety Reporting System (ASRS) offer detailed accounts of specific types of errors and a great deal of ...

  7. Lexico-Semantic Errors of the Learners of English: A Survey of Standard Seven Keiyo-Speaking Primary School Pupils in Keiyo District, Kenya

    ERIC Educational Resources Information Center

    Jeptarus, Kipsamo E.; Ngene, Patrick K.

    2016-01-01

    The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…

  8. Detailed Uncertainty Analysis for Ares I Ascent Aerodynamics Wind Tunnel Database

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Hanke, Jeremy L.; Walker, Eric L.; Houlden, Heather P.

    2008-01-01

    A detailed uncertainty analysis for the Ares I ascent aero 6-DOF wind tunnel database is described. While the database itself is determined using only the test results for the latest configuration, the data used for the uncertainty analysis comes from four tests on two different configurations at the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. Four major error sources are considered: (1) systematic errors from the balance calibration curve fits and model + balance installation, (2) run-to-run repeatability, (3) boundary-layer transition fixing, and (4) tunnel-to-tunnel reproducibility.

  9. Error Patterns in Young German Children's "Wh"-Questions

    ERIC Educational Resources Information Center

    Schmerse, Daniel; Lieven, Elena; Tomasello, Michael

    2013-01-01

    In this article we report two studies: a detailed longitudinal analysis of errors in "wh"-questions from six German-learning children (age 2;0-3;0) and an analysis of the prosodic characteristics of "wh"-questions in German child-directed speech. The results of the first study demonstrate that German-learning children…

  10. Detailed Uncertainty Analysis of the ZEM-3 Measurement System

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    The measurement of Seebeck coefficient and electrical resistivity are critical to the investigation of all thermoelectric systems. Therefore, it follows that the measurement uncertainty must be well understood to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty in the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and most importantly the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows for quantification of the phenomenon, and provides an estimate on the uncertainty of the Seebeck coefficient. The thermoelectric power factor has been found to have an uncertainty of ±9-14 at high temperature and ±9 near room temperature.
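
    The abstract does not reproduce the propagation formulas; below is a minimal sketch (not from the paper) of how independent geometry and instrument tolerances might be combined in quadrature for a four-probe resistivity measurement, ρ = R·(w·t)/l. All numerical tolerances are illustrative assumptions.

    ```python
    import math

    def resistivity_uncertainty(R, dR, w, dw, t, dt, l, dl):
        """Relative uncertainty of rho = R * (w * t) / l from independent,
        uncorrelated error sources combined in quadrature (root-sum-square)."""
        rel_terms = [dR / R, dw / w, dt / t, dl / l]
        return math.sqrt(sum(x * x for x in rel_terms))

    # Illustrative numbers only (2 x 2 x 10 mm sample, ~1 mOhm reading):
    rel_u = resistivity_uncertainty(R=1e-3, dR=5e-6,   # multimeter reading and its uncertainty (Ohm)
                                    w=2e-3, dw=2e-5,   # sample width and tolerance (m)
                                    t=2e-3, dt=2e-5,   # sample thickness and tolerance (m)
                                    l=10e-3, dl=5e-5)  # probe spacing and tolerance (m)
    print(f"relative uncertainty in resistivity: {100 * rel_u:.2f} %")
    ```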

  11. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  12. Error analysis and correction of discrete solutions from finite element codes

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.

    1984-01-01

    Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.

  13. Analysis of DSN software anomalies

    NASA Technical Reports Server (NTRS)

    Galorath, D. D.; Hecht, H.; Hecht, M.; Reifer, D. J.

    1981-01-01

    A categorized data base of software errors which were discovered during the various stages of development and operational use of the Deep Space Network DSN/Mark 3 System was developed. A study team identified several existing error classification schemes (taxonomies), prepared a detailed annotated bibliography of the error taxonomy literature, and produced a new classification scheme which was tuned to the DSN anomaly reporting system and encapsulated the work of others. Based upon the DSN/RCI error taxonomy, error data on approximately 1000 reported DSN/Mark 3 anomalies were analyzed, interpreted and classified. Finally, the error data were summarized and histograms were produced highlighting key tendencies.

  14. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. To date, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  15. An Analysis of U.S. Army Fratricide Incidents during the Global War on Terror (11 September 2001 to 31 March 2008)

    DTIC Science & Technology

    2010-03-15

    Based on Reason's "Swiss cheese" model of human error causation (1990), Figure 1 describes how an accident is likely to occur when all of the errors, or "holes," align. A detailed description of HFACS can be found in Wiegmann and Shappell (2003). (Figure 1: The Swiss cheese model of human error causation.)

  16. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    PubMed

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for non-contact measurement of AC or DC current, as it is low-cost and light-weight and has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the position of the current-carrying conductor, including un-centeredness and un-perpendicularity, had not been analyzed in detail until now. In this paper, for the purpose of achieving minimum measurement error, a theoretical analysis is proposed based on the vector inner and exterior products. In the presented mathematical model of relative error, the un-centered offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative error caused by the position of the current-carrying conductor is compared between four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of a circular array of magnetic sensors for current measurement in practical situations.
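
    The paper's own equations are not given in the abstract, but the underlying idea can be sketched: N tangential-field sensors on a circle approximate Ampère's law, and an off-center conductor perturbs the discrete sum. The sketch below assumes a simplified two-dimensional geometry (infinite straight conductor, ideal point sensors) and illustrative dimensions.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

    def relative_error(n_sensors, radius, offset, current=100.0):
        """Relative error of the current estimate from n tangential magnetic
        sensors evenly spaced on a circle of given radius, when an infinite
        straight conductor is displaced by `offset` from the circle centre."""
        angles = 2 * np.pi * np.arange(n_sensors) / n_sensors
        sensors = radius * np.column_stack([np.cos(angles), np.sin(angles)])
        tangents = np.column_stack([-np.sin(angles), np.cos(angles)])
        wire = np.array([offset, 0.0])                # conductor position in the sensor plane

        r_vec = sensors - wire                        # sensor positions relative to the wire
        r2 = np.sum(r_vec**2, axis=1)
        # Field of an infinite straight wire: B = mu0*I/(2*pi*r), direction z-hat x r-hat
        b_field = (MU0 * current / (2 * np.pi * r2))[:, None] * np.column_stack([-r_vec[:, 1], r_vec[:, 0]])
        b_tangential = np.sum(b_field * tangents, axis=1)

        # Discrete Ampere's law: I_est = (sum of B_t) * (2*pi*R/N) / mu0
        i_est = np.sum(b_tangential) * (2 * np.pi * radius / n_sensors) / MU0
        return (i_est - current) / current

    for n in (4, 8):
        print(n, "sensors:", f"{relative_error(n, radius=0.05, offset=0.01):+.4%}")
    ```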

  17. Design and analysis of a sub-aperture scanning machine for the transmittance measurements of large-aperture optical system

    NASA Astrophysics Data System (ADS)

    He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo

    2010-11-01

    For measuring the transmittance of a large-aperture optical system, a novel sub-aperture scanning machine with double-rotating arms (SSMDA) was designed to generate a sub-aperture beam spot. Full-aperture transmittance measurements of the optical system can be achieved by applying sub-aperture beam-spot scanning technology. The mathematical model of the SSMDA, based on a homogeneous coordinate transformation matrix, is established to develop a detailed methodology for analyzing the beam-spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) the length systematic errors and (2) the rotational systematic errors. With the systematic errors of the parameters given beforehand, the computed scanning errors lie between -0.007 and 0.028 mm when the scanning radius is no larger than 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.

  18. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
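
    A minimal Monte Carlo illustration (not the USGS model) of the bias described here: when the input is measured with error, the least-squares slope is attenuated roughly by the factor var(x) / (var(x) + var(measurement error)). All parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 200, 2000
    true_slope, true_intercept = 0.6, 5.0     # hypothetical runoff = 5 + 0.6 * rainfall
    sigma_x, sigma_meas, sigma_y = 20.0, 10.0, 3.0

    slopes = []
    for _ in range(trials):
        rainfall = rng.normal(50.0, sigma_x, n)                 # true input
        runoff = true_intercept + true_slope * rainfall + rng.normal(0, sigma_y, n)
        observed = rainfall + rng.normal(0, sigma_meas, n)      # input measured with error
        slope, _ = np.polyfit(observed, runoff, 1)
        slopes.append(slope)

    attenuation = sigma_x**2 / (sigma_x**2 + sigma_meas**2)
    print(f"mean fitted slope: {np.mean(slopes):.3f}")
    print(f"true slope x attenuation factor: {true_slope * attenuation:.3f}")
    ```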

  19. Dynamic assertion testing of flight control software

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.

    1985-01-01

    An experiment in using assertions to dynamically test fault tolerant flight software is described. The experiment showed that 87% of typical errors introduced into the program would be detected by assertions. Detailed analysis of the test data showed that the number of assertions needed to detect those errors could be reduced to a minimal set. The analysis also revealed that the most effective assertions tested program parameters that provided greater indirect (collateral) testing of other parameters.
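
    A small, hypothetical illustration of the kind of executable assertion the study describes: checks on a parameter's range and rate of change that also provide indirect (collateral) coverage of upstream values. The limits and names are invented for illustration only.

    ```python
    def check_pitch_command(pitch_deg, prev_pitch_deg, dt):
        """Executable assertions on a flight-control parameter (illustrative limits)."""
        assert -25.0 <= pitch_deg <= 25.0, "pitch command out of range"
        rate = (pitch_deg - prev_pitch_deg) / dt
        # The rate check also exercises the previous sample and the timestep,
        # giving collateral coverage of those values.
        assert abs(rate) <= 10.0, "pitch command rate too high"
        return pitch_deg

    check_pitch_command(3.2, 2.9, dt=0.05)    # passes
    # check_pitch_command(3.2, 1.0, dt=0.05)  # would raise: rate too high
    ```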

  20. Error Analysis and Performance Data from an Automated Azimuth Measuring System,

    DTIC Science & Technology

    1981-02-17

    A detailed error analysis of the system and methods to improve performance and accuracy are presented. Discussion includes selected ... The hardware includes microprocessors, tape drives, input and output hardware, a dual-axis tiltmeter mounted on the azimuth gimbal of each ALS, and six tiltmeters arranged on an optical ... Temperature sensors are located in each ... velocity air flowing through tubes along the optical paths to each target.

  1. Intelligence/Electronic Warfare (IEW) direction-finding and fix estimation analysis report. Volume 2: Trailblazer

    NASA Technical Reports Server (NTRS)

    Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce

    1985-01-01

    An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.

  2. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.

    2010-09-15

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article. (author)
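
    The abstract names the method but not its equations. The sketch below shows only the graph-search half of DRGEP, assuming direct interaction coefficients between species have already been computed from reaction rates: overall importance is the maximum over graph paths of the product of edge coefficients, and species below a threshold become removal candidates (sensitivity analysis would then screen the borderline ones). The species names and coefficient values are invented for illustration.

    ```python
    import heapq

    def drgep_importance(direct, target):
        """Path-dependent interaction coefficients R_target,B = max over paths of
        the product of direct coefficients along the path (Dijkstra-style search)."""
        best = {target: 1.0}
        heap = [(-1.0, target)]                  # negate so the largest coefficient pops first
        while heap:
            neg_r, a = heapq.heappop(heap)
            r_a = -neg_r
            if r_a < best.get(a, 0.0):
                continue
            for b, r_ab in direct.get(a, {}).items():
                r_b = r_a * r_ab
                if r_b > best.get(b, 0.0):
                    best[b] = r_b
                    heapq.heappush(heap, (-r_b, b))
        return best

    # Toy direct-interaction coefficients (illustrative values, not a real mechanism)
    direct = {
        "fuel": {"O2": 0.9, "R1": 0.8},
        "R1":   {"R2": 0.5, "minor": 0.01},
        "R2":   {"O2": 0.7},
    }
    importance = drgep_importance(direct, target="fuel")
    skeletal = [sp for sp, r in importance.items() if r >= 0.05]   # keep species above threshold
    print(importance)
    print("retained species:", skeletal)
    ```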

  3. Developing a model for the adequate description of electronic communication in hospitals.

    PubMed

    Saboor, Samrend; Ammenwerth, Elske

    2011-01-01

    Adequate information and communication technology (ICT) systems can help to improve communication in hospitals. Changes to the ICT infrastructure of hospitals must be planned carefully. In order to support comprehensive planning, we presented a classification of 81 common errors of electronic communication at the MIE 2008 congress. Our objective now was to develop a data model that defines specific requirements for an adequate description of electronic communication processes. We first applied the method of explicating qualitative content analysis to the error categorization in order to determine the essential process details. After this, we applied the method of subsuming qualitative content analysis to the results of the first step. The result is a data model for the adequate description of electronic communication, comprising 61 entities and 91 relationships. The data model comprises and organizes all details that are necessary for the detection of the respective errors. It can either be used to extend the capabilities of existing modeling methods or serve as a basis for the development of a new approach.

  4. Probabilistic wind/tornado/missile analyses for hazard and fragility evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Y.J.; Reich, M.

    Detailed analysis procedures and examples are presented for the probabilistic evaluation of hazard and fragility against high wind, tornado, and tornado-generated missiles. In the tornado hazard analysis, existing risk models are modified to incorporate various uncertainties including modeling errors. A significant feature of this paper is the detailed description of the Monte-Carlo simulation analyses of tornado-generated missiles. A simulation procedure, which includes the wind field modeling, missile injection, solution of flight equations, and missile impact analysis, is described with application examples.

  5. Space shuttle post-entry and landing analysis. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Crawford, B. S.; Duiven, E. M.

    1973-01-01

    Four candidate navigation systems for the space shuttle orbiter approach and landing phase are evaluated in detail. These include three conventional navaid systems and a single-station one-way Doppler system. In each case, a Kalman filter is assumed to be mechanized in the onboard computer, blending the navaid data with IMU and altimeter data. Filter state dimensions ranging from 6 to 24 are involved in the candidate systems. Comprehensive truth models with state dimensions ranging from 63 to 82 are formulated and used to generate detailed error budgets and sensitivity curves illustrating the effect of variations in the size of individual error sources on touchdown accuracy. The projected overall performance of each system is shown in the form of time histories of position and velocity error components.

  6. Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemeyer, Kyle E.; Sung, Chih-Jen

    Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, such as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.

  7. Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels

    DOE PAGES

    Niemeyer, Kyle E.; Sung, Chih-Jen

    2014-11-01

    Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, such as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.

  8. An Analysis of the Plumbing Occupation.

    ERIC Educational Resources Information Center

    Carlton, Earnest L.; Hollar, Charles E.

    The occupational analysis contains a brief job description, presenting for the occupation of plumbing 12 detailed task statements which specify job duties (tools, equipment, materials, objects acted upon, performance knowledge, safety considerations/hazards, decisions, cues, and errors) and learning skills (science, mathematics/number systems, and…

  9. Analysis of the Medical Assisting Occupation.

    ERIC Educational Resources Information Center

    Keir, Lucille; And Others

    The occupational analysis contains a brief job description, presenting for the occupation of medical assistant 113 detailed task statements which specify job duties (tools, equipment, materials, objects acted upon, performance knowledge, safety consideration/hazards, decisions, cues, and errors) and learning skills (science, mathematics/number…

  10. Error Analysis of Magnetohydrodynamic Angular Rate Sensor Combining with Coriolis Effect at Low Frequency.

    PubMed

    Ji, Yue; Xu, Mengjie; Li, Xingfei; Wu, Tengfei; Tuo, Weixiao; Wu, Jun; Dong, Jiuzhi

    2018-06-13

    The magnetohydrodynamic (MHD) angular rate sensor (ARS), with its low noise level over an ultra-wide bandwidth, has been developed for lasing and imaging applications, especially line-of-sight (LOS) systems. A modified MHD ARS combined with the Coriolis effect was studied in this paper to expand the sensor's bandwidth at low frequency (<1 Hz), which is essential for precision LOS pointing and wide-bandwidth LOS jitter suppression. The model and the simulation method were constructed, and a comprehensive solving method based on the magnetic and electric interaction methods was proposed. The numerical results on the Coriolis effect and the frequency response of the modified MHD ARS were detailed. In addition, as the experimental results of the designed sensor were consistent with the simulation results, an analysis of model errors was discussed. Our study provides an error analysis method for an MHD ARS combined with the Coriolis effect and offers a framework for future studies to minimize the error.

  11. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485

  12. Imaging phased telescope array study

    NASA Technical Reports Server (NTRS)

    Harvey, James E.

    1989-01-01

    The problems encountered in obtaining a wide field-of-view with large, space-based direct imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top level image quality requirement as a goal, a preliminary top-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottom-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.

  13. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. This model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of different errors are analyzed through this model. An accuracy study of the model shows that the predicted deviation of pointing error for each error source is less than 4.1×10⁻⁵° when the error amplitude is 0.1°. Detailed analyses indicate that different error sources affect the pointing accuracy to varying degrees, and the major error source is the incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results suggest that our analysis can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.
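
    The paper's transmission matrices are not reproduced in the abstract; the sketch below shows only the vector form of Snell's law on which such ray-direction models are typically built, traced through a single thin wedge with illustrative parameters (10° wedge, n = 1.5), not the full two-prism Risley model.

    ```python
    import numpy as np

    def refract(d, n, eta):
        """Vector Snell's law: refract unit ray direction d at a surface with unit
        normal n pointing against the incoming ray; eta = n_incident / n_transmitted."""
        d = d / np.linalg.norm(d)
        n = n / np.linalg.norm(n)
        cos_i = -np.dot(n, d)
        sin2_t = eta**2 * (1.0 - cos_i**2)
        if sin2_t > 1.0:
            raise ValueError("total internal reflection")
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * d + (eta * cos_i - cos_t) * n

    # Illustrative: ray along +z hits a face tilted by a 10 deg wedge angle, glass n = 1.5
    alpha = np.deg2rad(10.0)
    d0 = np.array([0.0, 0.0, 1.0])
    n_front = -np.array([np.sin(alpha), 0.0, np.cos(alpha)])   # tilted entry face, normal against the ray
    d_inside = refract(d0, n_front, eta=1.0 / 1.5)             # air -> glass
    n_back = np.array([0.0, 0.0, -1.0])                        # flat exit face, normal against the ray
    d_out = refract(d_inside, n_back, eta=1.5)                 # glass -> air
    deviation = np.degrees(np.arccos(np.clip(np.dot(d0, d_out), -1.0, 1.0)))
    print(f"beam deviation: {deviation:.3f} deg")              # roughly (n - 1) * alpha for a thin wedge
    ```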

  14. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    NASA Astrophysics Data System (ADS)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  15. Analysis of Solar Spectral Irradiance Measurements from the SBUV/2-Series and the SSBUV Instruments

    NASA Technical Reports Server (NTRS)

    Cebula, Richard P.; DeLand, Matthew T.; Hilsenrath, Ernest

    1997-01-01

    During this period of performance, 1 March 1997 - 31 August 1997, the NOAA-11 SBUV/2 solar spectral irradiance data set was validated using both internal and external assessments. Initial quality checking revealed minor problems with the data (e.g. residual goniometric errors, that were manifest as differences between the two scans acquired each day). The sources of these errors were determined and the errors were corrected. Time series were constructed for selected wavelengths and the solar irradiance changes measured by the instrument were compared to a Mg II proxy-based model of short- and long-term solar irradiance variations. This analysis suggested that errors due to residual, uncorrected long-term instrument drift have been reduced to less than 1-2% over the entire 5.5 year NOAA-11 data record. Detailed statistical analysis was performed. This analysis, which will be documented in a manuscript now in preparation, conclusively demonstrates the evolution of solar rotation periodicity and strength during solar cycle 22.

  16. Combining task analysis and fault tree analysis for accident and incident analysis: a case study from Bulgaria.

    PubMed

    Doytchev, Doytchin E; Szwillus, Gerd

    2009-11-01

    Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.

  17. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws.

    PubMed

    Xiao, Xiao; White, Ethan P; Hooten, Mevin B; Durham, Susan L

    2011-10-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain.
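
    A compressed sketch of the kind of simulation described here (illustrative parameter values, scipy assumed available): data are generated from y = a·x^b with multiplicative lognormal error, then fitted both by linear regression on log-transformed data and by nonlinear regression on the original scale, so the recovered exponents can be compared.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    a, b, sigma = 2.0, 0.75, 0.3        # y = a * x^b with exponent b = 0.75
    x = rng.uniform(1.0, 100.0, 200)

    # Multiplicative, lognormal error
    y_mult = a * x**b * rng.lognormal(0.0, sigma, x.size)

    # Linear regression on log-transformed data
    slope, intercept = np.polyfit(np.log(x), np.log(y_mult), 1)

    # Nonlinear regression on the original scale
    popt, _ = curve_fit(lambda xx, aa, bb: aa * xx**bb, x, y_mult, p0=(1.0, 1.0))

    print(f"true exponent: {b}")
    print(f"log-transform LR estimate:  {slope:.3f}")
    print(f"nonlinear regression (NLR): {popt[1]:.3f}")
    ```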

  18. An Analysis of the Waste Water Treatment Operator Occupation.

    ERIC Educational Resources Information Center

    Clark, Anthony B.; And Others

    The occupational analysis contains a brief job description for the waste water treatment occupations of operator and maintenance mechanic and 13 detailed task statements which specify job duties (tools, equipment, materials, objects acted upon, performance knowledge, safety considerations/hazards, decisions, cues, and errors) and learning skills…

  19. Accuracy and Spatial Variability in GPS Surveying for Landslide Mapping on Road Inventories at a Semi-Detailed Scale: the Case in Colombia

    NASA Astrophysics Data System (ADS)

    Murillo Feo, C. A.; Martínez Martinez, L. J.; Correa Muñoz, N. A.

    2016-06-01

    The accuracy of locating attributes on topographic surfaces when using GPS in mountainous areas is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluate the accuracy and spatial distribution of the horizontal error in GPS positioning on the tertiary road network of six municipalities located in mountainous areas in the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and associations between the variables, and an analysis of the spatial variability and calculation of accuracy, considering the effect of non-Gaussian error distributions. The evaluation of the internal validity of the data provides metrics with a confidence level of 95% between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtained with the evaluated equipment was on the order of 1:20000, a level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.

  20. Error analysis of numerical gravitational waveforms from coalescing binary black holes

    NASA Astrophysics Data System (ADS)

    Fong, Heather; Chu, Tony; Kumar, Prayush; Pfeiffer, Harald; Boyle, Michael; Hemberger, Daniel; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela; SXS Collaboration

    2016-03-01

    The Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO) has finished a successful first observation run and will commence its second run this summer. Detection of compact object binaries utilizes matched-filtering, which requires a vast collection of highly accurate gravitational waveforms. This talk will present a set of about 100 new aligned-spin binary black hole simulations. I will discuss their properties, including a detailed error analysis, which demonstrates that the numerical waveforms are sufficiently accurate for gravitational wave detection purposes, as well as for parameter estimation purposes.

  1. One way Doppler extractor. Volume 1: Vernier technique

    NASA Technical Reports Server (NTRS)

    Blasco, R. W.; Klein, S.; Nossen, E. J.; Starner, E. R.; Yanosov, J. A.

    1974-01-01

    A feasibility analysis, trade-offs, and implementation for a One Way Doppler Extraction system are discussed. A Doppler error analysis shows that quantization error is a primary source of Doppler measurement error. Several competing extraction techniques are compared and a Vernier technique is developed which obtains high Doppler resolution with low speed logic. Parameter trade-offs and sensitivities for the Vernier technique are analyzed, leading to a hardware design configuration. A detailed design, operation, and performance evaluation of the resulting breadboard model is presented which verifies the theoretical performance predictions. Performance tests have verified that the breadboard is capable of extracting Doppler, on an S-band signal, to an accuracy of less than 0.02 Hertz for a one second averaging period. This corresponds to a range rate error of no more than 3 millimeters per second.
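
    A one-line sanity check of the quoted relationship between 0.02 Hz of Doppler resolution and roughly 3 mm/s of range-rate error for a one-way link; the 2.1 GHz S-band carrier frequency is an assumption, since the abstract does not state it.

    ```python
    C = 299_792_458.0          # speed of light, m/s
    F_CARRIER = 2.1e9          # assumed S-band carrier frequency, Hz
    doppler_resolution = 0.02  # Hz, from the abstract

    range_rate_error = C * doppler_resolution / F_CARRIER   # one-way Doppler: df/f = v/c
    print(f"range-rate error: {range_rate_error * 1e3:.2f} mm/s")  # about 2.9 mm/s
    ```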

  2. Space shuttle entry and landing navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Crawford, B. S.

    1974-01-01

    A navigation system for the entry phase of a Space Shuttle mission is evaluated; it is an aided-inertial system that uses a Kalman filter to mix IMU data with data derived from external navigation aids. A drag pseudo-measurement used during radio blackout is treated as an additional external aid. A comprehensive truth model with 101 states is formulated and used to generate detailed error budgets at several significant time points: end-of-blackout, start of final approach, over the runway threshold, and touchdown. Sensitivity curves illustrating the effect of variations in the size of individual error sources on navigation accuracy are presented. The sensitivity of the navigation system performance to filter modifications is analyzed. The projected overall performance is shown in the form of time histories of position and velocity error components. The detailed results are summarized and interpreted, and suggestions are made concerning possible software improvements.

  3. Performance of Modified Test Statistics in Covariance and Correlation Structure Analysis under Conditions of Multivariate Nonnormality.

    ERIC Educational Resources Information Center

    Fouladi, Rachel T.

    2000-01-01

    Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…

  4. Evaluating suggestibility to additive and contradictory misinformation following explicit error detection in younger and older adults.

    PubMed

    Huff, Mark J; Umanath, Sharda

    2018-06-01

    In 2 experiments, we assessed age-related suggestibility to additive and contradictory misinformation (i.e., remembering of false details from an external source). After reading a fictional story, participants answered questions containing misleading details that were either additive (misleading details that supplemented an original event) or contradictory (errors that changed original details). On a final test, suggestibility was greater for additive than contradictory misinformation, and older adults endorsed fewer false contradictory details than younger adults. To mitigate suggestibility in Experiment 2, participants were warned about potential errors, instructed to detect errors, or instructed to detect errors after exposure to examples of additive and contradictory details. Again, suggestibility to additive misinformation was greater than contradictory, and older adults endorsed less contradictory misinformation. Only after detection instructions with misinformation examples were younger adults able to reduce contradictory misinformation effects and reduced these effects to the level of older adults. Additive misinformation however, was immune to all warning and detection instructions. Thus, older adults were less susceptible to contradictory misinformation errors, and younger adults could match this misinformation rate when warning/detection instructions were strong. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. Assessing the Predictability of Convection using Ensemble Data Assimilation of Simulated Radar Observations in an LETKF system

    NASA Astrophysics Data System (ADS)

    Lange, Heiner; Craig, George

    2014-05-01

    This study uses the Local Ensemble Transform Kalman Filter (LETKF) to perform storm-scale Data Assimilation of simulated Doppler radar observations into the non-hydrostatic, convection-permitting COSMO model. In perfect model experiments (OSSEs), it is investigated how the limited predictability of convective storms affects precipitation forecasts. The study compares a fine analysis scheme with small RMS errors to a coarse scheme that allows for errors in position, shape and occurrence of storms in the ensemble. The coarse scheme uses superobservations, a coarser grid for analysis weights, a larger localization radius and larger observation error that allow a broadening of the Gaussian error statistics. Three hour forecasts of convective systems (with typical lifetimes exceeding 6 hours) from the detailed analyses of the fine scheme are found to be advantageous to those of the coarse scheme during the first 1-2 hours, with respect to the predicted storm positions. After 3 hours in the convective regime used here, the forecast quality of the two schemes appears indiscernible, judging by RMSE and verification methods for rain-fields and objects. It is concluded that, for operational assimilation systems, the analysis scheme might not necessarily need to be detailed to the grid scale of the model. Depending on the forecast lead time, and on the presence of orographic or synoptic forcing that enhance the predictability of storm occurrences, analyses from a coarser scheme might suffice.

  6. Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling

    NASA Astrophysics Data System (ADS)

    Sung, Chih-Jen; Niemeyer, Kyle E.

    2010-05-01

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.

  7. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high value, high profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis, and invariably brings schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporate the steps needed to manage and minimize error.

  8. New Methods for Assessing and Reducing Uncertainty in Microgravity Studies

    NASA Astrophysics Data System (ADS)

    Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.

    2017-12-01

    Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, error in drift estimation, and timing errors. We find that some error sources that are often ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and free-air correction depending on the survey set-up. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.

  9. Teamwork and error in the operating room: analysis of skills and roles.

    PubMed

    Catchpole, K; Mishra, A; Handa, A; McCulloch, P

    2008-04-01

    To analyze the effects of surgical, anesthetic, and nursing teamwork skills on technical outcomes. The value of team skills in reducing adverse events in the operating room is presently receiving considerable attention. Current work has not yet identified in detail how the teamwork and communication skills of surgeons, anesthetists, and nurses affect the course of an operation. Twenty-six laparoscopic cholecystectomies and 22 carotid endarterectomies were studied using direct observation methods. For each operation, teams' skills were scored for the whole team, and for nursing, surgical, and anesthetic subteams on 4 dimensions (leadership and management [LM]; teamwork and cooperation; problem solving and decision making; and situation awareness). Operating time, errors in surgical technique, and other procedural problems and errors were measured as outcome parameters for each operation. The relationships between teamwork scores and these outcome parameters within each operation were examined using analysis of variance and linear regression. Surgical (F(2,42) = 3.32, P = 0.046) and anesthetic (F(2,42) = 3.26, P = 0.048) LM had significant but opposite relationships with operating time in each operation: operating time increased significantly with higher anesthetic but decreased with higher surgical LM scores. Errors in surgical technique had a strong association with surgical situation awareness (F(2,42) = 7.93, P < 0.001) in each operation. Other procedural problems and errors were related to the intraoperative LM skills of the nurses (F(5,1) = 3.96, P = 0.027). Detailed analysis of team interactions and dimensions is feasible and valuable, yielding important insights into relationships between nontechnical skills, technical performance, and operative duration. These results support the concept that interventions designed to improve teamwork and communication may have beneficial effects on technical performance and patient outcome.

  10. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  11. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws

    USGS Publications Warehouse

    Xiao, X.; White, E.P.; Hooten, M.B.; Durham, S.L.

    2011-01-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain. © 2011 by the Ecological Society of America.
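
    A minimal sketch of the two fitting strategies compared above, using synthetic data: linear regression on log-transformed values for multiplicative lognormal error, and nonlinear least squares on the raw scale for additive normal error. The parameter values and noise levels are illustrative, and this is not the authors' published code.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(1, 50, 200)
a_true, b_true = 2.0, 0.75

# Multiplicative lognormal error: log-transform + linear regression (LR).
y_mult = a_true * x**b_true * rng.lognormal(0.0, 0.3, x.size)
b_lr, log_a_lr = np.polyfit(np.log(x), np.log(y_mult), 1)
a_lr = np.exp(log_a_lr)

# Additive normal error: nonlinear regression (NLR) on the raw scale.
y_add = a_true * x**b_true + rng.normal(0.0, 2.0, x.size)
(a_nlr, b_nlr), _ = curve_fit(lambda x, a, b: a * x**b, x, y_add, p0=(1.0, 1.0))

print("LR  estimate:", a_lr, b_lr)
print("NLR estimate:", a_nlr, b_nlr)
```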

  12. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead-compensation and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video module and remote monitoring software (Visual Basic 6.0) monitor the servo motor state in real time: the module gathers video signals and transmits them to the host computer, which displays the motor running state in the Visual Basic 6.0 window. At the same time, the main error sources are analyzed in detail. Quantitative analysis of the errors contributed by the bandwidth and the gyro sensor makes the proportion of each error in the total error more intuitive and consequently helps decrease the error of the system. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
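
    The sketch below illustrates the core idea of modeling gyro random drift as a low-order autoregressive process and suppressing it with a Kalman filter. It uses a scalar AR(1) model with invented noise parameters, so it is a simplified stand-in for the AR-model-plus-Kalman scheme described above rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
phi, q, r = 0.98, 0.02, 0.5   # AR(1) coefficient, process and measurement noise variances

# Simulate the gyro drift state and noisy gyro readings.
drift = np.zeros(n)
for k in range(1, n):
    drift[k] = phi * drift[k - 1] + rng.normal(0.0, np.sqrt(q))
z = drift + rng.normal(0.0, np.sqrt(r), n)

# Scalar Kalman filter matched to the AR(1) model.
x_hat, p = 0.0, 1.0
estimates = np.empty(n)
for k in range(n):
    # Predict
    x_hat, p = phi * x_hat, phi * phi * p + q
    # Update
    k_gain = p / (p + r)
    x_hat = x_hat + k_gain * (z[k] - x_hat)
    p = (1.0 - k_gain) * p
    estimates[k] = x_hat

print("raw RMS error:     ", np.sqrt(np.mean((z - drift) ** 2)))
print("filtered RMS error:", np.sqrt(np.mean((estimates - drift) ** 2)))
```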

  13. Geometric Accuracy Analysis of WorldDEM in Relation to AW3D30, SRTM and ASTER GDEM2

    NASA Astrophysics Data System (ADS)

    Bayburt, S.; Kurtak, A. B.; Büyüksalih, G.; Jacobsen, K.

    2017-05-01

    In a project area close to Istanbul, the quality of the WorldDEM, AW3D30, SRTM DSM and ASTER GDEM2 height models has been analyzed in relation to a reference aerial LiDAR DEM and to each other. The random and the systematic height errors have been separated. The absolute offsets of all height models in X, Y and Z are within expectations. The shifts were accounted for in advance to obtain a satisfactory estimate of the random error component. All height models are influenced by tilts of different sizes. In addition, systematic deformations can be seen that do not influence the standard deviation very much. The WorldDEM delivery includes a height error map, which is based on the interferometric phase errors and on the number and location of coverages from different orbits. A dependency of the height accuracy on the height error map information and the number of coverages can be seen, but it is smaller than expected. WorldDEM is more accurate than the other investigated height models, and with 10 m point spacing it includes more morphologic detail, visible in contour lines. The morphologic detail is close to that of the LiDAR digital surface model (DSM). As usual, a dependency of the accuracy on the terrain slope can be seen. In forest areas, the canopy definition of InSAR X- and C-band height models, as well as of height models based on optical satellite images, is not the same as the height definition by LiDAR. In addition, the interferometric phase uncertainty over forest areas is larger. Both effects lead to lower height accuracy in forest areas, also visible in the height error map.

  14. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and that they are not strictly comparable, due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance, but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
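
    A minimal example of the dynamic-programming (Levenshtein) error accounting discussed above. The weights for substitution, insertion, and deletion are exposed as parameters to make the paper's point concrete: changing them (or the handling of suspect markers, which is omitted here) changes the reported error count. The sample strings are invented.

```python
def levenshtein(ref: str, ocr: str, w_sub=1, w_ins=1, w_del=1) -> int:
    """Count OCR errors as the minimum-cost edit distance between the
    reference text and the OCR output. Different weights give different
    error counts, which is one reason published figures can disagree."""
    m, n = len(ref), len(ocr)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * w_del
    for j in range(1, n + 1):
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if ref[i - 1] == ocr[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,      # deletion
                          d[i][j - 1] + w_ins,      # insertion
                          d[i - 1][j - 1] + sub)    # substitution / match
    return d[m][n]

errors = levenshtein("detailed error analysis", "detaiied eror analvsis")
accuracy = 1 - errors / len("detailed error analysis")
print(errors, accuracy)
```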

  15. Sunrise/sunset thermal shock disturbance analysis and simulation for the TOPEX satellite

    NASA Technical Reports Server (NTRS)

    Dennehy, C. J.; Welch, R. V.; Zimbelman, D. F.

    1990-01-01

    It is shown here that during normal on-orbit operations the TOPEX low-earth orbiting satellite is subjected to an impulsive disturbance torque caused by rapid heating of its solar array when entering and exiting the earth's shadow. Error budgets and simulation results are used to demonstrate that this sunrise/sunset torque disturbance is the dominant Normal Mission Mode (NMM) attitude error source. The detailed thermomechanical modeling, analysis, and simulation of this torque is described, and the predicted on-orbit performance of the NMM attitude control system in the face of the sunrise/sunset disturbance is presented. The disturbance results in temporary attitude perturbations that exceed NMM pointing requirements. However, they are below the maximum allowable pointing error which would cause the radar altimeter to break lock.

  16. Evaluation of Hand Written and Computerized Out-Patient Prescriptions in Urban Part of Central Gujarat

    PubMed Central

    Buch, Jatin; Kothari, Nitin; Shah, Nishal

    2016-01-01

    Introduction: A prescription order is an important therapeutic transaction between physician and patient. A good-quality prescription is an extremely important factor for minimizing errors in dispensing medication, and it should adhere to guidelines for prescription writing for the benefit of the patient. Aim: To evaluate the frequency and types of prescription errors in outpatient prescriptions and to determine whether prescription writing abides by WHO standards. Materials and Methods: A cross-sectional observational study was conducted in Anand city. Allopathic private practitioners of different specialities practising in Anand city were included in the study. Collection of prescriptions was started a month after consent was obtained, to minimize bias in prescription writing. The prescriptions were collected from local pharmacy stores of Anand city over a period of six months and were analysed for errors in standard information according to the WHO guide to good prescribing. Statistical Analysis: Descriptive analysis was performed to estimate the frequency of errors; data were expressed as numbers and percentages. Results: A total of 749 prescriptions (549 handwritten and 200 computerized) were collected. Abundant omission errors were identified in handwritten prescriptions: for example, the OPD number was mentioned in 6.19%, the patient's age in 25.50%, gender in 17.30%, address in 9.29% and weight in 11.29%, while only 2.97% of drugs were prescribed by generic name. Route and dosage form were mentioned in 77.35%-78.15%, dose in 47.25%, unit in 13.91%, regimen in 72.93% and signa (directions for drug use) in 62.35%. A total of 4384 errors in the 549 handwritten prescriptions and 501 errors in the 200 computerized prescriptions were found in clinician and patient details, while in drug item details the totals were 5015 and 621 errors in handwritten and computerized prescriptions, respectively. Conclusion: Compared with handwritten prescriptions, computerized prescriptions appeared to be associated with relatively lower error rates. Since outpatient prescription errors are abundant and often occur in handwritten prescriptions, prescribers need to adapt to computerized prescription order entry in their daily practice. PMID:27504305

  17. Developing Performance Estimates for High Precision Astrometry with TMT

    NASA Astrophysics Data System (ADS)

    Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana

    2013-12-01

    Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range of 10 to 50 micro-arcseconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences and data analysis techniques. This paper summarizes our efforts to develop a top-down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into 5 categories: reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors, and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, and so on. Thus, it is not possible to develop a single error budget that applies to all science cases, and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arcseconds for certain science cases. While no showstoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.
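
    As a toy illustration of how such a budget is rolled up, the sketch below combines independent error terms by root-sum-square and reports each term's share of the total variance. The category names follow the five categories listed above, but the numerical values are placeholders, not the actual TMT/NFIRAOS budget.

```python
import numpy as np

# Illustrative per-category error terms in micro-arcseconds (placeholder
# values, not the real budget).
terms = {
    "reference source / catalog": 15.0,
    "atmospheric refraction correction": 10.0,
    "residual atmospheric effects": 20.0,
    "opto-mechanical": 12.0,
    "focal plane measurement": 8.0,
}

# Assuming independent terms, the total budget is the root-sum-square.
total = np.sqrt(sum(v**2 for v in terms.values()))
for name, v in terms.items():
    print(f"{name:35s} {v:6.1f} uas  ({100 * v**2 / total**2:4.1f}% of variance)")
print(f"{'total (RSS)':35s} {total:6.1f} uas")
```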

  18. Fatal flaws in a recent meta-analysis on abortion and mental health.

    PubMed

    Steinberg, Julia R; Trussell, James; Hall, Kelli S; Guthrie, Kate

    2012-11-01

    Similar to other reviews within the last 4 years, a thorough review by the Royal College of Psychiatrists, published in December 2011, found that compared to delivery of an unintended pregnancy, abortion does not increase women's risk of mental health problems. In contrast, a meta-analysis published in September 2011 concluded that abortion increases women's risk of mental health problems by 81% and that 10% of mental health problems are attributable to abortions. Like others, we strongly question the quality of this meta-analysis and its conclusions. Here we detail seven errors of this meta-analysis and three significant shortcomings of the included studies because policy, practice and the public have been misinformed. These errors and shortcomings render the meta-analysis' conclusions invalid. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.

  20. Sensitivity of planetary cruise navigation to earth orientation calibration errors

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Folkner, W. M.

    1995-01-01

    A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.

  1. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    PubMed

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
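
    A minimal SIMEX sketch for a simple linear-regression slope (not the Cox model treated in the paper): extra measurement error with variance lambda times sigma-squared is added in a simulation step, and the resulting estimates are extrapolated back to lambda = -1, the hypothetical error-free case. All data and parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, sigma_u = 500, 1.5, 0.8

x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 0.5, n)
x_obs = x_true + rng.normal(0.0, sigma_u, n)   # covariate measured with error

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

# Simulation step: add extra error with variance lambda * sigma_u**2.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
mean_slopes = []
for lam in lambdas:
    slopes = [slope(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
              for _ in range(200)]
    mean_slopes.append(np.mean(slopes))

# Extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1,
# i.e. the hypothetical case of no measurement error.
coefs = np.polyfit(lambdas, mean_slopes, 2)
beta_simex = np.polyval(coefs, -1.0)
print("naive slope:", slope(x_obs, y), "SIMEX-corrected slope:", beta_simex)
```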

  2. Monopulse azimuth measurement in the ATC Radar Beacon System

    DOT National Transportation Integrated Search

    1971-12-01

    A review is made of the application of sum-difference beam techniques to the ATC Radar Beacon System. A detailed error analysis is presented for the case of a monopulse azimuth measurement based on the existing beacon antenna with a modified fe...

  3. Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane

    The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients follow from the derived equations and numerical results. The matrix of sensitivity coefficients appears diagonally dominant, although this is not always satisfied in its detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. Using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty in nuclear data, and of changes in the one-group cross-section input caused by fuel design changes through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data error or uncertainty, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.
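
    For readers unfamiliar with sensitivity coefficients, the sketch below shows the generic idea of a normalized sensitivity S = (x/f)(df/dx) estimated with a central finite difference, applied to a toy response built from one-group cross sections. It is only a generic illustration; it does not reproduce the perturbation-theory equations or the importance function of the paper, and the response function and values are invented.

```python
import numpy as np

def sensitivity_coefficient(f, x0, index, rel_step=1e-3):
    """Normalized sensitivity S = (x / f) * (df / dx) of a scalar response f
    to one input, estimated with a central finite difference."""
    x0 = np.asarray(x0, dtype=float)
    h = rel_step * x0[index]
    xp, xm = x0.copy(), x0.copy()
    xp[index] += h
    xm[index] -= h
    df_dx = (f(xp) - f(xm)) / (2.0 * h)
    return x0[index] / f(x0) * df_dx

# Toy response built from one-group cross sections (purely illustrative,
# not the importance function of the paper).
def response(sigma):
    capture, fission, scatter = sigma
    return fission / (capture + fission + 0.1 * scatter)

sigma0 = [0.3, 1.2, 5.0]
for i, name in enumerate(["capture", "fission", "scatter"]):
    print(name, sensitivity_coefficient(response, sigma0, i))
```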

  4. Local projection stabilization for linearized Brinkman-Forchheimer-Darcy equation

    NASA Astrophysics Data System (ADS)

    Skrzypacz, Piotr

    2017-09-01

    The Local Projection Stabilization (LPS) is presented for the linearized Brinkman-Forchheimer-Darcy equation with high Reynolds numbers. The considered equation can be used to model porous medium flows in chemical reactors of packed bed type. The detailed finite element analysis is presented for the case of nonconstant porosity. The enriched variant of LPS is based on the equal order interpolation for the velocity and pressure. The optimal error bounds for the velocity and pressure errors are justified numerically.

  5. Study on optical 3D angular deformations measurement

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Wang, Xingshu; Huang, Zongsheng; Yang, Jinliang

    2013-12-01

    3D angular deformations are inevitable when ships are sailing, owing to changes in environmental temperature and external stresses. The measurement of 3D angular deformations is one of the most critical and difficult issues in the naval and shipbuilding industries around the world. In this paper, we propose an optical method to measure 3D ship angular deformations and discuss the measurement errors in detail. Theoretical analysis shows that the measured errors of the pitching and yawing deformations are induced by the installation errors of the image aperture, and that the measured error of the rolling deformation depends on the subpixel location algorithm used in image processing. It indicates that the measurement errors of the optical method proposed in this paper are on the order of arcseconds when careful installation and precise image processing are both performed.

  6. Use of modeling to identify vulnerabilities to human error in laparoscopy.

    PubMed

    Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra

    2010-01-01

    This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
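
    The modified FMEA step described above can be made concrete with a small prioritization sketch: each potential error is scored for occurrence, severity, and detectability, and a risk priority number is used to rank interventions. The scoring scale and the error descriptions below are hypothetical placeholders, not the team's actual scores for Veress needle insertion.

```python
# Each potential error is scored 1-10 for probability of occurrence (O),
# severity of consequence if undetected (S), and difficulty of detection (D).
# The risk priority number RPN = O * S * D is used to rank interventions.
# The error descriptions below are hypothetical placeholders.
potential_errors = [
    {"error": "needle inserted at wrong angle",    "O": 4, "S": 8, "D": 3},
    {"error": "insufflation started before check", "O": 2, "S": 9, "D": 5},
    {"error": "grasp force not confirmed",         "O": 6, "S": 4, "D": 2},
]

for e in potential_errors:
    e["RPN"] = e["O"] * e["S"] * e["D"]

# Highest-priority vulnerabilities first.
for e in sorted(potential_errors, key=lambda e: e["RPN"], reverse=True):
    print(f'{e["RPN"]:4d}  {e["error"]}')
```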

  7. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.

  8. Analysis of all-optical temporal integrator employing phase-shifted DFB-SOA.

    PubMed

    Jia, Xin-Hong; Ji, Xiao-Ling; Xu, Cong; Wang, Zi-Nan; Zhang, Wei-Li

    2014-11-17

    An all-optical temporal integrator using a phase-shifted distributed-feedback semiconductor optical amplifier (DFB-SOA) is investigated. The influences of system parameters on its energy transmittance and integration error are explored in detail. The numerical analysis shows that enhanced energy transmittance and an enlarged integration time window can be simultaneously achieved by increasing the injected current in the vicinity of the lasing threshold. We find that the range of input pulse widths with low integration error is highly sensitive to the injected optical power, owing to gain saturation and the induced detuning-deviation mechanism. The initial frequency detuning should also be carefully chosen to suppress the deviation of the integration from the ideal waveform output.

  9. A Unified Approach to Measurement Error and Missing Data: Details and Extensions

    ERIC Educational Resources Information Center

    Blackwell, Matthew; Honaker, James; King, Gary

    2017-01-01

    We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model…

  10. Data Transfer Efficiency Over Satellite Circuits Using a Multi-Socket Extension to the File Transfer Protocol (FTP)

    NASA Technical Reports Server (NTRS)

    Allman, Mark; Ostermann, Shawn; Kruse, Hans

    1996-01-01

    In several experiments using NASA's Advanced Communications Technology Satellite (ACTS), investigators have reported disappointing throughput using the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite over 1.536 Mbit/s (T1) satellite circuits. A detailed analysis of File Transfer Protocol (FTP) file transfers reveals that both the TCP window size and the TCP slow-start algorithm contribute to the observed limits in throughput. In this paper we summarize the experimental and theoretical analysis of the throughput limit imposed by TCP on the satellite circuit. We then discuss in detail the implementation of a multi-socket FTP, the XFTP client and server. XFTP has been tested using the ACTS system. Finally, we discuss a preliminary set of tests on a link with non-zero bit error rates. XFTP shows promising performance under these conditions, suggesting the possibility that a multi-socket application may be less affected by bit errors than a single, large-window TCP connection.
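
    The window-size limit mentioned above follows from the standard bound that a single TCP connection cannot exceed window_size / round_trip_time. The sketch below applies that bound to a T1 satellite circuit, assuming a classic 64 KiB window and a roughly 550 ms geostationary round-trip time; both numbers are illustrative, not measurements from the ACTS experiments.

```python
import math

# A single TCP connection is bounded by window_size / round_trip_time.
# Illustrative values: a 64 KiB window, a ~550 ms geostationary round trip,
# and a 1.536 Mbit/s T1 circuit.
window_bytes = 64 * 1024
rtt_s = 0.55
t1_bps = 1_536_000

single_conn_bps = 8 * window_bytes / rtt_s
print(f"single-connection ceiling: {single_conn_bps / 1e3:.0f} kbit/s "
      f"({100 * single_conn_bps / t1_bps:.0f}% of T1)")

# Rough number of parallel sockets needed to fill the circuit, which is the
# motivation for a multi-socket FTP such as XFTP.
print("sockets needed:", math.ceil(t1_bps / single_conn_bps))
```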

  11. Waffle mode error in the AEOS adaptive optics point-spread function

    NASA Astrophysics Data System (ADS)

    Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.

    2003-02-01

    Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade and have the potential to revolutionize the kinds of science done with 4-5 m class ground-based telescopes. However, given sufficiently detailed study and analysis, existing AO systems can be improved beyond their original specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance already exist. However, in terms of AO system performance improvements, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band and show that the magnitude of the 'waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially illuminated WFS subapertures.

  12. Error mechanism analyses of an ultra-precision stage for high speed scan motion over a large stroke

    NASA Astrophysics Data System (ADS)

    Wang, Shaokai; Tan, Jiubin; Cui, Jiwen

    2015-02-01

    The reticle stage (RS) is designed to perform high-speed scan motion at the nanometer scale over a large stroke. Compared with the allowable scan accuracy of a few nanometers, errors caused by any internal or external disturbance are critical and must not be ignored. In this paper, the RS is first introduced in terms of its mechanical structure, forms of motion, and control method. On that basis, the mechanisms by which disturbances transfer to the final servo-related error in the scan direction are analyzed, including feedforward error, coupling between the large-stroke stage (LS) and the short-stroke stage (SS), and movement of the measurement reference. In particular, the different forms of coupling between the SS and LS are discussed in detail. Following this theoretical analysis, the contributions of these disturbances to the final error are simulated numerically. The residual positioning error caused by feedforward error in the acceleration process is about 2 nm after the settling time, the coupling between the SS and LS contributes about 2.19 nm, and the movement of the measurement reference about 0.6 nm.

  13. Finding External Indicators of Load on a Web Server via Analysis of Black-Box Performance Measurements

    ERIC Educational Resources Information Center

    Chiarini, Marc A.

    2010-01-01

    Traditional methods for system performance analysis have long relied on a mix of queuing theory, detailed system knowledge, intuition, and trial-and-error. These approaches often require construction of incomplete gray-box models that can be costly to build and difficult to scale or generalize. In this thesis, we present a black-box analysis…

  14. EIA model documentation: World oil refining logistics demand model, "WORLD" reference manual, Version 1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-04-11

    This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management, covering data sources, managing over-optimization, calibration and seasonality, check-points for case construction, and common errors. Section 4 describes the WORLD system in detail, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of WORLD matrix structure. It provides an overview, describes how regional definitions are controlled, and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several appendices supplement the main sections.

  15. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    NASA Astrophysics Data System (ADS)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement is in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multi-task parallel measurement, the optoelectronic scanning measurement network has received close attention. It is widely used in the joining of large components, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks focuses on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system that can, in theory, realize dynamic measurement. In this paper we investigate the dynamic error sources in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, a simulation of the dynamic error is carried out. The dynamic error is quantified, and its volatility and periodicity are identified. The dynamic error characteristics are shown in detail. The results lay a foundation for further accuracy improvement.
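
    To make the phase and synchronization error terms above more tangible, the toy calculation below converts a timing (synchronization) offset into an angular error for a rotating-transmitter system, where the measured angle is proportional to the sweep time, and adds a rough target-motion term. The rotation rate, timing offset and target speed are invented values, and the model is a deliberate simplification rather than the dynamic error model developed in the paper.

```python
import numpy as np

# A rotating transmitter infers angles from the time at which a swept laser
# plane crosses the receiver, so a timing offset dt maps to an angular error
# of roughly omega * dt; target motion during one revolution adds v * t_meas.
# All numbers are illustrative placeholders.
rev_per_s = 50.0                      # transmitter rotation rate
omega = 2 * np.pi * rev_per_s         # rad/s
dt_sync = 2e-6                        # synchronization error between stations (s)
range_m = 10.0                        # distance to the target
v_target = 1.0                        # target speed (m/s)
t_meas = 1.0 / rev_per_s              # one measurement takes one revolution

angle_err = omega * dt_sync                     # rad
pos_err_timing = range_m * angle_err            # lateral error from timing
pos_err_motion = v_target * t_meas              # error from target motion
print(f"angular error: {np.degrees(angle_err) * 3600:.1f} arcsec")
print(f"position error (timing): {pos_err_timing * 1e3:.3f} mm, "
      f"(motion): {pos_err_motion * 1e3:.1f} mm")
```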

  16. FORTRAN program for induction motor analysis

    NASA Technical Reports Server (NTRS)

    Bollenbacher, G.

    1976-01-01

    A FORTRAN program for induction motor analysis is described. The analysis includes calculations of torque-speed characteristics, efficiency, losses, magnetic flux densities, weights, and various electrical parameters. The program is limited to three-phase Y-connected, squirrel-cage motors. Detailed instructions for using the program are given. The analysis equations are documented, and the sources of the equations are referenced. The appendixes include a FORTRAN symbol list, a complete explanation of input requirements, and a list of error messages.

  17. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar-sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail, and several associated hazard-mitigating measures that might be taken are considered.

  18. Application of Monte-Carlo Analyses for the Microwave Anisotropy Probe (MAP) Mission

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Rohrbaugh, David; Schiff, Conrad; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The Microwave Anisotropy Probe (MAP) is the third launch in the National Aeronautics and Space Administration's (NASA's) Medium Class Explorers (MIDEX) program. MAP will measure, in greater detail, the cosmic microwave background radiation from an orbit about the Sun-Earth-Moon L2 Lagrangian point. Maneuvers will be required to transition MAP from its initial highly elliptical orbit to a lunar encounter, which will provide the remaining energy to send MAP out to a lissajous orbit about L2. Monte-Carlo analysis methods were used to evaluate the potential maneuver error sources and determine their effect on the fixed MAP propellant budget. This paper will discuss the results of the analyses on three separate phases of the MAP mission: recovering from launch vehicle errors, responding to phasing loop maneuver errors, and evaluating the effect of maneuver execution errors and orbit determination errors on stationkeeping maneuvers at L2.
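
    The sketch below shows the flavor of such a Monte-Carlo maneuver analysis: execution errors (a fixed plus proportional magnitude error and a pointing error) are sampled many times, and a correction cost is accumulated so a high-percentile propellant figure can be read off. The error magnitudes, the maneuver size and the simple correction model are assumptions for illustration only, not MAP's actual error budget or trajectory model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 100_000

dv_nominal = 20.0                 # nominal maneuver size, m/s (illustrative)
sigma_prop = 0.01                 # 1% proportional magnitude error (1-sigma)
sigma_fixed = 0.02                # fixed magnitude error, m/s (1-sigma)
sigma_point = np.radians(0.5)     # pointing error (1-sigma)

# Executed delta-v magnitude with magnitude errors applied.
dv_exec = (dv_nominal * (1 + rng.normal(0, sigma_prop, n_trials))
           + rng.normal(0, sigma_fixed, n_trials))

# A pointing error leaves a residual to be cleaned up by a correction burn,
# roughly dv * angle for small angles.
pointing_residual = dv_exec * np.abs(rng.normal(0, sigma_point, n_trials))
correction = np.abs(dv_exec - dv_nominal) + pointing_residual

print("mean correction cost [m/s]:", correction.mean())
print("99th-percentile cost [m/s]:", np.percentile(correction, 99))
```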

  19. The GEOS Ozone Data Assimilation System: Specification of Error Statistics

    NASA Technical Reports Server (NTRS)

    Stajner, Ivanka; Riishojgaard, Lars Peter; Rood, Richard B.

    2000-01-01

    A global three-dimensional ozone data assimilation system has been developed at the Data Assimilation Office of the NASA/Goddard Space Flight Center. The Total Ozone Mapping Spectrometer (TOMS) total ozone and the Solar Backscatter Ultraviolet (SBUV or SBUV/2) partial ozone profile observations are assimilated. The assimilation, into an off-line ozone transport model, is done using the global Physical-space Statistical Analysis Scheme (PSAS). This system became operational in December 1999. A detailed description of the statistical analysis scheme, and in particular of the forecast and observation error covariance models, is given. A new global anisotropic horizontal forecast error correlation model accounts for a varying distribution of observations with latitude. Correlations are largest in the zonal direction in the tropics, where data are sparse. The forecast error variance is modeled as proportional to the ozone field. The forecast error covariance parameters were determined by maximum likelihood estimation. The error covariance models are validated using chi-squared statistics. The analyzed ozone fields for the winter of 1992 are validated against independent observations from ozonesondes and the Halogen Occultation Experiment (HALOE). There is better than 10% agreement between mean HALOE and analysis fields between 70 and 0.2 hPa. The global root-mean-square (RMS) difference between TOMS observed and forecast values is less than 4%. The global RMS difference between SBUV observed and analyzed ozone between 50 and 3 hPa is less than 15%.

  20. A general geometric theory of attitude determination from directional sensing

    NASA Technical Reports Server (NTRS)

    Fang, B. T.

    1976-01-01

    A general geometric theory of spacecraft attitude determination from external reference direction sensors was presented. Outputs of different sensors are reduced to two kinds of basic directional measurements. Errors in these measurement equations are studied in detail. The partial derivatives of measurements with respect to the spacecraft orbit, the spacecraft attitude, and the error parameters form the basis for all orbit and attitude determination schemes and error analysis programs and are presented in a series of tables. The question of attitude observability is studied with the introduction of a graphical construction which provides a great deal of physical insight. The result is applied to the attitude observability of the IMP-8 spacecraft.

  1. Error in total ozone measurements arising from aerosol attenuation

    NASA Technical Reports Server (NTRS)

    Thomas, R. W. L.; Basher, R. E.

    1979-01-01

    A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.

  2. Multiple comparison analysis testing in ANOVA.

    PubMed

    McHugh, Mary L

    2011-01-01

    The Analysis of Variance (ANOVA) test has long been an important tool for researchers conducting studies on multiple experimental groups and one or more control groups. However, ANOVA cannot provide detailed information on differences among the various study groups, or on complex combinations of study groups. To fully understand group differences in an ANOVA, researchers must conduct tests of the differences between particular pairs of experimental and control groups. Tests conducted on subsets of data tested previously in another analysis are called post hoc tests. A class of post hoc tests that provide this type of detailed information for ANOVA results are called "multiple comparison analysis" tests. The most commonly used multiple comparison analysis statistics include the following tests: Tukey, Newman-Keuls, Scheffé, Bonferroni and Dunnett. These statistical tools each have specific uses, advantages and disadvantages. Some are best used for testing theory while others are useful in generating new theory. Selection of the appropriate post hoc test will provide researchers with the most detailed information while limiting Type 1 errors due to alpha inflation.
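
    As a minimal worked example of the workflow described above, the sketch below runs an omnibus one-way ANOVA on three synthetic groups and then a Tukey HSD multiple-comparison test to see which pairs differ. It assumes SciPy 1.8 or later for scipy.stats.tukey_hsd; the group data are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
control = rng.normal(10.0, 2.0, 30)
treat_a = rng.normal(11.5, 2.0, 30)
treat_b = rng.normal(10.2, 2.0, 30)

# Omnibus one-way ANOVA: are any group means different?
f_stat, p_omnibus = stats.f_oneway(control, treat_a, treat_b)
print("ANOVA:", f_stat, p_omnibus)

# Post hoc multiple-comparison test (Tukey's HSD) to see which pairs differ
# while controlling the family-wise Type 1 error rate.
tukey = stats.tukey_hsd(control, treat_a, treat_b)
print(tukey)
```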

  3. Errors in logic and statistics plague a meta-analysis

    USDA-ARS?s Scientific Manuscript database

    The non-target effects of transgenic insecticidal crops have been a topic of debate for over a decade, and many laboratory and field studies have addressed the issue in numerous countries. In 2009 Lovei et al. (Transgenic Insecticidal Crops and Natural Enemies: A Detailed Review of Laboratory Studies)...

  4. Low Probability Tail Event Analysis and Mitigation in BPA Control Area: Task 2 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Shuai; Makarov, Yuri V.; McKinstry, Craig A.

    Task report detailing low probability tail event analysis and mitigation in BPA control area. Tail event refers to the situation in a power system when unfavorable forecast errors of load and wind are superposed onto fast load and wind ramps, or non-wind generators falling short of scheduled output, causing the imbalance between generation and load to become very significant.

  5. Second-order shaped pulses for solid-state quantum computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Pinaki

    2008-01-01

    We present the construction and detailed analysis of highly optimized self-refocusing pulse shapes for several rotation angles. We characterize the constructed pulses by the coefficients appearing in the Magnus expansion up to second order. This allows a semianalytical analysis of the performance of the constructed shapes in sequences and composite pulses by computing the corresponding leading-order error operators. Higher orders can be analyzed with the numerical technique suggested by us previously. We illustrate the technique by analyzing several composite pulses designed to protect against pulse amplitude errors, and on decoupling sequences for potentially long chains of qubits with on-site and nearest-neighbor couplings.

  6. Development of a flight software testing methodology

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Andrews, D. M.

    1985-01-01

    The research to develop a testing methodology for flight software is described. An experiment was conducted in using assertions to dynamically test digital flight control software. The experiment showed that 87% of typical errors introduced into the program would be detected by assertions. Detailed analysis of the test data showed that the number of assertions needed to detect those errors could be reduced to a minimal set. The analysis also revealed that the most effective assertions tested program parameters that provided greater indirect (collateral) testing of other parameters. In addition, a prototype watchdog task system was built to evaluate the effectiveness of executing assertions in parallel by using the multitasking features of Ada.
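
    A small illustration of the assertion-based runtime checking evaluated above. The original experiment instrumented digital flight control software (and the watchdog prototype used Ada tasking); the sketch below is a Python toy with invented parameter ranges, showing how an assertion on an intermediate result also provides indirect (collateral) coverage of the values that feed it.

```python
def update_pitch_command(pitch_deg: float, rate_dps: float, dt_s: float) -> float:
    """Toy control-law step instrumented with assertions. The ranges are
    hypothetical; the point is that an assertion on the computed command
    also exercises the inputs that produced it."""
    assert 0.0 < dt_s <= 0.1, "time step out of range"
    assert -90.0 <= pitch_deg <= 90.0, "pitch out of physical range"
    assert abs(rate_dps) <= 30.0, "pitch rate exceeds actuator limit"

    new_pitch = pitch_deg + rate_dps * dt_s
    assert -90.0 <= new_pitch <= 90.0, "commanded pitch out of range"
    return new_pitch

print(update_pitch_command(5.0, 2.0, 0.02))
```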

  7. A First Look at the Navigation Design and Analysis for the Orion Exploration Mission 2

    NASA Technical Reports Server (NTRS)

    D'Souza, Chris D.; Zenetti, Renato

    2017-01-01

    This paper will detail the navigation and dispersion design and analysis of the first Orion crewed mission. The optical navigation measurement model will be described. The vehicle noise includes the residual acceleration from attitude deadbanding, attitude maneuvers, CO2 venting, wastewater venting, ammonia sublimator venting and solar radiation pressure. The maneuver execution errors account for the contribution of accelerometer scale-factor on the accuracy of the maneuver execution. Linear covariance techniques are used to obtain the navigation errors and the trajectory dispersions as well as the DV performance. Particular attention will be paid to the accuracy of the delivery at Earth Entry Interface and at the Lunar Flyby.

  8. Online beam energy measurement of Beijing electron positron collider II linear accelerator

    NASA Astrophysics Data System (ADS)

    Wang, S.; Iqbal, M.; Liu, R.; Chi, Y.

    2016-02-01

    This paper describes the online beam energy measurement for the upgraded Beijing Electron Positron Collider II (BEPCII) linear accelerator (linac). It presents the calculation formula, gives a detailed error analysis, discusses the practical realization, and provides verification. The method measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation and performs much better than using a single BPM. The error analysis indicates that this online measurement has further potential uses, such as forming part of a beam energy feedback system. The reliability of the method is also discussed and demonstrated in this paper.

  9. Online beam energy measurement of Beijing electron positron collider II linear accelerator.

    PubMed

    Wang, S; Iqbal, M; Liu, R; Chi, Y

    2016-02-01

    This paper describes the online beam energy measurement for the upgraded Beijing Electron Positron Collider II (BEPCII) linear accelerator (linac). It presents the calculation formula, gives a detailed error analysis, discusses the practical realization, and provides verification. The method measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation and performs much better than using a single BPM. The error analysis indicates that this online measurement has further potential uses, such as forming part of a beam energy feedback system. The reliability of the method is also discussed and demonstrated in this paper.

  10. Research on effects of phase error in phase-shifting interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Hongjun; Wang, Zhao; Zhao, Hong; Tian, Ailing; Liu, Bingcai

    2007-12-01

    In phase-shifting interferometry, the phase-shifting error introduced by the phase shifter is the main factor that directly affects the measurement accuracy of the interferometer. In this paper, the sources and types of phase-shifting error are introduced, and methods to eliminate these errors are reviewed. Based on the theory of phase-shifting interferometry, the effects of phase-shifting error are analyzed in detail. A liquid crystal display (LCD) used as a new type of phase shifter has the advantage that the phase shift can be controlled digitally without any moving or rotating mechanical element; the phase shift in the measuring system is induced by changing the coded image displayed on the LCD. The phase modulation characteristic of the LCD is analyzed theoretically and tested. Based on the Fourier transform, a model of the effect of the phase error arising from the LCD is established for four-step phase-shifting interferometry, and the error range is obtained. To reduce the error, a new error compensation algorithm is put forward: the error is obtained by processing the interferograms, the interferograms are compensated, and the measurement results are obtained from the four-step phase-shifting interferograms. Theoretical analysis and simulation results demonstrate the feasibility of this approach to improve measurement accuracy.
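
    For reference, the standard four-step relation assumed above recovers the wrapped phase as phi = atan2(I4 - I2, I1 - I3) from four frames shifted by pi/2. The sketch below simulates a constant phase-shifter miscalibration epsilon to show how a shift error propagates into the recovered phase; the background, modulation and epsilon values are illustrative, and this is not the compensation algorithm of the paper.

```python
import numpy as np

# Four-step algorithm: with ideal pi/2 shifts, phi = atan2(I4 - I2, I1 - I3).
# Simulate a constant phase-shifter miscalibration epsilon (values illustrative).
phi_true = np.linspace(-np.pi, np.pi, 1000, endpoint=False)
a, b = 1.0, 0.5                      # background and modulation
epsilon = 0.05                       # phase-shift calibration error (rad)

steps = np.array([0, 1, 2, 3]) * (np.pi / 2 + epsilon)
I1, I2, I3, I4 = [a + b * np.cos(phi_true + s) for s in steps]

phi_est = np.arctan2(I4 - I2, I1 - I3)
err = np.angle(np.exp(1j * (phi_est - phi_true)))   # wrap the difference
print("peak-to-valley phase error [rad]:", err.max() - err.min())
```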

  11. Analysis of Video-Based Microscopic Particle Trajectories Using Kalman Filtering

    PubMed Central

    Wu, Pei-Hsun; Agarwal, Ashutosh; Hess, Henry; Khargonekar, Pramod P.; Tseng, Yiider

    2010-01-01

    The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by the application of a Kalman filter, a powerful algorithm to estimate the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes. PMID:20550894
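
    A minimal sketch of the idea: a constant-velocity Kalman filter applied to noisy 1D particle positions, reducing the localization error while following the underlying motion. The frame rate, noise levels and process-noise tuning are invented, and this is not the authors' parameter-selection procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n = 0.033, 300                      # ~30 fps video
sigma_meas = 0.05                       # localization noise (arbitrary units)

# Simulate a particle with slowly varying velocity plus measurement noise.
vel = np.cumsum(rng.normal(0, 0.002, n))
pos_true = np.cumsum(vel * dt)
pos_meas = pos_true + rng.normal(0, sigma_meas, n)

# Constant-velocity Kalman filter: state [position, velocity].
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])
Q = np.array([[1e-6, 0], [0, 1e-4]])    # process noise (tuning parameter)
R = np.array([[sigma_meas**2]])

x = np.zeros(2)
P = np.eye(2)
filtered = np.empty(n)
for k in range(n):
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # gain
    x = x + (K @ (pos_meas[k] - H @ x)).ravel()         # update state
    P = (np.eye(2) - K @ H) @ P
    filtered[k] = x[0]

print("raw RMS error:     ", np.sqrt(np.mean((pos_meas - pos_true) ** 2)))
print("filtered RMS error:", np.sqrt(np.mean((filtered - pos_true) ** 2)))
```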

  12. Development of TPS flight test and operational instrumentation

    NASA Technical Reports Server (NTRS)

    Carnahan, K. R.; Hartman, G. J.; Neuner, G. J.

    1975-01-01

    Thermal and flow sensor instrumentation was developed for use as an integral part of the space shuttle orbiter reusable thermal protection system. The effort was performed in three tasks: a study to determine the optimum instruments and instrument installations for the space shuttle orbiter RSI and RCC TPS; tests and/or analysis to determine the instrument installations to minimize measurement errors; and analysis using data from the test program for comparison to analytical methods. A detailed review of existing state of the art instrumentation in industry was performed to determine the baseline for the departure of the research effort. From this information, detailed criteria for thermal protection system instrumentation were developed.

  13. Geodetic positioning using a global positioning system of satellites

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1980-01-01

    Geodetic positioning using range, integrated Doppler, and interferometric observations from a constellation of twenty-four Global Positioning System satellites is analyzed. A summary of the proposals for geodetic positioning and baseline determination is given which includes a description of measurement techniques and comments on rank deficiency and error sources. An analysis of variance comparison of range, Doppler, and interferometric time delay to determine their relative geometric strength for baseline determination is included. An analytic examination to the effect of a priori constraints on positioning using simultaneous observations from two stations is presented. Dynamic point positioning and baseline determination using range and Doppler is examined in detail. Models for the error sources influencing dynamic positioning are developed. Included is a discussion of atomic clock stability, and range and Doppler observation error statistics based on random correlated atomic clock error are derived.

  14. Report on Automated Semantic Analysis of Scientific and Engineering Codes

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    The loss of the Mars Climate Orbiter due to a software error reveals what insiders know: software development is difficult and risky because, in part, current practices do not readily handle the complex details of software. Yet, for scientific software development the MCO mishap represents the tip of the iceberg; few errors are so public, and many errors are avoided with a combination of expertise, care, and testing during development and modification. Further, this effort consumes valuable time and resources even when hardware costs and execution time continually decrease. Software development could use better tools! This lack of tools has motivated the semantic analysis work explained in this report. However, this work has a distinguishing emphasis; the tool focuses on automated recognition of the fundamental mathematical and physical meaning of scientific code. Further, its comprehension is measured by quantitatively evaluating overall recognition with practical codes. This emphasis is necessary if software errors-like the MCO error-are to be quickly and inexpensively avoided in the future. This report evaluates the progress made with this problem. It presents recommendations, describes the approach, the tool's status, the challenges, related research, and a development strategy.

  15. Testing accelerometer rectification error caused by multidimensional composite inputs with double turntable centrifuge.

    PubMed

    Guan, W; Meng, X F; Dong, X M

    2014-12-01

    Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error only use one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce the constant acceleration and vibration simultaneously and we tested the rectification error due to the composite accelerations. At first, we deduced the expression of the rectification error with the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with the rectification error model. Then a detailed experimental procedure and testing results were described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of the vibration. The experimental results showed the distinguished characteristics of the rectification error caused by the composite accelerations. The linear relation between the constant acceleration and the rectification error was proved. The experimental procedure and results presented in this context can be referenced for the investigation of the characteristics of accelerometer with multiple inputs.

  16. Performance analysis of a new positron camera geometry for high speed, fine particle tracking

    NASA Astrophysics Data System (ADS)

    Sovechles, J. M.; Boucher, D.; Pax, R.; Leadbeater, T.; Sasmito, A. P.; Waters, K. E.

    2017-09-01

    A new positron camera arrangement was assembled using 16 ECAT951 modular detector blocks. A closely packed, cross pattern arrangement was selected to produce a highly sensitive cylindrical region for tracking particles with low activities and high speeds. To determine the capabilities of this system, a comprehensive analysis of the tracking performance was conducted to quantify the 3D location error and location frequency as a function of tracer activity and speed. The 3D error was found to range from 0.54 mm for a stationary particle, consistent for all tracer activities, up to 4.33 mm for a tracer with an activity of 3 MBq and a speed of 4 m·s⁻¹. For lower activity tracers (<10⁻² MBq), the error was more sensitive to increases in speed, increasing to 28 mm (at 4 m·s⁻¹), indicating that at these conditions a reliable trajectory is not possible. These results expanded on, but correlated well with, previous literature that only contained location errors for tracer speeds up to 1.5 m·s⁻¹. The camera was also used to track directly activated mineral particles inside a two-inch hydrocyclone and a 142 mm diameter flotation cell. A detailed trajectory, inside the hydrocyclone, of a −212 +106 µm (10⁻¹ MBq) quartz particle displayed the expected spiralling motion towards the apex. This was the first time a mineral particle of this size had been successfully traced within a hydrocyclone; however, more work is required to develop detailed velocity fields.

  17. Spatial variability in sensitivity of reference crop ET to accuracy of climate data in the Texas High Plains

    USDA-ARS?s Scientific Manuscript database

    A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...

  18. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
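
    A common remedy for such numerical noise, in the spirit of the response-surface methodology named in the title, is to fit a smooth low-order polynomial to a set of sampled analyses and optimize the surrogate instead. The sketch below fits a full quadratic surface in two design variables by least squares; the "analysis code" is a made-up noisy function, not an HSCT aerodynamics model.

      import numpy as np

      rng = np.random.default_rng(0)

      # Stand-in for a noisy analysis code: a smooth function plus low-amplitude noise.
      def noisy_analysis(x1, x2):
          return (x1 - 0.3)**2 + 2.0 * (x2 + 0.1)**2 + 0.01 * rng.standard_normal(np.shape(x1))

      # Sample the design space and fit a full quadratic response surface by least squares.
      n = 60
      x1 = rng.uniform(-1, 1, n)
      x2 = rng.uniform(-1, 1, n)
      y = noisy_analysis(x1, x2)

      X = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2, x1 * x2])
      coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("fitted quadratic coefficients:", np.round(coeffs, 3))
      # The smooth surrogate can then be optimized in place of the noisy analysis code.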

  19. GRAVSAT/GEOPAUSE covariance analysis including geopotential aliasing

    NASA Technical Reports Server (NTRS)

    Koch, D. W.

    1975-01-01

    A conventional covariance analysis for the GRAVSAT/GEOPAUSE mission is described in which the uncertainties of approximately 200 parameters, including the geopotential coefficients to degree and order 12, are estimated over three different tracking intervals. The estimated orbital uncertainties for both GRAVSAT and GEOPAUSE reach levels more accurate than presently available. The adjusted measurement bias errors approach the mission goal. Survey errors in the low centimeter range are achieved after ten days of tracking. The ability of the mission to obtain accuracies of geopotential terms to (12, 12) one to two orders of magnitude superior to present accuracy levels is clearly shown. A unique feature of this report is that the aliasing structure of this (12, 12) field is examined. It is shown that uncertainties for unadjusted terms to (12, 12) still exert a degrading effect upon the adjusted error of an arbitrarily selected term of lower degree and order. Finally, the distribution of the aliasing from the unestimated uncertainty of a particular high degree and order geopotential term upon the errors of all remaining adjusted terms is listed in detail.

  20. Configuration study for a 30 GHz monolithic receive array, volume 1

    NASA Technical Reports Server (NTRS)

    Nester, W. H.; Cleaveland, B.; Edward, B.; Gotkis, S.; Hesserbacker, G.; Loh, J.; Mitchell, B.

    1984-01-01

    Gregorian, Cassegrain, and single reflector systems were analyzed in configuration studies for communications satellite receive antennas. Parametric design and performance curves were generated. A preliminary design of each reflector/feed system was derived including radiating elements, beam-former network, beamsteering system, and MMIC module architecture. Performance estimates and component requirements were developed for each design. A recommended design was selected for both the scanning beam and the fixed beam case. Detailed design and performance analysis results are presented for the selected Cassegrain configurations. The final design point is characterized in detail and performance measures evaluated in terms of gain, sidelobe level, noise figure, carrier-to-interference ratio, prime power, and beamsteering. The effects of mutual coupling and excitation errors (including phase and amplitude quantization errors) are evaluated. Mechanical assembly drawings are given for the final design point. Thermal design requirements are addressed in the mechanical design.

  1. Description and Sensitivity Analysis of the SOLSE/LORE-2 and SAGE III Limb Scattering Ozone Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.

    2002-01-01

    The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.
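
    For readers unfamiliar with the retrieval framework mentioned above, the sketch below shows a linear optimal-estimation (maximum a posteriori) retrieval that weights the measurements against an a priori profile through their covariances. The forward model, covariances, and dimensions are toy values, not the SOLSE/LORE or SAGE III configuration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy linear forward model y = K x + noise (n_alt ozone layers, m radiance channels).
      n_alt, m = 10, 15
      K = rng.normal(size=(m, n_alt))                 # assumed weighting functions
      x_true = 1.0 + 0.2 * np.sin(np.linspace(0, 3, n_alt))
      x_a = np.ones(n_alt)                            # a priori profile
      S_a = 0.1**2 * np.eye(n_alt)                    # a priori covariance
      S_e = (1.0 / 200.0)**2 * np.eye(m)              # noise covariance for signal-to-noise ~ 200

      y = K @ x_true + rng.multivariate_normal(np.zeros(m), S_e)

      # Maximum a posteriori (optimal estimation) retrieval.
      A = K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a)
      x_hat = x_a + np.linalg.solve(A, K.T @ np.linalg.inv(S_e) @ (y - K @ x_a))
      print("RMS retrieval error:", np.sqrt(np.mean((x_hat - x_true)**2)))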

  2. First order error corrections in common introductory physics experiments

    NASA Astrophysics Data System (ADS)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students and not much thought is paid to their source. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better and more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.
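
    A worked example of the kind of first-order treatment the abstract advocates (values are illustrative only): propagating length and period uncertainties through g = 4*pi^2*L/T^2 for a simple pendulum.

      import numpy as np

      # First-order (Gaussian) error propagation for a pendulum measurement of g.
      L, dL = 1.000, 0.002        # length and its uncertainty (m)
      T, dT = 2.006, 0.005        # period and its uncertainty (s)

      g = 4 * np.pi**2 * L / T**2
      dg = g * np.sqrt((dL / L)**2 + (2 * dT / T)**2)
      print(f"g = {g:.3f} +/- {dg:.3f} m/s^2")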

  3. Factors that influence the generation of autobiographical memory conjunction errors

    PubMed Central

    Devitt, Aleea L.; Monk-Fromont, Edwin; Schacter, Daniel L.; Addis, Donna Rose

    2015-01-01

    The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory may be incorrectly incorporated into another, forming autobiographical memory conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of autobiographical memory conjunction errors. PMID:25611492

  4. Factors that influence the generation of autobiographical memory conjunction errors.

    PubMed

    Devitt, Aleea L; Monk-Fromont, Edwin; Schacter, Daniel L; Addis, Donna Rose

    2016-01-01

    The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory (AM) may be incorrectly incorporated into another, forming AM conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of AM conjunction errors.

  5. Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, J.; Whitmore, J.; Blair, N.

    2014-08-01

    This report expands upon a previous work by the same authors, published in the 40th IEEE Photovoltaic Specialists conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed compared to quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% for all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.
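
    A minimal sketch of the two headline metrics, annual error and hourly RMSE, computed here on synthetic hourly energy series; normalizing the hourly RMSE by the mean hourly energy is an assumption for illustration, not necessarily the report's exact definition.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical hourly measured and modeled AC energy for one year (kWh).
      hours = 8760
      measured = np.clip(rng.normal(40, 25, hours), 0, None)
      modeled = measured * 1.03 + rng.normal(0, 2, hours)   # a model with ~3% bias plus scatter

      annual_error_pct = 100 * (modeled.sum() - measured.sum()) / measured.sum()
      hourly_rmse_pct = 100 * np.sqrt(np.mean((modeled - measured)**2)) / measured.mean()
      print(f"annual error: {annual_error_pct:+.1f}%")
      print(f"hourly RMSE : {hourly_rmse_pct:.1f}% of mean hourly energy")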

  6. The study of CD side to side error in line/space pattern caused by post-exposure bake effect

    NASA Astrophysics Data System (ADS)

    Huang, Jin; Guo, Eric; Ge, Haiming; Lu, Max; Wu, Yijun; Tian, Mingjing; Yan, Shichuan; Wang, Ran

    2016-10-01

    In semiconductor manufacturing, as the design rule has decreased, the ITRS roadmap requires increasingly tight critical dimension (CD) control. CD uniformity is one of the necessary parameters to assure good performance and reliable functionality of any integrated circuit (IC) [1] [2], and at advanced technology nodes it is a challenge to control CD uniformity well. Studies of CD uniformity achieved by tuning the post-exposure bake (PEB) and develop processes have made significant progress [3], but CD side-to-side errors in some line/space patterns are still found in practical applications, and the error has grown to exceed the uniformity tolerance. Detailed analysis showed no significant relationship between the CD side-to-side error and the develop process, even when several developer types were used. In addition, the CD side-to-side error cannot be corrected by electron-beam correction, because the error does not appear in all line/space pattern masks. In this paper the root cause of the CD side-to-side error is analyzed, and the PEB module process is optimized as the main factor for improving the CD side-to-side error.

  7. Polarizable multipolar electrostatics for cholesterol

    NASA Astrophysics Data System (ADS)

    Fletcher, Timothy L.; Popelier, Paul L. A.

    2016-08-01

    FFLUX is a novel force field under development for biomolecular modelling, and is based on topological atoms and the machine learning method kriging. Successful kriging models have been obtained for realistic electrostatics of amino acids, small peptides, and some carbohydrates but here, for the first time, we construct kriging models for a sizeable ligand of great importance, which is cholesterol. Cholesterol's mean total (internal) electrostatic energy prediction error amounts to 3.9 kJ mol⁻¹, which pleasingly falls below the threshold of 1 kcal mol⁻¹ often cited for accurate biomolecular modelling. We present a detailed analysis of the error distributions.
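
    Kriging is essentially Gaussian-process regression, so the idea can be sketched with scikit-learn on a made-up two-feature energy function; this stands in for no part of FFLUX itself, whose models map many atomic local-frame coordinates to multipolar electrostatic energies.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(3)

      # Toy training data: two geometric features -> an "energy" with a little noise.
      X = rng.uniform(-1, 1, size=(80, 2))
      y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1]**2 + 0.01 * rng.standard_normal(80)

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4),
                                    normalize_y=True)
      gp.fit(X[:60], y[:60])
      pred, std = gp.predict(X[60:], return_std=True)
      print("mean absolute prediction error:", np.mean(np.abs(pred - y[60:])))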

  8. ISMP Medication Error Report Analysis

    PubMed Central

    Cohen, Michael R.; Smetzer, Judy L.

    2017-01-01

    These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:28179735

  9. Effect of phase errors in stepped-frequency radar systems

    NASA Astrophysics Data System (ADS)

    Vanbrundt, H. E.

    1988-04-01

    Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduced signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used for an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and signal-to-phase-noise ratio.
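
    A quick simulation of the central effect (assuming zero-mean Gaussian phase errors across the frequency steps, and ignoring the range dependence discussed in the report): the coherently synthesized peak power drops by roughly exp(-sigma^2) as the per-step phase error sigma grows.

      import numpy as np

      rng = np.random.default_rng(4)

      # Stepped-frequency profiles are synthesized coherently across N frequency steps;
      # random phase errors on each step reduce the peak (compressed) response.
      N, trials = 256, 500
      for sigma in (0.0, 0.2, 0.5, 1.0):
          peaks = []
          for _ in range(trials):
              phase_err = rng.normal(0.0, sigma, N)
              peaks.append(np.abs(np.mean(np.exp(1j * phase_err)))**2)   # ideal peak power = 1
          print(f"sigma = {sigma:.1f} rad: simulated loss = {np.mean(peaks):.3f}, "
                f"exp(-sigma^2) = {np.exp(-sigma**2):.3f}")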

  10. ISMP Medication Error Report Analysis

    PubMed Central

    Cohen, Michael R.; Smetzer, Judy L.

    2017-01-01

    These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your in-service training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers’ names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters’ wishes as to the level of detail included in publications. PMID:29276260

  11. ISMP Medication Error Report Analysis

    PubMed Central

    Cohen, Michael R.; Smetzer, Judy L.

    2016-01-01

    These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:28057945

  12. ISMP Medication Error Report Analysis

    PubMed Central

    Cohen, Michael R.; Smetzer, Judy L.

    2016-01-01

    These medication errors have occurred in health care facilities at least once. They will happen again—perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications. PMID:27928183

  13. Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses.

    PubMed

    Baldwin, Abigail; Rodriguez, Elizabeth S

    2016-02-01

    The prevalence of medication errors associated with chemotherapy administration is not precisely known. Little evidence exists concerning the extent or nature of errors; however, some evidence demonstrates that errors are related to prescribing. This article demonstrates how the review of chemotherapy orders by a designated nurse known as a verification nurse (VN) at a National Cancer Institute-designated comprehensive cancer center helps to identify prescribing errors that may prevent chemotherapy administration mistakes and improve patient safety in outpatient infusion units. This article will describe the role of the VN and details of the verification process. To identify benefits of the VN role, a retrospective review and analysis of chemotherapy near-miss events from 2009-2014 was performed. A total of 4,282 events related to chemotherapy were entered into the Reporting to Improve Safety and Quality system. A majority of the events were categorized as near-miss events, or those that, because of chance, did not result in patient injury, and were identified at the point of prescribing.

  14. Hydromagnetic couple-stress nanofluid flow over a moving convective wall: OHAM analysis

    NASA Astrophysics Data System (ADS)

    Awais, M.; Saleem, S.; Hayat, T.; Irum, S.

    2016-12-01

    This communication presents the magnetohydrodynamics (MHD) flow of a couple-stress nanofluid over a convective moving wall. The flow dynamics are analyzed in the boundary layer region. The convective cooling phenomenon combined with thermophoresis and Brownian motion effects is discussed. Similarity transforms are utilized to convert the system of partial differential equations into coupled non-linear ordinary differential equations. The optimal homotopy analysis method (OHAM) is utilized, and the concept of minimization is employed by defining the average squared residual errors. Effects of the couple-stress parameter, convective cooling process parameter and energy enhancement parameters are displayed via graphs and discussed in detail. Various tables are also constructed to present the error analysis and a comparison of the obtained results with already published data. Streamlines are plotted showing the difference between the Newtonian fluid model and the couple-stress fluid model.

  15. Human Factors in Aircraft Maintenance

    DTIC Science & Technology

    2001-03-01

    Reason, J. (1990). Human Error. Cambridge: Cambridge University Press. Schmidt, J., Schmorrow, D. and Figlock, R. (2000). Human factors... and so on. When each step is described in sufficient detail, the task description is complete and task analysis can begin (e.g. Drury, Paramore, Van... Paramore, B., Van Cott, H.P., Grey, S.M. and Corlett, E.M. (1987). Task analysis. In G. Salvendy (Ed.) Handbook of Human Factors, Chapter 3.4. New

  16. A detailed description of the uncertainty analysis for high area ratio rocket nozzle tests at the NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.; Dieck, Ronald H.; Chuang, Isaac

    1987-01-01

    A preliminary uncertainty analysis was performed for the High Area Ratio Rocket Nozzle test program which took place at the altitude test capsule of the Rocket Engine Test Facility at the NASA Lewis Research Center. Results from the study establish the uncertainty of measured and calculated parameters required for the calculation of rocket engine specific impulse. A generalized description of the uncertainty methodology used is provided. Specific equations and a detailed description of the analysis is presented. Verification of the uncertainty analysis model was performed by comparison with results from the experimental program's data reduction code. Final results include an uncertainty for specific impulse of 1.30 percent. The largest contributors to this uncertainty were calibration errors from the test capsule pressure and thrust measurement devices.

  17. A detailed description of the uncertainty analysis for High Area Ratio Rocket Nozzle tests at the NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.; Dieck, Ronald H.; Chuang, Isaac

    1987-01-01

    A preliminary uncertainty analysis has been performed for the High Area Ratio Rocket Nozzle test program which took place at the altitude test capsule of the Rocket Engine Test Facility at the NASA Lewis Research Center. Results from the study establish the uncertainty of measured and calculated parameters required for the calculation of rocket engine specific impulse. A generalized description of the uncertainty methodology used is provided. Specific equations and a detailed description of the analysis are presented. Verification of the uncertainty analysis model was performed by comparison with results from the experimental program's data reduction code. Final results include an uncertainty for specific impulse of 1.30 percent. The largest contributors to this uncertainty were calibration errors from the test capsule pressure and thrust measurement devices.
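
    As a sketch of how such an uncertainty rolls up (with invented numbers, not the report's actual error budget), specific impulse Isp = F/(mdot*g0) inherits its relative uncertainty from the thrust and flow-rate measurements through a root-sum-square combination.

      import math

      # Illustrative relative uncertainties (assumed, not the report's values).
      rel_thrust_unc = 0.012      # 1.2% thrust-measurement uncertainty
      rel_mdot_unc = 0.005        # 0.5% propellant flow-rate uncertainty

      rel_isp_unc = math.sqrt(rel_thrust_unc**2 + rel_mdot_unc**2)
      print(f"specific impulse uncertainty: {100 * rel_isp_unc:.2f}%")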

  18. Mindtagger: A Demonstration of Data Labeling in Knowledge Base Construction.

    PubMed

    Shin, Jaeho; Ré, Christopher; Cafarella, Michael

    2015-08-01

    End-to-end knowledge base construction systems using statistical inference are enabling more people to automatically extract high-quality domain-specific information from unstructured data. As a result of deploying DeepDive framework across several domains, we found new challenges in debugging and improving such end-to-end systems to construct high-quality knowledge bases. DeepDive has an iterative development cycle in which users improve the data. To help our users, we needed to develop principles for analyzing the system's error as well as provide tooling for inspecting and labeling various data products of the system. We created guidelines for error analysis modeled after our colleagues' best practices, in which data labeling plays a critical role in every step of the analysis. To enable more productive and systematic data labeling, we created Mindtagger, a versatile tool that can be configured to support a wide range of tasks. In this demonstration, we show in detail what data labeling tasks are modeled in our error analysis guidelines and how each of them is performed using Mindtagger.

  19. Angular rate optimal design for the rotary strapdown inertial navigation system.

    PubMed

    Yu, Fei; Sun, Qian

    2014-04-22

    Because it maintains high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotating scheme, has been studied by numerous researchers, and the rotating angular rate, as one of the key parameters, is known to strongly influence the effectiveness of error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper based on the Laplace transform and the inverse Laplace transform. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS.

  20. Geometrical optics analysis of the structural imperfection of retroreflection corner cubes with a nonlinear conjugate gradient method.

    PubMed

    Kim, Hwi; Min, Sung-Wook; Lee, Byoungho

    2008-12-01

    Geometrical optics analysis of the structural imperfection of retroreflection corner cubes is described. In the analysis, a geometrical optics model of six-beam reflection patterns generated by an imperfect retroreflection corner cube is developed, and its structural error extraction is formulated as a nonlinear optimization problem. The nonlinear conjugate gradient method is employed for solving the nonlinear optimization problem, and its detailed implementation is described. The proposed method of analysis is a mathematical basis for the nondestructive optical inspection of imperfectly fabricated retroreflection corner cubes.
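
    A minimal sketch of the optimization step, using SciPy's nonlinear conjugate-gradient minimizer on a made-up two-parameter forward model of the six reflected beams; the actual geometrical-optics model of an imperfect corner cube is far more detailed.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical forward model: predicted beam deviation vs. beam angle for two
      # structural-error parameters (a stand-in, not the paper's optical model).
      def forward_model(p, angles):
          tilt, wedge = p
          return tilt * np.cos(angles) + wedge * np.sin(2 * angles)

      angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)     # six reflected beams
      p_true = np.array([0.8, 0.15])
      observed = forward_model(p_true, angles) + 0.01 * np.random.default_rng(5).standard_normal(6)

      cost = lambda p: np.sum((forward_model(p, angles) - observed)**2)
      result = minimize(cost, x0=np.zeros(2), method='CG')      # nonlinear conjugate gradient
      print("estimated structural-error parameters:", np.round(result.x, 3))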

  1. Laser-fluorescence measurement of marine algae

    NASA Technical Reports Server (NTRS)

    Browell, E. V.

    1980-01-01

    Progress in remote sensing of algae by laser-induced fluorescence is subject of comprehensive report. Existing single-wavelength and four-wavelength systems are reviewed, and new expression for power received by airborne sensor is derived. Result differs by as much as factor of 10 from those previously reported. Detailed error analysis evaluates factors affecting accuracy of laser-fluorosensor systems.

  2. A new multi-symplectic scheme for the generalized Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Li, Haochen; Sun, Jianqiang

    2012-09-01

    We propose a new scheme for the generalized Kadomtsev-Petviashvili (KP) equation. The multi-symplectic conservation property of the new scheme is proved. Backward error analysis shows that the new multi-symplectic scheme has second-order accuracy in space and time. Numerical applications to the KPI equation and the KPII equation are presented in detail.
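
    For reference, a standard textbook form of the (non-generalized) KP equation is shown below; the paper itself treats a generalized version.

      (u_t + 6 u u_x + u_xxx)_x + 3 sigma^2 u_yy = 0,   with sigma^2 = -1 (KPI) and sigma^2 = +1 (KPII)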

  3. SUGAR: graphical user interface-based data refiner for high-throughput DNA sequencing.

    PubMed

    Sato, Yukuto; Kojima, Kaname; Nariai, Naoki; Yamaguchi-Kabata, Yumi; Kawai, Yosuke; Takahashi, Mamoru; Mimori, Takahiro; Nagasaki, Masao

    2014-08-08

    Next-generation sequencers (NGSs) have become one of the main tools for current biology. To obtain useful insights from the NGS data, it is essential to control low-quality portions of the data affected by technical errors such as air bubbles in sequencing fluidics. We developed SUGAR (subtile-based GUI-assisted refiner), software that can handle ultra-high-throughput data with a user-friendly graphical user interface (GUI) and interactive analysis capability. SUGAR generates high-resolution quality heatmaps of the flowcell, enabling users to find possible signals of technical errors during sequencing. The sequencing data generated from the error-affected regions of a flowcell can be selectively removed by automated analysis or GUI-assisted operations implemented in SUGAR. The automated data-cleaning function based on sequence read quality (Phred) scores was applied to public whole human genome sequencing data, and the overall mapping quality was shown to improve. The detailed data evaluation and cleaning enabled by SUGAR would reduce technical problems in sequence read mapping, improving subsequent variant analyses that require high-quality sequence data and mapping results. Therefore, the software will be especially useful for controlling the quality of variant calls for low-population cells, e.g., cancers, in samples affected by technical errors in sequencing procedures.
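
    A minimal sketch in the spirit of the quality-based cleaning described above (SUGAR itself works through per-subtile quality heatmaps and a GUI): drop FASTQ reads whose mean Phred score falls below a threshold. The file name and threshold are placeholders.

      # Mean Phred score of a quality string (Phred+33 encoding assumed).
      def mean_phred(quality_line, offset=33):
          return sum(ord(c) - offset for c in quality_line) / len(quality_line)

      # Keep only reads whose mean quality meets the threshold.
      def filter_fastq(path, min_mean_q=25):
          kept = []
          with open(path) as fh:
              while True:
                  header = fh.readline().rstrip()
                  if not header:
                      break
                  seq = fh.readline().rstrip()
                  plus = fh.readline().rstrip()
                  qual = fh.readline().rstrip()
                  if mean_phred(qual) >= min_mean_q:
                      kept.append((header, seq, plus, qual))
          return kept

      # reads = filter_fastq("sample.fastq")   # hypothetical input file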

  4. Evaluation of monthly rainfall estimates derived from the special sensor microwave/imager (SSM/I) over the tropical Pacific

    NASA Technical Reports Server (NTRS)

    Berg, Wesley; Avery, Susan K.

    1995-01-01

    Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the special sensor microwave/imager (SSM/I) for the period from July 1987 through December 1990. These monthly estimates are calibrated using data from a network of Pacific atoll rain gauges in order to account for systematic biases and are then compared with several visible and infrared satellite-based rainfall estimation techniques for the purpose of evaluating the performance of the microwave-based estimates. Although several key differences among the various techniques are observed, the general features of the monthly rainfall time series agree very well. Finally, the significant error sources contributing to uncertainties in the monthly estimates are examined and an estimate of the total error is produced. The sampling error characteristics are investigated using data from two SSM/I sensors and a detailed analysis of the characteristics of the diurnal cycle of rainfall over the oceans and its contribution to sampling errors in the monthly SSM/I estimates is made using geosynchronous satellite data. Based on the analysis of the sampling and other error sources the total error was estimated to be of the order of 30 to 50% of the monthly rainfall for estimates averaged over 2.5 deg x 2.5 deg latitude/longitude boxes, with a contribution due to diurnal variability of the order of 10%.

  5. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics and mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g., force plate data collection) and digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to assess the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  6. Fast, efficient error reconciliation for quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buttler, W.T.; Lamoreaux, S.K.; Torgerson, J.R.

    2003-05-01

    We describe an error-reconciliation protocol, which we call Winnow, based on the exchange of parity and Hamming's 'syndrome' for N-bit subunits of a large dataset. The Winnow protocol was developed in the context of quantum-key distribution and offers significant advantages and net higher efficiency compared to other widely used protocols within the quantum cryptography community. A detailed mathematical analysis of the Winnow protocol is presented in the context of practical implementations of quantum-key distribution; in particular, the information overhead required for secure implementation is one of the most important criteria in the evaluation of a particular error-reconciliation protocol. The increase in efficiency for the Winnow protocol is largely due to the reduction in authenticated public communication required for its implementation.
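
    The syndrome idea at the heart of the protocol can be sketched in a few lines (this illustrates Hamming(7,4) syndrome decoding only, not the full Winnow protocol with its privacy maintenance): when the parities of a 7-bit block disagree, exchanging syndromes locates a single discrepant bit.

      import numpy as np

      H = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])        # parity-check matrix; columns are binary 1..7

      alice = np.array([1, 0, 1, 1, 0, 0, 1])
      bob = alice.copy()
      bob[4] ^= 1                                  # one transmission error at index 4 (position 5)

      # In practice each side computes the syndrome of its own block; the difference of the
      # two syndromes equals the syndrome of (alice XOR bob) computed here directly.
      syndrome = (H @ (alice ^ bob)) % 2
      error_position = int(syndrome @ [1, 2, 4])   # read the syndrome as a binary number
      bob[error_position - 1] ^= 1                 # Bob flips the located bit
      print("error located at position", error_position, "- blocks agree:", np.array_equal(alice, bob))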

  7. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
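
    A toy illustration of why the interleaving strategies mentioned above matter (parameters are invented and far smaller than the real cross-interleaved Reed Solomon structure): a burst of consecutive channel symbols is spread thinly over many codewords after de-interleaving.

      # 8 codewords of 10 symbols each, written to the channel in interleaved order.
      n_codewords, n_symbols = 8, 10
      burst_start, burst_len = 23, 12              # 12 consecutive corrupted channel symbols

      errors_per_codeword = [0] * n_codewords
      for channel_pos in range(burst_start, burst_start + burst_len):
          codeword = channel_pos % n_codewords     # adjacent channel symbols belong to different codewords
          errors_per_codeword[codeword] += 1

      print("errors per codeword (interleaved):", errors_per_codeword)
      # Without interleaving the same burst would put all 12 errors into one or two codewords,
      # likely exceeding the per-codeword correction capability.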

  8. Analysis of error-correction constraints in an optical disk

    NASA Astrophysics Data System (ADS)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  9. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  10. Investigation of advanced phase-shifting projected fringe profilometry techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hongyu

    1999-11-01

    The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool in the profile measurement of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle three important problems, which severely limit the capability and the accuracy of the PSPFP technique, with some new approaches. Chapter 1 briefly introduces background information on the PSPFP technique, including the measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of the absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.
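
    For readers unfamiliar with the core computation, the sketch below shows the standard four-step phase-shifting recovery, phi = arctan2(I4 - I2, I1 - I3), on a synthetic phase map; this is generic textbook material, not the dissertation's enhanced algorithms.

      import numpy as np

      # Four frames with phase shifts of 0, pi/2, pi, 3*pi/2: I_k = A + B*cos(phi + k*pi/2).
      rng = np.random.default_rng(6)
      phi_true = rng.uniform(-np.pi, np.pi, size=(4, 4))        # small synthetic phase map
      A, B = 120.0, 80.0

      I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
      phi_rec = np.arctan2(I[3] - I[1], I[0] - I[2])
      wrapped_err = np.angle(np.exp(1j * (phi_rec - phi_true)))
      print("max wrapped-phase error:", np.max(np.abs(wrapped_err)))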

  11. Collaborative recall of details of an emotional film.

    PubMed

    Wessel, Ineke; Zandstra, Anna Roos E; Hengeveld, Hester M E; Moulds, Michelle L

    2015-01-01

    Collaborative inhibition refers to the phenomenon that when several people work together to produce a single memory report, they typically produce fewer items than when the unique items in the individual reports of the same number of participants are combined (i.e., nominal recall). Yet, apart from this negative effect, collaboration may be beneficial in that group members remove errors from a collaborative report. Collaborative inhibition studies on memory for emotional stimuli are scarce. Therefore, the present study examined both collaborative inhibition and collaborative error reduction in the recall of the details of emotional material in a laboratory setting. Female undergraduates (n = 111) viewed a film clip of a fatal accident and subsequently engaged in either collaborative (n = 57) or individual recall (n = 54) in groups of three. The results show that, across several detail categories, collaborating groups recalled fewer details than nominal groups. However, overall, nominal recall produced more errors than collaborative recall. The present results extend earlier findings on both collaborative inhibition and error reduction to the recall of affectively laden material. These findings may have implications for the applied fields of forensic and clinical psychology.

  12. Evaluation of Hand Written and Computerized Out-Patient Prescriptions in Urban Part of Central Gujarat.

    PubMed

    Joshi, Anuradha; Buch, Jatin; Kothari, Nitin; Shah, Nishal

    2016-06-01

    A prescription order is an important therapeutic transaction between physician and patient. A good-quality prescription is an extremely important factor for minimizing errors in dispensing medication, and it should adhere to guidelines for prescription writing for the benefit of the patient. The aim was to evaluate the frequency and type of prescription errors in outpatient prescriptions and to determine whether prescription writing abided by WHO standards of prescription writing. A cross-sectional observational study was conducted at Anand city. Allopathic private practitioners of different specialities practising in Anand city were included in the study. Collection of prescriptions was started a month after consent was obtained, to minimize bias in prescription writing. The prescriptions were collected from local pharmacy stores of Anand city over a period of six months. Prescriptions were analysed for errors in standard information, according to the WHO guide to good prescribing. Descriptive analysis was performed to estimate the frequency of errors; data were expressed as numbers and percentages. A total of 749 (549 handwritten and 200 computerised) prescriptions were collected. Abundant omission errors were identified in handwritten prescriptions: for example, the OPD number was mentioned in only 6.19% of prescriptions, patient age in 25.50%, gender in 17.30%, address in 9.29%, and patient weight in 11.29%, while among drug items only 2.97% of drugs were prescribed by generic name. Route and dosage form were mentioned in 77.35%-78.15%, dose in 47.25%, unit in 13.91%, regimen in 72.93%, and signa (directions for drug use) in 62.35%. A total of 4384 errors in the 549 handwritten prescriptions and 501 errors in the 200 computerized prescriptions were found in clinician and patient details, while in drug item details 5015 and 621 errors were identified in handwritten and computerized prescriptions, respectively. Compared to handwritten prescriptions, computerized prescriptions appeared to be associated with relatively lower rates of error. Since outpatient prescription errors are abundant and often occur in handwritten prescriptions, prescribers need to adapt themselves to computerized prescription order entry in their daily practice.

  13. An Improved Spectral Analysis Method for Fatigue Damage Assessment of Details in Liquid Cargo Tanks

    NASA Astrophysics Data System (ADS)

    Zhao, Peng-yuan; Huang, Xiao-ping

    2018-03-01

    The traditional spectral analysis method, which assumes a linear system, introduces errors when calculating the fatigue damage of details in liquid cargo tanks because of the nonlinear relationship between the dynamic stress and the ship acceleration. An improved spectral analysis method for assessing the fatigue damage of a detail in a liquid cargo tank is proposed in this paper. Based on the assumptions that the wave process can be simulated by summing sinusoidal waves of different frequencies and that the stress process can be simulated by summing the stress processes induced by these sinusoidal waves, the stress power spectral density (PSD) is calculated by expanding the stress processes induced by the sinusoidal waves into Fourier series and adding the amplitudes of the harmonic components with the same frequency. This analysis method can take the nonlinear relationship into consideration, and the fatigue damage is then calculated from the PSD of stress. Taking an independent tank in an LNG carrier as an example, the accuracy of the improved spectral analysis method is shown to be much better than that of the traditional spectral analysis method by comparing the calculated damage results with those of the time-domain method. The proposed spectral analysis method is more accurate in calculating the fatigue damage of details in ship liquid cargo tanks.
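
    As a point of comparison only (this is the conventional narrow-band spectral estimate, without the paper's nonlinear correction), the sketch below builds a stress PSD with Welch's method, takes its spectral moments, and applies the Rayleigh narrow-band damage approximation with an assumed S-N curve N = C/S^m.

      import numpy as np
      from scipy.signal import welch
      from scipy.special import gamma

      rng = np.random.default_rng(7)

      # Synthetic stress history (MPa): a dominant wave-induced component plus broadband noise.
      fs, T = 100.0, 3600.0
      t = np.arange(0, T, 1 / fs)
      stress = 30 * np.sin(2 * np.pi * 0.8 * t) + 10 * rng.standard_normal(t.size)

      f, psd = welch(stress, fs=fs, nperseg=4096)
      df = f[1] - f[0]
      m0 = np.sum(psd) * df                        # spectral moments of the stress PSD
      m2 = np.sum(f**2 * psd) * df
      nu0 = np.sqrt(m2 / m0)                       # mean zero up-crossing rate (Hz)

      m_sn, C_sn = 3.0, 1.0e12                     # assumed S-N curve N = C / S^m
      damage = nu0 * T / C_sn * (np.sqrt(2 * m0))**m_sn * gamma(1 + m_sn / 2)
      print(f"narrow-band damage over {T / 3600:.0f} h: {damage:.2e}")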

  14. Assessment and Verification of SLS Block 1-B Exploration Upper Stage State and Stage Disposal Performance

    NASA Technical Reports Server (NTRS)

    Patrick, Sean; Oliver, Emerson

    2018-01-01

    One of the SLS Navigation System's key performance requirements is a constraint on the payload system's delta-v allocation to correct for insertion errors due to vehicle state uncertainty at payload separation. The SLS navigation team has developed a Delta-Delta-V analysis approach to assess the effect on trajectory correction maneuver (TCM) design needed to correct for navigation errors. This approach differs from traditional covariance analysis based methods and makes no assumptions with regard to the propagation of the state dynamics. This allows for consideration of non-linearity in the propagation of state uncertainties. The Delta-Delta-V analysis approach re-optimizes perturbed SLS mission trajectories by varying key mission states in accordance with an assumed state error. The state error is developed from detailed vehicle 6-DOF Monte Carlo analysis or generated using covariance analysis. These perturbed trajectories are compared to a nominal trajectory to determine the necessary TCM design. To implement this analysis approach, a tool set was developed which combines the functionality of a 3-DOF trajectory optimization tool, Copernicus, and a detailed 6-DOF vehicle simulation tool, Marshall Aerospace Vehicle Representation in C (MAVERIC). In addition to delta-v allocation constraints on SLS navigation performance, SLS mission requirements dictate successful upper stage disposal. Due to engine and propellant constraints, the SLS Exploration Upper Stage (EUS) must be disposed into heliocentric space by means of a lunar fly-by maneuver. As with payload delta-v allocation, upper stage disposal maneuvers must place the EUS on a trajectory that maximizes the probability of achieving a heliocentric orbit post lunar fly-by, considering all sources of vehicle state uncertainty prior to the maneuver. To ensure disposal, the SLS navigation team has developed an analysis approach to derive optimal disposal guidance targets. This approach maximizes the state error covariance prior to the maneuver to develop and re-optimize a nominal disposal maneuver (DM) target that, if achieved, would maximize the potential for successful upper stage disposal. For EUS disposal analysis, a set of two tools was developed. The first considers only the nominal pre-disposal maneuver state, vehicle constraints, and an a priori estimate of the state error covariance. In the analysis, the optimal nominal disposal target is determined. This is performed by re-formulating the trajectory optimization to consider constraints on the eigenvectors of the error ellipse applied to the nominal trajectory. A bisection search methodology is implemented in the tool to refine these dispersions, resulting in the maximum dispersion feasible for successful disposal via lunar fly-by. Success is defined based on the probability that the vehicle will not impact the lunar surface and will achieve a characteristic energy (C3) relative to the Earth such that it is no longer in the Earth-Moon system. The second tool propagates post-disposal maneuver states to determine the success of disposal for provided achieved trajectory states. This is performed using the optimized nominal target within the 6-DOF vehicle simulation. This paper will discuss the application of the Delta-Delta-V analysis approach for performance evaluation as well as trajectory re-optimization so as to demonstrate the system's capability in meeting performance constraints. Additionally, further discussion of the implementation of the disposal assessment analysis will be provided.

  15. Prediction Errors but Not Sharpened Signals Simulate Multivoxel fMRI Patterns during Speech Perception

    PubMed Central

    Davis, Matthew H.

    2016-01-01

    Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209

  16. SU-E-T-635: Process Mapping of Eye Plaque Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, J; Kim, Y

    Purpose: To apply a risk-based assessment and analysis technique (AAPM TG 100) to eye plaque brachytherapy treatment of ocular melanoma. Methods: The roles and responsibilities of the personnel involved in eye plaque brachytherapy are defined for the retinal specialist, radiation oncologist, nurse, and medical physicist. The entire procedure was examined carefully: major processes were identified first, and then the details of each major process were followed. Results: Seventy-one total potential modes were identified. The eight major processes (with the corresponding number of detailed modes) are patient consultation (2 modes), pretreatment tumor localization (11), treatment planning (13), seed ordering and calibration (10), eye plaque assembly (10), implantation (11), removal (11), and deconstruction (3). Half of the total modes (36) are related to the physicist, even though the physicist is not involved in steps such as the actual procedures of suturing and removing the plaque. Conclusion: Failure modes can arise not only from physicist-related procedures such as treatment planning and source activity calibration, but also from more clinical procedures performed by other medical staff. Improving the accuracy of communication in non-physicist-related clinical procedures could be one approach to preventing human errors, while a more rigorous physics double check would reduce errors in physicist-related procedures. Eventually, based on this detailed process map, failure mode and effects analysis (FMEA) will identify the top tiers of modes by ranking all possible modes with a risk priority number (RPN); for those high-risk modes, fault tree analysis (FTA) will provide possible preventive action plans.
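
    For context on the FMEA step mentioned in the conclusion, the risk priority number is conventionally the product of severity, occurrence, and (lack of) detectability scores, each rated on a 1-10 scale in TG-100-style analyses. The sketch below uses invented failure modes and scores purely to show the ranking calculation; none of the numbers come from this study.

      # Minimal FMEA ranking sketch with invented scores (not data from this abstract).
      # RPN = severity x occurrence x detectability score, each on a 1-10 scale.
      failure_modes = {
          "wrong seed activity entered in plan": (9, 3, 4),
          "plaque assembled with seed gap":      (7, 2, 6),
          "suture placement communicated late":  (5, 4, 7),
      }

      rpn = {mode: s * o * d for mode, (s, o, d) in failure_modes.items()}
      for mode, value in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
          print(f"RPN {value:4d}  {mode}")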

  17. Analysis of laser fluorosensor systems for remote algae detection and quantification

    NASA Technical Reports Server (NTRS)

    Browell, E. V.

    1977-01-01

    The development and performance of single- and multiple-wavelength laser fluorosensor systems for use in the remote detection and quantification of algae are discussed. The appropriate equation for the fluorescence power received by a laser fluorosensor system is derived in detail. Experimental development of a single wavelength system and a four wavelength system, which selectively excites the algae contained in the four primary algal color groups, is reviewed, and test results are presented. A comprehensive error analysis is reported which evaluates the uncertainty in the remote determination of the chlorophyll a concentration contained in algae by single- and multiple-wavelength laser fluorosensor systems. Results of the error analysis indicate that the remote quantification of chlorophyll a by a laser fluorosensor system requires optimum excitation wavelength(s), remote measurement of marine attenuation coefficients, and supplemental instrumentation to reduce uncertainties in the algal fluorescence cross sections.

  18. Comparison of medication safety systems in critical access hospitals: Combined analysis of two studies.

    PubMed

    Cochran, Gary L; Barrett, Ryan S; Horn, Susan D

    2016-08-01

    The role of pharmacist transcription, onsite pharmacist dispensing, use of automated dispensing cabinets (ADCs), nurse-nurse double checks, or barcode-assisted medication administration (BCMA) in reducing medication error rates in critical access hospitals (CAHs) was evaluated. Investigators used the practice-based evidence methodology to identify predictors of medication errors in 12 Nebraska CAHs. Detailed information about each medication administered was recorded through direct observation. Errors were identified by comparing the observed medication administered with the physician's order. Chi-square analysis and Fisher's exact test were used to measure differences between groups of medication-dispensing procedures. Nurses observed 6497 medications being administered to 1374 patients. The overall error rate was 1.2%. The transcription error rates for orders transcribed by an onsite pharmacist were slightly lower than for orders transcribed by a telepharmacy service (0.10% and 0.33%, respectively). Fewer dispensing errors occurred when medications were dispensed by an onsite pharmacist versus any other method of medication acquisition (0.10% versus 0.44%, p = 0.0085). The rates of dispensing errors for medications that were retrieved from a single-cell ADC (0.19%), a multicell ADC (0.45%), or a drug closet or general supply (0.77%) did not differ significantly. BCMA was associated with a higher proportion of dispensing and administration errors intercepted before reaching the patient (66.7%) compared with either manual double checks (10%) or no BCMA or double check (30.4%) of the medication before administration (p = 0.0167). Onsite pharmacist dispensing and BCMA were associated with fewer medication errors and are important components of a medication safety strategy in CAHs. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
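
    The dispensing-error comparison quoted above (0.10% for onsite pharmacist dispensing versus 0.44% for other methods, p = 0.0085) can be reproduced in form with SciPy's Fisher's exact test. The 2x2 counts below are invented placeholders chosen only to show the calculation; the abstract does not report the underlying table, so the output will not match the published figures exactly.

      from scipy.stats import fisher_exact

      # Hypothetical 2x2 table: rows = dispensing method, columns = [errors, no errors].
      onsite_pharmacist = [3, 2997]     # ~0.10% error rate (placeholder counts)
      other_methods     = [13, 2987]    # ~0.44% error rate (placeholder counts)

      odds_ratio, p_value = fisher_exact([onsite_pharmacist, other_methods])
      print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")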

  19. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    NASA Astrophysics Data System (ADS)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

    The dynamic mode decomposition (DMD), a popular method for performing data-driven Koopman spectral analysis, has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Oftentimes, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
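
    A compact NumPy sketch of the de-biasing idea described above: instead of projecting onto a subspace derived from the first snapshot matrix alone (standard DMD), the total-least-squares variant projects both snapshot matrices onto the leading right-singular subspace of the augmented matrix [X; Y] before forming the DMD operator, so noise in all snapshots is treated symmetrically. This is a simplified illustration of the two-stage description, not the authors' reference implementation; the toy data at the end are an assumption for demonstration.

      import numpy as np

      def tdmd_eigs(X, Y, r):
          """Total-least-squares (de-biased) DMD eigenvalues.
          X, Y : snapshot matrices (states x snapshots), Y is X advanced one step.
          r    : truncation rank of the projection subspace."""
          # Stage 1: subspace projection using the augmented snapshot matrix.
          Z = np.vstack([X, Y])
          _, _, Vt = np.linalg.svd(Z, full_matrices=False)
          P = Vt[:r].conj().T @ Vt[:r]        # projector onto leading right-singular subspace
          Xp, Yp = X @ P, Y @ P
          # Stage 2: standard DMD operator on the projected snapshots.
          U, s, Wt = np.linalg.svd(Xp, full_matrices=False)
          U, s, W = U[:, :r], s[:r], Wt[:r].conj().T
          Atilde = U.conj().T @ Yp @ W @ np.diag(1.0 / s)
          return np.linalg.eigvals(Atilde)

      # Tiny demo: noisy snapshots of a two-mode linear system (illustrative only).
      rng = np.random.default_rng(0)
      A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
      states = [np.array([1.0, 0.0])]
      for _ in range(60):
          states.append(A_true @ states[-1])
      D = np.array(states).T + 0.01 * rng.standard_normal((2, 61))
      print("TDMD eigenvalues:", tdmd_eigs(D[:, :-1], D[:, 1:], r=2))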

  20. PID-based error signal modeling

    NASA Astrophysics Data System (ADS)

    Yohannes, Tesfay

    1997-10-01

    This paper introduces a PID-based approach to error signal modeling. The error modeling is based on the betterment process. The resulting iterative learning algorithm is introduced, and a detailed proof is provided for both linear and nonlinear systems.
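
    Since the abstract is brief, the sketch below shows only the general shape of a PID-type iterative learning ("betterment") update, in which the input for the next trial is corrected using proportional, integral, and derivative terms of the current trial's tracking error. The gains and the toy first-order plant are illustrative assumptions, not taken from the paper.

      import numpy as np

      def pid_ilc_update(u, e, dt, kp=0.5, ki=0.1, kd=0.05):
          """One PID-type iterative learning update: u_{k+1}(t) = u_k(t) + correction(e_k)."""
          de = np.gradient(e, dt)          # derivative of the trial error
          ie = np.cumsum(e) * dt           # running integral of the trial error
          return u + kp * e + ki * ie + kd * de

      # Toy first-order plant y' = -y + u tracked over repeated trials (illustrative only).
      dt = 0.01
      t = np.arange(0.0, 2.0, dt)
      reference = np.sin(np.pi * t)
      u = np.zeros_like(t)
      for trial in range(20):
          y = np.zeros_like(t)
          for k in range(1, len(t)):       # simple Euler simulation of the plant
              y[k] = y[k - 1] + dt * (-y[k - 1] + u[k - 1])
          e = reference - y
          u = pid_ilc_update(u, e, dt)
      print(f"final max tracking error: {np.max(np.abs(e)):.3f}")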

  1. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
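
    A minimal sketch of the mode-counting idea described above: the local number of active dynamical modes can be estimated from the singular value spectrum of a sensitivity matrix by counting values above an error-controlled threshold. The sensitivity matrix below is random placeholder data standing in for values along a kinetic-model trajectory, and the relative-tolerance rule is an assumption for illustration.

      import numpy as np

      def active_mode_count(S, rel_tol=1e-3):
          """Estimate the number of locally active dynamical modes from the
          singular value decomposition of a sensitivity matrix S."""
          sigma = np.linalg.svd(S, compute_uv=False)
          return int(np.sum(sigma > rel_tol * sigma[0]))

      # Placeholder sensitivity matrix (species x parameters) with a rapidly
      # decaying spectrum, standing in for values from a detailed kinetic model.
      rng = np.random.default_rng(0)
      S = rng.standard_normal((10, 6)) * np.logspace(0, -8, 6)
      print("active modes:", active_mode_count(S))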

  2. Ground-based digital imagery for tree stem analysis

    Treesearch

    Neil Clark; Daniel L. Schmoldt; Randolph H. Wynne; Matthew F. Winn; Philip A. Araman

    2000-01-01

    In the USA, a subset of permanent forest sample plots within each geographic region are intensively measured to obtain estimates of tree volume and products. The detailed field measurements required for this type of sampling are both time consuming and error prone. We are attempting to reduce both of these factors with the aid of a commercially-available solid-state...

  3. Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields

    PubMed Central

    Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne

    2015-01-01

    Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
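
    For reference, holography formulations of this kind build on the Rayleigh integral of the first kind, which relates the radiated pressure to the normal velocity on the source plane. The expression below is the standard textbook form (its sign depends on the chosen time-harmonic convention) and is given only for orientation, not as a quotation from the paper.

      \[
      p(\mathbf{r}) \;=\; -\,\frac{i\,\omega\,\rho_0}{2\pi}\int_{S} v_n(\mathbf{r}')\,
      \frac{e^{\,ik|\mathbf{r}-\mathbf{r}'|}}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}S',
      \qquad k = \frac{\omega}{c}.
      \]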

  4. The NBS scale of radiance temperature

    NASA Technical Reports Server (NTRS)

    Waters, William R.; Walker, James H.; Hattenburg, Albert T.

    1988-01-01

    The measurement methods and instrumentation used in the realization and transfer of the International Practical Temperature Scale (IPTS-68) above the freezing temperature of gold are described. The determination of the ratios of spectral radiance of tungsten-strip lamps to a gold-point blackbody at a wavelength of 654.6 nm is detailed. The response linearity, spectral responsivity, scattering error, and polarization properties of the instrumentation are described. The analysis of the sources of error and estimates of uncertainty are presented. The assigned uncertainties (three standard deviations) in radiance temperature range from ±2 K at 2573 K to ±0.5 K at 1073 K.
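
    For orientation, the radiance-temperature determination above the gold point rests on the measured spectral radiance ratio at the working wavelength; schematically, with the second radiation constant c2 ≈ 1.4388 × 10⁻² m·K and the Planck form,

      \[
      \frac{L_\lambda(T)}{L_\lambda(T_{\mathrm{Au}})}
      \;=\; \frac{\exp\!\left(c_2/\lambda T_{\mathrm{Au}}\right)-1}
                 {\exp\!\left(c_2/\lambda T\right)-1},
      \qquad \lambda = 654.6\ \mathrm{nm},
      \]

    so a measured ratio yields the radiance temperature T once the gold-point temperature is fixed by the scale.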

  5. Online Error Reporting for Managing Quality Control Within Radiology.

    PubMed

    Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L

    2016-06-01

    Information technology systems within health care, such as the picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of direct communication of quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from the use of online error-reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month study period, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25%. Within the exam protocol category, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44%]), while imaging protocol errors were the highest error subtype for the computed tomography modality (35 errors [18%]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation error categories, respectively, for nearly all of the modalities. An error rate of less than 1% could signify a system with very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error reporting system could also affect the reporting rate.

  6. Integrating prior information into microwave tomography part 2: Impact of errors in prior information on microwave tomography image quality.

    PubMed

    Kurrant, Douglas; Fear, Elise; Baran, Anastasia; LoVetri, Joe

    2017-12-01

    The authors have developed a method to combine a patient-specific map of tissue structure and average dielectric properties with microwave tomography. The patient-specific map is acquired with radar-based techniques and serves as prior information for microwave tomography. The impact that the degree of structural detail included in this prior information has on image quality was reported in a previous investigation. The aim of the present study is to extend this previous work by identifying and quantifying the impact that errors in the prior information have on image quality, including the reconstruction of internal structures and lesions embedded in fibroglandular tissue. This study also extends the work of others reported in literature by emulating a clinical setting with a set of experiments that incorporate heterogeneity into both the breast interior and glandular region, as well as prior information related to both fat and glandular structures. Patient-specific structural information is acquired using radar-based methods that form a regional map of the breast. Errors are introduced to create a discrepancy in the geometry and electrical properties between the regional map and the model used to generate the data. This permits the impact that errors in the prior information have on image quality to be evaluated. Image quality is quantitatively assessed by measuring the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The study is conducted using both 2D and 3D numerical breast models constructed from MRI scans. The reconstruction results demonstrate robustness of the method relative to errors in the dielectric properties of the background regional map, and to misalignment errors. These errors do not significantly influence the reconstruction accuracy of the underlying structures, or the ability of the algorithm to reconstruct malignant tissue. Although misalignment errors do not significantly impact the quality of the reconstructed fat and glandular structures for the 3D scenarios, the dielectric properties are reconstructed less accurately within the glandular structure for these cases relative to the 2D cases. However, general agreement between the 2D and 3D results was found. A key contribution of this paper is the detailed analysis of the impact of prior information errors on the reconstruction accuracy and ability to detect tumors. The results support the utility of acquiring patient-specific information with radar-based techniques and incorporating this information into MWT. The method is robust to errors in the dielectric properties of the background regional map, and to misalignment errors. Completion of this analysis is an important step toward developing the method into a practical diagnostic tool. © 2017 American Association of Physicists in Medicine.

  7. Signal design study for shuttle/TDRSS Ku-band uplink

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The adequacy of the signal design approach chosen for the TDRSS/orbiter uplink was evaluated. Critical functions and/or components associated with the baseline design were identified, and design alternatives were developed for those areas considered high risk. A detailed set of RF and signal processing performance specifications for the orbiter hardware associated with the TDRSS/orbiter Ku-band uplink was analyzed. The performance of detailed designs of the PN despreader, the PSK carrier synchronization loop, and the symbol synchronizer is characterized. The performance of the downlink signal was studied by means of computer simulation to obtain a realistic determination of bit-error-rate degradation. The three-channel PM downlink signal was detailed by means of analysis and computer simulation.

  8. Respiratory monitoring system based on the nasal pressure technique for the analysis of sleep breathing disorders: Reduction of static and dynamic errors, and comparisons with thermistors and pneumotachographs

    NASA Astrophysics Data System (ADS)

    Alves de Mesquita, Jayme; Lopes de Melo, Pedro

    2004-03-01

    Thermally sensitive devices (thermistors) have usually been used to monitor sleep-breathing disorders. However, because of their long time constant, these devices are not able to provide a good characterization of fast events, like hypopneas. The nasal pressure recording technique (NPR) has recently been suggested to quantify airflow during sleep. It is claimed that the short time constants of the devices used to implement this technique would allow an accurate analysis of fast abnormal respiratory events. However, these devices present errors associated with nonlinearities and acoustic resonance that could reduce the diagnostic value of the NPR. Moreover, in spite of the high scientific and clinical potential, there is no detailed description of a complete instrumentation system to implement this promising technique in sleep studies. In this context, the purpose of this work was twofold: (1) describe the development of a flexible NPR device and (2) evaluate the performance of this device when compared to pneumotachographs (PNTs) and thermistors. After the design details are described, the system's static accuracy is evaluated by a comparative analysis with a PNT. This analysis revealed a significant reduction (p<0.001) of the static error when system nonlinearities were reduced. The dynamic performance of the NPR system was investigated by frequency response analysis and time constant evaluations, and the results showed that the developed device's response was as good as that of the PNT and around 100 times faster (τ = 5.3 ms) than that of thermistors (τ = 512 ms). Experimental results obtained in simulated clinical conditions and in a patient are presented as examples, and confirmed the good features achieved in engineering tests. These results are in close agreement with physiological fundamentals, supplying substantial evidence that the improved dynamic and static characteristics of this device can contribute to a more accurate implementation of medical research projects and to improved diagnosis of sleep-breathing disorders.
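
    To put the quoted time constants in perspective, if each sensor is approximated as a first-order system (an assumption, not a claim from the paper), the -3 dB bandwidth follows from the time constant as

      \[
      f_c = \frac{1}{2\pi\tau}
      \;\Rightarrow\;
      f_c \approx \frac{1}{2\pi\,(5.3\ \mathrm{ms})} \approx 30\ \mathrm{Hz},
      \qquad
      f_c \approx \frac{1}{2\pi\,(512\ \mathrm{ms})} \approx 0.3\ \mathrm{Hz},
      \]

    which illustrates why the faster NPR device can resolve brief events such as hypopneas that a thermistor smooths out.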

  9. Monitoring sleepiness with on-board electrophysiological recordings for preventing sleep-deprived traffic accidents.

    PubMed

    Papadelis, Christos; Chen, Zhe; Kourtidou-Papadeli, Chrysoula; Bamidis, Panagiotis D; Chouvarda, Ioanna; Bekiaris, Evangelos; Maglaveras, Nikos

    2007-09-01

    The objective of this study is the development and evaluation of efficient neurophysiological signal statistics, which may assess the driver's alertness level and serve as potential indicators of sleepiness in the design of an on-board countermeasure system. Multichannel EEG, EOG, EMG, and ECG were recorded from sleep-deprived subjects exposed to real field driving conditions. A number of severe driving errors occurred during the experiments. The analysis was performed in two main dimensions: the macroscopic analysis that estimates the ongoing temporal evolution of physiological measurements during the driving task, and the microscopic event analysis that focuses on the physiological measurements' alterations just before, during, and after the driving errors. Two independent neurophysiologists visually interpreted the measurements. The EEG data were analyzed by using both linear and non-linear analysis tools. We observed the occurrence of brief paroxysmal bursts of alpha activity and an increased synchrony among EEG channels before the driving errors. The alpha relative band ratio (RBR) significantly increased, and the Cross Approximate Entropy that quantifies the synchrony among channels also significantly decreased before the driving errors. Quantitative EEG analysis revealed significant variations of RBR by driving time in the frequency bands of delta, alpha, beta, and gamma. Most of the estimated EEG statistics, such as the Shannon Entropy, Kullback-Leibler Entropy, Coherence, and Cross-Approximate Entropy, were significantly affected by driving time. We also observed an alteration of eye-blink duration with increased driving time and a significant increase in the number and duration of eye blinks before driving errors. EEG and EOG are promising neurophysiological indicators of driver sleepiness and have the potential to monitor sleepiness in occupational settings when incorporated in a sleepiness countermeasure device. The occurrence of brief paroxysmal bursts of alpha activity before severe driving errors is described in detail for the first time. Clear evidence is presented that eye-blinking statistics are sensitive to the driver's sleepiness and should be considered in the design of an efficient and driver-friendly sleepiness detection countermeasure device.
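
    As an illustration of the alpha relative band ratio (RBR) statistic discussed above, the sketch below estimates band power from a Welch periodogram and divides the alpha-band power by the total power over a broad analysis band. The band limits, the synthetic single-channel signal, and the sampling rate are assumptions for demonstration, not the study's processing pipeline.

      import numpy as np
      from scipy.signal import welch

      def relative_band_ratio(x, fs, band=(8.0, 13.0), total=(0.5, 45.0)):
          """Relative band ratio: power in `band` divided by power in `total`."""
          f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
          in_band = (f >= band[0]) & (f <= band[1])
          in_total = (f >= total[0]) & (f <= total[1])
          return np.trapz(pxx[in_band], f[in_band]) / np.trapz(pxx[in_total], f[in_total])

      # Synthetic one-channel "EEG": a 10 Hz alpha component plus broadband noise.
      fs = 256.0
      t = np.arange(0, 30, 1 / fs)
      x = 0.8 * np.sin(2 * np.pi * 10 * t) + np.random.default_rng(1).standard_normal(t.size)
      print(f"alpha RBR = {relative_band_ratio(x, fs):.2f}")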

  10. Efficient Reduction and Analysis of Model Predictive Error

    NASA Astrophysics Data System (ADS)

    Doherty, J.

    2006-12-01

    Most groundwater models are calibrated against historical measurements of head and other system states before being used to make predictions in a real-world context. Through the calibration process, parameter values are estimated or refined such that the model is able to reproduce historical behaviour of the system at pertinent observation points reasonably well. Predictions made by the model are deemed to have greater integrity because of this. Unfortunately, predictive integrity is not as easy to achieve as many groundwater practitioners would like to think. The level of parameterisation detail estimable through the calibration process (especially where estimation takes place on the basis of heads alone) is strictly limited, even where full use is made of modern mathematical regularisation techniques such as those encapsulated in the PEST calibration package. (Use of these mechanisms allows more information to be extracted from a calibration dataset than is possible using simpler regularisation devices such as zones of piecewise constancy.) Where a prediction depends on aspects of parameterisation detail that are simply not inferable through the calibration process (which is often the case for predictions related to contaminant movement, and/or many aspects of groundwater/surface water interaction), then that prediction may be just as much in error as it would have been if the model had not been calibrated at all. Model predictive error arises from two sources. These are (a) the presence of measurement noise within the calibration dataset through which linear combinations of parameters spanning the "calibration solution space" are inferred, and (b) the sensitivity of the prediction to members of the "calibration null space" spanned by linear combinations of parameters which are not inferable through the calibration process. The magnitude of the former contribution depends on the level of measurement noise. The magnitude of the latter contribution (which often dominates the former) depends on the "innate variability" of hydraulic properties within the model domain. Knowledge of both of these is a prerequisite for characterisation of the magnitude of possible model predictive error. Unfortunately, in most cases, such knowledge is incomplete and subjective. Nevertheless, useful analysis of model predictive error can still take place. The present paper briefly discusses the means by which mathematical regularisation can be employed in the model calibration process in order to extract as much information as possible on hydraulic property heterogeneity prevailing within the model domain, thereby reducing predictive error to the lowest that can be achieved on the basis of that dataset. It then demonstrates the means by which predictive error variance can be quantified based on information supplied by the regularised inversion process. Both linear and nonlinear predictive error variance analysis is demonstrated using a number of real-world and synthetic examples.
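
    Schematically, for a linearised model the predictive error variance separates into the two contributions described above: a calibration null-space term governed by the innate variability of hydraulic properties, and a solution-space term governed by measurement noise. In one common notation for regularised inversion (prediction sensitivity vector y, resolution matrix R, regularised inverse operator G, parameter covariance C(p), noise covariance C(ε)), this can be written as the form below; it is a generic linear expression given for orientation, not a formula quoted from the abstract.

      \[
      \sigma^{2}_{s-\hat{s}} \;=\;
      \underbrace{\mathbf{y}^{T}\,(\mathbf{I}-\mathbf{R})\,C(\mathbf{p})\,(\mathbf{I}-\mathbf{R})^{T}\,\mathbf{y}}_{\text{calibration null space}}
      \;+\;
      \underbrace{\mathbf{y}^{T}\,\mathbf{G}\,C(\boldsymbol{\varepsilon})\,\mathbf{G}^{T}\,\mathbf{y}}_{\text{measurement noise}}.
      \]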

  11. Analysis and modeling of leakage current sensor under pulsating direct current

    NASA Astrophysics Data System (ADS)

    Li, Kui; Dai, Yihua; Wang, Yao; Niu, Feng; Chen, Zhao; Huang, Shaopo

    2017-05-01

    In this paper, the transformation characteristics of a current sensor under pulsating DC leakage current are investigated. A mathematical model of the current sensor is proposed to accurately describe the secondary-side current and the excitation current. The transformation process of the current sensor is illustrated in detail, and the transformation error is analyzed from multiple aspects. A simulation model is built and a sensor prototype is designed for comparative evaluation, and both simulation and experimental results are presented to verify the correctness of the theoretical analysis.

  12. Satellite Test of Radiation Impact on Ramtron 512K FRAM

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Sayyah, Rana; Sims, W. Herb; Varnavas, Kosta A.; Ho, Fat D.

    2009-01-01

    The Memory Test Experiment is a space test of a ferroelectric memory device on a low Earth orbit satellite. The test consists of writing and reading data with a ferroelectric-based memory device. Any errors are detected and stored on board the satellite. The data are sent to the ground through telemetry once a day. Analysis of the data can determine the kind of error that was found and will lead to a better understanding of the effects of space radiation on memory systems. The test will be one of the first flight demonstrations of ferroelectric memory in a near-polar orbit, which allows testing in a varied radiation environment. The device being tested is a Ramtron Inc. 512K memory device. This paper details the goals and purpose of this experiment as well as the development process. The process for analyzing the data to gain the maximum understanding of the performance of the ferroelectric memory device is also detailed.

  13. Improved method for implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, F. B.; Martin, W. R.

    2001-01-01

    The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed in Reference [3], two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.

  14. ISMP Medication Error Report Analysis: Understanding Human Over-reliance on Technology It's Exelan, Not Exelon Crash Cart Drug Mix-up Risk with Entering a "Test Order".

    PubMed

    Cohen, Michael R; Smetzer, Judy L

    2017-01-01

    These medication errors have occurred in health care facilities at least once. They will happen again, perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  15. Medical error and systems of signaling: conceptual and linguistic definition.

    PubMed

    Smorti, Andrea; Cappelli, Francesco; Zarantonello, Roberta; Tani, Franca; Gensini, Gian Franco

    2014-09-01

    In recent years the issue of patient safety has been the subject of detailed investigations, particularly as a result of the increasing attention from the patients and the public on the problem of medical error. The purpose of this work is firstly to define the classification of medical errors, which are distinguished between two perspectives: those that are personal, and those that are caused by the system. Furthermore we will briefly review some of the main methods used by healthcare organizations to identify and analyze errors. During this discussion it has been determined that, in order to constitute a practical, coordinated and shared action to counteract the error, it is necessary to promote an analysis that considers all elements (human, technological and organizational) that contribute to the occurrence of a critical event. Therefore, it is essential to create a culture of constructive confrontation that encourages an open and non-punitive debate about the causes that led to error. In conclusion we have thus underlined that in health it is essential to affirm a system discussion that considers the error as a learning source, and as a result of the interaction between the individual and the organization. In this way, one should encourage a non-guilt bearing discussion on evident errors and on those which are not immediately identifiable, in order to create the conditions that recognize and corrects the error even before it produces negative consequences.

  16. Thyroid cancer following scalp irradiation: a reanalysis accounting for uncertainty in dosimetry.

    PubMed

    Schafer, D W; Lubin, J H; Ron, E; Stovall, M; Carroll, R J

    2001-09-01

    In the 1940s and 1950s, over 20,000 children in Israel were treated for tinea capitis (scalp ringworm) by irradiation to induce epilation. Follow-up studies showed that the radiation exposure was associated with the development of malignant thyroid neoplasms. Despite this clear evidence of an effect, the magnitude of the dose-response relationship is much less clear because of probable errors in individual estimates of dose to the thyroid gland. Such errors have the potential to bias dose-response estimation, a potential that was not widely appreciated at the time of the original analyses. We revisit this issue, describing in detail how errors in dosimetry might occur, and we develop a new dose-response model that takes the uncertainties of the dosimetry into account. Our model for the uncertainty in dosimetry is a complex and new variant of the classical multiplicative Berkson error model, having components of classical multiplicative measurement error as well as missing data. Analysis of the tinea capitis data suggests that measurement error in the dosimetry has only a negligible effect on dose-response estimation and inference as well as on the modifying effect of age at exposure.
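
    For readers unfamiliar with the terminology, a multiplicative Berkson error structure takes the true dose to be the assigned dose perturbed by a multiplicative error that is independent of the assignment; schematically,

      \[
      X_{\text{true}} \;=\; Z_{\text{assigned}} \times U,
      \qquad \mathbb{E}\!\left[U \mid Z_{\text{assigned}}\right] = 1,
      \]

    so the assigned dose is unbiased for the true dose on average. The paper's variant, as stated above, additionally involves classical multiplicative error components and missing data; the expression here is only the textbook building block, not the authors' full model.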

  17. Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System

    PubMed Central

    Yu, Fei; Sun, Qian

    2014-01-01

    Owing to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key design parameters, the rotating angular rate strongly influences the effectiveness of error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper based on the Laplace transform and the inverse Laplace transform. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115

  18. Distorting the Historical Record: One Detailed Example from the Albert Shanker Institute's Report

    ERIC Educational Resources Information Center

    American Educator, 2012

    2012-01-01

    This article presents a detailed example from the Albert Shanker Institute's report that shows the error of U.S. history textbooks and how it is distorting the historical record. One of the most glaring errors in textbooks is the treatment of the role that unions and labor activists played as key participants in the civil rights movement. The…

  19. Obstetric Neuraxial Drug Administration Errors: A Quantitative and Qualitative Analytical Review.

    PubMed

    Patel, Santosh; Loveridge, Robert

    2015-12-01

    Drug administration errors in obstetric neuraxial anesthesia can have devastating consequences. Although fully recognizing that they represent "only the tip of the iceberg," published case reports/series of these errors were reviewed in detail with the aim of estimating the frequency and the nature of these errors. We identified case reports and case series from MEDLINE and performed a quantitative analysis of the involved drugs, error setting, source of error, the observed complications, and any therapeutic interventions. We subsequently performed a qualitative analysis of the human factors involved and proposed modifications to practice. Twenty-nine cases were identified. Various drugs were given in error, but no direct effects on the course of labor, mode of delivery, or neonatal outcome were reported. Four maternal deaths from the accidental intrathecal administration of tranexamic acid were reported, all occurring after delivery of the fetus. A range of hemodynamic and neurologic signs and symptoms were noted, but the most commonly reported complication was the failure of the intended neuraxial anesthetic technique. Several human factors were present; most common factors were drug storage issues and similar drug appearance. Four practice recommendations were identified as being likely to have prevented the errors. The reported errors exposed latent conditions within health care systems. We suggest that the implementation of the following processes may decrease the risk of these types of drug errors: (1) Careful reading of the label on any drug ampule or syringe before the drug is drawn up or injected; (2) labeling all syringes; (3) checking labels with a second person or a device (such as a barcode reader linked to a computer) before the drug is drawn up or administered; and (4) use of non-Luer lock connectors on all epidural/spinal/combined spinal-epidural devices. Further study is required to determine whether routine use of these processes will reduce drug error.

  20. Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms

    NASA Astrophysics Data System (ADS)

    Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.

    2017-08-01

    Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure meridional flow is very high, small systematic errors have a noticeable impact on measured meridional flow profiles. Additionally, the uncertainties of this kind of measurement have been historically underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements which are unreliable near the solar limb. Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to remove these systematic errors and in the process improve the accuracy of the measurements by an order of magnitude. We also present a detailed analysis of uncertainties in these measurements using synthetic magnetograms and the quantification of an upper limit below which meridional flow measurements cannot be trusted as a function of latitude.

  1. Analysis of GRACE Range-rate Residuals with Emphasis on Reprocessed Star-Camera Datasets

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.; Naeimi, M.; Bandikova, T.; Guerr, T. M.; Klinger, B.

    2015-12-01

    Since March 2002 the two GRACE satellites have orbited the Earth at relatively low altitude. Determination of the gravity field of the Earth, including its temporal variations, from the satellites' orbits and the inter-satellite measurements is the goal of the mission. Yet the time-variable gravity signal has not been fully exploited. This can be seen in the computed post-fit range-rate residuals. The errors reflected in the range-rate residuals are due to different sources, such as systematic errors, mismodelling errors, and tone errors. Here, we analyse the effect of three different star-camera data sets on the post-fit range-rate residuals. On the one hand, we consider the available attitude data; on the other hand, we take the two data sets that have been reprocessed at the Institute of Geodesy, Hannover, and the Institute of Theoretical Geodesy and Satellite Geodesy, TU Graz, Austria, respectively. The differences in the range-rate residuals computed from the different attitude datasets are then analyzed in this study. Details will be given and results will be discussed.

  2. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics, such as the highest and lowest voltage occurring on the feeder, number of voltage regulator tap operations, and total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.

  3. Alignment error envelopes for single particle analysis.

    PubMed

    Jensen, G J

    2001-01-01

    To determine the structure of a biological particle to high resolution by electron microscopy, image averaging is required to combine information from different views and to increase the signal-to-noise ratio. Starting from the number of noiseless views necessary to resolve features of a given size, four general factors are considered that increase the number of images actually needed: (1) the physics of electron scattering introduces shot noise, (2) thermal motion and particle inhomogeneity cause the scattered electrons to describe a mixture of structures, (3) the microscope system fails to usefully record all the information carried by the scattered electrons, and (4) image misalignment leads to information loss through incoherent averaging. The compound effect of factors 2-4 is approximated by the product of envelope functions. The problem of incoherent image averaging is developed in detail through derivation of five envelope functions that account for small errors in 11 "alignment" parameters describing particle location, orientation, defocus, magnification, and beam tilt. The analysis provides target error tolerances for single particle analysis to near-atomic (3.5 A) resolution, and this prospect is shown to depend critically on image quality, defocus determination, and microscope alignment. Copyright 2001 Academic Press.

  4. Integrating Six Sigma with total quality management: a case example for measuring medication errors.

    PubMed

    Revere, Lee; Black, Ken

    2003-01-01

    Six Sigma is a new management philosophy that seeks a nonexistent error rate. It is ripe for healthcare because many healthcare processes require a near-zero tolerance for mistakes. For most organizations, establishing a Six Sigma program requires significant resources and produces considerable stress. However, in healthcare, management can piggyback Six Sigma onto current total quality management (TQM) efforts so that minimal disruption occurs in the organization. Six Sigma is an extension of the Failure Mode and Effects Analysis that is required by JCAHO; it can easily be integrated into existing quality management efforts. Integrating Six Sigma into the existing TQM program facilitates process improvement through detailed data analysis. A drilled-down approach to root-cause analysis greatly enhances the existing TQM approach. Using the Six Sigma metrics, internal project comparisons facilitate resource allocation while external project comparisons allow for benchmarking. Thus, the application of Six Sigma makes TQM efforts more successful. This article presents a framework for including Six Sigma in an organization's TQM plan while providing a concrete example using medication errors. Using the process defined in this article, healthcare executives can integrate Six Sigma into all of their TQM projects.
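
    As a worked illustration of the Six Sigma metric discussed above, a defect rate can be converted into defects per million opportunities (DPMO) and an approximate sigma level using the conventional 1.5-sigma long-term shift. The medication-error counts below are a made-up example to show the arithmetic, not data from the article.

      from scipy.stats import norm

      def sigma_level(defects, opportunities, shift=1.5):
          """Approximate Six Sigma level from a defect count (1.5-sigma shift convention)."""
          dpmo = 1e6 * defects / opportunities
          return dpmo, norm.ppf(1 - dpmo / 1e6) + shift

      # Hypothetical example: 12 medication errors in 25,000 dispensed doses.
      dpmo, sigma = sigma_level(12, 25_000)
      print(f"DPMO = {dpmo:.0f}, sigma level ~ {sigma:.2f}")   # roughly 480 DPMO, ~4.8 sigma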

  5. Design and Error Analysis of a Vehicular AR System with Auto-Harmonization.

    PubMed

    Foxlin, Eric; Calloway, Thomas; Zhang, Hongsheng

    2015-12-01

    This paper describes the design, development and testing of an AR system that was developed for aerospace and ground vehicles to meet stringent accuracy and robustness requirements. The system uses an optical see-through HMD, and thus requires extremely low latency, high tracking accuracy and precision alignment and calibration of all subsystems in order to avoid mis-registration and "swim". The paper focuses on the optical/inertial hybrid tracking system and describes novel solutions to the challenges with the optics, algorithms, synchronization, and alignment with the vehicle and HMD systems. Tracker accuracy is presented with simulation results to predict the registration accuracy. A car test is used to create a through-the-eyepiece video demonstrating well-registered augmentations of the road and nearby structures while driving. Finally, a detailed covariance analysis of AR registration error is derived.

  6. AGILE: Autonomous Global Integrated Language Exploitation

    DTIC Science & Technology

    2009-12-01

    combination, including METEOR-based alignment (with stemming and WordNet synonym matching) and GIZA ++ based alignment. So far, we have not seen any...parse trees and a detailed analysis of how function words operate in translation. This program lets us fix alignment errors that systems like GIZA ...correlates better with Pyramid than with Responsiveness scoring (i.e., it is a more precise, careful, measure) • BE generally outperforms ROUGE

  7. The Nature of the Nodes, Weights and Degree of Precision in Gaussian Quadrature Rules

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2011-01-01

    We present a comprehensive proof of the theorem that relates the weights and nodes of a Gaussian quadrature rule to its degree of precision. This level of detail is often absent in modern texts on numerical analysis. We show that the degree of precision is maximal, and that the approximation error in Gaussian quadrature is minimal, in a…
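
    A quick numerical check of the theorem discussed in the article: an n-point Gauss-Legendre rule integrates polynomials up to degree 2n-1 exactly on [-1, 1], and an error first appears at degree 2n. The snippet below uses NumPy's Legendre nodes and weights to verify this for n = 3; it is a supplementary illustration, not material from the article itself.

      import numpy as np

      n = 3
      nodes, weights = np.polynomial.legendre.leggauss(n)   # 3-point Gauss-Legendre rule

      def quad(f):
          return float(np.sum(weights * f(nodes)))

      for degree in range(2 * n + 1):                        # degrees 0..2n
          exact = 0.0 if degree % 2 else 2.0 / (degree + 1)  # integral of x^degree over [-1, 1]
          approx = quad(lambda x: x ** degree)
          print(f"degree {degree}: error = {abs(approx - exact):.2e}")
      # Errors are at machine precision up to degree 2n-1 = 5 and nonzero at degree 2n = 6.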

  8. Refractive Errors and Concomitant Strabismus: A Systematic Review and Meta-analysis.

    PubMed

    Tang, Shu Min; Chan, Rachel Y T; Bin Lin, Shi; Rong, Shi Song; Lau, Henry H W; Lau, Winnie W Y; Yip, Wilson W K; Chen, Li Jia; Ko, Simon T C; Yam, Jason C S

    2016-10-12

    This systematic review and meta-analysis is to evaluate the risk of development of concomitant strabismus due to refractive errors. Eligible studies published from 1946 to April 1, 2016 were identified from MEDLINE and EMBASE that evaluated any kinds of refractive errors (myopia, hyperopia, astigmatism and anisometropia) as an independent factor for concomitant exotropia and concomitant esotropia. Totally 5065 published records were retrieved for screening, 157 of them eligible for detailed evaluation. Finally 7 population-based studies involving 23,541 study subjects met our criteria for meta-analysis. The combined OR showed that myopia was a risk factor for exotropia (OR: 5.23, P = 0.0001). We found hyperopia had a dose-related effect for esotropia (OR for a spherical equivalent [SE] of 2-3 diopters [D]: 10.16, P = 0.01; OR for an SE of 3-4D: 17.83, P < 0.0001; OR for an SE of 4-5D: 41.01, P < 0.0001; OR for an SE of ≥5D: 162.68, P < 0.0001). Sensitivity analysis indicated our results were robust. Results of this study confirmed myopia as a risk for concomitant exotropia and identified a dose-related effect for hyperopia as a risk of concomitant esotropia.

  9. Refractive Errors and Concomitant Strabismus: A Systematic Review and Meta-analysis

    PubMed Central

    Tang, Shu Min; Chan, Rachel Y. T.; Bin Lin, Shi; Rong, Shi Song; Lau, Henry H. W.; Lau, Winnie W. Y.; Yip, Wilson W. K.; Chen, Li Jia; Ko, Simon T. C.; Yam, Jason C. S.

    2016-01-01

    This systematic review and meta-analysis is to evaluate the risk of development of concomitant strabismus due to refractive errors. Eligible studies published from 1946 to April 1, 2016 were identified from MEDLINE and EMBASE that evaluated any kinds of refractive errors (myopia, hyperopia, astigmatism and anisometropia) as an independent factor for concomitant exotropia and concomitant esotropia. Totally 5065 published records were retrieved for screening, 157 of them eligible for detailed evaluation. Finally 7 population-based studies involving 23,541 study subjects met our criteria for meta-analysis. The combined OR showed that myopia was a risk factor for exotropia (OR: 5.23, P = 0.0001). We found hyperopia had a dose-related effect for esotropia (OR for a spherical equivalent [SE] of 2–3 diopters [D]: 10.16, P = 0.01; OR for an SE of 3-4D: 17.83, P < 0.0001; OR for an SE of 4-5D: 41.01, P < 0.0001; OR for an SE of ≥5D: 162.68, P < 0.0001). Sensitivity analysis indicated our results were robust. Results of this study confirmed myopia as a risk for concomitant exotropia and identified a dose-related effect for hyperopia as a risk of concomitant esotropia. PMID:27731389

  10. A passive microwave technique for estimating rainfall and vertical structure information from space. Part 1: Algorithm description

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Giglio, Louis

    1994-01-01

    This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper will focus primarily on the theoretical aspects of the retrieval algorithm, which include a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as the detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.

  11. A survey of computational methods and error rate estimation procedures for peptide and protein identification in shotgun proteomics

    PubMed Central

    Nesvizhskii, Alexey I.

    2010-01-01

    This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide-to-spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from the peptide to the protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
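
    One of the global error-rate procedures surveyed above, the target-decoy false discovery rate estimate, can be sketched in a few lines: peptide-spectrum matches are scored against target and decoy databases, and the FDR at a score threshold is estimated from the number of decoy hits above it. The scores below are synthetic, and real pipelines add refinements (q-values, decoy-size corrections) not shown here.

      import numpy as np

      def target_decoy_fdr(scores, is_decoy, threshold):
          """Estimate FDR at a score threshold as (# decoy hits) / (# target hits)."""
          above = scores >= threshold
          decoys = int(np.sum(above & is_decoy))
          targets = int(np.sum(above & ~is_decoy))
          return decoys / max(targets, 1)

      # Synthetic peptide-spectrum match scores: targets score higher on average.
      rng = np.random.default_rng(42)
      scores = np.concatenate([rng.normal(3.0, 1.0, 900), rng.normal(1.0, 1.0, 900)])
      is_decoy = np.concatenate([np.zeros(900, bool), np.ones(900, bool)])
      print(f"estimated FDR at score >= 3.0: {target_decoy_fdr(scores, is_decoy, 3.0):.3f}")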

  12. Basic research for the geodynamics program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The mathematical models of space very long base interferometry (VLBI) observables suitable for least squares covariance analysis were derived and estimatability problems inherent in the space VLBI system were explored, including a detailed rank defect analysis and sensitivity analysis. An important aim is to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed in order to check the relations, assess errors, and analyze sensitivity. In order to investigate the estimatability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for time delay and time delay rate observables of space VLBI were analytically derived along with the partial derivatives with respect to the parameters. Rank defect analysis was carried out both by analytical and numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were formed about the rank defects in the system.
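
    A small sketch of the numerical side of the rank defect analysis described above: the rank defect of a normal matrix N = AᵀA can be detected by counting near-zero singular values, which reveal linearly dependent combinations of the estimated parameters. The design matrix here is a toy example (one column is the sum of two others), not the space VLBI observation model itself.

      import numpy as np

      def rank_defect(N, tol=1e-10):
          """Number of (near-)zero singular values of a normal matrix N."""
          sigma = np.linalg.svd(N, compute_uv=False)
          return int(np.sum(sigma < tol * sigma[0]))

      # Toy design matrix with 4 parameters; column 3 = column 0 + column 1,
      # mimicking a linear dependency between estimated parameters.
      rng = np.random.default_rng(7)
      A = rng.standard_normal((50, 4))
      A[:, 3] = A[:, 0] + A[:, 1]
      N = A.T @ A
      print("rank defect:", rank_defect(N))   # expect 1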

  13. High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis

    USGS Publications Warehouse

    Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher

    2015-01-01

    Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time-consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m² exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.

  14. Measuring Pressure Volume Loops in the Mouse.

    PubMed

    Townsend, DeWayne

    2016-05-02

    Understanding the causes and progression of heart disease presents a significant challenge to the biomedical community. The genetic flexibility of the mouse provides great potential to explore cardiac function at the molecular level. The mouse's small size does present some challenges in regards to performing detailed cardiac phenotyping. Miniaturization and other advancements in technology have made many methods of cardiac assessment possible in the mouse. Of these, the simultaneous collection of pressure and volume data provides a detailed picture of cardiac function that is not available through any other modality. Here a detailed procedure for the collection of pressure-volume loop data is described. Included is a discussion of the principles underlying the measurements and the potential sources of error. Anesthetic management and surgical approaches are discussed in great detail as they are both critical to obtaining high quality hemodynamic measurements. The principles of hemodynamic protocol development and relevant aspects of data analysis are also addressed.

  15. Incident reporting: Its role in aviation safety and the acquisition of human error data

    NASA Technical Reports Server (NTRS)

    Reynard, W. D.

    1983-01-01

    The rationale for aviation incident reporting systems is presented and contrasted to some of the shortcomings of accident investigation procedures. The history of the United States' Aviation Safety Reporting System (ASRS) is outlined and the program's character explained. The planning elements that resulted in the ASRS program's voluntary, confidential, and non-punitive design are discussed. Immunity from enforcement action and from misuse of the volunteered data is explained and evaluated. Report generation techniques and the ASRS data analysis process are described; in addition, examples of the ASRS program's output and accomplishments are detailed. Finally, the value of incident reporting for the acquisition of safety information, particularly human error data, is explored.

  16. Unavoidable electric current caused by inhomogeneities and its influence on measured material parameters of thermoelectric materials

    NASA Astrophysics Data System (ADS)

    Song, K.; Song, H. P.; Gao, C. F.

    2018-03-01

    It is well known that the key factor determining the performance of thermoelectric materials is the figure of merit, which depends on the thermal conductivity (TC), electrical conductivity, and Seebeck coefficient (SC). The electric current must be zero when measuring the TC and SC to avoid the occurrence of measurement errors. In this study, the complex-variable method is used to analyze the thermoelectric field near an elliptic inhomogeneity in an open circuit, and the field distributions are obtained in closed form. Our analysis shows that an electric current inevitably exists in both the matrix and the inhomogeneity even though the circuit is open. This unexpected electric current seriously affects the accuracy with which the TC and SC are measured. These measurement errors, both overall and local, are analyzed in detail. In addition, an error correction method is proposed based on the analytical results.
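
    For reference, the dimensionless figure of merit mentioned above is conventionally defined (a standard textbook relation, not taken from this abstract) as

        ZT = S²σT / κ,

    where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature; a bias in the measured S or κ therefore propagates directly into the inferred ZT.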

  17. A Medication Safety Model: A Case Study in Thai Hospital

    PubMed Central

    Rattanarojsakul, Phichai; Thawesaengskulthai, Natcha

    2013-01-01

    Reaching zero defects is vital in medication service. Medication error can be reduced if the causes are recognized. The purpose of this study is to search for a conceptual framework of the causes of medication error in Thailand and to examine the relationship between these factors and their importance. The study was carried out based on an in-depth case study and a survey of hospital personnel who were involved in the drug use process. The structured survey was based on Emergency Care Research Institute (ECRI) (2008) questionnaires focusing on the important factors that affect medication safety. Additional questionnaires included content specific to the context of Thailand's private hospitals, validated by five qualified hospital experts. Pearson correlation analysis revealed 14 important factors showing a linear relationship with drug administration error, with the exception of medication reconciliation. By independent-sample t-test, the administration error in the hospital was significantly related to external impact. Multiple regression analysis of the details of medication administration also indicated that patient identification before administration of medication, detection of the risk of medication adverse effects, and assurance of medication administration at the right time, dosage and route were statistically significant at the 0.05 level. The major implication of the study is to propose a medication safety model in a Thai private hospital. PMID:23985110

  18. A medication safety model: a case study in Thai hospital.

    PubMed

    Rattanarojsakul, Phichai; Thawesaengskulthai, Natcha

    2013-06-12

    Reaching zero defects is vital in medication service. Medication error can be reduced if the causes are recognized. The purpose of this study is to search for a conceptual framework of the causes of medication error in Thailand and to examine the relationship between these factors and their importance. The study was carried out based on an in-depth case study and a survey of hospital personnel who were involved in the drug use process. The structured survey was based on Emergency Care Research Institute (ECRI) (2008) questionnaires focusing on the important factors that affect medication safety. Additional questionnaires included content specific to the context of Thailand's private hospitals, validated by five qualified hospital experts. Pearson correlation analysis revealed 14 important factors showing a linear relationship with drug administration error, with the exception of medication reconciliation. By independent-sample t-test, the administration error in the hospital was significantly related to external impact. Multiple regression analysis of the details of medication administration also indicated that patient identification before administration of medication, detection of the risk of medication adverse effects, and assurance of medication administration at the right time, dosage and route were statistically significant at the 0.05 level. The major implication of the study is to propose a medication safety model in a Thai private hospital.

  19. Analysis of Learning Curve Fitting Techniques.

    DTIC Science & Technology

    1987-09-01

    1986. 15. Neter, John and others. Applied Linear Regression Models. Homewood IL: Irwin, 19-33. 16. SAS User's Guide: Basics, Version 5 Edition. SAS... Linear Regression Techniques (15:23-52). Random errors are assumed to be normally distributed when using ordinary least-squares, according to Johnston...lot estimated by the improvement curve formula. For a more detailed explanation of the ordinary least-squares technique, see Neter, et al., Applied
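
    As a concrete illustration of the ordinary least-squares approach this report discusses, the classic log-linear learning (improvement) curve cost = a·units^b can be fitted by linear regression on logarithms. The sketch below uses made-up unit-cost data and is one plausible reading, not the report's actual procedure:

        import numpy as np

        # Hypothetical unit costs for successive cumulative quantities
        units = np.array([1, 2, 4, 8, 16, 32], dtype=float)
        costs = np.array([100.0, 81.0, 66.0, 53.5, 43.0, 35.0])

        # Learning curve model: cost = a * units**b  ->  log(cost) = log(a) + b*log(units)
        X = np.column_stack([np.ones_like(units), np.log(units)])
        coef, *_ = np.linalg.lstsq(X, np.log(costs), rcond=None)
        a, b = np.exp(coef[0]), coef[1]

        slope = 2.0 ** b   # cost ratio for each doubling of quantity ("learning rate")
        print(f"a = {a:.1f}, b = {b:.3f}, learning rate = {slope:.2%}")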

  20. Understanding the nature of errors in nursing: using a model to analyse critical incident reports of errors which had resulted in an adverse or potentially adverse event.

    PubMed

    Meurier, C E

    2000-07-01

    Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.

  1. Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?

    PubMed Central

    Coene, Martine; van der Lee, Anneke; Govaerts, Paul J.

    2015-01-01

    This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient's hearing impairment, to predict a patient's gain in hearing performance with hearing devices and to optimize the device settings in view of maximum output, the observed findings are highly relevant for the audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination. PMID:26557717
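
    A consonant confusion matrix of the kind analyzed here is simply a tally of stimulus-response pairs. A minimal sketch, with invented phoneme labels rather than the study's Dutch test items, is:

        from collections import Counter

        # Hypothetical (stimulus consonant, response consonant) pairs from a
        # word-repetition task; identical pairs are correct, others are confusions.
        pairs = [("b", "b"), ("b", "p"), ("d", "d"), ("d", "t"),
                 ("s", "s"), ("s", "f"), ("b", "b"), ("d", "d")]

        counts = Counter(pairs)
        consonants = sorted({s for s, _ in pairs} | {r for _, r in pairs})

        # Print a simple confusion matrix: rows = stimuli, columns = responses.
        print("     " + " ".join(f"{c:>3}" for c in consonants))
        for s in consonants:
            row = " ".join(f"{counts[(s, r)]:>3}" for r in consonants)
            print(f"{s:>4} {row}")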

  2. A comparison between different error modeling of MEMS applied to GPS/INS integrated systems.

    PubMed

    Quinchia, Alex G; Falco, Gianluca; Falletti, Emanuela; Dovis, Fabio; Ferrer, Carles

    2013-07-24

    Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small dimension accelerometers and gyroscopes, which are being used in many applications where the global positioning system (GPS) and the inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the most used techniques currently to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination between these techniques. Eventually, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways.
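
    One of the techniques compared in this work, the Allan variance, can be computed from a static sensor record in a few lines. The following sketch uses the non-overlapping form on synthetic gyro noise; it illustrates the idea only and is not the authors' implementation:

        import numpy as np

        def allan_variance(x, fs, m):
            """Non-overlapping Allan variance of signal x for a cluster size of m samples."""
            n_clusters = len(x) // m
            means = x[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
            tau = m / fs
            avar = 0.5 * np.mean(np.diff(means) ** 2)
            return tau, avar

        fs = 100.0                                   # sample rate (Hz)
        rng = np.random.default_rng(0)
        gyro = 0.02 * rng.standard_normal(100_000)   # synthetic white-noise gyro record (deg/s)

        for m in (10, 100, 1000):
            tau, avar = allan_variance(gyro, fs, m)
            print(f"tau = {tau:6.2f} s, Allan deviation = {np.sqrt(avar):.5f} deg/s")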

  3. A Comparison between Different Error Modeling of MEMS Applied to GPS/INS Integrated Systems

    PubMed Central

    Quinchia, Alex G.; Falco, Gianluca; Falletti, Emanuela; Dovis, Fabio; Ferrer, Carles

    2013-01-01

    Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small dimension accelerometers and gyroscopes, which are being used in many applications where the global positioning system (GPS) and the inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the most used techniques currently to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination between these techniques. Eventually, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways. PMID:23887084

  4. Frontal Theta Links Prediction Errors to Behavioral Adaptation in Reinforcement Learning

    PubMed Central

    Cavanagh, James F.; Frank, Michael J.; Klein, Theresa J.; Allen, John J.B.

    2009-01-01

    Investigations into action monitoring have consistently detailed a fronto-central voltage deflection in the Event-Related Potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the Feedback Related Negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Medio-frontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations: with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice. PMID:19969093
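
    The single-trial reward prediction errors referred to above come from a standard Q-learning update. A minimal sketch of such a computation for a two-choice probabilistic task is given below; the parameters and random choices are illustrative, not the authors' fitted model:

        import numpy as np

        alpha = 0.1                      # learning rate (illustrative, not fitted)
        q = np.zeros(2)                  # action values for the two choices
        rng = np.random.default_rng(1)

        prediction_errors = []
        for trial in range(200):
            choice = rng.integers(2)                    # here: random choices; a model fit
            p_reward = 0.8 if choice == 0 else 0.2      # would use the subject's actual choices
            reward = float(rng.random() < p_reward)
            pe = reward - q[choice]                     # single-trial reward prediction error
            q[choice] += alpha * pe                     # Q-learning value update
            prediction_errors.append(pe)

        print(f"mean |PE| over trials: {np.mean(np.abs(prediction_errors)):.3f}")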

  5. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  6. Statistical analysis of the determinations of the Sun's Galactocentric distance

    NASA Astrophysics Data System (ADS)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
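
    A metrological consistency check of the kind described can be written compactly: combine published R0 values with inverse-variance weights and compare the scatter with the quoted errors via a reduced chi-square. The sketch below uses invented measurements, not the 53 values analyzed in the paper:

        import numpy as np

        # Hypothetical R0 estimates (kpc) with their quoted 1-sigma errors
        r0  = np.array([8.0, 8.3, 7.9, 8.4, 8.2])
        err = np.array([0.4, 0.3, 0.5, 0.2, 0.3])

        w = 1.0 / err**2
        mean = np.sum(w * r0) / np.sum(w)               # inverse-variance weighted mean
        mean_err = 1.0 / np.sqrt(np.sum(w))             # formal error of the weighted mean
        chi2_red = np.sum(w * (r0 - mean)**2) / (len(r0) - 1)   # reduced chi-square

        print(f"R0 = {mean:.2f} +/- {mean_err:.2f} kpc, reduced chi^2 = {chi2_red:.2f}")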

  7. Design of a fiber-optic multiphoton microscopy handheld probe

    PubMed Central

    Zhao, Yuan; Sheng, Mingyu; Huang, Lin; Tang, Shuo

    2016-01-01

    We have developed a fiber-optic multiphoton microscopy (MPM) system with a handheld probe using a femtosecond fiber laser. Here we present the detailed optical design and analysis of the handheld probe. The optical systems using Lightpath 352140 and 352150 as objective lenses were analyzed. A custom objective module that includes Lightpath 355392 and two customized corrective lenses was designed. Their performances were compared by wavefront error, field curvature, astigmatism, F-θ error, and tolerance in Zemax simulation. Tolerance analysis predicted the focal spot size to be 1.13, 1.19 and 0.83 µm, respectively. Lightpath 352140 and 352150 were implemented in experiment and the measured lateral resolution was 1.22 and 1.3 µm, respectively, which matched with the prediction. MPM imaging by the handheld probe was conducted on leaf, fish scale and rat tail tendon. The MPM resolution can potentially be improved by the custom objective module. PMID:27699109

  8. Design of a fiber-optic multiphoton microscopy handheld probe.

    PubMed

    Zhao, Yuan; Sheng, Mingyu; Huang, Lin; Tang, Shuo

    2016-09-01

    We have developed a fiber-optic multiphoton microscopy (MPM) system with a handheld probe using a femtosecond fiber laser. Here we present the detailed optical design and analysis of the handheld probe. The optical systems using Lightpath 352140 and 352150 as objective lenses were analyzed. A custom objective module that includes Lightpath 355392 and two customized corrective lenses was designed. Their performances were compared by wavefront error, field curvature, astigmatism, F-θ error, and tolerance in Zemax simulation. Tolerance analysis predicted the focal spot size to be 1.13, 1.19 and 0.83 µm, respectively. Lightpath 352140 and 352150 were implemented in experiment and the measured lateral resolution was 1.22 and 1.3 µm, respectively, which matched with the prediction. MPM imaging by the handheld probe was conducted on leaf, fish scale and rat tail tendon. The MPM resolution can potentially be improved by the custom objective module.

  9. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol

    PubMed Central

    Raban, Magdalena Z; Walter, Scott R; Douglas, Heather E; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I

    2015-01-01

    Introduction Interruptions and multitasking are frequent in clinical settings, and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. Methods and analysis The study will be conducted in an ED of a 440-bed teaching hospital in Sydney, Australia. Doctors will be shadowed at proximity by observers for 2 h time intervals while they are working on day shift (between 0800 and 1800). Time stamped data on tasks, interruptions and multitasking will be recorded on a handheld computer using the validated Work Observation Method by Activity Timing (WOMBAT) tool. The prompts leading to interruptions and multitasking will also be recorded. When doctors prescribe medication, type of chart and chart sections written on, along with the patient's medical record number (MRN) will be recorded. A clinical pharmacist will access patient records and assess the medication orders for prescribing errors. The prescribing error rate will be calculated per prescribing task and is defined as the number of errors divided by the number of medication orders written during the prescribing task. The association between prescribing error rates, and rates of prompts, interruptions and multitasking will be assessed using statistical modelling. Ethics and dissemination Ethics approval has been obtained from the hospital research ethics committee. Eligible doctors will be provided with written information sheets and written consent will be obtained if they agree to participate. Doctor details and MRNs will be kept separate from the data on prescribing errors, and will not appear in the final data set for analysis. Study results will be disseminated in publications and feedback to the ED. PMID:26463224

  10. Modeling misidentification errors in capture-recapture studies using photographic identification of evolving marks

    USGS Publications Warehouse

    Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.

    2009-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
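
    The overestimation mechanism described above (ghost "individuals" created when an evolving mark is not recognized) can be illustrated with a toy two-sample simulation. This is a sketch of the general idea using the Chapman-corrected Lincoln-Petersen estimator, not the estimators proposed in the paper:

        import numpy as np

        rng = np.random.default_rng(2)
        N_true, p_capture, p_misid = 500, 0.3, 0.10   # population size, capture prob., misID prob.

        def simulate_lp():
            caught1 = rng.random(N_true) < p_capture          # first sampling occasion
            caught2 = rng.random(N_true) < p_capture          # second sampling occasion
            # A recaptured animal whose mark has changed is recorded as a "new" animal,
            # so with probability p_misid it does not count as a recapture.
            recognized = rng.random(N_true) >= p_misid
            n1 = caught1.sum()
            n2 = caught2.sum()
            m2 = (caught1 & caught2 & recognized).sum()       # recaptures actually recognized
            return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1         # Chapman-corrected estimate

        estimates = [simulate_lp() for _ in range(2000)]
        print(f"true N = {N_true}, mean estimate with misidentification = {np.mean(estimates):.0f}")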

  11. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  12. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
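
    The 1/√N behaviour mentioned above simply reflects averaging of independent noise across sweeps. A small sketch with synthetic sweeps (not the clinical recordings used in the study) shows how the residual noise of the averaged curve shrinks with the number of sweeps N:

        import numpy as np

        rng = np.random.default_rng(3)
        n_samples = 64
        template = np.sin(np.linspace(0, np.pi, n_samples))   # stand-in for the ECAP waveform
        sigma_single = 5.0                                     # single-sweep noise (arbitrary units)

        for n_sweeps in (50, 200, 800):
            sweeps = template + sigma_single * rng.standard_normal((n_sweeps, n_samples))
            averaged = sweeps.mean(axis=0)
            residual = np.std(averaged - template)
            print(f"N = {n_sweeps:4d}: residual noise = {residual:.3f} "
                  f"(expected ~ {sigma_single / np.sqrt(n_sweeps):.3f})")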

  13. Estimate of uncertainties in polarized parton distributions

    NASA Astrophysics Data System (ADS)

    Miyama, M.; Goto, Y.; Hirai, M.; Kobayashi, H.; Kumano, S.; Morii, T.; Saito, N.; Shibata, T.-A.; Yamanishi, T.

    2001-10-01

    From \\chi^2 analysis of polarized deep inelastic scattering data, we determined polarized parton distribution functions (Y. Goto et al. (AAC), Phys. Rev. D 62, 34017 (2000).). In order to clarify the reliability of the obtained distributions, we should estimate uncertainties of the distributions. In this talk, we discuss the pol-PDF uncertainties by using a Hessian method. A Hessian matrix H_ij is given by second derivatives of the \\chi^2, and the error matrix \\varepsilon_ij is defined as the inverse matrix of H_ij. Using the error matrix, we calculate the error of a function F by (δ F)^2 = sum_i,j fracpartial Fpartial ai \\varepsilon_ij fracpartial Fpartial aj , where a_i,j are the parameters in the \\chi^2 analysis. Using this method, we show the uncertainties of the pol-PDF, structure functions g_1, and spin asymmetries A_1. Furthermore, we show a role of future experiments such as the RHIC-Spin. An important purpose of planned experiments in the near future is to determine the polarized gluon distribution function Δ g (x) in detail. We reanalyze the pol-PDF uncertainties including the gluon fake data which are expected to be given by the upcoming experiments. From this analysis, we discuss how much the uncertainties of Δ g (x) can be improved by such measurements.

  14. Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador

    NASA Astrophysics Data System (ADS)

    Chicaiza, E. G.; Leiva, C. A.; Arranz, J. J.; Buenaño, X. E.

    2017-06-01

    Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of Guayaquil, Ecuador. The geostatistical approach was chosen because it provides the estimation error of the prediction map. Open-source statistical software R, mainly the geoR, gstat and RGeostats libraries, was used. Exploratory data analysis (EDA), trend analysis and structural analysis were carried out. Automatic model fitting by iterative least squares and other fitting procedures were employed to fit the variogram. Finally, kriging using the Bouguer gravity anomaly as external drift and universal kriging were used to obtain a detailed map of geoid undulation. The estimation uncertainty lay in the interval [-0.5, +0.5] m, with a maximum estimation standard deviation of 2 mm for the interpolation method applied. The error distribution of the geoid undulation map obtained in this study provides a better result than Earth gravitational models publicly available for the study area, according to a comparison with independent validation points. The main goal of this paper is to confirm the feasibility of using geoid undulations from Global Navigation Satellite Systems and leveling field measurements, together with geostatistical techniques, in high-accuracy engineering projects.

  15. A cognitive model for multidigit number reading: Inferences from individuals with selective impairments.

    PubMed

    Dotan, Dror; Friedmann, Naama

    2018-04-01

    We propose a detailed cognitive model of multi-digit number reading. The model postulates separate processes for visual analysis of the digit string and for oral production of the verbal number. Within visual analysis, separate sub-processes encode the digit identities and the digit order, and additional sub-processes encode the number's decimal structure: its length, the positions of 0, and the way it is parsed into triplets (e.g., 314987 → 314,987). Verbal production consists of a process that generates the verbal structure of the number, and another process that retrieves the phonological forms of each number word. The verbal number structure is first encoded in a tree-like structure, similarly to syntactic trees of sentences, and then linearized to a sequence of number-word specifiers. This model is based on an investigation of the number processing abilities of seven individuals with different selective deficits in number reading. We report participants with impairment in specific sub-processes of the visual analysis of digit strings - in encoding the digit order, in encoding the number length, or in parsing the digit string to triplets. Other participants were impaired in verbal production, making errors in the number structure (shifts of digits to another decimal position, e.g., 3,040 → 30,004). Their selective deficits yielded several dissociations: first, we found a double dissociation between visual analysis deficits and verbal production deficits. Second, several dissociations were found within visual analysis: a double dissociation between errors in digit order and errors in the number length; a dissociation between order/length errors and errors in parsing the digit string into triplets; and a dissociation between the processing of different digits - impaired order encoding of the digits 2-9, without errors in the 0 position. Third, within verbal production, a dissociation was found between digit shifts and substitutions of number words. A selective deficit in any of the processes described by the model would cause difficulties in number reading, which we propose to term "dysnumeria". Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A New System to Monitor Data Analyses and Results of Physics Data Validation Between Pulses at DIII-D

    NASA Astrophysics Data System (ADS)

    Flanagan, S.; Schachter, J. M.; Schissel, D. P.

    2001-10-01

    A Data Analysis Monitoring (DAM) system has been developed to monitor between-pulse physics analysis at the DIII-D National Fusion Facility. The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the stored energy from integrating the measured kinetic profiles to that calculated from magnetic measurements by EFIT. This new system also tracks the progress of MDSplus dispatching of software for data analysis and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, CLIPS to implement expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse. A demonstration of this system including a simulated DIII-D pulse cycle will be presented.

  17. Improved methods for the measurement and analysis of stellar magnetic fields

    NASA Technical Reports Server (NTRS)

    Saar, Steven H.

    1988-01-01

    The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.

  18. Strategies for Detecting and Correcting Errors in Accounting Problems.

    ERIC Educational Resources Information Center

    James, Marianne L.

    2003-01-01

    Reviews common errors in accounting tests that students commit resulting from deficiencies in fundamental prior knowledge, ineffective test taking, and inattention to detail and provides solutions to the problems. (JOW)

  19. Optical frequency modulation continuous wave coherent laser radar for spacecraft safe landing vector velocity measurement

    NASA Astrophysics Data System (ADS)

    Sui, Xiao-lin; Zhou, Shou-huan

    2013-05-01

    The design and performance of an optical frequency modulation continuous wave (OFMCW) coherent laser radar is presented. By employing a combination of optical heterodyne and linear frequency modulation techniques and utilizing fiber-optic technologies, a highly efficient, compact and reliable laser radar suitable for operation in a space environment is being developed. We also give a hardware structure of the OFMCW coherent laser radar. We made a detailed analysis of the measurement error; over the measured speed range the error is less than 0.5%. Measurement results for the motion of the carrier were also assessed in detail, and they show that the system adapts well to the carrier's acceleration vector. The circuit structure is also designed in detail. At the end of the article, we give the verification method used and the experimental results.

  20. Dynamic gas temperature measurement system

    NASA Technical Reports Server (NTRS)

    Elmore, D. L.; Robinson, W. W.; Watkins, W. B.

    1983-01-01

    A gas temperature measurement system with a compensated frequency response of 1 kHz and the capability to operate in the exhaust of a gas turbine combustor was developed. Environmental guidelines for this measurement are presented, followed by a preliminary design of the selected measurement method. Transient thermal conduction effects were identified as important; a preliminary finite-element conduction model quantified the errors expected by neglecting conduction. A compensation method was developed to account for effects of conduction and convection. This method was verified in analog electrical simulations, and used to compensate dynamic temperature data from a laboratory combustor and a gas turbine engine. Detailed data compensations are presented. An analysis of error sources in the method was performed to derive confidence levels for the compensated data.

  1. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but do they need to be? Companies with high risk, or major consequences, should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, medication administration errors, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments that range from identifying the most likely areas for concern to detailed assessments with human error failure probabilities calculated. Which methodology to use would be based on a variety of factors that would include: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how the human errors could occur, such as tasks, tools, environment, workplace, support, training and procedure; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). The Human Reliability Assessments should be the first step to reduce, mitigate or eliminate the costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks allows a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  2. Measuring silicon pore optics

    NASA Astrophysics Data System (ADS)

    Vacanti, Giuseppe; Barrière, Nicolas; Bavdaz, Marcos; Chatbi, Abdelhakim; Collon, Maximilien; Dekker, Daniëlle; Girou, David; Günther, Ramses; van der Hoeven, Roy; Krumrey, Michael; Landgraf, Boris; Müller, Peter; Schreiber, Swenja; Vervest, Mark; Wille, Eric

    2017-09-01

    While predictions based on metrology (local slope errors and detailed geometry) play an essential role in controlling the development of the manufacturing processes, X-ray characterization remains the ultimate indication of the actual performance of Silicon Pore Optics (SPO). For this reason SPO stacks and mirror modules are routinely characterized at PTB's X-ray Pencil Beam Facility at BESSY II. Obtaining standard X-ray results quickly, right after the production of X-ray optics, is essential to making sure that X-ray results can inform decisions taken in the lab. We describe the data analysis pipeline in operation at cosine, and how it allows us to go from stack production to full X-ray characterization in 24 hours.

  3. Quadratic electro-optic effects and electro-absorption process in multilayer nanoshells

    NASA Astrophysics Data System (ADS)

    Bahari, Ali; Rahimi Moghadam, Fereshteh

    2011-07-01

    In this corrigendum, the authors would like to report typographic errors in the first name of the second author and in equation 7. The details of these errors can be found in the PDF. The authors would like to express their sincere apologies for these errors in the article.

  4. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  5. Toward accurate prediction of pKa values for internal protein residues: the importance of conformational relaxation and desolvation energy.

    PubMed

    Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K

    2011-12-01

    Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pK(a) prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pK(a) predictions but also inform about the physical origins of pK(a) shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pK(a) prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pK(a) values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pK(a)'s with errors greater than 3.5 pK units. Analysis of the conformational fluctuation of titrating side-chains in the context of the errors of calculated pK(a) values indicate that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of errors suggests that more accurate pK(a) predictions can be obtained for the most deeply buried residues by improving the accuracy in calculating desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment such that the inclusion of an explicit-solvent representation may offer improvement of accuracy. Copyright © 2011 Wiley-Liss, Inc.

  6. Model-based registration for assessment of spinal deformities in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Knutsson, Hans

    2014-01-01

    Detailed analysis of spinal deformity is important within orthopaedic healthcare, in particular for assessment of idiopathic scoliosis. This paper addresses this challenge by proposing an image analysis method, capable of providing a full three-dimensional spine characterization. The proposed method is based on the registration of a highly detailed spine model to image data from computed tomography. The registration process provides an accurate segmentation of each individual vertebra and the ability to derive various measures describing the spinal deformity. The derived measures are estimated from landmarks attached to the spine model and transferred to the patient data according to the registration result. Evaluation of the method provides an average point-to-surface error of 0.9 mm ± 0.9 (comparing segmentations), and an average target registration error of 2.3 mm ± 1.7 (comparing landmarks). Comparing automatic and manual measurements of axial vertebral rotation provides a mean absolute difference of 2.5° ± 1.8, which is on a par with other computerized methods for assessing axial vertebral rotation. A significant advantage of our method, compared to other computerized methods for rotational measurements, is that it does not rely on vertebral symmetry for computing the rotational measures. The proposed method is fully automatic and computationally efficient, only requiring three to four minutes to process an entire image volume covering vertebrae L5 to T1. Given the use of landmarks, the method can be readily adapted to estimate other measures describing a spinal deformity by changing the set of employed landmarks. In addition, the method has the potential to be utilized for accurate segmentations of the vertebrae in routine computed tomography examinations, given the relatively low point-to-surface error.

  7. Precision Attitude Determination System (PADS) design and analysis. Two-axis gimbal star tracker

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Development of the Precision Attitude Determination System (PADS) focused chiefly on the two-axis gimballed star tracker and electronics design improved from that of Precision Pointing Control System (PPCS), and application of the improved tracker for PADS at geosynchronous altitude. System design, system analysis, software design, and hardware design activities are reported. The system design encompasses the PADS configuration, system performance characteristics, component design summaries, and interface considerations. The PADS design and performance analysis includes error analysis, performance analysis via attitude determination simulation, and star tracker servo design analysis. The design of the star tracker and electronics are discussed. Sensor electronics schematics are included. A detailed characterization of the application software algorithms and computer requirements is provided.

  8. Retrieval Failure Contributes to Gist-Based False Recognition

    PubMed Central

    Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.

    2011-01-01

    People often falsely recognize items that are similar to previously encountered items. This robust memory error is referred to as gist-based false recognition. A widely held view is that this error occurs because the details fade rapidly from our memory. Contrary to this view, an initial experiment revealed that, following the same encoding conditions that produce high rates of gist-based false recognition, participants overwhelmingly chose the correct target rather than its related foil when given the option to do so. A second experiment showed that this result is due to increased access to stored details provided by reinstatement of the originally encoded photograph, rather than to increased attention to the details. Collectively, these results suggest that details needed for accurate recognition are, to a large extent, still stored in memory and that a critical factor determining whether false recognition will occur is whether these details can be accessed during retrieval. PMID:22125357

  9. UK surveillance: provision of quality assured information from combined datasets.

    PubMed

    Paiba, G A; Roberts, S R; Houston, C W; Williams, E C; Smith, L H; Gibbens, J C; Holdship, S; Lysons, R

    2007-09-14

    Surveillance information is most useful when provided within a risk framework, which is achieved by presenting results against an appropriate denominator. Often the datasets are captured separately and for different purposes, and will have inherent errors and biases that can be further confounded by the act of merging. The United Kingdom Rapid Analysis and Detection of Animal-related Risks (RADAR) system contains data from several sources and provides both data extracts for research purposes and reports for wider stakeholders. Considerable efforts are made to optimise the data in RADAR during the Extraction, Transformation and Loading (ETL) process. Despite efforts to ensure data quality, the final dataset inevitably contains some data errors and biases, most of which cannot be rectified during subsequent analysis. So, in order for users to establish the 'fitness for purpose' of data merged from more than one data source, Quality Statements are produced as defined within the overarching surveillance Quality Framework. These documents detail identified data errors and biases following ETL and report construction as well as relevant aspects of the datasets from which the data originated. This paper illustrates these issues using RADAR datasets, and describes how they can be minimised.

  10. Model-based cost-effectiveness analysis of interventions aimed at preventing medication error at hospital admission (medicines reconciliation).

    PubMed

    Karnon, Jonathan; Campbell, Fiona; Czoski-Murray, Carolyn

    2009-04-01

    Medication errors can lead to preventable adverse drug events (pADEs) that have significant cost and health implications. Errors often occur at care interfaces, and various interventions have been devised to reduce medication errors at the point of admission to hospital. The aim of this study is to assess the incremental costs and effects [measured as quality adjusted life years (QALYs)] of a range of such interventions for which evidence of effectiveness exists. A previously published medication errors model was adapted to describe the pathway of errors occurring at admission through to the occurrence of pADEs. The baseline model was populated using literature-based values, and then calibrated to observed outputs. Evidence of effects was derived from a systematic review of interventions aimed at preventing medication error at hospital admission. All five interventions, for which evidence of effectiveness was identified, are estimated to be extremely cost-effective when compared with the baseline scenario. Pharmacist-led reconciliation intervention has the highest expected net benefits, and a probability of being cost-effective of over 60% by a QALY value of £10 000. The medication errors model provides reasonably strong evidence that some form of intervention to improve medicines reconciliation is a cost-effective use of NHS resources. The variation in the reported effectiveness of the few identified studies of medication error interventions illustrates the need for extreme attention to detail in the development of interventions, but also in their evaluation and may justify the primary evaluation of more than one specification of included interventions.
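
    The decision metric quoted above (expected net benefit and probability of being cost-effective at a given QALY value) has a simple form. A minimal sketch with invented incremental costs and QALYs, not the model's actual outputs, is:

        import numpy as np

        rng = np.random.default_rng(4)
        lam = 10_000.0                                    # willingness to pay per QALY (GBP)

        # Hypothetical probabilistic-sensitivity-analysis draws for one intervention
        delta_cost = rng.normal(20.0, 5.0, 10_000)        # incremental cost vs. baseline (GBP)
        delta_qaly = rng.normal(0.01, 0.004, 10_000)      # incremental QALYs vs. baseline

        net_benefit = lam * delta_qaly - delta_cost       # incremental net monetary benefit
        print(f"expected net benefit: GBP {net_benefit.mean():.0f}")
        print(f"probability cost-effective at GBP {lam:.0f}/QALY: {(net_benefit > 0).mean():.2f}")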

  11. Using snowball sampling method with nurses to understand medication administration errors.

    PubMed

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operational procedures for known high-alert situations.

  12. Older adults encode--but do not always use--perceptual details: intentional versus unintentional effects of detail on memory judgments.

    PubMed

    Koutstaal, Wilma

    2003-03-01

    Investigations of memory deficits in older individuals have concentrated on their increased likelihood of forgetting events or details of events that were actually encountered (errors of omission). However, mounting evidence demonstrates that normal cognitive aging also is associated with an increased propensity for errors of commission--shown in false alarms or false recognition. The present study examined the origins of this age difference. Older and younger adults each performed three types of memory tasks in which details of encountered items might influence performance. Although older adults showed greater false recognition of related lures on a standard (identical) old/new episodic recognition task, older and younger adults showed parallel effects of detail on repetition priming and meaning-based episodic recognition (decreased priming and decreased meaning-based recognition for different relative to same exemplars). The results suggest that the older adults encoded details but used them less effectively than the younger adults in the recognition context requiring their deliberate, controlled use.

  13. Reproducing American Sign Language sentences: cognitive scaffolding in working memory

    PubMed Central

    Supalla, Ted; Hauser, Peter C.; Bavelier, Daphne

    2014-01-01

    The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects' recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies when they failed to recall the sentence correctly. A qualitative error analysis allows us to capture generalizations about the relationship between error patterns and the cognitive scaffolding that governs the sentence reproduction process. Highly fluent signers and less-fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with equivalent meaning to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less-fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are considered. PMID:25152744

  14. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp

    2014-05-01

    The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ~1400 deg² will constrain the dark energy equation-of-state parameter with an error of Δw_0 ~ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 (+0.054/-0.046).
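
    As an illustration of the Fisher-forecasting step described above, the sketch below builds a Fisher matrix from binned summary statistics (standing in for the lensing MFs), with the covariance estimated from mock realizations. All array names and values are illustrative placeholders, not the CFHTLenS measurements.

        import numpy as np

        def fisher_matrix(dmu_dp, mocks):
            """dmu_dp: (n_par, n_bins) numerical derivatives of the statistic with respect
            to each cosmological parameter; mocks: (n_mock, n_bins) statistics measured
            from independent mock realizations, used to estimate the covariance."""
            cov = np.cov(mocks, rowvar=False)            # (n_bins, n_bins)
            cov_inv = np.linalg.inv(cov)
            # F_ij = (d mu / d p_i) . C^-1 . (d mu / d p_j)
            return dmu_dp @ cov_inv @ dmu_dp.T

        def marginalized_errors(fisher):
            """1-sigma marginalized errors: sqrt of the diagonal of the inverse Fisher matrix."""
            return np.sqrt(np.diag(np.linalg.inv(fisher)))

        # Toy usage with random placeholders for two parameters (e.g. Omega_m and w0).
        rng = np.random.default_rng(0)
        n_bins, n_mock = 30, 500
        dmu_dp = rng.normal(size=(2, n_bins))
        mocks = rng.normal(scale=0.1, size=(n_mock, n_bins))
        print(marginalized_errors(fisher_matrix(dmu_dp, mocks)))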

  15. Characterization of Deposits on Glass Substrate as a Tool in Failure Analysis: The Orbiter Vehicle Columbia Case Study

    NASA Technical Reports Server (NTRS)

    Olivas, J. D.; Melroy, P.; McDanels, S.; Wallace, T.; Zapata, M. C.

    2006-01-01

    In connection with the accident investigation of the space shuttle Columbia, an analysis methodology utilizing well established microscopic and spectroscopic techniques was implemented for evaluating the environment to which the exterior fused silica glass was exposed. Through the implementation of optical microscopy, scanning electron microscopy, energy dispersive spectroscopy, transmission electron microscopy, and electron diffraction, details emerged regarding the manner in which a charred metallic deposited layer formed on top of the exposed glass. Due to the nature of the substrate and the materials deposited, the methodology allowed a more detailed analysis of the vehicle breakup. By contrast, similar analytical methodologies on metallic substrates have proven to be challenging due to the strong potential for error resulting from substrate contamination. This information proved to be valuable not only to those involved in investigating the breakup of Columbia, but also provides a potential guide for investigating future high-altitude and high-energy accidents.

  16. Users manual for Streamtube Curvature Analysis: Analytical method for predicting the pressure distribution about a nacelle at transonic speeds, volume 1

    NASA Technical Reports Server (NTRS)

    Keith, J. S.; Ferguson, D. R.; Heck, P. H.

    1972-01-01

    The computer program, Streamtube Curvature Analysis, is described for the engineering user and for the programmer. The user-oriented documentation includes a description of the mathematical governing equations, their use in the solution, and the method of solution. The general logical flow of the program is outlined and detailed instructions for program usage and operation are explained. General procedures for program use and the program capabilities and limitations are described. From the standpoint of the programmer, the overlay structure of the program is described. The various storage tables are defined and their uses explained. The input and output are discussed in detail. The program listing includes numerous comments so that the logical flow within the program is easily followed. A test case showing input data and output format is included as well as an error printout description.

  17. Including sheath effects in the interpretation of planar retarding potential analyzer's low-energy ion data

    NASA Astrophysics Data System (ADS)

    Fisher, L. E.; Lynch, K. A.; Fernandes, P. A.; Bekkeng, T. A.; Moen, J.; Zettergren, M.; Miceli, R. J.; Powell, S.; Lessard, M. R.; Horak, P.

    2016-04-01

    The interpretation of planar retarding potential analyzers (RPA) during ionospheric sounding rocket missions requires modeling the thick 3D plasma sheath. This paper overviews the theory of RPAs with an emphasis placed on the impact of the sheath on current-voltage (I-V) curves. It then describes the Petite Ion Probe (PIP) which has been designed to function in this difficult regime. The data analysis procedure for this instrument is discussed in detail. Data analysis begins by modeling the sheath with the Spacecraft Plasma Interaction System (SPIS), a particle-in-cell code. Test particles are traced through the sheath and detector to determine the detector's response. A training set is constructed from these simulated curves for a support vector regression analysis which relates the properties of the I-V curve to the properties of the plasma. The first in situ use of the PIPs occurred during the MICA sounding rocket mission which launched from Poker Flat, Alaska in February of 2012. These data are presented as a case study, providing valuable cross-instrument comparisons. A heritage top-hat thermal ion electrostatic analyzer, called the HT, and a multi-needle Langmuir probe have been used to validate both the PIPs and the data analysis method. Compared to the HT, the PIP ion temperature measurements agree with a root-mean-square error of 0.023 eV. These two instruments agree on the parallel-to-B plasma flow velocity with a root-mean-square error of 130 m/s. The PIP with its field of view aligned perpendicular-to-B provided a density measurement with an 11% error compared to the multi-needle Langmuir Probe. Higher error in the other PIP's density measurement is likely due to simplifications in the SPIS model geometry.
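
    A minimal sketch of the regression step described above: a support vector regression trained on simulated current-voltage (I-V) curves to recover a plasma parameter (here, an ion temperature). The curve generator is a toy placeholder, not the SPIS sheath simulation used in the paper, and all parameter values are illustrative.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(1)
        voltages = np.linspace(0.0, 5.0, 50)               # retarding-potential sweep (V)

        def toy_iv_curve(temperature_ev):
            """Placeholder retarded-ion characteristic: exponential roll-off with T_i."""
            return np.exp(-voltages / temperature_ev)

        # Training set of simulated curves labelled by the temperature used to generate them.
        temps = rng.uniform(0.05, 0.5, size=300)            # eV
        curves = np.array([toy_iv_curve(t) for t in temps])
        curves += rng.normal(scale=0.01, size=curves.shape) # measurement noise

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
        model.fit(curves, temps)

        # Apply the trained regressor to a "measured" curve of unknown temperature.
        measured = toy_iv_curve(0.2) + rng.normal(scale=0.01, size=voltages.size)
        print("estimated T_i [eV]:", model.predict(measured[None, :])[0])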

  18. Error analysis of motion correction method for laser scanning of moving objects

    NASA Astrophysics Data System (ADS)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available, showing development of very few methods capable of catering to the problem of object motion during scanning, and all the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such `motion correction' method. This method assumes availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to apply corrections to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked in the sea, and to scan other objects like hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into optimal utilization of the available components for achieving the best results.
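
    A minimal sketch of the "motion correction" idea discussed above: each laser return, acquired in the scanner frame, is mapped into the moving object's body frame using the pose (position and heading) reported by an on-board POS system at that instant. The 2D simplification and all names are illustrative only, not the method's actual formulation.

        import numpy as np

        def yaw_rotation(yaw):
            c, s = np.cos(yaw), np.sin(yaw)
            return np.array([[c, -s], [s, c]])

        def correct_point(p_scanner, object_position, object_yaw):
            """Express a scanner-frame point in the object's body frame, removing the
            apparent displacement caused by the object's motion during the scan."""
            return yaw_rotation(object_yaw).T @ (p_scanner - object_position)

        # Example: the same physical corner of a ship observed at two epochs while the
        # ship translates and rotates; after correction both observations coincide.
        corner_body = np.array([10.0, 2.0])
        poses = [(np.array([0.0, 0.0]), 0.00),
                 (np.array([3.0, 0.5]), 0.05)]
        for pos, yaw in poses:
            observed = yaw_rotation(yaw) @ corner_body + pos   # what the static scanner sees
            print(correct_point(observed, pos, yaw))           # ~[10. 2.] both times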

  19. Handbook of experiences in the design and installation of solar heating and cooling systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, D.S.; Oberoi, H.S.

    1980-07-01

    A large array of problems encountered is detailed, including design errors, installation mistakes, cases of inadequate durability of materials and unacceptable reliability of components, and wide variations in the performance and operation of different solar systems. Durability, reliability, and design problems are reviewed for solar collector subsystems, heat transfer fluids, thermal storage, passive solar components, piping/ducting, and reliability/operational problems. The following performance topics are covered: criteria for design and performance analysis, domestic hot water systems, passive space heating systems, active space heating systems, space cooling systems, analysis of systems performance, and performance evaluations. (MHR)

  20. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and that the algorithm is a good candidate for the emerging massively parallel machines.
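
    For context, the following sketch shows the standard serial tridiagonal solve (the Thomas algorithm) that the PDD algorithm parallelizes by partitioning the system across processors. It is the textbook baseline under the diagonal-dominance assumption, not the PDD algorithm itself, and the test system is illustrative.

        import numpy as np

        def thomas(lower, diag, upper, rhs):
            """Solve a tridiagonal system; lower[0] and upper[-1] are unused."""
            n = len(diag)
            c, d = np.empty(n), np.empty(n)
            c[0] = upper[0] / diag[0]
            d[0] = rhs[0] / diag[0]
            for i in range(1, n):                      # forward elimination
                denom = diag[i] - lower[i] * c[i - 1]
                c[i] = upper[i] / denom if i < n - 1 else 0.0
                d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
            x = np.empty(n)
            x[-1] = d[-1]
            for i in range(n - 2, -1, -1):             # back substitution
                x[i] = d[i] - c[i] * x[i + 1]
            return x

        # Diagonally dominant test system: -x_{i-1} + 4 x_i - x_{i+1} = b_i.
        n = 8
        lower = -np.ones(n); diag = 4.0 * np.ones(n); upper = -np.ones(n)
        x_true = np.arange(1.0, n + 1)
        A = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
        rhs = A @ x_true
        print(np.allclose(thomas(lower, diag, upper, rhs), x_true))   # True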

  1. Photographic and photometric enhancement of Lunar Orbiter products, projects A, B and C

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A detailed discussion is presented of the framelet joining, photometric data improvement, and statistical error analysis. The Lunar Orbiter film handling system, readout system, and the digitization are described, along with the technique of joining adjacent framelets by using a digital computer. Time and cost estimates are given. The problems and techniques involved in improving the digitized data are discussed. It was found that spectacular improvements are possible. Program documentation is included.

  2. Systematic and stochastic influences on the performance of the MinION nanopore sequencer across a range of nucleotide bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnakumar, Raga; Sinha, Anupama; Bird, Sara W.

    Emerging sequencing technologies are allowing us to characterize environmental, clinical and laboratory samples with increasing speed and detail, including real-time analysis and interpretation of data. One example of this is being able to rapidly and accurately detect a wide range of pathogenic organisms, both in the clinic and the field. Genomes can have radically different GC content, however, such that accurate sequence analysis can be challenging depending upon the technology used. Here, we have characterized the performance of the Oxford MinION nanopore sequencer for detection and evaluation of organisms with a range of genomic nucleotide bias. We have diagnosed the quality of base-calling across individual reads and discovered that the position within the read affects base-calling and quality scores. Finally, we have evaluated the performance of the current state-of-the-art neural network-based MinION basecaller, characterizing its behavior with respect to systemic errors as well as context- and sequence-specific errors. Overall, we present a detailed characterization of the capabilities of the MinION in terms of generating high-accuracy sequence data from genomes with a wide range of nucleotide content. This study provides a framework for designing the appropriate experiments that are likely to lead to accurate and rapid field-forward diagnostics.

  3. Systematic and stochastic influences on the performance of the MinION nanopore sequencer across a range of nucleotide bias

    DOE PAGES

    Krishnakumar, Raga; Sinha, Anupama; Bird, Sara W.; ...

    2018-02-16

    Emerging sequencing technologies are allowing us to characterize environmental, clinical and laboratory samples with increasing speed and detail, including real-time analysis and interpretation of data. One example of this is being able to rapidly and accurately detect a wide range of pathogenic organisms, both in the clinic and the field. Genomes can have radically different GC content, however, such that accurate sequence analysis can be challenging depending upon the technology used. Here, we have characterized the performance of the Oxford MinION nanopore sequencer for detection and evaluation of organisms with a range of genomic nucleotide bias. We have diagnosed the quality of base-calling across individual reads and discovered that the position within the read affects base-calling and quality scores. Finally, we have evaluated the performance of the current state-of-the-art neural network-based MinION basecaller, characterizing its behavior with respect to systemic errors as well as context- and sequence-specific errors. Overall, we present a detailed characterization of the capabilities of the MinION in terms of generating high-accuracy sequence data from genomes with a wide range of nucleotide content. This study provides a framework for designing the appropriate experiments that are likely to lead to accurate and rapid field-forward diagnostics.

  4. Assimilating satellite-based canopy height within an ecosystem model to estimate aboveground forest biomass

    NASA Astrophysics Data System (ADS)

    Joetzjer, E.; Pillet, M.; Ciais, P.; Barbier, N.; Chave, J.; Schlund, M.; Maignan, F.; Barichivich, J.; Luyssaert, S.; Hérault, B.; von Poncet, F.; Poulter, B.

    2017-07-01

    Despite advances in Earth observation and modeling, estimating tropical biomass remains a challenge. Recent work suggests that integrating satellite measurements of canopy height within ecosystem models is a promising approach to infer biomass. We tested the feasibility of this approach to retrieve aboveground biomass (AGB) at three tropical forest sites by assimilating remotely sensed canopy height derived from a texture analysis algorithm applied to the high-resolution Pleiades imager in the Organizing Carbon and Hydrology in Dynamic Ecosystems Canopy (ORCHIDEE-CAN) ecosystem model. While mean AGB could be estimated within 10% of AGB derived from census data on average across sites, canopy height derived from the Pleiades product was spatially too smooth, and thus unable to accurately resolve large height (and biomass) variations within the sites considered. The error budget was evaluated in detail; systematic errors related to the ORCHIDEE-CAN structure contribute as a secondary source of error and could be overcome by using improved allometric equations.

  5. Model Errors in Simulating Precipitation and Radiation fields in the NARCCAP Hindcast Experiment

    NASA Astrophysics Data System (ADS)

    Kim, J.; Waliser, D. E.; Mearns, L. O.; Mattmann, C. A.; McGinnis, S. A.; Goodale, C. E.; Hart, A. F.; Crichton, D. J.

    2012-12-01

    The relationship between the model errors in simulating precipitation and radiation fields, including the surface insolation and OLR, is examined from the multi-RCM NARCCAP hindcast experiment for the conterminous U.S. region. Findings in this study suggest that the RCM biases in simulating precipitation are related to those in simulating radiation fields. For a majority of the RCMs that participated in the NARCCAP hindcast experiment, as well as their ensemble, the spatial pattern of the insolation bias is negatively correlated with that of the precipitation bias, suggesting that the biases in precipitation and surface insolation are systematically related, most likely via the cloud fields. The relationship also varies with season, with a stronger relationship between the simulated precipitation and surface insolation during winter. This suggests that the RCM biases in precipitation and radiation are related via cloud fields. Additional analysis of the RCM errors in OLR is underway to examine this relationship in more detail.
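
    A minimal sketch of the diagnostic used above: the spatial (pattern) correlation between a model's precipitation-bias field and its surface-insolation-bias field over a common grid. The bias fields below are random placeholders, not NARCCAP output.

        import numpy as np

        rng = np.random.default_rng(3)
        precip_bias = rng.normal(size=(60, 120))      # model minus observation, mm/day
        insol_bias = -0.6 * precip_bias + rng.normal(scale=0.8, size=precip_bias.shape)

        def pattern_correlation(a, b):
            """Centered spatial correlation between two 2D fields on the same grid."""
            a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
            return float(a @ b / np.sqrt((a @ a) * (b @ b)))

        print(f"pattern correlation: {pattern_correlation(precip_bias, insol_bias):+.2f}")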

  6. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    PubMed

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector in applications such as autonomous pedestrian collision avoidance.

  7. Error Analysis in a Stereo Vision-Based Pedestrian Detection Sensor for Collision Avoidance Applications

    PubMed Central

    Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector in applications such as autonomous pedestrian collision avoidance. PMID:22319323
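
    A worked example of the stereo depth quantization error analyzed above: for baseline B, focal length f (in pixels) and disparity step Δd, the depth error grows quadratically with range, ΔZ ≈ Z²·Δd/(f·B). The numerical values below are illustrative, not the sensor parameters from the paper.

        # Depth from disparity: Z = f*B/d, so |dZ| ~ Z**2 * delta_d / (f * B).
        f_px = 800.0        # focal length in pixels (assumed)
        baseline_m = 0.30   # camera separation in metres (assumed)
        delta_d = 1.0       # disparity quantization step in pixels

        for depth_m in (5.0, 10.0, 20.0, 40.0):
            err = depth_m ** 2 * delta_d / (f_px * baseline_m)
            print(f"Z = {depth_m:4.0f} m -> quantization depth error ~ {err:5.2f} m")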

  8. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    PubMed

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.

  9. Where Are the Logical Errors in the Theory of Big Bang?

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2015-04-01

    The critical analysis of the foundations of the theory of Big Bang is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. It is argued that the starting point of the theory of Big Bang contains three fundamental logical errors. The first error is the assumption that a macroscopic object (having qualitative determinacy) can have an arbitrarily small size and can be in the singular state (i.e., in the state that has no qualitative determinacy). This assumption implies that the transition, (macroscopic object having the qualitative determinacy) --> (singular state of matter that has no qualitative determinacy), leads to loss of information contained in the macroscopic object. The second error is the assumption that there are the void and the boundary between matter and void. But if such a boundary existed, then it would mean that the void has dimensions and can be measured. The third error is the assumption that the singular state of matter can make a transition into the normal state without the existence of a program of qualitative and quantitative development of the matter, without the controlling influence of another (independent) object. However, these assumptions conflict with practice and, consequently, with formal logic, rational dialectics, and cybernetics. Indeed, from the point of view of cybernetics, the transition, (singular state of the Universe) --> (normal state of the Universe), would be possible only if there were a Managed Object that is outside the Universe and has full, complete, and detailed information about the Universe. Thus, the theory of Big Bang is a scientific fiction.

  10. Characterizing Air Pollution Exposure Misclassification Errors Using Detailed Cell Phone Location Data

    NASA Astrophysics Data System (ADS)

    Yu, H.; Russell, A. G.; Mulholland, J. A.

    2017-12-01

    In air pollution epidemiologic studies with spatially resolved air pollution data, exposures are often estimated using the home locations of individual subjects. Due primarily to lack of data or logistic difficulties, the spatiotemporal mobility of subjects is mostly neglected, which is expected to result in exposure misclassification errors. In this study, we applied detailed cell phone location data to characterize potential exposure misclassification errors associated with home-based exposure estimation of air pollution. The cell phone data sample consists of 9,886 unique simcard IDs collected on one mid-week day in October 2013 from Shenzhen, China. The Community Multi-scale Air Quality model was used to simulate hourly ambient concentrations of six chosen pollutants at 3 km spatial resolution, which were then fused with observational data to correct for potential modeling biases and errors. Air pollution exposure for each simcard ID was estimated by matching hourly pollutant concentrations with detailed location data for the corresponding IDs. Finally, the results were compared with exposure estimates obtained using the home location method to assess potential exposure misclassification errors. Our results show that the home-based method is likely to have substantial exposure misclassification errors, over-estimating exposures for subjects with higher exposure levels and under-estimating exposures for those with lower exposure levels. This has the potential to lead to a bias-to-the-null in the health effect estimates. Our findings suggest that the use of cell phone data has the potential for improving the characterization of exposure and exposure misclassification in air pollution epidemiology studies.
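
    A minimal sketch of the comparison described above: exposure estimated from a subject's home grid cell versus a time-weighted average over the cells the subject actually visited (as recorded by cell phone locations). Concentration values and the hourly trajectory are hypothetical placeholders, not the Shenzhen data.

        import numpy as np

        hours = 24
        conc = {"home": np.full(hours, 35.0),   # hourly pollutant concentration per grid cell
                "work": np.full(hours, 60.0),
                "road": np.full(hours, 90.0)}

        # Hypothetical diary of the grid cell occupied in each hour of the day.
        trajectory = ["home"] * 8 + ["road"] + ["work"] * 9 + ["road"] + ["home"] * 5

        home_based = conc["home"].mean()
        mobility_based = np.mean([conc[cell][h] for h, cell in enumerate(trajectory)])

        print(f"home-based estimate:     {home_based:.1f}")
        print(f"mobility-based estimate: {mobility_based:.1f}")
        print(f"misclassification error: {home_based - mobility_based:+.1f}")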

  11. Influence of tire dynamics on slip ratio estimation of independent driving wheel system

    NASA Astrophysics Data System (ADS)

    Li, Jianqiu; Song, Ziyou; Wei, Yintao; Ouyang, Minggao

    2014-11-01

    The independent driving wheel system, which is composed of an in-wheel permanent magnet synchronous motor (I-PMSM) and a tire, is more convenient for estimating the slip ratio because the rotary speed of the rotor can be accurately measured. However, the ring speed of the tire does not equal the rotor speed when tire deformation is considered. For this reason, a deformable tire and a detailed I-PMSM are modeled by using Matlab/Simulink. Moreover, the tire/road contact interface (a slippery road) is accurately described by the non-linear relaxation length-based model and the Magic Formula pragmatic model. Based on this relatively accurate model, the error of the slip ratio estimated from the rotor rotary speed is analyzed in both the time and frequency domains when a quarter car is started by the I-PMSM with a definite target torque input curve. In addition, the natural frequencies (NFs) of the driving wheel system with variable parameters are illustrated to present the relationship between the slip ratio estimation error and the NF. According to this relationship, a low-pass filter (LPF), whose cut-off frequency corresponds to the NF, is proposed to eliminate the error in the estimated slip ratio. The analysis, concerning the effect of the driving wheel parameters and road conditions on slip ratio estimation, shows that the peak estimation error can be reduced by up to 75% when the LPF is adopted. The robustness and effectiveness of the LPF are therefore validated. This paper builds up the deformable tire model and the detailed I-PMSM model, and analyzes the effect of the driving wheel parameters and road conditions on slip ratio estimation.
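
    A minimal sketch of the two steps discussed above: (1) the slip ratio estimated from the measured rotor speed and vehicle speed, and (2) a first-order low-pass filter, with cut-off below the driveline oscillation frequency, applied to suppress the oscillatory error introduced by tire/rotor dynamics. All parameter values and the injected oscillation are illustrative, not the paper's model.

        import numpy as np

        def slip_ratio(rotor_speed_rad_s, wheel_radius_m, vehicle_speed_m_s):
            """Driving slip ratio (omega*r - v) / (omega*r), guarded against divide-by-zero."""
            wheel_speed = rotor_speed_rad_s * wheel_radius_m
            return (wheel_speed - vehicle_speed_m_s) / max(wheel_speed, 1e-6)

        def low_pass(signal, dt, cutoff_hz):
            """Discrete first-order low-pass filter y[k] = y[k-1] + alpha*(x[k] - y[k-1])."""
            alpha = dt / (dt + 1.0 / (2.0 * np.pi * cutoff_hz))
            out = np.empty_like(signal)
            out[0] = signal[0]
            for k in range(1, len(signal)):
                out[k] = out[k - 1] + alpha * (signal[k] - out[k - 1])
            return out

        dt, t = 1e-3, np.arange(0.0, 2.0, 1e-3)
        vehicle_speed = 2.0 + 1.0 * t                         # m/s, accelerating quarter car
        rotor_speed = (vehicle_speed / 0.3) * 1.08            # rad/s, ~8 % mean slip
        rotor_speed += 2.0 * np.sin(2 * np.pi * 35.0 * t)     # oscillation from tire dynamics

        raw = np.array([slip_ratio(w, 0.3, v) for w, v in zip(rotor_speed, vehicle_speed)])
        filtered = low_pass(raw, dt, cutoff_hz=10.0)          # cut-off below the ~35 Hz mode
        print(f"peak-to-peak slip estimate: raw {np.ptp(raw):.3f}, filtered {np.ptp(filtered):.3f}")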

  12. The vertical variability of hyporheic fluxes inferred from riverbed temperature data

    NASA Astrophysics Data System (ADS)

    Cranswick, Roger H.; Cook, Peter G.; Shanafield, Margaret; Lamontagne, Sebastien

    2014-05-01

    We present detailed profiles of vertical water flux from the surface to 1.2 m beneath the Haughton River in the tropical northeast of Australia. A 1-D numerical model is used to estimate vertical flux based on raw temperature time series observations from within downwelling, upwelling, neutral, and convergent sections of the hyporheic zone. A Monte Carlo analysis is used to derive error bounds for the fluxes based on temperature measurement error and uncertainty in effective thermal diffusivity. Vertical fluxes ranged from 5.7 m d-1 (downward) to -0.2 m d-1 (upward) with the lowest relative errors for values between 0.3 and 6 m d-1. Our 1-D approach provides a useful alternative to 1-D analytical and other solutions because it does not incorporate errors associated with simplified boundary conditions or assumptions of purely vertical flow, hydraulic parameter values, or hydraulic conditions. To validate the ability of this 1-D approach to represent the vertical fluxes of 2-D flow fields, we compare our model with two simple 2-D flow fields using a commercial numerical model. These comparisons showed that: (1) the 1-D vertical flux was equivalent to the mean vertical component of flux irrespective of a changing horizontal flux; and (2) the subsurface temperature data inherently has a "spatial footprint" when the vertical flux profiles vary spatially. Thus, the mean vertical flux within a 2-D flow field can be estimated accurately without requiring the flow to be purely vertical. The temperature-derived 1-D vertical flux represents the integrated vertical component of flux along the flow path intersecting the observation point. This article was corrected on 6 JUN 2014. See the end of the full text for details.

  13. Prevalence of refractive errors in the European adult population: the Gutenberg Health Study (GHS).

    PubMed

    Wolfram, Christian; Höhn, René; Kottler, Ulrike; Wild, Philipp; Blettner, Maria; Bühren, Jens; Pfeiffer, Norbert; Mirshahi, Alireza

    2014-07-01

    To study the distribution of refractive errors among adults of European descent. Population-based eye study in Germany with 15010 participants aged 35-74 years. The study participants underwent a detailed ophthalmic examination according to a standardised protocol. Refractive error was determined by an automatic refraction device (Humphrey HARK 599) without cycloplegia. Definitions for the analysis were myopia <-0.5 dioptres (D), hyperopia >+0.5 D, astigmatism >0.5 cylinder D and anisometropia >1.0 D difference in the spherical equivalent between the eyes. Exclusion criterion was previous cataract or refractive surgery. 13959 subjects were eligible. Refractive errors ranged from -21.5 to +13.88 D. Myopia was present in 35.1% of this study sample, hyperopia in 31.8%, astigmatism in 32.3% and anisometropia in 13.5%. The prevalence of myopia decreased, while the prevalence of hyperopia, astigmatism and anisometropia increased with age. 3.5% of the study sample had no refractive correction for their ametropia. Refractive errors affect the majority of the population. The Gutenberg Health Study sample contains more myopes than other study cohorts in adult populations. Our findings do not support the hypothesis of a generally lower prevalence of myopia among adults in Europe as compared with East Asia.

  14. The good, the bad and the outliers: automated detection of errors and outliers from groundwater hydrographs

    NASA Astrophysics Data System (ADS)

    Peterson, Tim J.; Western, Andrew W.; Cheng, Xiang

    2018-03-01

    Suspicious groundwater-level observations are common and can arise for many reasons ranging from an unforeseen biophysical process to bore failure and data management errors. Unforeseen observations may provide valuable insights that challenge existing expectations and can be deemed outliers, while monitoring and data handling failures can be deemed errors, and, if ignored, may compromise trend analysis and groundwater model calibration. Ideally, outliers and errors should be identified, but to date this has been a subjective process that is not reproducible and is inefficient. This paper presents an approach to objectively and efficiently identify multiple types of errors and outliers. The approach requires only the observed groundwater hydrograph, requires no particular consideration of the hydrogeology, the drivers (e.g. pumping) or the monitoring frequency, and is freely available in the HydroSight toolbox. Herein, the algorithms and time-series model are detailed and applied to four observation bores with varying dynamics. The detection of outliers was most reliable when the observation data were acquired quarterly or more frequently. Outlier detection where the groundwater-level variance is nonstationary or the absolute trend increases rapidly was more challenging, with the former likely to result in an underestimation of the number of outliers and the latter an overestimation.

  15. Ideas for a pattern-oriented approach towards a VERA analysis ensemble

    NASA Astrophysics Data System (ADS)

    Gorgas, T.; Dorninger, M.

    2010-09-01

    For many applications in meteorology, and especially for verification purposes, it is important to have information about the uncertainties of observation and analysis data. A high quality of these "reference data" is an absolute necessity, as the uncertainties are reflected in verification measures. The VERA (Vienna Enhanced Resolution Analysis) scheme includes a sophisticated quality control tool which accounts for the correction of observational data and provides an estimation of the observation uncertainty. It is crucial for meteorologically and physically reliable analysis fields. VERA is based on a variational principle and does not need any first guess fields. It is therefore NWP model independent and can also be used as an unbiased reference for real time model verification. For downscaling purposes VERA uses a priori knowledge of small-scale physical processes over complex terrain, the so-called "fingerprint technique", which transfers information from data rich to data sparse regions. The enhanced Joint D-PHASE and COPS data set forms the data base for the analysis ensemble study. For the WWRP projects D-PHASE and COPS, a joint activity has been started to collect GTS and non-GTS data from the national and regional meteorological services in Central Europe for 2007. Data from more than 11,000 stations are available for high resolution analyses. The usage of random numbers as perturbations for ensemble experiments is a common approach in meteorology. In most implementations, as for NWP-model ensemble systems, the focus lies on error growth and propagation on the spatial and temporal scale. When defining errors in analysis fields we have to consider the fact that analyses are not time dependent and that no perturbation method aimed at temporal evolution is possible. Further, the method applied should respect two major sources of analysis errors: observation errors AND analysis or interpolation errors. With the concept of an analysis ensemble we hope to get a more detailed view of both sources of analysis errors. For the computation of the VERA ensemble members a sample of Gaussian random perturbations is produced for each station and parameter. The spread of the perturbations is based on the correction proposals from the VERA QC scheme, providing "natural" limits for the ensemble. In order to put more emphasis on the weather situation, we aim to integrate the main synoptic field structures as weighting factors for the perturbations. Two well-established approaches are used for the definition of these main field structures: principal component analysis and a 2D discrete wavelet transform. The results of tests concerning the implementation of this pattern-supported analysis ensemble system and a comparison of the different approaches are given in the presentation.
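
    A minimal sketch of the pattern-weighted perturbation idea outlined above: station perturbations are drawn as Gaussian noise whose spread follows the QC-based correction proposals and whose amplitude is weighted by the leading principal components (EOFs) of the analysed field, so that the ensemble spread is concentrated in the main synoptic structures. Field data, spreads and weights are illustrative placeholders, not the VERA implementation.

        import numpy as np

        rng = np.random.default_rng(42)
        n_times, n_stations = 200, 50
        field = rng.normal(size=(n_times, n_stations))       # stand-in for analysed fields

        # Leading EOFs via SVD of the anomaly matrix.
        anomalies = field - field.mean(axis=0)
        _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
        eofs = vt[:3]                                         # (3, n_stations) main patterns

        def perturbation(qc_sigma, eofs, rng):
            """Station-wise Gaussian noise with spread taken from the QC correction
            proposals, weighted by the amplitude of the leading patterns."""
            weight = np.abs(eofs).sum(axis=0)
            weight /= weight.max()
            return weight * rng.normal(scale=qc_sigma)

        qc_sigma = 0.5 * np.ones(n_stations)                  # spread suggested by QC corrections
        member = field[-1] + perturbation(qc_sigma, eofs, rng)
        print("ensemble member shape:", member.shape)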

  16. Articulation in schoolchildren and adults with neurofibromatosis type 1.

    PubMed

    Cosyns, Marjan; Mortier, Geert; Janssens, Sandra; Bogaert, Famke; D'Hondt, Stephanie; Van Borsel, John

    2012-01-01

    Several authors mentioned the occurrence of articulation problems in the neurofibromatosis type 1 (NF1) population. However, few studies have undertaken a detailed analysis of the articulation skills of NF1 patients, especially in schoolchildren and adults. Therefore, the aim of the present study was to examine in depth the articulation skills of NF1 schoolchildren and adults, both phonetically and phonologically. Speech samples were collected from 43 Flemish NF1 patients (14 children and 29 adults), ranging in age between 7 and 53 years, using a standardized speech test in which all Flemish single speech sounds and most clusters occur in all their permissible syllable positions. Analyses concentrated on consonants only and included a phonetic inventory, a phonetic, and a phonological analysis. It was shown that phonetic inventories were incomplete in 16.28% (7/43) of participants, in which totally correct realizations of the sibilants /ʃ/ and/or /ʒ/ were missing. Phonetic analysis revealed that distortions were the predominant phonetic error type. Sigmatismus stridens, multiple ad- or interdentality, and, in children, rhotacismus non vibrans were frequently observed. From a phonological perspective, the most common error types were substitution and syllable structure errors. Particularly, devoicing, cluster simplification, and, in children, deletion of the final consonant of words were perceived. Further, it was demonstrated that significantly more men than women presented with an incomplete phonetic inventory, and that girls tended to display more articulation errors than boys. Additionally, children exhibited significantly more articulation errors than adults, suggesting that although the articulation skills of NF1 patients evolve positively with age, articulation problems do not resolve completely from childhood to adulthood. As such, the articulation errors made by NF1 adults may be regarded as residual articulation disorders. It can be concluded that the speech of NF1 patients is characterized by mild articulation disorders at an age where this is no longer expected. Readers will be able to describe neurofibromatosis type 1 (NF1) and explain the articulation errors displayed by schoolchildren and adults with this genetic syndrome. © 2011 Elsevier Inc. All rights reserved.

  17. Inverse problem of HIV cell dynamics using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    González, J. A.; Guzmán, F. S.

    2017-01-01

    In order to describe the cell dynamics of T-cells in a patient infected with HIV, we use a flavour of Perelson's model. This is a non-linear system of Ordinary Differential Equations that describes the evolution of the concentrations of healthy, latently infected, and infected T-cells and of the free virus. Different parameters in the equations give different dynamics. Given that the concentrations of these types of cells are known for a particular patient, the inverse problem consists in estimating the parameters in the model. We solve this inverse problem using a Genetic Algorithm (GA) that minimizes the error between the solutions of the model and the data from the patient. These errors depend on the parameters of the GA, like mutation rate and population size, although a detailed analysis of this dependence will be described elsewhere.
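
    A minimal sketch of the inverse problem described above: a Perelson-type ODE model of T-cell/virus dynamics is fitted to synthetic patient data by an evolutionary search. SciPy's differential_evolution is used here as a readily available stand-in for the paper's genetic algorithm; the reduced model, rates, bounds and noise level are illustrative assumptions only.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import differential_evolution

        def model(t, y, k, delta):
            """Reduced dynamics: healthy T cells (T), infected cells (I), free virus (V)."""
            T, I, V = y
            lam, d, p, c = 10.0, 0.01, 100.0, 3.0          # fixed rates for the sketch
            return [lam - d * T - k * T * V,
                    k * T * V - delta * I,
                    p * I - c * V]

        t_obs = np.linspace(0.0, 30.0, 31)
        y0 = [1000.0, 0.0, 1e-3]

        def simulate(k, delta):
            sol = solve_ivp(model, (0.0, 30.0), y0, t_eval=t_obs, args=(k, delta), rtol=1e-6)
            return sol.y

        true = simulate(2e-4, 0.3)
        data = true * (1.0 + 0.05 * np.random.default_rng(0).standard_normal(true.shape))

        def cost(params):
            sim = simulate(*params)
            if sim.shape != data.shape:                    # integration failure -> penalize
                return 1e6
            return np.mean((np.log1p(np.maximum(sim, 0.0)) - np.log1p(data)) ** 2)

        result = differential_evolution(cost, bounds=[(1e-5, 1e-3), (0.05, 1.0)],
                                        seed=1, maxiter=30, tol=1e-8)
        print("recovered (k, delta):", result.x)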

  18. Feasibility, strategy, methodology, and analysis of probe measurements in plasma under high gas pressure

    NASA Astrophysics Data System (ADS)

    Demidov, V. I.; Koepke, M. E.; Kurlyandskaya, I. P.; Malkov, M. A.

    2018-02-01

    This paper reviews existing theories for interpreting probe measurements of electron distribution functions (EDF) at high gas pressure, when collisions of electrons with atoms and/or molecules near the probe are pervasive. An explanation of whether or not the measurements are realizable and reliable, an enumeration of the most common sources of measurement error, and an outline of proper probe-experiment design elements that inherently limit or avoid error are presented. Additionally, we describe recently expanded plasma-condition compatibility for EDF measurement, including in applications of large wall probe plasma diagnostics. This summary of the authors’ experiences gained over decades of practicing and developing probe diagnostics is intended to inform, guide, suggest, and detail the advantages and disadvantages of probe application in plasma research.

  19. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of the length error by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
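
    A minimal sketch of the vectorial composition idea described above: a per-axis length error (here a simple scale error per axis) is composed into the error of a measured point-to-point length by projecting it on the measurement direction. The per-axis coefficients are illustrative assumptions, not calibrated CMM values.

        import numpy as np

        scale_error = np.array([8e-6, 5e-6, 12e-6])      # axis scale errors (m per m of travel)

        def length_error(p_start, p_end):
            """First-order error of the measured length |p_end - p_start|."""
            delta = np.asarray(p_end, float) - np.asarray(p_start, float)
            per_axis = scale_error * delta               # error vector composed by axis
            return per_axis @ delta / np.linalg.norm(delta)   # projection on the line direction

        print(f"{length_error([0, 0, 0], [0.3, 0.2, 0.1]) * 1e6:.2f} um")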

  20. A novel design of membrane mirror with small deformation and imaging performance analysis in infrared system

    NASA Astrophysics Data System (ADS)

    Zhang, Shuqing; Wang, Yongquan; Zhi, Xiyang

    2017-05-01

    A method for diminishing the shape error of a membrane mirror is proposed in this paper. The inner inflating pressure is considerably decreased by adopting a pre-shaped membrane. Small deformation of the membrane mirror, with greatly reduced shape error, is thereby achieved. First, a finite element model of the pre-shaped membrane is built on the basis of its mechanical properties. Then accurate shape data under different pressures are acquired by iteratively calculating the node displacements of the model. The shape data are used to build up deformed reflecting surfaces for the simulative analysis in ZEMAX. Finally, ground-based imaging experiments with four-bar targets and a natural scene are conducted. Experimental results indicate that the MTF of the infrared system can reach 0.3 at a high spatial resolution of 10 lp/mm, and texture details of the natural scene are well presented. The method can provide a theoretical basis and technical support for applications in lightweight optical components with ultra-large apertures.

  1. Application of parameter estimation to aircraft stability and control: The output-error approach

    NASA Technical Reports Server (NTRS)

    Maine, Richard E.; Iliff, Kenneth W.

    1986-01-01

    The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.
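
    A minimal sketch of the output-error idea discussed above, applied to a toy first-order roll-response model p_dot = L_p*p + L_da*delta_a: the model is integrated for candidate derivatives and the parameters are chosen to minimize the squared error between measured and simulated output. Model form, inputs and values are illustrative, not aircraft data from the report.

        import numpy as np
        from scipy.optimize import minimize

        dt, n = 0.02, 500
        t = np.arange(n) * dt
        delta_a = np.where((t > 1.0) & (t < 3.0), 0.05, 0.0)      # aileron pulse input (rad)

        def simulate(L_p, L_da):
            p = np.zeros(n)
            for k in range(n - 1):                                # forward-Euler integration
                p[k + 1] = p[k] + dt * (L_p * p[k] + L_da * delta_a[k])
            return p

        rng = np.random.default_rng(0)
        measured = simulate(-2.0, 15.0) + rng.normal(scale=0.002, size=n)   # "flight" data

        def output_error(theta):
            return np.sum((measured - simulate(*theta)) ** 2)

        fit = minimize(output_error, x0=[-1.0, 10.0], method="Nelder-Mead")
        print("estimated (L_p, L_da):", fit.x)                    # close to (-2, 15)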

  2. Modeling and characterization of multipath in global navigation satellite system ranging signals

    NASA Astrophysics Data System (ADS)

    Weiss, Jan Peter

    The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the Earth in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced systematic errors affecting GPS measurements such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates. The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.

  3. Practical considerations for obtaining high quality quantitative computed tomography data of the skeletal system.

    PubMed

    Troy, Karen L; Edwards, W Brent

    2018-05-01

    Quantitative CT (QCT) analysis involves the calculation of specific parameters such as bone volume and density from CT image data, and can be a powerful tool for understanding bone quality and quantity. However, without careful attention to detail during all steps of the acquisition and analysis process, data can be of poor to unusable quality. Good quality QCT for research requires meticulous attention to detail and standardization of all aspects of data collection and analysis to a degree that is uncommon in a clinical setting. Here, we review the literature to summarize practical and technical considerations for obtaining high quality QCT data, and provide examples of how each recommendation affects calculated variables. We also provide an overview of the QCT analysis technique to illustrate additional opportunities to improve data reproducibility and reliability. Key recommendations include: standardizing the scanner and data acquisition settings, minimizing image artifacts, selecting an appropriate reconstruction algorithm, and maximizing repeatability and objectivity during QCT analysis. The goal of the recommendations is to reduce potential sources of error throughout the analysis, from scan acquisition to the interpretation of results. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Chemistry of groundwater discharge inferred from longitudinal river sampling

    NASA Astrophysics Data System (ADS)

    Batlle-Aguilar, J.; Harrington, G. A.; Leblanc, M.; Welch, C.; Cook, P. G.

    2014-02-01

    We present an approach for identifying groundwater discharge chemistry and quantifying spatially distributed groundwater discharge into rivers based on longitudinal synoptic sampling and flow gauging of a river. The method is demonstrated using a 450 km reach of a tropical river in Australia. Results obtained from sampling for environmental tracers, major ions, and selected trace element chemistry were used to calibrate a steady state one-dimensional advective transport model of tracer distribution along the river. The model closely reproduced river discharge and environmental tracer and chemistry composition along the study length. It provided a detailed longitudinal profile of groundwater inflow chemistry and discharge rates, revealing that regional fractured mudstones in the central part of the catchment contributed up to 40% of all groundwater discharge. Detailed analysis of model calibration errors and modeled/measured groundwater ion ratios elucidated that groundwater discharging in the top of the catchment is a mixture of local groundwater and bank storage return flow, making the method potentially useful to differentiate between local and regional sourced groundwater discharge. As the error in tracer concentration induced by a flow event applies equally to any conservative tracer, we show that major ion ratios can still be resolved with minimal error when river samples are collected during transient flow conditions. The ability of the method to infer groundwater inflow chemistry from longitudinal river sampling is particularly attractive in remote areas where access to groundwater is limited or not possible, and for identification of actual fluxes of salts and/or specific contaminant sources.
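
    A minimal sketch of the kind of 1-D steady-state mass balance such a river model solves cell by cell: distributed groundwater inflow q_gw mixes its end-member tracer concentration into the in-river concentration as flow accumulates downstream. All discharge, inflow and tracer values are illustrative, not calibrated values for the study reach.

        def next_cell(Q_up, c_up, q_gw, c_gw, dx):
            """Discharge and conservative-tracer concentration at the downstream end of a
            reach of length dx receiving distributed groundwater inflow q_gw (m3/s per m)."""
            Q_down = Q_up + q_gw * dx
            c_down = (Q_up * c_up + q_gw * dx * c_gw) / Q_down
            return Q_down, c_down

        Q, c = 5.0, 200.0                      # upstream discharge (m3/s) and tracer (e.g. Cl-, mg/L)
        for _ in range(45):                    # 45 cells of 10 km ~ 450 km study reach
            Q, c = next_cell(Q, c, q_gw=2e-5, c_gw=900.0, dx=10_000.0)
        print(f"downstream: Q = {Q:.1f} m3/s, tracer = {c:.0f} mg/L")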

  5. Detailed analysis of the effects of stencil spatial variations with arbitrary high-order finite-difference Maxwell solver

    DOE PAGES

    Vincenti, H.; Vay, J. -L.

    2015-11-22

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of an arbitrary-order Maxwell solver with a domain decomposition technique that may, under some conditions, involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.

  6. Methods for the computation of detailed geoids and their accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.; Rummel, R.

    1975-01-01

    Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.

  7. Reinforcement and validation of the analyses and conclusions related to fishway evaluation data from Bunt et al.: ‘Performance of fish passage structures at upstream barriers to migration’

    USGS Publications Warehouse

    Bunt, C.M.; Castro-Santos, Theodore R.; Haro, Alexander

    2016-01-01

    Detailed re-examination of the datasets that were used for a meta-analysis of fishway attraction and passage revealed a number of errors that we addressed and corrected. We subsequently re-analysed the revised dataset, and results showed no significant changes in the primary conclusions of the original study; for most species, effective performance cannot be assured for any fishway type.

  8. An investigation of the energy balance of solar active regions using the ACRIM irradiance data

    NASA Technical Reports Server (NTRS)

    Petro, L. D.

    1986-01-01

    The detection of a significant correlation between the solar irradiance, corrected for the flux deficit due to sunspots, and both the 205 nm flux and a photometric facular index was examined. A detailed analysis supports facular emission as the more likely source of the correlation with the corrected irradiance, rather than error in the sunspot correction. A computer program which simulates two-dimensional convection in a compressible, stratified medium was investigated. Subroutines to calculate ionization and other thermodynamic variables were also completed.

  9. An investigation into pilot and system response to critical in-flight events, volume 1

    NASA Technical Reports Server (NTRS)

    Rockwell, T. H.; Giffin, W. C.

    1981-01-01

    The scope of a critical in-flight event (CIFE) with emphasis on pilot management of available resources is described. Detailed scenarios for both full-mission simulation and written testing of pilot responses to CIFEs, and statistical relationships among pilot characteristics and observed responses, are developed. A model developed to describe pilot response to CIFEs, and an analysis of professional flight crews' compliance with specified operating procedures and its relationship with in-flight errors, are included.

  10. Analytical redundancy management mechanization and flight data analysis for the F-8 digital fly-by-wire aircraft flight control sensors

    NASA Technical Reports Server (NTRS)

    Deckert, J. C.

    1983-01-01

    The details are presented of an onboard digital computer algorithm designed to reliably detect and isolate the first failure in a duplex set of flight control sensors aboard the NASA F-8 digital fly-by-wire aircraft. The algorithm's successful flight test program is summarized, and specific examples are presented of algorithm behavior in response to software-induced signal faults, both with and without aircraft parameter modeling errors.

  11. Mapping transmission risk of Lassa fever in West Africa: the importance of quality control, sampling bias, and error weighting.

    PubMed

    Peterson, A Townsend; Moses, Lina M; Bausch, Daniel G

    2014-01-01

    Lassa fever is a disease that has been reported from sites across West Africa; it is caused by an arenavirus that is hosted by the rodent M. natalensis. Although it is confined to West Africa, and has been documented in detail in some well-studied areas, the details of the distribution of risk of Lassa virus infection remain poorly known at the level of the broader region. In this paper, we explored the effects of certainty of diagnosis, oversampling in well-studied regions, and error balance on the results of mapping exercises. Each of the three factors assessed in this study had clear and consistent influences on model results, overestimating risk in southern, humid zones in West Africa, and underestimating risk in drier and more northern areas. The final, adjusted risk map indicates broad risk areas across much of West Africa. Although risk maps are increasingly easy to develop from disease occurrence data and raster data sets summarizing aspects of environments and landscapes, this process is highly sensitive to issues of data quality, sampling design, and design of analysis, with macrogeographic implications of each of these issues and the potential for misrepresenting real patterns of risk.

  12. Initial Results from On-Orbit Testing of the Fram Memory Test Experiment on the Fastsat Micro-Satellite

    NASA Technical Reports Server (NTRS)

    MacLeond, Todd C.; Sims, W. Herb; Varnavas, Kosta A.; Ho, Fat D.

    2011-01-01

    The Memory Test Experiment is a space test of a ferroelectric memory device on a low Earth orbit satellite that launched in November 2010. The memory device being tested is a commercial Ramtron Inc. 512K memory device. The circuit was designed into the satellite avionics and is not used to control the satellite. The test consists of writing and reading data with the ferroelectric based memory device. Any errors are detected and are stored on board the satellite. The data is sent to the ground through telemetry once a day. Analysis of the data can determine the kind of error that was found and will lead to a better understanding of the effects of space radiation on memory systems. The test is one of the first flight demonstrations of ferroelectric memory in a near polar orbit which allows testing in a varied radiation environment. The initial data from the test is presented. This paper details the goals and purpose of this experiment as well as the development process. The process for analyzing the data to gain the maximum understanding of the performance of the ferroelectric memory device is detailed.
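    A minimal Python sketch of the kind of write/read/compare test described above; the device interface (write_word/read_word), test patterns, and log format are illustrative assumptions, not the actual flight software:

    ```python
    # Hypothetical sketch of a pattern-based memory test with error logging.
    TEST_PATTERNS = [0x0000, 0xFFFF, 0xAAAA, 0x5555]  # all-0, all-1, alternating bits

    def run_memory_test(fram, n_words, log):
        """Write each pattern to every word, read it back, and record mismatches."""
        errors = 0
        for pattern in TEST_PATTERNS:
            for addr in range(n_words):
                fram.write_word(addr, pattern)       # assumed device interface
            for addr in range(n_words):
                value = fram.read_word(addr)
                if value != pattern:
                    # Record where the error occurred and which bits flipped,
                    # so later analysis can classify the kind of upset.
                    log.append({"addr": addr,
                                "expected": pattern,
                                "observed": value,
                                "flipped_bits": bin(value ^ pattern)})
                    errors += 1
        return errors
    ```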

  13. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow hides low-level implementation details of the Grid and hence enables users to focus on higher levels of application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of workflow is important in order to support the large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To research the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.

  14. A NEW SYSTEM TO MONITOR DATA ANALYSES AND RESULTS OF PHYSICS DATA VALIDATION BETWEEN PULSES AT DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FLANAGAN,A; SCHACHTER,J.M; SCHISSEL,D.P

    2003-02-01

    A Data Analysis Monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This new system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, CLIPS to implement expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse.

  15. System to monitor data analyses and results of physics data validation between pulses at DIII-D

    NASA Astrophysics Data System (ADS)

    Flanagan, S.; Schachter, J. M.; Schissel, D. P.

    2004-06-01

    A data analysis monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, C Language Integrated Production System to implement expert system logic, and displays its results to multiple web clients via Hypertext Markup Language. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse.
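    A minimal Python sketch of the kind of between-pulse consistency check described in both records above; the relative tolerance and return format are illustrative assumptions:

    ```python
    def check_neutron_consistency(measured_rate, expected_rate, rel_tol=0.3):
        """Flag a pulse when the measured and expected neutron rates disagree by
        more than the chosen relative tolerance (tolerance value is illustrative)."""
        if expected_rate == 0:
            return {"status": "skip", "reason": "expected rate is zero"}
        rel_diff = abs(measured_rate - expected_rate) / abs(expected_rate)
        status = "ok" if rel_diff <= rel_tol else "inconsistent"
        return {"status": status, "relative_difference": rel_diff}
    ```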

  16. Devil in the details? Developmental dyslexia and visual long-term memory for details.

    PubMed

    Huestegge, Lynn; Rohrßen, Julia; van Ermingen-Marbach, Muna; Pape-Neumann, Julia; Heim, Stefan

    2014-01-01

    Cognitive theories on causes of developmental dyslexia can be divided into language-specific and general accounts. While the former assume that words are special in that associated processing problems are rooted in language-related cognition (e.g., phonology) deficits, the latter propose that dyslexia is rather rooted in a general impairment of cognitive (e.g., visual and/or auditory) processing streams. In the present study, we examined to what extent dyslexia (typically characterized by poor orthographic representations) may be associated with a general deficit in visual long-term memory (LTM) for details. We compared object- and detail-related visual LTM performance (and phonological skills) between dyslexic primary school children and IQ-, age-, and gender-matched controls. The results revealed that while the overall amount of LTM errors was comparable between groups, dyslexic children exhibited a greater portion of detail-related errors. The results suggest that not only phonological, but also general visual resolution deficits in LTM may play an important role in developmental dyslexia.

  17. Quantification of the Uncertainties for the Ares I A106 Ascent Aerodynamic Database

    NASA Technical Reports Server (NTRS)

    Houlden, Heather P.; Favaregh, Amber L.

    2010-01-01

    A detailed description of the quantification of uncertainties for the Ares I ascent aero 6-DOF wind tunnel database is presented. The database was constructed from wind tunnel test data and CFD results. The experimental data came from tests conducted in the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The major sources of error for this database were: experimental error (repeatability), database modeling errors, and database interpolation errors.
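    If the three error sources listed above are treated as independent, one common way to combine them into a total database uncertainty is a root-sum-square; this generic form is given for illustration, and the report defines its own combination:

    \[
    \sigma_{\text{total}} \;=\; \sqrt{\sigma_{\text{exp}}^{2} + \sigma_{\text{model}}^{2} + \sigma_{\text{interp}}^{2}} .
    \]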

  18. Using cell phone location to assess misclassification errors in air pollution exposure estimation.

    PubMed

    Yu, Haofei; Russell, Armistead; Mulholland, James; Huang, Zhijiong

    2018-02-01

    Air pollution epidemiologic and health impact studies often rely on home addresses to estimate individual subjects' pollution exposure. In this study, we used detailed cell phone location data, the call detail record (CDR), to account for the impact of spatiotemporal subject mobility on estimates of ambient air pollutant exposure. This approach was applied to a sample of 9886 unique simcard IDs in Shenzhen, China, on one mid-week day in October 2013. Hourly ambient concentrations of six chosen pollutants were simulated by the Community Multi-scale Air Quality model fused with observational data, and matched with detailed location data for these IDs. The results were compared with exposure estimates using home addresses to assess potential exposure misclassification errors. We found that the misclassification errors are likely to be substantial when home location alone is used. The CDR-based approach indicates that the home-based approach tends to over-estimate exposures for subjects with higher exposure levels and under-estimate exposures for those with lower exposure levels. Our results show that the cell phone location based approach can be used to assess exposure misclassification error and has the potential for improving exposure estimates in air pollution epidemiology studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
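    A minimal Python sketch of the mobility-based versus home-based exposure comparison described above; the data structures (hourly cell IDs per subject and a concentration lookup keyed by cell and hour) are assumptions for illustration:

    ```python
    import numpy as np

    def mobility_exposure(hourly_cells, conc_by_cell_hour):
        """Mean exposure over the day, following the subject's hourly locations."""
        return np.mean([conc_by_cell_hour[(cell, hour)]
                        for hour, cell in enumerate(hourly_cells)])

    def home_exposure(home_cell, conc_by_cell_hour, n_hours=24):
        """Mean exposure assuming the subject stays at the home location all day."""
        return np.mean([conc_by_cell_hour[(home_cell, hour)] for hour in range(n_hours)])

    # Misclassification error for one subject is then simply
    #   home_exposure(home_cell, conc) - mobility_exposure(hourly_cells, conc)
    ```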

  19. A median filter approach for correcting errors in a vector field

    NASA Technical Reports Server (NTRS)

    Schultz, H.

    1985-01-01

    Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
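    A minimal Python sketch of a median-filter outlier check and replacement for a gridded wind (vector) field; the window size and threshold are illustrative assumptions rather than the NSCAT settings:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def correct_vector_field(u, v, threshold=3.0):
        """Replace vector components that deviate strongly from the local median."""
        u_med = median_filter(u, size=3)
        v_med = median_filter(v, size=3)
        # Flag cells whose deviation from the local median exceeds 'threshold'
        # standard deviations of the respective deviation field.
        bad = (np.abs(u - u_med) > threshold * np.std(u - u_med)) | \
              (np.abs(v - v_med) > threshold * np.std(v - v_med))
        u_out, v_out = u.copy(), v.copy()
        u_out[bad], v_out[bad] = u_med[bad], v_med[bad]
        return u_out, v_out, bad
    ```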

  20. Groundwater flow in the transition zone between freshwater and saltwater: a field-based study and analysis of measurement errors

    NASA Astrophysics Data System (ADS)

    Post, Vincent E. A.; Banks, Eddie; Brunke, Miriam

    2018-02-01

    The quantification of groundwater flow near the freshwater-saltwater transition zone at the coast is difficult because of variable-density effects and tidal dynamics. Head measurements were collected along a transect perpendicular to the shoreline at a site south of the city of Adelaide, South Australia, to determine the transient flow pattern. This paper presents a detailed overview of the measurement procedure, data post-processing methods and uncertainty analysis in order to assess how measurement errors affect the accuracy of the inferred flow patterns. A particular difficulty encountered was that some of the piezometers were leaky, which necessitated regular measurements of the electrical conductivity and temperature of the water inside the wells to correct for density effects. Other difficulties included failure of pressure transducers, data logger clock drift and operator error. The data obtained were sufficiently accurate to show that there is net seaward horizontal flow of freshwater in the top part of the aquifer, and a net landward flow of saltwater in the lower part. The vertical flow direction alternated with the tide, but due to the large uncertainty of the head gradients and density terms, no net flow could be established with any degree of confidence. While the measurement problems were amplified under the prevailing conditions at the site, similar errors can lead to large uncertainties everywhere. The methodology outlined acknowledges the inherent uncertainty involved in measuring groundwater flow. It can also assist to establish the accuracy requirements of the experimental setup.

  1. Qualitative and quantitative assessment of Illumina's forensic STR and SNP kits on MiSeq FGx™.

    PubMed

    Sharma, Vishakha; Chow, Hoi Yan; Siegel, Donald; Wurmbach, Elisa

    2017-01-01

    Massively parallel sequencing (MPS) is a powerful tool transforming DNA analysis in multiple fields ranging from medicine, to environmental science, to evolutionary biology. In forensic applications, MPS offers the ability to significantly increase the discriminatory power of human identification as well as aid in mixture deconvolution. However, before the benefits of any new technology can be realized, its quality, consistency, sensitivity, and specificity must be rigorously evaluated in order to gain a detailed understanding of the technique, including sources of error, error rates, and other restrictions/limitations. This extensive study assessed the performance of Illumina's MiSeq FGx MPS system and ForenSeq™ kit in nine experimental runs including 314 reaction samples. In-depth data analysis evaluated the consequences of different assay conditions on test results. Variables included: sample numbers per run, targets per run, DNA input per sample, and replications. Results are presented as heat maps revealing patterns for each locus. Data analysis focused on read numbers (allele coverage), drop-outs, drop-ins, and sequence analysis. The study revealed that loci with high read numbers performed better and resulted in fewer drop-outs and well-balanced heterozygous alleles. Several loci were prone to drop-outs, which led to falsely typed homozygotes and therefore to genotype errors. Sequence analysis of allele drop-in typically revealed a single nucleotide change (deletion, insertion, or substitution). Analyses of sequences, no-template controls, and spurious alleles suggest no contamination during library preparation, pooling, and sequencing, but indicate that sequencing or PCR errors may have occurred due to DNA polymerase infidelities. Finally, we found that utilizing Illumina's FGx System at recommended conditions does not guarantee successful outcomes for all samples tested, including the positive control, and required manual editing due to low read numbers and/or allele drop-in. These findings are important for progressing towards implementation of MPS in forensic DNA testing.

  2. GPS Data Filtration Method for Drive Cycle Analysis Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, A.; Earleywine, M.

    2013-02-01

    When employing GPS data acquisition systems to capture vehicle drive-cycle information, a number of errors often appear in the raw data samples, such as sudden signal loss, extraneous or outlying data points, speed drifting, and signal white noise, all of which limit the quality of field data for use in downstream applications. Unaddressed, these errors significantly impact the reliability of source data and limit the effectiveness of traditional drive-cycle analysis approaches and vehicle simulation software. Without reliable speed and time information, the validity of derived metrics for drive cycles, such as acceleration, power, and distance, becomes questionable. This study explores some of the common sources of error present in raw onboard GPS data and presents a detailed filtering process designed to correct for these issues. Test data from both light and medium/heavy duty applications are examined to illustrate the effectiveness of the proposed filtration process across the range of vehicle vocations. Graphical comparisons of raw and filtered cycles are presented, and statistical analyses are performed to determine the effects of the proposed filtration process on raw data. Finally, an evaluation of the overall benefits of data filtration on raw GPS data is presented, along with potential areas for continued research.
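    A minimal Python sketch of the kind of speed-trace cleaning described above (outlier removal, gap filling, and smoothing); the thresholds, implied 1 Hz sampling, and smoothing window are illustrative assumptions, not the report's filtration process:

    ```python
    import numpy as np

    def filter_speed_trace(speed_mph, max_accel_mph_s=15.0, window=5):
        """Drop implausible jumps, interpolate the gaps, then smooth white noise."""
        speed = np.asarray(speed_mph, dtype=float)
        accel = np.diff(speed, prepend=speed[0])           # per-sample speed change
        speed[np.abs(accel) > max_accel_mph_s] = np.nan    # outliers / signal dropouts
        idx = np.arange(len(speed))
        good = ~np.isnan(speed)
        speed = np.interp(idx, idx[good], speed[good])     # fill gaps by interpolation
        kernel = np.ones(window) / window
        return np.convolve(speed, kernel, mode="same")     # moving-average smoothing
    ```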

  3. Measurement of differential cross section of D(3He,p)4He from 0.8 MeV to 3.6 MeV

    NASA Astrophysics Data System (ADS)

    Zhu, J. P.; Xiao, X.; Yan, S.; Gao, Y.; Xue, J. M.; Wang, Y. G.

    2017-12-01

    Precise knowledge of nuclear reaction cross-sections is crucial for nuclear reaction analysis methods and their applications. In order to apply nuclear reaction analysis methods to Plasma Facing Materials studies on the 4.5 MV electrostatic accelerator at Peking University, the differential cross-section for d(3He,p)α at several backward angles was measured with a relative error of about ±6.2%, giving detailed information at the laboratory angle of 135° from 800 keV to 3600 keV, as well as a rough angular distribution from 130° to 160°.

  4. Snow parameters from Nimbus-6 electrically scanned microwave radiometer. [(ESMR-6)

    NASA Technical Reports Server (NTRS)

    Abrams, G.; Edgerton, A. T.

    1977-01-01

    Two sites in Canada were selected for detailed analysis of the ESMR-6/snow relationships. Data were analyzed for February 1976 for site 1 and January, February and March 1976 for site 2. Snowpack water equivalents were less than 4.5 inches for site 1 and, depending on the month, were between 2.9 and 14.5 inches for site 2. A statistically significant relationship was found between ESMR-6 measurements and snowpack water equivalents for the site 2 February and March data. Associated analysis findings presented are the effects of random measurement errors, snow site physiography, and weather conditions on the ESMR-6/snow relationship.

  5. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  6. Automated drug dispensing system reduces medication errors in an intensive care setting.

    PubMed

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

    We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.

  7. Medication Errors in Pediatric Anesthesia: A Report From the Wake Up Safe Quality Improvement Initiative.

    PubMed

    Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S

    2017-09-01

    Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, which comprised the third highest category of events behind cardiac and respiratory related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions as opposed to 1 time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient and more than half of these events caused patient harm. Fifteen events (5%) required a life sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable. Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors will be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.

  8. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and operational variational data assimilation system to provide the basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions and allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit of using ensembles for cloudy radiance assimilation in an EnVar context.
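    The Huber treatment of the normalized innovation mentioned above typically takes the following form (δ is the transition threshold; the exact formulation used in the presented system is not stated in the abstract):

    \[
    \rho(d) \;=\;
    \begin{cases}
    \tfrac{1}{2}\,d^{2}, & |d| \le \delta,\\[4pt]
    \delta\,|d| - \tfrac{1}{2}\,\delta^{2}, & |d| > \delta,
    \end{cases}
    \qquad d = \frac{y - H(x)}{\sigma_o},
    \]

    so that large innovations are penalized linearly rather than quadratically, keeping fat-tailed observations in the analysis without letting them dominate the cost function.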

  9. Underlying risk factors for prescribing errors in long-term aged care: a qualitative study.

    PubMed

    Tariq, Amina; Georgiou, Andrew; Raban, Magdalena; Baysari, Melissa Therese; Westbrook, Johanna

    2016-09-01

    To identify system-related risk factors perceived to contribute to prescribing errors in Australian long-term care settings, that is, residential aged care facilities (RACFs). The study used qualitative methods to explore factors that contribute to unsafe prescribing in RACFs. Data were collected at three RACFs in metropolitan Sydney, Australia between May and November 2011. Participants included RACF managers, doctors, pharmacists and RACF staff actively involved in prescribing-related processes. Methods included non-participant observations (74 h), in-depth semistructured interviews (n=25) and artefact analysis. Detailed process activity models were developed for observed prescribing episodes supplemented by triangulated analysis using content analysis methods. System-related factors perceived to increase the risk of prescribing errors in RACFs were classified into three overarching themes: communication systems, team coordination and staff management. Factors associated with communication systems included limited point-of-care access to information, inadequate handovers, information storage across different media (paper, electronic and memory), poor legibility of charts, information double handling, multiple faxing of medication charts and reliance on manual chart reviews. Team factors included lack of established lines of responsibility, inadequate team communication and limited participation of doctors in multidisciplinary initiatives like medication advisory committee meetings. Factors related to staff management and workload included doctors' time constraints and their accessibility, lack of trained RACF staff and high RACF staff turnover. The study highlights several system-related factors including laborious methods for exchanging medication information, which often act together to contribute to prescribing errors. Multiple interventions (eg, technology systems, team communication protocols) are required to support the collaborative nature of RACF prescribing. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  10. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.
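    A minimal Python sketch of the per-pixel error estimate obtained from the analysis ensemble; the array layout is an assumption for illustration:

    ```python
    import numpy as np

    def pixelwise_reflectivity_error(analysis_ensemble):
        """Ensemble spread (standard deviation across members) as a flow-dependent,
        per-pixel reflectivity error estimate; input shape (n_members, ny, nx)."""
        return np.std(analysis_ensemble, axis=0, ddof=1)
    ```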

  11. Effects of stinger axial dynamics and mass compensation methods on experimental modal analysis

    NASA Astrophysics Data System (ADS)

    Hu, Ximing

    1992-06-01

    A longitudinal bar model that includes both stinger elastic and inertia properties is used to analyze the stinger's axial dynamics as well as the mass compensation that is required to obtain accurate input forces when a stinger is installed between the excitation source, force transducer, and the structure under test. Stinger motion transmissibility and force transmissibility, axial resonance, and excitation energy transfer problems are discussed in detail. Stinger mass compensation problems occur when the force transducer is mounted on the exciter end of the stinger. These problems are studied theoretically, numerically, and experimentally. It is found that the measured Frequency Response Function (FRF) can be underestimated if mass compensation is based on the stinger exciter-end acceleration and can be overestimated if the mass compensation is based on the structure-end acceleration, due to the stinger's compliance. A new mass compensation method that is based on two accelerations is introduced and is seen to improve the accuracy considerably. The effects of the force transducer's compliance on the mass compensation are also discussed. A theoretical model is developed that describes the measurement system's FRF around a test structure's resonance. The model shows that very large measurement errors occur when there is a small relative phase shift between the force and acceleration measurements. These errors can be in the hundreds of percent, corresponding to a phase error on the order of one or two degrees. The physical reasons for this unexpected error pattern are explained. This error is currently unknown to the experimental modal analysis community. Two sample structures consisting of a rigid mass and a double cantilever beam are used in the numerical calculations and experiments.
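    In schematic form, mass compensation subtracts the inertial force of the mass between the transducer sensing plane and the structure; one simple way to make use of the two accelerations mentioned above is to average them. The averaging shown here is an illustrative assumption, not necessarily the author's formulation:

    \[
    F_{\text{structure}} \;\approx\; F_{\text{measured}} \;-\; m_{s}\,\bar{a},
    \qquad
    \bar{a} \;=\; \tfrac{1}{2}\bigl(a_{\text{exciter end}} + a_{\text{structure end}}\bigr),
    \]

    where \(m_{s}\) is the compensated (stinger-side) mass, so the compensation is biased neither toward the exciter-end nor the structure-end motion when the stinger is compliant.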

  12. Comparison of the clinical information provided by the FreeStyle Navigator continuous interstitial glucose monitor versus traditional blood glucose readings.

    PubMed

    McGarraugh, Geoffrey V; Clarke, William L; Kovatchev, Boris P

    2010-05-01

    The purpose of the analysis was to compare the clinical utility of data from traditional self-monitoring of blood glucose (SMBG) to that of continuous glucose monitoring (CGM). A clinical study of the clinical accuracy of the FreeStyle Navigator CGM System (Abbott Diabetes Care, Alameda, CA), which includes SMBG capabilities, was conducted by comparison to the YSI blood glucose analyzer (YSI Inc., Yellow Springs, OH) using 58 subjects with type 1 diabetes. The Continuous Glucose-Error Grid Analysis (CG-EGA) was used as the analytical tool. Using CG-EGA, the "clinically accurate," "benign errors," and "clinical errors" were 86.8%, 8.7%, and 4.5% for SMBG and 92.7%, 3.7%, and 3.6% for CGM, respectively. If blood glucose is viewed as a process in time, SMBG would provide accurate information about this process 86.8% of the time, whereas CGM would provide accurate information about this process 92.7% of the time (P < 0.0001). In the hypoglycemic range, however, SMBG is more accurate as the "clinically accurate," "benign errors," and "clinical errors" were 83.5%, 6.4%, and 10.1% for SMBG and 57.1%, 8.4%, and 34.5% (P < 0.0001) for CGM, respectively. While SMBG produces more accurate instantaneous glucose values than CGM, control of blood glucose involves a system in flux, and CGM provides more detailed insight into the dynamics of that system. In the normal and elevated glucose ranges, the additional information about the direction and rate of glucose change provided by the FreeStyle Navigator CGM System increases the ability to make correct clinical decisions when compared to episodic SMBG tests.

  13. Response to "Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses".

    PubMed

    Zhu, Ling-Ling; Lv, Na; Zhou, Quan

    2016-12-01

    We read, with great interest, the study by Baldwin and Rodriguez (2016), which described the role of the verification nurse and details the verification process in identifying errors related to chemotherapy orders. We strongly agree with their findings that a verification nurse, collaborating closely with the prescribing physician, pharmacist, and treating nurse, can better identify errors and maintain safety during chemotherapy administration.

  14. Study on analysis from sources of error for Airborne LIDAR

    NASA Astrophysics Data System (ADS)

    Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.

    2016-11-01

    With the advancement of aerial photogrammetry, obtaining geo-spatial information of high spatial and temporal resolution provides a new technical means for Airborne LIDAR measurement techniques, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of Earth observation technology: mounted on an airborne platform, it emits and receives laser pulses to obtain high-precision, high-density three-dimensional coordinate point cloud data and intensity information. In this paper, we briefly describe airborne laser radar systems and analyze in detail some sources of error in Airborne LIDAR data, putting forward corresponding methods to avoid or eliminate them. Taking into account practical engineering applications, some recommendations are developed for these designs, which has crucial theoretical and practical significance in the field of Airborne LIDAR data processing.

  15. Evaluation of the CEAS model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and performance as indicated by the root mean square errors is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
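    A minimal Python sketch of the type of regression described above (yield on a trend term plus monthly weather variables); the variable construction and the simple linear trend are simplified assumptions, not the CEAS specification:

    ```python
    import numpy as np

    def fit_yield_model(year, monthly_temp, monthly_precip, yield_obs):
        """Ordinary least squares of yield on a linear trend plus monthly weather terms.
        monthly_temp and monthly_precip have shape (n_years, n_months)."""
        trend = np.asarray(year, dtype=float) - np.min(year)
        X = np.column_stack([np.ones_like(trend), trend, monthly_temp, monthly_precip])
        coeffs, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)
        residuals = yield_obs - X @ coeffs
        rmse = np.sqrt(np.mean(residuals ** 2))          # in-sample fit quality
        return coeffs, rmse
    ```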

  16. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work/each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent minded" errors involving failures to execute action plans as intended.

  17. A neural network for real-time retrievals of PWV and LWP from Arctic millimeter-wave ground-based observations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadeddu, M. P.; Turner, D. D.; Liljegren, J. C.

    2009-07-01

    This paper presents a new neural network (NN) algorithm for real-time retrievals of low amounts of precipitable water vapor (PWV) and integrated liquid water from millimeter-wave ground-based observations. Measurements are collected by the 183.3-GHz G-band vapor radiometer (GVR) operating at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility, Barrow, AK. The NN provides the means to explore the nonlinear regime of the measurements and investigate the physical boundaries of the operability of the instrument. A methodology to compute individual error bars associated with the NN output is developed, and a detailed error analysis of the network output is provided. Through the error analysis, it is possible to isolate several components contributing to the overall retrieval errors and to analyze the dependence of the errors on the inputs. The network outputs and associated errors are then compared with results from a physical retrieval and with the ARM two-channel microwave radiometer (MWR) statistical retrieval. When the NN is trained with a seasonal training data set, the retrievals of water vapor yield results that are comparable to those obtained from a traditional physical retrieval, with a retrieval error percentage of approximately 5% when the PWV is between 2 and 10 mm, but with the advantages that the NN algorithm does not require vertical profiles of temperature and humidity as input and is significantly faster computationally. Liquid water path (LWP) retrievals from the NN have a significantly improved clear-sky bias (mean of approximately 2.4 g/m^2) and a retrieval error varying from 1 to about 10 g/m^2 when the PWV amount is between 1 and 10 mm. As an independent validation of the LWP retrieval, the longwave downwelling surface flux was computed and compared with observations. The comparison shows a significant improvement with respect to the MWR statistical retrievals, particularly for LWP amounts of less than 60 g/m^2.

  18. A Well-Calibrated Ocean Algorithm for Special Sensor Microwave/Imager

    NASA Technical Reports Server (NTRS)

    Wentz, Frank J.

    1997-01-01

    I describe an algorithm for retrieving geophysical parameters over the ocean from special sensor microwave/imager (SSM/I) observations. This algorithm is based on a model for the brightness temperature T(sub B) of the ocean and intervening atmosphere. The retrieved parameters are the near-surface wind speed W, the columnar water vapor V, the columnar cloud liquid water L, and the line-of-sight wind W(sub LS). I restrict my analysis to ocean scenes free of rain, and when the algorithm detects rain, the retrievals are discarded. The model and algorithm are precisely calibrated using a very large in situ database containing 37,650 SSM/I overpasses of buoys and 35,108 overpasses of radiosonde sites. A detailed error analysis indicates that the T(sub B) model rms accuracy is between 0.5 and 1 K and that the rms retrieval accuracies for wind, vapor, and cloud are 0.9 m/s, 1.2 mm, and 0.025 mm, respectively. The error in specifying the cloud temperature will introduce an additional 10% error in the cloud water retrieval. The spatial resolution for these accuracies is 50 km. The systematic errors in the retrievals are smaller than the rms errors, being about 0.3 m/s, 0.6 mm, and 0.005 mm for W, V, and L, respectively. The one exception is the systematic error in wind speed of -1.0 m/s that occurs for observations within +/-20 deg of upwind. The inclusion of the line-of-sight wind W(sub LS) in the retrieval significantly reduces the error in wind speed due to wind direction variations. The wind error for upwind observations is reduced from -3.0 to -1.0 m/s. Finally, I find a small signal in the 19-GHz, horizontal-polarization (h-pol) T(sub B) residual DeltaT(sub BH) that is related to the effective air pressure of the water vapor profile. This information may be of some use in specifying the vertical distribution of water vapor.

  19. Medication errors: a prospective cohort study of hand-written and computerised physician order entry in the intensive care unit.

    PubMed

    Shulman, Rob; Singer, Mervyn; Goldstone, John; Bellingan, Geoff

    2005-10-05

    The study aimed to compare the impact of computerised physician order entry (CPOE) without decision support with hand-written prescribing (HWP) on the frequency, type and outcome of medication errors (MEs) in the intensive care unit. Details of MEs were collected before, and at several time points after, the change from HWP to CPOE. The study was conducted in a London teaching hospital's 22-bedded general ICU. The sampling periods were 28 weeks before and 2, 10, 25 and 37 weeks after introduction of CPOE. The unit pharmacist prospectively recorded details of MEs and the total number of drugs prescribed daily during the data collection periods, during the course of his normal chart review. The total proportion of MEs was significantly lower with CPOE (117 errors from 2429 prescriptions, 4.8%) than with HWP (69 errors from 1036 prescriptions, 6.7%) (p < 0.04). The proportion of errors reduced with time following the introduction of CPOE (p < 0.001). Two errors with CPOE led to patient harm requiring an increase in length of stay and, if administered, three prescriptions with CPOE could potentially have led to permanent harm or death. Differences in the types of error between systems were noted. There was a reduction in major/moderate patient outcomes with CPOE when non-intercepted and intercepted errors were combined (p = 0.01). The mean baseline APACHE II score did not differ significantly between the HWP and the CPOE periods (19.4 versus 20.0, respectively, p = 0.71). Introduction of CPOE was associated with a reduction in the proportion of MEs and an improvement in the overall patient outcome score (if intercepted errors were included). Moderate and major errors, however, remain a significant concern with CPOE.

  20. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis

    PubMed Central

    2014-01-01

    Background The Health Information Technology for Economic and Clinical Health (HITECH) Act subsidizes implementation by hospitals of electronic health records with computerized provider order entry (CPOE), which may reduce patient injuries caused by medication errors (preventable adverse drug events, pADEs). Effects on pADEs have not been rigorously quantified, and effects on medication errors have been variable. The objectives of this analysis were to assess the effectiveness of CPOE at reducing pADEs in hospital-related settings, and examine reasons for heterogeneous effects on medication errors. Methods Articles were identified using MEDLINE, Cochrane Library, Econlit, web-based databases, and bibliographies of previous systematic reviews (September 2013). Eligible studies compared CPOE with paper-order entry in acute care hospitals, and examined diverse pADEs or medication errors. Studies on children or with limited event-detection methods were excluded. Two investigators extracted data on events and factors potentially associated with effectiveness. We used random effects models to pool data. Results Sixteen studies addressing medication errors met pooling criteria; six also addressed pADEs. Thirteen studies used pre-post designs. Compared with paper-order entry, CPOE was associated with half as many pADEs (pooled risk ratio (RR) = 0.47, 95% CI 0.31 to 0.71) and medication errors (RR = 0.46, 95% CI 0.35 to 0.60). Regarding reasons for heterogeneous effects on medication errors, five intervention factors and two contextual factors were sufficiently reported to support subgroup analyses or meta-regression. Differences between commercial versus homegrown systems, presence and sophistication of clinical decision support, hospital-wide versus limited implementation, and US versus non-US studies were not significant, nor was timing of publication. Higher baseline rates of medication errors predicted greater reductions (P < 0.001). Other context and implementation variables were seldom reported. Conclusions In hospital-related settings, implementing CPOE is associated with a greater than 50% decline in pADEs, although the studies used weak designs. Decreases in medication errors are similar and robust to variations in important aspects of intervention design and context. This suggests that CPOE implementation, as subsidized under the HITECH Act, may benefit public health. More detailed reporting of the context and process of implementation could shed light on factors associated with greater effectiveness. PMID:24894078
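    A minimal Python sketch of random-effects pooling of risk ratios on the log scale using the DerSimonian-Laird estimator, which is one common choice; the abstract does not state which estimator the review used:

    ```python
    import numpy as np

    def pool_log_risk_ratios(log_rr, se_log_rr):
        """DerSimonian-Laird random-effects pooled risk ratio and 95% CI."""
        log_rr, se = np.asarray(log_rr, float), np.asarray(se_log_rr, float)
        w = 1.0 / se**2                                   # fixed-effect weights
        fixed = np.sum(w * log_rr) / np.sum(w)
        q = np.sum(w * (log_rr - fixed) ** 2)             # Cochran's Q
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)      # between-study variance
        w_star = 1.0 / (se**2 + tau2)
        pooled = np.sum(w_star * log_rr) / np.sum(w_star)
        se_pooled = np.sqrt(1.0 / np.sum(w_star))
        ci = (np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled))
        return np.exp(pooled), ci
    ```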

  1. Teaching Statistics Online Using "Excel"

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2011-01-01

    As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Boram; Gupta, Rajan; Bhattacharya, Tanmoy

    We present a detailed analysis of methods to reduce statistical errors and excited-state contamination in the calculation of matrix elements of quark bilinear operators in nucleon states. All the calculations were done on a 2+1-flavor ensemble with lattices of size $32^3 \times 64$ generated using the rational hybrid Monte Carlo algorithm at $a = 0.081$ fm and with $M_\pi = 312$ MeV. The statistical precision of the data is improved using the all-mode-averaging method. We compare two methods for reducing excited-state contamination: a variational analysis and a two-state fit to data at multiple values of the source-sink separation $t_{\rm sep}$. We show that both methods can be tuned to significantly reduce excited-state contamination and discuss their relative advantages and cost-effectiveness. A detailed analysis of the size of source smearing used in the calculation of quark propagators and the range of values of $t_{\rm sep}$ needed to demonstrate convergence of the isovector charges of the nucleon to the $t_{\rm sep} \to \infty$ estimates is presented.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Boram; Gupta, Rajan; Bhattacharya, Tanmoy

    We present a detailed analysis of methods to reduce statistical errors and excited-state contamination in the calculation of matrix elements of quark bilinear operators in nucleon states. All the calculations were done on a 2+1-flavor ensemble with lattices of size 32^3 × 64 generated using the rational hybrid Monte Carlo algorithm at a = 0.081 fm and with M_π = 312 MeV. The statistical precision of the data is improved using the all-mode-averaging method. We compare two methods for reducing excited-state contamination: a variational analysis and a two-state fit to data at multiple values of the source-sink separation t_sep. We show that both methods can be tuned to significantly reduce excited-state contamination and discuss their relative advantages and cost effectiveness. As a result, a detailed analysis of the size of source smearing used in the calculation of quark propagators and the range of values of t_sep needed to demonstrate convergence of the isovector charges of the nucleon to the t_sep → ∞ estimates is presented.

  4. Controlling excited-state contamination in nucleon matrix elements

    DOE PAGES

    Yoon, Boram; Gupta, Rajan; Bhattacharya, Tanmoy; ...

    2016-06-08

    We present a detailed analysis of methods to reduce statistical errors and excited-state contamination in the calculation of matrix elements of quark bilinear operators in nucleon states. All the calculations were done on a 2+1-flavor ensemble with lattices of size 32^3 × 64 generated using the rational hybrid Monte Carlo algorithm at a = 0.081 fm and with M_π = 312 MeV. The statistical precision of the data is improved using the all-mode-averaging method. We compare two methods for reducing excited-state contamination: a variational analysis and a two-state fit to data at multiple values of the source-sink separation t_sep. We show that both methods can be tuned to significantly reduce excited-state contamination and discuss their relative advantages and cost effectiveness. As a result, a detailed analysis of the size of source smearing used in the calculation of quark propagators and the range of values of t_sep needed to demonstrate convergence of the isovector charges of the nucleon to the t_sep → ∞ estimates is presented.
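    A schematic form of the two-state ansatz referred to in these records, for the nucleon two- and three-point functions (normalization and sign conventions vary; this is a generic illustration, not the paper's exact fit function):

    \[
    C_{2\mathrm{pt}}(t) \;=\; |A_0|^2 e^{-M_0 t} \;+\; |A_1|^2 e^{-M_1 t} \;+\; \cdots ,
    \]
    \[
    C_{3\mathrm{pt}}^{\Gamma}(t_{\rm sep},\tau) \;=\; |A_0|^2 \langle 0|O_\Gamma|0\rangle\, e^{-M_0 t_{\rm sep}}
    \;+\; A_1 A_0^{*} \langle 1|O_\Gamma|0\rangle\, e^{-M_0 \tau}\, e^{-M_1 (t_{\rm sep}-\tau)} \;+\; \cdots ,
    \]

    where the ground-state matrix element \(\langle 0|O_\Gamma|0\rangle\) is the quantity of interest and the excited-state terms are fitted and removed using data at several values of \(t_{\rm sep}\).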

  5. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  6. A path reconstruction method integrating dead-reckoning and position fixes applied to humpback whales.

    PubMed

    Wensveen, Paul J; Thomas, Len; Miller, Patrick J O

    2015-01-01

    Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. By systematically accounting for the observation errors in the position fixes, our model provides a quantitative estimate of location uncertainty that can be appropriately incorporated into analyses of animal movement. This generic method has potential application for a wide range of marine animal species and data recording systems.
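
    The published model is a full Bayesian state-space formulation; as a much simpler stand-in, the sketch below fuses dead-reckoned displacements with sparse position fixes using a plain Kalman filter. All noise levels, step data and fix coordinates are invented for illustration.

    ```python
    import numpy as np

    def fuse_track(dr_steps, fixes, q=2.0, r=15.0):
        """Fuse dead-reckoned displacements with sparse position fixes.

        dr_steps: (T, 2) array of per-step displacements in metres.
        fixes: dict mapping step index -> (x, y) position fix in metres.
        q, r: assumed 1-sigma dead-reckoning drift per step and fix noise (metres).
        """
        x = np.zeros(2)              # position estimate
        P = np.eye(2) * 100.0        # its covariance
        Q = np.eye(2) * q ** 2       # process noise (dead-reckoning error growth)
        R = np.eye(2) * r ** 2       # observation noise (position fix)
        track = []
        for k, d in enumerate(dr_steps):
            x, P = x + d, P + Q                        # predict with dead reckoning
            if k in fixes:                             # correct with a position fix
                K = P @ np.linalg.inv(P + R)           # Kalman gain
                x = x + K @ (np.asarray(fixes[k]) - x)
                P = (np.eye(2) - K) @ P
            track.append(x.copy())
        return np.array(track)

    rng = np.random.default_rng(1)
    steps = rng.normal([1.0, 0.5], 0.2, size=(200, 2))   # synthetic displacements
    fixes = {50: (52.0, 24.0), 150: (155.0, 73.0)}       # sparse synthetic fixes
    print(fuse_track(steps, fixes)[-1])                  # final position estimate
    ```

    Between fixes the uncertainty grows with the accumulated process noise, exactly the behaviour the abstract describes for dead-reckoning alone; each fix collapses it again.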

  7. RR Lyrae stars in eclipsing systems -- historical candidates

    NASA Astrophysics Data System (ADS)

    Liška, J.; Skarka, M.; Hájková, P.; Auer, R. F.

    2016-03-01

    Discovery of binary systems among RR Lyrae stars remains one of the challenges of present-day astronomy. So far, no classical RR Lyrae star has been clearly confirmed to be part of an eclipsing system. For this reason we studied two RR Lyrae stars, VX Her and RW Ari, in which changes attributed to eclipses were detected in the 1960s and 1970s. In this paper our preliminary results based on analysis of new photometric measurements are presented, as well as the results from the detailed analysis of the original measurements. A new possible eclipsing system, RZ Cet, was identified in the archive data. For all three stars, our analysis indicates errors in the measurements and reductions of the old data rather than real changes.

  8. Characterisation of the PXIE Allison-type emittance scanner

    DOE PAGES

    D'Arcy, R.; Alvarez, M.; Gaynier, J.; ...

    2016-01-26

    An Allison-type emittance scanner has been designed for PXIE at FNAL with the goal of providing fast and accurate phase space reconstruction. The device has been modified from previous LBNL/SNS designs to operate in both pulsed and DC modes with the addition of water-cooled front slits. Extensive calibration techniques and error analysis allowed confinement of uncertainty to the <5% level (with known caveats). With a 16-bit, 1 MHz electronics scheme the device is able to analyse a pulse with a resolution of 1 μs, allowing for analysis of neutralisation effects. This paper describes a detailed breakdown of the R&D, as well as post-run analysis techniques.

  9. Transient analysis of a superconducting AC generator using the compensated 2-D model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chun, Y.D.; Lee, H.W.; Lee, J.

    1999-09-01

    The paper presents a 2-D transient analysis of a superconducting AC generator (SCG) using the finite element method (FEM). An SCG has many advantages over conventional generators, such as reduction in width and size, improvement in efficiency, and better steady-state stability. The compensated 2-D model, obtained by lengthening the airgap of the original 2-D model, is proposed for accurate and efficient transient analysis. The accuracy of the compensated 2-D model is verified by a small error of 6.4% compared to experimental data. The transient characteristics of the 30 kVA SCG model have been investigated in detail, and the damper performance for various design parameters is examined.

  10. Introduction to Forward-Error-Correcting Coding

    NASA Technical Reports Server (NTRS)

    Freeman, Jon C.

    1996-01-01

    This reference publication introduces forward-error-correcting (FEC) coding and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
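
    As a small example of the kind of calculation the publication stresses, the Q function covered in its appendix is the standard normal tail probability and can be evaluated from the complementary error function; the BPSK bit-error-rate line below is a textbook relation, not taken from this publication.

    ```python
    from math import erfc, sqrt

    def Q(x: float) -> float:
        """Gaussian Q function: standard normal tail probability, Q(x) = 0.5*erfc(x/sqrt(2))."""
        return 0.5 * erfc(x / sqrt(2.0))

    # Example: uncoded BPSK bit error rate at Eb/N0 = 6 dB is Q(sqrt(2*Eb/N0)).
    ebn0 = 10 ** (6 / 10)
    print(f"Q(1.0) = {Q(1.0):.4f}, BPSK BER at 6 dB = {Q(sqrt(2 * ebn0)):.2e}")
    ```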

  11. Model-based virtual VSB mask writer verification for efficient mask error checking and optimization prior to MDP

    NASA Astrophysics Data System (ADS)

    Pack, Robert C.; Standiford, Keith; Lukanc, Todd; Ning, Guo Xiang; Verma, Piyush; Batarseh, Fadi; Chua, Gek Soon; Fujimura, Akira; Pang, Linyong

    2014-10-01

    A methodology is described wherein a calibrated model-based `Virtual' Variable Shaped Beam (VSB) mask writer process simulator is used to accurately verify complex Optical Proximity Correction (OPC) and Inverse Lithography Technology (ILT) mask designs prior to Mask Data Preparation (MDP) and mask fabrication. This type of verification addresses physical effects which occur in mask writing that may impact lithographic printing fidelity and variability. The work described here is motivated by requirements for extreme accuracy and control of variations for today's most demanding IC products. These extreme demands necessitate careful and detailed analysis of all potential sources of uncompensated error or variation and extreme control of these at each stage of the integrated OPC/ MDP/ Mask/ silicon lithography flow. The important potential sources of variation we focus on here originate on the basis of VSB mask writer physics and other errors inherent in the mask writing process. The deposited electron beam dose distribution may be examined in a manner similar to optical lithography aerial image analysis and image edge log-slope analysis. This approach enables one to catch, grade, and mitigate problems early and thus reduce the likelihood for costly long-loop iterations between OPC, MDP, and wafer fabrication flows. It moreover describes how to detect regions of a layout or mask where hotspots may occur or where the robustness to intrinsic variations may be improved by modification to the OPC, choice of mask technology, or by judicious design of VSB shots and dose assignment.

  12. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  13. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    NASA Astrophysics Data System (ADS)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function and in turn how they may be optimized are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as their mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects the model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process and in sample creation.
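
    The clustering step can be sketched with scikit-learn's affinity propagation (a generic illustration, not the project's actual pipeline or features): tile-level statistics for areas with commission errors are clustered, and the exemplar of each cluster suggests a site for new training-sample creation. The three-column "spectral statistics" below are synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    # Each row summarizes one image tile containing commission errors, e.g. means of
    # a few spectral bands; values here are synthetic stand-ins for real statistics.
    rng = np.random.default_rng(42)
    tile_stats = np.vstack([
        rng.normal([0.30, 0.05, 0.20], 0.02, size=(40, 3)),   # e.g. bright-rooftop confusers
        rng.normal([0.10, 0.02, 0.60], 0.02, size=(40, 3)),   # e.g. vegetation confusers
    ])

    ap = AffinityPropagation(random_state=0).fit(tile_stats)
    exemplars = ap.cluster_centers_indices_   # indices of highly representative tiles
    print(f"{len(exemplars)} clusters; exemplar tiles: {exemplars}")
    ```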

  14. On the isobaric space of 25-hydroxyvitamin D in human serum: potential for interferences in liquid chromatography/tandem mass spectrometry, systematic errors and accuracy issues.

    PubMed

    Qi, Yulin; Geib, Timon; Schorr, Pascal; Meier, Florian; Volmer, Dietrich A

    2015-01-15

    Isobaric interferences in human serum can potentially influence the measured concentration levels of 25-hydroxyvitamin D [25(OH)D], when low resolving power liquid chromatography/tandem mass spectrometry (LC/MS/MS) instruments and non-specific MS/MS product ions are employed for analysis. In this study, we provide a detailed characterization of these interferences and a technical solution to reduce the associated systematic errors. Detailed electrospray ionization Fourier transform ion cyclotron resonance (FTICR) high-resolution mass spectrometry (HRMS) experiments were used to characterize co-extracted isobaric components of 25(OH)D from human serum. Differential ion mobility spectrometry (DMS), as a gas-phase ion filter, was implemented on a triple quadrupole mass spectrometer for separation of the isobars. HRMS revealed the presence of multiple isobaric compounds in extracts of human serum for different sample preparation methods. Several of these isobars had the potential to increase the peak areas measured for 25(OH)D on low-resolution MS instruments. A major isobaric component was identified as pentaerythritol oleate, a technical lubricant, which was probably an artifact from the analytical instrumentation. DMS was able to remove several of these isobars prior to MS/MS, when implemented on the low-resolution triple quadrupole mass spectrometer. It was shown in this proof-of-concept study that DMS-MS has the potential to significantly decrease systematic errors, and thus improve accuracy of vitamin D measurements using LC/MS/MS. Copyright © 2014 John Wiley & Sons, Ltd.

  15. Improvements in the MGA Code Provide Flexibility and Better Error Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruhter, W D; Kerr, J

    2005-05-26

    The Multi-Group Analysis (MGA) code is widely used to determine nondestructively the relative isotopic abundances of plutonium by gamma-ray spectrometry. MGA users have expressed concern about the lack of flexibility and transparency in the code. Users often have to ask the code developers for modifications to the code to accommodate new measurement situations, such as additional peaks being present in the plutonium spectrum or expected peaks being absent. We are testing several new improvements to a prototype, general gamma-ray isotopic analysis tool with the intent of either revising or replacing the MGA code. These improvements will give the user the ability to modify, add, or delete the gamma- and x-ray energies and branching intensities used by the code in determining a more precise gain and in the determination of the relative detection efficiency. We have also fully integrated the determination of the relative isotopic abundances with the determination of the relative detection efficiency to provide a more accurate determination of the errors in the relative isotopic abundances. We provide details in this paper on these improvements and a comparison of results obtained with current versions of the MGA code.

  16. Analysis of operator splitting errors for near-limit flame simulations

    NASA Astrophysics Data System (ADS)

    Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.

    2017-04-01

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory combustion from cool flames, both the Strang splitting and the midpoint method can successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.
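
    A toy version of the model PSR problem makes the comparison concrete: one stiff heat-release term plus a linear mixing term, advanced either as a fully coupled system or by Strang splitting. The parameters below are invented for illustration and are not those of the paper; the point is only that the split solution can be checked against a tight-tolerance coupled reference.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy perfectly stirred reactor: Arrhenius-like heat release plus linear mixing.
    T_in, tau_mix = 1.0, 1.0
    reaction = lambda T: 5000.0 * np.exp(-10.0 / T) * (2.0 - T)   # vanishes at the burnt state T = 2
    mixing = lambda T: (T_in - T) / tau_mix

    def strang_step(T, dt):
        """One Strang step: half mixing, full (stiff) reaction, half mixing."""
        T = solve_ivp(lambda t, y: mixing(y), (0.0, dt / 2), [T]).y[0, -1]
        T = solve_ivp(lambda t, y: reaction(y), (0.0, dt), [T], method="BDF").y[0, -1]
        T = solve_ivp(lambda t, y: mixing(y), (0.0, dt / 2), [T]).y[0, -1]
        return T

    t_end, dt, T0 = 5.0, 0.25, 1.05
    T_split = T0
    for _ in range(int(round(t_end / dt))):
        T_split = strang_step(T_split, dt)

    T_ref = solve_ivp(lambda t, y: reaction(y) + mixing(y), (0.0, t_end), [T0],
                      method="BDF", rtol=1e-10, atol=1e-12).y[0, -1]
    print(f"coupled reference: {T_ref:.4f}   Strang splitting, dt={dt}: {T_split:.4f}")
    ```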

  17. Analysis of operator splitting errors for near-limit flame simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhen; Zhou, Hua; Li, Shan

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction–diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory combustion from cool flames, both the Strang splitting and the midpoint method can successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.

  18. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  19. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, E. M. C.; Reu, P. L.

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. Eliminating or mitigating the effects of heat sources in a DIC experiment is therefore the best solution to minimizing errors caused by heat waves.

  20. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE PAGES

    Jones, E. M. C.; Reu, P. L.

    2017-11-28

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. Eliminating or mitigating the effects of heat sources in a DIC experiment is therefore the best solution to minimizing errors caused by heat waves.

  1. Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.

    PubMed

    Yang, Yingdong; Mao, Xuchu; Tian, Weifeng

    2016-06-08

    Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition of the baseline length is combined with the ambiguity function method (AFM) to search for integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor in the success rate, and it is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method achieves better results in handling the relationship between the geometric model and the noise error. Although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment was conducted on a selected campus, and the performance proved to be effective. Our results are based on simulated and real-time GNSS data and are applied to single-frequency processing, which is known as one of the challenging cases of GNSS attitude determination.

  2. Deriving Color-Color Transformations for VRI Photometry

    NASA Astrophysics Data System (ADS)

    Taylor, B. J.; Joner, M. D.

    2006-12-01

    In this paper, transformations between Cousins R-I and other indices are considered. New transformations to Cousins V-R and Johnson V-K are derived, a published transformation involving T1-T2 on the Washington system is rederived, and the basis for a transformation involving b-y is considered. In addition, a statistically rigorous procedure for deriving such transformations is presented and discussed in detail. Highlights of the discussion include (1) the need for statistical analysis when least-squares relations are determined and interpreted, (2) the permitted forms and best forms for such relations, (3) the essential role played by accidental errors, (4) the decision process for selecting terms to appear in the relations, (5) the use of plots of residuals, (6) detection of influential data, (7) a protocol for assessing systematic effects from absorption features and other sources, (8) the reasons for avoiding extrapolation of the relations, (9) a protocol for ensuring uniformity in data used to determine the relations, and (10) the derivation and testing of the accidental errors of those data. To put the last of these subjects in perspective, it is shown that rms errors for VRI photometry have been as small as 6 mmag for more than three decades and that standard errors for quantities derived from such photometry can be as small as 1 mmag or less.
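
    A minimal version of the least-squares step the protocol discusses, on synthetic photometry rather than the data analyzed in the paper: a linear transformation V-R = a + b (R-I) is fit and its residuals inspected before the relation would be accepted.

    ```python
    import numpy as np

    # Synthetic photometry with ~6 mmag scatter, mimicking the precision quoted above.
    rng = np.random.default_rng(3)
    R_I = rng.uniform(0.2, 0.9, 40)
    V_R = 0.05 + 0.82 * R_I + rng.normal(0.0, 0.006, R_I.size)

    A = np.vstack([np.ones_like(R_I), R_I]).T          # design matrix for a + b*(R-I)
    coef, *_ = np.linalg.lstsq(A, V_R, rcond=None)
    resid = V_R - A @ coef
    print(f"a = {coef[0]:.4f}, b = {coef[1]:.4f}, rms residual = {resid.std():.4f} mag")
    # Residuals would then be plotted against R-I and other quantities to look for
    # trends, influential points, and systematic effects before adopting the relation.
    ```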

  3. Effects of Heavy Ion Exposure on Nanocrystal Nonvolatile Memory

    NASA Technical Reports Server (NTRS)

    Oldham, Timothy R.; Suhail, Mohammed; Kuhn, Peter; Prinz, Erwin; Kim, Hak; LaBel, Kenneth A.

    2004-01-01

    We have irradiated engineering samples of Freescale 4M nonvolatile memories with heavy ions. They use Silicon nanocrystals as the storage element, rather than the more common floating gate. The irradiations were performed using the Texas A&M University cyclotron Single Event Effects Test Facility. The chips were tested in the static mode, and in the dynamic read mode, dynamic write (program) mode, and dynamic erase mode. All the errors observed appeared to be due to single, isolated bits, even in the program and erase modes. These errors appeared to be related to the micro-dose mechanism. All the errors corresponded to the loss of electrons from a programmed cell. The underlying physical mechanisms will be discussed in more detail later. There were no errors, which could be attributed to malfunctions of the control circuits. At the highest LET used in the test (85 MeV/mg/sq cm), however, there appeared to be a failure due to gate rupture. Failure analysis is being conducted to confirm this conclusion. There was no unambiguous evidence of latchup under any test conditions. Generally, the results on the nanocrystal technology compare favorably with results on currently available commercial floating gate technology, indicating that the technology is promising for future space applications, both civilian and military.

  4. 2010 Workplace and Gender Relations Survey of Active Duty Members. Overview Report on Sexual Harassment

    DTIC Science & Technology

    2011-04-01

    Survey excerpt (WGRA 2010, Q37; margins of error range from ±1 to ±2): respondents indicated gender-related experiences, whether their gender was a factor, and effects such as decreased work performance or thinking about getting out of their Service, for the months prior to taking the survey, along with the details of incidents they have experienced. The report also includes an analysis of the effectiveness of…

  5. Assessment and Verification of SLS Block 1-B Exploration Upper Stage and Stage Disposal Performance

    NASA Technical Reports Server (NTRS)

    Patrick, Sean; Oliver, T. Emerson; Anzalone, Evan J.

    2018-01-01

    Delta-v allocation to correct for insertion errors caused by state uncertainty is one of the key performance requirements imposed on the SLS Navigation System. Additionally, SLS mission requirements include the need for the Exploration Up-per Stage (EUS) to be disposed of successfully. To assess these requirements, the SLS navigation team has developed and implemented a series of analysis methods. Here the authors detail the Delta-Delta-V approach to assessing delta-v allocation as well as the EUS disposal optimization approach.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Erik; Blume-Kohout, Robin; Rudinger, Kenneth

    PyGSTi is an implementation of Gate Set Tomography in the python programming language. Gate Set Tomography (GST) is a theory and protocol for simultaneously estimating the state preparation, gate operations, and measurement effects of a physical system of one or many quantum bits (qubits). These estimates are based entirely on the statistics of experimental measurements, and their interpretation and analysis can provide a detailed understanding of the types of errors/imperfections in the physical system. In this way, GST provides not only a means of certifying the "goodness" of qubits but also a means of debugging (i.e. improving) them.

  7. Error protection capability of space shuttle data bus designs

    NASA Technical Reports Server (NTRS)

    Proch, G. E.

    1974-01-01

    The role of error protection in assuring the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
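
    A minimal sketch of the figure of merit the report computes: for a word protected only by a single even-parity bit, an error pattern goes undetected whenever an even, nonzero number of bits flip. The 28-bit word length and bit error rates below are assumptions for illustration, not the MEI/MIA design parameters.

    ```python
    from math import comb

    def undetected_word_error_prob(n_bits: int, ber: float) -> float:
        """P(undetected error) for one word with a single even-parity check:
        any even, nonzero number of independent bit errors passes the check."""
        return sum(comb(n_bits, k) * ber**k * (1 - ber)**(n_bits - k)
                   for k in range(2, n_bits + 1, 2))

    for ber in (1e-3, 1e-4, 1e-5):   # illustrative channel bit error rates
        print(f"BER={ber:.0e}: P(undetected word error) = "
              f"{undetected_word_error_prob(28, ber):.3e}")
    ```

    Stronger detection schemes (CRCs, codes with minimum distance greater than 2) drive this probability down by catching all low-weight error patterns, which is the kind of trade such interface designs are evaluated against.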

  8. Tailoring a Human Reliability Analysis to Your Industry Needs

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2016-01-01

    Accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies is used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involved with humans operating large equipment or transport systems (ex. railroads or airlines) would have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.

  9. Extending "Deep Blue" aerosol retrieval coverage to cases of absorbing aerosols above clouds: Sensitivity analysis and first case studies

    NASA Astrophysics Data System (ADS)

    Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Lee, J.; Redemann, J.; Schmid, B.; Shinozuka, Y.

    2016-05-01

    Cases of absorbing aerosols above clouds (AACs), such as smoke or mineral dust, are omitted from most routinely processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar sensors, for incorporation into a future version of the "Deep Blue" AOD data product. Detailed retrieval simulations suggest that these sensors should be able to determine AAC AOD with a typical level of uncertainty ˜25-50% (with lower uncertainties for more strongly absorbing aerosol types) and COD with an uncertainty ˜10-20%, if an appropriate aerosol optical model is known beforehand. Errors are larger, particularly if the aerosols are only weakly absorbing, if the aerosol optical properties are not known, and the appropriate model to use must also be retrieved. Actual retrieval errors are also compared to uncertainty envelopes obtained through the optimal estimation (OE) technique; OE-based uncertainties are found to be generally reasonable for COD but larger than actual retrieval errors for AOD, due in part to difficulties in quantifying the degree of spectral correlation of forward model error. The algorithm is also applied to two MODIS scenes (one smoke and one dust) for which near-coincident NASA Ames Airborne Tracking Sun photometer (AATS) data were available to use as a ground truth AOD data source, and found to be in good agreement, demonstrating the validity of the technique with real observations.

  10. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhadrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
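
    The accuracy-versus-work comparison described above can be reproduced in miniature with SciPy's stiff integrators on Robertson's kinetics problem (a standard stiff surrogate, not one of the report's two combustion test cases); LSODA is a close relative of the LSODE solver discussed in the report.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def robertson(t, y):
        """Robertson's stiff three-species chemical kinetics problem."""
        y1, y2, y3 = y
        return [-0.04 * y1 + 1e4 * y2 * y3,
                 0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
                 3e7 * y2 ** 2]

    y0, t_span = [1.0, 0.0, 0.0], (0.0, 1e4)
    ref = solve_ivp(robertson, t_span, y0, method="BDF", rtol=1e-10, atol=1e-12).y[:, -1]

    for method in ("BDF", "LSODA", "Radau"):
        sol = solve_ivp(robertson, t_span, y0, method=method, rtol=1e-6, atol=1e-9)
        err = np.sqrt(np.mean(((sol.y[:, -1] - ref) / np.abs(ref)) ** 2))
        print(f"{method:6s}: nfev = {sol.nfev:5d}, rms relative error = {err:.2e}")
    ```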

  11. A comparison of three approaches to non-stationary flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Debele, S. E.; Strupczewski, W. G.; Bogdanowicz, E.

    2017-08-01

    Non-stationary flood frequency analysis (FFA) is applied to statistical analysis of seasonal flow maxima from Polish and Norwegian catchments. Three non-stationary estimation methods, namely maximum likelihood (ML), two-stage (WLS/TS) and GAMLSS (generalized additive model for location, scale and shape parameters), are compared in the context of capturing the effect of non-stationarity on the estimation of time-dependent moments and design quantiles. The use of a multimodel approach is recommended to reduce the errors in the magnitude of quantiles due to model misspecification. The results of calculations based on observed seasonal daily flow maxima and computer simulation experiments showed that GAMLSS gave the best results with respect to the relative bias and root mean square error in the estimates of trend in the standard deviation and the constant shape parameter, while WLS/TS provided better accuracy in the estimates of trend in the mean value. Among the three compared methods, the WLS/TS method is recommended for dealing with non-stationarity in short time series. Some practical aspects of the GAMLSS package application are also presented. A detailed discussion of general issues related to consequences of climate change in FFA is presented in the second part of the article, entitled "Around and about an application of the GAMLSS package in non-stationary flood frequency analysis".
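
    As a generic illustration of non-stationary estimation by maximum likelihood (not the ML, WLS/TS or GAMLSS implementations compared in the paper), the sketch below fits a Gumbel distribution whose location drifts linearly in time to synthetic seasonal maxima and then evaluates a time-dependent design quantile.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import gumbel_r

    rng = np.random.default_rng(7)
    years = np.arange(50)
    flows = gumbel_r.rvs(loc=100 + 0.8 * years, scale=25, random_state=rng)  # synthetic maxima

    def nll(theta):
        """Negative log-likelihood of a Gumbel model with a linear trend in location."""
        mu0, mu1, log_scale = theta
        return -gumbel_r.logpdf(flows, loc=mu0 + mu1 * years,
                                scale=np.exp(log_scale)).sum()

    fit = minimize(nll, x0=[flows.mean(), 0.0, np.log(flows.std())], method="Nelder-Mead")
    mu0, mu1, log_scale = fit.x
    q100 = gumbel_r.ppf(0.99, loc=mu0 + mu1 * years[-1], scale=np.exp(log_scale))
    print(f"location trend: {mu1:.2f} per year; 100-year quantile in the final year: {q100:.1f}")
    ```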

  12. Critical Analysis of the Mathematical Formalism of Theoretical Physics. V. Foundations of the Theory of Negative Numbers

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2015-04-01

    Analysis of the foundations of the theory of negative numbers is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. The statement of the problem is as follows. As is known, point O in the Cartesian coordinate system XOY determines the position of zero on the scale. The number "zero" belongs to both the scale of positive numbers and the scale of negative numbers. In this case, the following formal-logical contradiction arises: the number 0 is both a positive number and a negative number; or, equivalently, the number 0 is neither a positive number nor a negative number, i.e. the number 0 has no sign. Then the following question arises: Do negative numbers exist in science and practice? A detailed analysis of the problem shows that negative numbers do not exist because the foundations of the theory of negative numbers are contrary to the formal-logical laws. It is proved that: (a) all numbers have no signs; (b) the concepts "negative number" and "negative sign of number" represent a formal-logical error; (c) the signs "plus" and "minus" are only symbols of mathematical operations. The logical errors determine the essence of the theory of negative numbers: the theory of negative numbers is a false theory.

  13. A revised 5 minute gravimetric geoid and associated errors for the North Atlantic calibration area

    NASA Technical Reports Server (NTRS)

    Mader, G. L.

    1979-01-01

    A revised 5 minute gravimetric geoid and its errors were computed for the North Atlantic calibration area using GEM-8 potential coefficients and the latest gravity data available from the Defense Mapping Agency. This effort was prompted by a number of inconsistencies and small errors found in previous calculations of this geoid. The computational method and constants used are given in detail to serve as a reference for future work.

  14. Thermal Property Analysis of Axle Load Sensors for Weighing Vehicles in Weigh-in-Motion System

    PubMed Central

    Burnos, Piotr; Gajda, Janusz

    2016-01-01

    Systems which permit the weighing of vehicles in motion are called dynamic Weigh-in-Motion scales. In such systems, axle load sensors are embedded in the pavement. Among the influencing factors that negatively affect weighing accuracy is the pavement temperature. This paper presents a detailed analysis of this phenomenon and describes the properties of polymer, quartz and bending plate load sensors. The studies were conducted in two ways: at roadside Weigh-in-Motion sites and at a laboratory using a climate chamber. For accuracy assessment of roadside systems, the reference vehicle method was used. The pavement temperature influence on the weighing error was experimentally investigated as well as a non-uniform temperature distribution along and across the Weigh-in-Motion site. Tests carried out in the climatic chamber allowed the influence of temperature on the sensor intrinsic error to be determined. The results presented clearly show that all kinds of sensors are temperature sensitive. This is a new finding, as up to now the quartz and bending plate sensors were considered insensitive to this factor. PMID:27983704

  15. Non-airborne conflicts: The causes and effects of runway transgressions

    NASA Technical Reports Server (NTRS)

    Tarrel, Richard J.

    1985-01-01

    The 1210 ASRS runway transgression reports are studied and expanded to yield descriptive statistics. Additionally, a one-in-three subset was studied in detail for purposes of evaluating the causes, risks, and consequences behind transgression events. Occurrences are subdivided by enabling factor and flight phase designations. It is concluded that a larger risk of collision is associated with controller-enabled departure transgressions than with all other categories. The influence of this type is especially evident during the period following the air traffic controllers' strike of 1981. Causal analysis indicates that, coincidentally, controller-enabled departure transgressions also show the strongest correlations between causal factors. It shows that departure errors occur more often when visibility is reduced, and when multiple takeoff runways or intersection takeoffs are employed. In general, runway transgressions attributable to both pilot and controller errors arise from three problem areas: information transfer, awareness, and spatial judgement. Enhanced awareness by controllers will probably reduce controller-enabled incidents.

  16. Pulsed Airborne Lidar Measurements of C02 Column Absorption

    NASA Technical Reports Server (NTRS)

    Abshire, James B.; Riris, Haris; Allan, Graham R.; Weaver, Clark J.; Mao, Jianping; Sun, Xiaoli; Hasselbrack, William E.; Rodriquez, Michael; Browell, Edward V.

    2011-01-01

    We report on airborne lidar measurements of atmospheric CO2 column density for an approach being developed as a candidate for NASA's ASCENDS mission. It uses a pulsed dual-wavelength lidar measurement based on the integrated path differential absorption (IPDA) technique. We demonstrated the approach using the CO2 measurement from aircraft in July and August 2009 over four locations. The results show clear CO2 line shape and absorption signals, which follow the expected changes with aircraft altitude from 3 to 13 km. The 2009 measurements have been analyzed in detail and the results show approx. 1 ppm random errors for 8-10 km altitudes and approx. 30 sec averaging times. Airborne measurements were also made in 2010 with stronger signals, and initial analysis shows approx. 0.3 ppm random errors for 80 sec averaging times for measurements at altitudes > 6 km.
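
    The core IPDA relation behind these numbers can be written in a few lines: the off-line to on-line ratio of surface returns (normalized by transmitted energies) gives the one-way differential absorption optical depth, and dividing by an effective cross-section difference yields a column number density. Every value below, including the constant cross-section difference, is an assumption for illustration; a real retrieval integrates an altitude-dependent weighting function over the column.

    ```python
    import numpy as np

    E_tx_on, E_tx_off = 1.00, 1.00     # transmitted pulse energies (arbitrary units)
    E_rx_on, E_rx_off = 0.41, 0.95     # received surface-echo energies (arbitrary units)
    delta_sigma = 3.0e-27              # m^2, assumed on/off absorption cross-section difference

    # One-way differential absorption optical depth from the two-way path.
    daod = 0.5 * np.log((E_rx_off / E_tx_off) / (E_rx_on / E_tx_on))
    column_density = daod / delta_sigma   # CO2 molecules per m^2 (toy, constant cross-section)
    print(f"DAOD = {daod:.3f}, CO2 column = {column_density:.3e} m^-2")
    ```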

  17. Design and testing of focusing magnets for a compact electron linac

    NASA Astrophysics Data System (ADS)

    Chen, Qushan; Qin, Bin; Liu, Kaifeng; Liu, Xu; Fu, Qiang; Tan, Ping; Hu, Tongning; Pei, Yuanji

    2015-10-01

    Solenoid field errors have a great influence on electron beam quality. In this paper, the design and testing of high-precision solenoids for a compact electron linac are presented. We proposed an efficient and practical method to solve for the peak field of the solenoid for relativistic electron beams, based on the reduced envelope equation. Beam dynamics simulations involving the space charge force were performed to predict the focusing effects. Detailed optimization methods were introduced to achieve an ultra-compact configuration as well as high accuracy, with the help of the POISSON and OPERA packages. Efforts were made to restrain systematic errors in the off-line testing, which showed that the short lens and the main solenoid produced peak fields of 0.13 T and 0.21 T, respectively. Data analysis on and off the central axis was carried out and demonstrated that the testing results fitted the design well.

  18. Optimization of a sensor cluster for determination of trajectories and velocities of supersonic objects

    NASA Astrophysics Data System (ADS)

    Cannella, Marco; Sciuto, Salvatore Andrea

    2001-04-01

    An evaluation of errors for a method for determination of trajectories and velocities of supersonic objects is conducted. The analytical study of a cluster, composed of three pressure transducers and generally used as an apparatus for cinematic determination of parameters of supersonic objects, is developed. Furthermore, detailed investigation into the accuracy of this cluster on determination of the slope of an incoming shock wave is carried out for optimization of the device. In particular, a specific non-dimensional parameter is proposed in order to evaluate accuracies for various values of parameters and reference graphs are provided in order to properly design the sensor cluster. Finally, on the basis of the error analysis conducted, a discussion on the best estimation of the relative distance for the sensor as a function of temporal resolution of the measuring system is presented.

  19. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.

  20. The Higgs transverse momentum distribution at NNLL and its theoretical errors

    DOE PAGES

    Neill, Duff; Rothstein, Ira Z.; Vaidya, Varun

    2015-12-15

    In this letter, we present the NNLL-NNLO transverse momentum Higgs distribution arising from gluon fusion. In the regime p⊥ ≪ m_h we include the resummation of the large logs at next-to-next-to-leading order and then match on to the α_s^2 fixed order result near p⊥ ~ m_h. By utilizing the rapidity renormalization group (RRG) we are able to smoothly match between the resummed, small-p⊥ regime and the fixed order regime. We give a detailed discussion of the scale dependence of the result, including an analysis of the rapidity scale dependence. Our central value differs from previous results, in the transition region as well as the tail, by an amount which is outside the error band. This difference is due to the fact that the RRG profile allows us to smoothly turn off the resummation.

  1. The Effects of Observation Errors on the Attack Vulnerability of Complex Networks

    DTIC Science & Technology

    2012-11-01

    In more detail, to construct a true network we select a topology: Erdős–Rényi (Erdős & Rényi, 1959), scale-free (Barabási & Albert, 1999), or small-world…

  2. Interaction of Convective Organization and Monsoon Precipitation, Atmosphere, Surface and Sea (INCOMPASS)

    NASA Astrophysics Data System (ADS)

    Turner, Andrew; Bhat, Gs; Evans, Jonathan; Marsham, John; Martin, Gill; Parker, Douglas; Taylor, Chris; Bhattacharya, Bimal; Madan, Ranju; Mitra, Ashis; Mrudula, Gm; Muddu, Sekhar; Pattnaik, Sandeep; Rajagopal, En; Tripathi, Sachida

    2015-04-01

    The monsoon supplies the majority of water in South Asia, making understanding and predicting its rainfall vital for the growing population and economy. However, modelling and forecasting the monsoon from days to the season ahead is limited by large model errors that develop quickly, with significant inter-model differences pointing to errors in physical parametrizations such as convection, the boundary layer and land surface. These errors persist into climate projections and many of these errors persist even when increasing resolution. At the same time, a lack of detailed observations is preventing a more thorough understanding of monsoon circulation and its interaction with the land surface: a process governed by the boundary layer and convective cloud dynamics. The INCOMPASS project will support and develop modelling capability in Indo-UK monsoon research, including test development of a new Met Office Unified Model 100m-resolution domain over India. The first UK detachment of the FAAM research aircraft to India, in combination with an intensive ground-based observation campaign, will gather new observations of the surface, boundary layer structure and atmospheric profiles to go with detailed information on the timing of monsoon rainfall. Observations will be focused on transects in the northern plains of India (covering a range of surface types from irrigated to rain-fed agriculture, and wet to dry climatic zones) and across the Western Ghats and rain shadow in southern India (including transitions from land to ocean and across orography). A pilot observational campaign is planned for summer 2015, with the main field campaign to take place during spring/summer 2016. This project will advance our ability to forecast the monsoon, through a programme of measurements and modelling that aims to capture the key surface-atmosphere feedback processes in models. The observational analysis will allow a unique and unprecedented characterization of monsoon processes that will feed directly into model development at the UK Met Office and Indian NCMRWF, through model evaluation at a range of scales and leading to model improvement by working directly with parametrization developers. The project will institute a new long-term series of measurements of land surface fluxes, a particularly unconstrained observation for India, through eddy covariance flux towers. Combined with detailed land surface modelling using the Joint UK Land Environment Simulator (JULES) model, this will allow testing of land surface initialization in monsoon forecasts and improved land-atmosphere coupling.

  3. An analysis of quantum coherent solar photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Kirk, A. P.

    2012-02-01

    A new hypothesis (Scully et al., Proc. Natl. Acad. Sci. USA 108 (2011) 15097) suggests that it is possible to break the statistical physics-based detailed balance-limiting power conversion efficiency and increase the power output of a solar photovoltaic cell by using “noise-induced quantum coherence” to increase the current. The fundamental errors of this hypothesis are explained here. As part of this analysis, we show that the maximum photogenerated current density for a practical solar cell is a function of the incident spectrum, sunlight concentration factor, and solar cell energy bandgap and thus the presence of quantum coherence is irrelevant as it is unable to lead to increased current output from a solar cell.

  4. Detailed Debunking of Denial

    NASA Astrophysics Data System (ADS)

    Enting, I. G.; Abraham, J. P.

    2012-12-01

    The disinformation campaign against climate science has been compared to a guerilla war whose tactics undermine the traditional checks and balances of science. One comprehensive approach has been to produce archives of generic responses such as the websites of RealClimate and SkepticalScience. We review our experiences with an alternative approach of detailed responses to a small number of high-profile cases. Our particular examples were Professor Ian Plimer and Christopher Monckton, the Third Viscount Monckton of Brenchley, each of whom has been taken seriously by political leaders in our respective countries. We relate our experiences to comparable examples such as John Mashey's analysis of the Wegman report and the formal complaints about Lomborg's "Skeptical Environmentalist" and Durkin's "Great Global Warming Swindle". Our two efforts took contrasting approaches: an on-line video of a lecture vs an evolving compendium of misrepresentations. Additionally, our approaches differed in emphasis. The analysis of Monckton concentrated on the misrepresentation of the science, while the analysis of Plimer concentrated on departures from accepted scientific practice: fabrication of data, misrepresentation of cited sources and unattributed use of the work of others. Benefits of an evolving compendium were the ability to incorporate contributions from members of the public who had identified additional errors and the scope for addressing new aspects as they came to public attention. 'Detailed debunking' gives non-specialists a reference point for distinguishing non-science when engaging in public debate.

  5. Randomly correcting model errors in the ARPEGE-Climate v6.1 component of CNRM-CM: applications for seasonal forecasts

    NASA Astrophysics Data System (ADS)

    Batté, Lauriane; Déqué, Michel

    2016-06-01

    Stochastic methods are increasingly used in global coupled model climate forecasting systems to account for model uncertainties. In this paper, we describe in more detail the stochastic dynamics technique introduced by Batté and Déqué (2012) in the ARPEGE-Climate atmospheric model. We present new results with an updated version of CNRM-CM using ARPEGE-Climate v6.1, and show that the technique can be used both as a means of analyzing model error statistics and accounting for model inadequacies in a seasonal forecasting framework. The perturbations are designed as corrections of model drift errors estimated from a preliminary weakly nudged re-forecast run over an extended reference period of 34 boreal winter seasons. A detailed statistical analysis of these corrections is provided, and shows that they are mainly made of intra-month variance, thereby justifying their use as in-run perturbations of the model in seasonal forecasts. However, the interannual and systematic error correction terms cannot be neglected. Time correlation of the errors is limited, but some consistency is found between the errors of up to 3 consecutive days. These findings encourage us to test several settings of the random draws of perturbations in seasonal forecast mode. Perturbations are drawn randomly but consistently for all three prognostic variables perturbed. We explore the impact of using monthly mean perturbations throughout a given forecast month in a first ensemble re-forecast (SMM, for stochastic monthly means), and test the use of 5-day sequences of perturbations in a second ensemble re-forecast (S5D, for stochastic 5-day sequences). Both experiments are compared in the light of a REF reference ensemble with initial perturbations only. Results in terms of forecast quality are contrasted depending on the region and variable of interest, but very few areas exhibit a clear degradation of forecasting skill with the introduction of stochastic dynamics. We highlight some positive impacts of the method, mainly on Northern Hemisphere extra-tropics. The 500 hPa geopotential height bias is reduced, and improvements project onto the representation of North Atlantic weather regimes. A modest impact on ensemble spread is found over most regions, which suggests that this method could be complemented by other stochastic perturbation techniques in seasonal forecasting mode.
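
    A schematic of the two perturbation-drawing strategies described above (monthly means versus 5-day sequences), assuming an archive of daily correction fields; the array shapes and field handling below are placeholders, not the ARPEGE-Climate implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative archive of daily nudging-derived corrections for one forecast
# month (30 days) over 34 archived years, collapsed to one scalar field.
n_years, n_days, n_points = 34, 30, 1000
corrections = rng.normal(size=(n_years, n_days, n_points))

def draw_smm(corrections):
    """SMM: pick one archived year at random and apply its monthly-mean
    correction as a constant perturbation throughout the forecast month."""
    year = rng.integers(corrections.shape[0])
    monthly_mean = corrections[year].mean(axis=0)
    return np.repeat(monthly_mean[None, :], corrections.shape[1], axis=0)

def draw_s5d(corrections, block=5):
    """S5D: build the month from consecutive 5-day sequences, each block drawn
    from a randomly chosen year, preserving short-time consistency of errors."""
    n_days = corrections.shape[1]
    parts = []
    for start in range(0, n_days, block):
        year = rng.integers(corrections.shape[0])
        parts.append(corrections[year, start:start + block])
    return np.concatenate(parts, axis=0)

perturb_smm = draw_smm(corrections)   # shape (30, n_points), constant in time
perturb_s5d = draw_s5d(corrections)   # shape (30, n_points), varies every 5 days
```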

  6. Monte Carlo errors with less errors

    NASA Astrophysics Data System (ADS)

    Wolff, Ulli; Alpha Collaboration

    2004-01-01

    We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replicas), allowing their consistency to be judged.
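
    The following is a minimal Python sketch of the underlying idea (the authors provide a Matlab implementation): estimate the normalized autocorrelation function, sum it up to a self-consistent window, and fold the resulting integrated autocorrelation time into the error of the mean. The windowing constant and the AR(1) test chain are illustrative choices.

```python
import numpy as np

def normalized_autocorr(x):
    """Normalized autocorrelation function of a 1-D Monte Carlo time series
    (FFT-based, zero-padded to avoid circular wrap-around)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]

def tau_int(x, c=6.0):
    """Integrated autocorrelation time with a simple self-consistent window
    W >= c * tau (a simplified variant of the windowing procedure)."""
    rho = normalized_autocorr(x)
    tau, w = 0.5, 1
    for w in range(1, len(rho) // 2):
        tau = 0.5 + np.sum(rho[1:w + 1])
        if w >= c * tau:
            break
    return tau, w

def error_of_mean(x):
    """Statistical error of the mean including autocorrelations:
    sigma_mean^2 = var(x) * 2 * tau_int / N."""
    tau, _ = tau_int(x)
    return np.sqrt(2.0 * tau * np.var(x) / len(x))

# Check on an AR(1) chain, whose exact tau_int is 0.5 + a / (1 - a) = 9.5 for a = 0.9
rng = np.random.default_rng(1)
a, n = 0.9, 200_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + rng.normal()
tau, window = tau_int(x)
print(f"tau_int ~ {tau:.1f} (exact 9.5), window = {window}, "
      f"error of mean = {error_of_mean(x):.4f}")
```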

  7. Error in Airspeed Measurement Due to the Static-Pressure Field Ahead of an Airplane at Transonic Speeds

    NASA Technical Reports Server (NTRS)

    O'Bryan, Thomas C; Danforth, Edward C B; Johnston, J Ford

    1955-01-01

    The magnitude and variation of the static-pressure error for various distances ahead of sharp-nose bodies and open-nose air inlets and for a distance of 1 chord ahead of the wing tip of a swept wing are defined by a combination of experiment and theory. The mechanism of the error is discussed in some detail to show the contributing factors that make up the error. The information presented provides a useful means for choosing a proper location for measurement of static pressure for most purposes.

  8. How many drinks did you have on September 11, 2001?

    PubMed

    Perrine, M W Bud; Schroder, Kerstin E E

    2005-07-01

    This study tested the predictability of error in retrospective self-reports of alcohol consumption on September 11, 2001, among 80 Vermont light, medium and heavy drinkers. Subjects were 52 men and 28 women participating in daily self-reports of alcohol consumption for a total of 2 years, collected via interactive voice response technology (IVR). In addition, retrospective self-reports of alcohol consumption on September 11, 2001, were collected by telephone interview 4-5 days following the terrorist attacks. Retrospective error was calculated as the difference between the IVR self-report of drinking behavior on September 11 and the retrospective self-report collected by telephone interview. Retrospective error was analyzed as a function of gender and baseline drinking behavior during the 365 days preceding September 11, 2001 (termed "the baseline"). The intraclass correlation (ICC) between daily IVR and retrospective self-reports of alcohol consumption on September 11 was .80. Women provided, on average, more accurate self-reports (ICC = .96) than men (ICC = .72) but displayed more underreporting bias in retrospective responses. Amount and individual variability of alcohol consumption during the 1-year baseline explained, on average, 11% of the variance in overreporting (r = .33), 9% of the variance in underreporting (r = .30) and 25% of the variance in the overall magnitude of error (r = .50), with correlations up to .62 (r² = .38). The size and direction of error were clearly predictable from the amount and variation in drinking behavior during the 1-year baseline period. The results demonstrate the utility and detail of information that can be derived from daily IVR self-reports in the analysis of retrospective error.
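
    For readers unfamiliar with the agreement statistic used here, this small sketch (with made-up numbers, not the study data) shows how a retrospective error and a two-way random-effects, absolute-agreement intraclass correlation can be computed from paired daily-report and recall values.

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).
    data: (n_subjects, n_raters) array, e.g. columns = [IVR daily report, recall]."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Made-up example: drinks reported daily (IVR) vs recalled a few days later
ivr    = np.array([0, 2, 5, 1, 0, 8, 3, 4, 0, 6], dtype=float)
recall = np.array([0, 2, 4, 1, 0, 6, 3, 5, 0, 5], dtype=float)
error = recall - ivr                       # negative values = underreporting in recall
print("ICC(2,1):", round(icc_2_1(np.column_stack([ivr, recall])), 3))
print("mean signed error:", error.mean(), " mean |error|:", np.abs(error).mean())
```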

  9. Criticality of Adaptive Control Dynamics

    NASA Astrophysics Data System (ADS)

    Patzelt, Felix; Pawelzik, Klaus

    2011-12-01

    We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.

  10. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol.

    PubMed

    Raban, Magdalena Z; Walter, Scott R; Douglas, Heather E; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I

    2015-10-13

    Interruptions and multitasking are frequent in clinical settings, and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. The study will be conducted in an ED of a 440-bed teaching hospital in Sydney, Australia. Doctors will be shadowed at proximity by observers for 2 h time intervals while they are working on day shift (between 0800 and 1800). Time stamped data on tasks, interruptions and multitasking will be recorded on a handheld computer using the validated Work Observation Method by Activity Timing (WOMBAT) tool. The prompts leading to interruptions and multitasking will also be recorded. When doctors prescribe medication, type of chart and chart sections written on, along with the patient's medical record number (MRN) will be recorded. A clinical pharmacist will access patient records and assess the medication orders for prescribing errors. The prescribing error rate will be calculated per prescribing task and is defined as the number of errors divided by the number of medication orders written during the prescribing task. The association between prescribing error rates, and rates of prompts, interruptions and multitasking will be assessed using statistical modelling. Ethics approval has been obtained from the hospital research ethics committee. Eligible doctors will be provided with written information sheets and written consent will be obtained if they agree to participate. Doctor details and MRNs will be kept separate from the data on prescribing errors, and will not appear in the final data set for analysis. Study results will be disseminated in publications and feedback to the ED. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  11. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  12. DEPEND: A simulation-based environment for system level dependability analysis

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar; Iyer, Ravishankar K.

    1992-01-01

    The design and evaluation of highly reliable computer systems is a complex issue. Designers mostly develop such systems based on prior knowledge and experience and occasionally from analytical evaluations of simplified designs. A simulation-based environment called DEPEND which is especially geared for the design and evaluation of fault-tolerant architectures is presented. DEPEND is unique in that it exploits the properties of object-oriented programming to provide a flexible framework with which a user can rapidly model and evaluate various fault-tolerant systems. The key features of the DEPEND environment are described, and its capabilities are illustrated with a detailed analysis of a real design. In particular, DEPEND is used to simulate the Unix based Tandem Integrity fault-tolerance and evaluate how well it handles near-coincident errors caused by correlated and latent faults. Issues such as memory scrubbing, re-integration policies, and workload dependent repair times which affect how the system handles near-coincident errors are also evaluated. Issues such as the method used by DEPEND to simulate error latency and the time acceleration technique that provides enormous simulation speed up are also discussed. Unlike any other simulation-based dependability studies, the use of these approaches and the accuracy of the simulation model are validated by comparing the results of the simulations, with measurements obtained from fault injection experiments conducted on a production Tandem Integrity machine.

  13. Application of Ensemble Kalman Filter in Power System State Tracking and Sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yulan; Huang, Zhenyu; Zhou, Ning

    2012-05-01

    Ensemble Kalman Filter (EnKF) is proposed to track dynamic states of generators. The algorithm of EnKF and its application to generator state tracking are presented in detail. The accuracy and sensitivity of the method are analyzed with respect to initial state errors, measurement noise, unknown fault locations, time steps and parameter errors. It is demonstrated through simulation studies that even with some errors in the parameters, the developed EnKF can effectively track generator dynamic states using disturbance data.
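
    A generic stochastic (perturbed-observation) EnKF analysis step is sketched below to make the algorithm concrete; the toy two-state linear system, noise levels and observation operator are placeholders, not the generator model used in the paper.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_cov, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_members, n_state); obs: (n_obs,); obs_cov: (n_obs, n_obs)."""
    n_members = ensemble.shape[0]
    X = ensemble
    Y = np.array([obs_operator(x) for x in X])            # forecast observations
    Xp, Yp = X - X.mean(axis=0), Y - Y.mean(axis=0)
    P_xy = Xp.T @ Yp / (n_members - 1)                    # state-obs covariance
    P_yy = Yp.T @ Yp / (n_members - 1) + obs_cov          # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
    perturbed_obs = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, n_members)
    return X + (perturbed_obs - Y) @ K.T                  # updated ensemble

# Toy example: track a 2-state oscillator (a stand-in for rotor angle / speed)
rng = np.random.default_rng(0)
dt, steps = 0.01, 200
A = np.array([[1.0, dt], [-4.0 * dt, 1.0]])              # simple linear dynamics
truth = np.array([1.0, 0.0])
ens = rng.normal([1.5, 0.5], 0.5, size=(50, 2))          # initial state error
R = np.array([[0.05]])
H = lambda x: x[:1]                                      # observe the first state only
for _ in range(steps):
    truth = A @ truth
    ens = ens @ A.T + rng.normal(0, 0.01, ens.shape)     # forecast with model noise
    y = H(truth) + rng.normal(0, np.sqrt(R[0, 0]), 1)
    ens = enkf_update(ens, y, H, R, rng)
print("final estimate:", ens.mean(axis=0), "truth:", truth)
```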

  14. Quantification of the Uncertainties for the Space Launch System Liftoff/Transition and Ascent Databases

    NASA Technical Reports Server (NTRS)

    Favaregh, Amber L.; Houlden, Heather P.; Pinier, Jeremy T.

    2016-01-01

    A detailed description of the uncertainty quantification process for the Space Launch System Block 1 vehicle configuration liftoff/transition and ascent 6-Degree-of-Freedom (DOF) aerodynamic databases is presented. These databases were constructed from wind tunnel test data acquired in the NASA Langley Research Center 14- by 22-Foot Subsonic Wind Tunnel and the Boeing Polysonic Wind Tunnel in St. Louis, MO, respectively. The major sources of error for these databases were experimental error and database modeling errors.

  15. RF transient analysis and stabilization of the phase and energy of the proposed PIP-II LINAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, J. P.; Chase, B. E.

    This paper describes a recent effort to develop and benchmark a simulation tool for the analysis of RF transients and their compensation in an H- linear accelerator. Existing tools in this area either focus on electron LINACs or lack fundamental details about the LLRF system that are necessary to provide realistic performance estimates. In our paper we begin with a discussion of our computational models followed by benchmarking with existing beam-dynamics codes and measured data. We then analyze the effect of RF transients and their compensation in the PIP-II LINAC, followed by an analysis of calibration errors and how a Newton’s Method based feedback scheme can be used to regulate the beam energy to within the specified limits.

  16. Addressing and Presenting Quality of Satellite Data via Web-Based Services

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory; Lynnes, C.; Ahmad, S.; Fox, P.; Zednik, S.; West, P.

    2011-01-01

    With the recent attention to climate change and the proliferation of remote-sensing data utilization, climate modelling and various environmental monitoring and protection applications have begun to rely increasingly on satellite measurements. Research application users seek good quality satellite data, with uncertainties and biases provided for each data point. However, different communities address remote sensing quality issues rather inconsistently and differently. We describe our attempt to systematically characterize, capture, and provision quality and uncertainty information as it applies to the NASA MODIS Aerosol Optical Depth data product. In particular, we note the semantic differences in quality/bias/uncertainty at the pixel, granule, product, and record levels. We outline the various factors contributing to the uncertainty or error budget. Web-based science analysis and processing tools allow users to access, analyze, and generate visualizations of data while relieving users of directly managing complex data processing operations. These tools provide value by streamlining the data analysis process, but usually shield users from details of the data processing steps, algorithm assumptions, caveats, etc. Correct interpretation of the final analysis requires user understanding of how data has been generated and processed and what potential biases, anomalies, or errors may have been introduced. By providing services that leverage data lineage provenance and domain expertise, expert systems can be built to aid the user in understanding data sources, processing, and the suitability for use of products generated by the tools. We describe our experiences developing a semantic, provenance-aware, expert-knowledge advisory system applied to the NASA Giovanni web-based Earth science data analysis tool as part of the ESTO AIST-funded Multi-sensor Data Synergy Advisor project.

  17. Structure design and characteristic analysis of micro-nano probe based on six dimensional micro-force measuring principle

    NASA Astrophysics Data System (ADS)

    Yang, Hong-tao; Cai, Chun-mei; Fang, Chuan-zhi; Wu, Tian-feng

    2013-10-01

    In order to develop a micro-nano probe with an error self-correcting function and a rigid structure, a new micro-nano probe system was developed based on a six-dimensional micro-force measuring principle. The structure and working principle of the probe are introduced in detail. A static nonlinear decoupling method based on a BP neural network was established to remove the dimensional coupling present in the force measurements in each direction. The optimal parameters of the BP neural network were selected and decoupling simulation experiments were performed. The maximum probe coupling rate after decoupling is 0.039% in the X direction, 0.025% in the Y direction and 0.027% in the Z direction. The static measurement sensitivity of the probe can reach 10.76 μɛ/mN in the Z direction and 14.55 μɛ/mN in the X and Y directions. Modal analysis and harmonic response analysis of the probe under a three-dimensional harmonic load were performed using the finite element method. The natural frequencies under different vibration modes were obtained and the working frequency of the probe was determined to be higher than 10000 Hz. Transient response analysis of the probe indicates that its response time can reach 0.4 ms. These results show that the developed probe meets the triggering requirements of a micro-nano probe. The three-dimensional measuring force can be measured precisely by the developed probe, allowing the force deformation error and the touch error of the measuring ball and the measuring rod to be predicted and corrected.
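
    A hedged illustration of the static decoupling idea: a BP (multilayer perceptron) network is trained to map coupled multi-channel readings back to the applied force components. The coupling matrix, noise levels and network size below are invented for the example and do not reproduce the authors' probe calibration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic calibration data: applied forces/moments (targets) and coupled
# sensor readings (inputs) generated with a made-up coupling matrix plus a
# mild nonlinearity, standing in for real calibration measurements.
n_samples = 2000
forces = rng.uniform(-1.0, 1.0, size=(n_samples, 6))            # Fx..Mz, normalized
coupling = np.eye(6) + 0.05 * rng.normal(size=(6, 6))            # cross-channel coupling
readings = forces @ coupling.T + 0.02 * forces**3 + 0.005 * rng.normal(size=(n_samples, 6))

# BP (multilayer perceptron) network mapping readings back to decoupled forces
decoupler = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
decoupler.fit(readings[:1500], forces[:1500])

# Evaluate residual coupling on held-out samples
pred = decoupler.predict(readings[1500:])
residual = pred - forces[1500:]
coupling_rate = np.abs(residual).max(axis=0) / np.ptp(forces[1500:], axis=0)
print("max residual coupling rate per axis (%):", np.round(100 * coupling_rate, 2))
```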

  18. A three-dimensional image processing program for accurate, rapid, and semi-automated segmentation of neuronal somata with dense neurite outgrowth

    PubMed Central

    Ross, James D.; Cullen, D. Kacy; Harris, James P.; LaPlaca, Michelle C.; DeWeerth, Stephen P.

    2015-01-01

    Three-dimensional (3-D) image analysis techniques provide a powerful means to rapidly and accurately assess complex morphological and functional interactions between neural cells. Current software-based identification methods of neural cells generally fall into two applications: (1) segmentation of cell nuclei in high-density constructs or (2) tracing of cell neurites in single cell investigations. We have developed novel methodologies to permit the systematic identification of populations of neuronal somata possessing rich morphological detail and dense neurite arborization throughout thick tissue or 3-D in vitro constructs. The image analysis incorporates several novel automated features for the discrimination of neurites and somata by initially classifying features in 2-D and merging these classifications into 3-D objects; the 3-D reconstructions automatically identify and adjust for over- and under-segmentation errors. Additionally, the platform provides for software-assisted error corrections to further minimize error. These features attain very accurate cell boundary identifications to handle a wide range of morphological complexities. We validated these tools using confocal z-stacks from thick 3-D neural constructs where neuronal somata had varying degrees of neurite arborization and complexity, achieving an accuracy of ≥95%. We demonstrated the robustness of these algorithms in a more complex arena through the automated segmentation of neural cells in ex vivo brain slices. These novel methods surpass previous techniques in robustness and accuracy through: (1) the ability to process neurites and somata, (2) bidirectional segmentation correction, and (3) validation via software-assisted user input. This 3-D image analysis platform provides valuable tools for the unbiased analysis of neural tissue or tissue surrogates within a 3-D context, appropriate for the study of multi-dimensional cell-cell and cell-extracellular matrix interactions. PMID:26257609

  19. Cognitive stimulation therapy as a sustainable intervention for dementia in sub-Saharan Africa: Feasibility and clinical efficacy using a stepped-wedge design - ERRATUM.

    PubMed

    Paddick, Stella-Maria; Mkenda, Sarah; Mbowe, Godfrey; Kisoli, Aloyce; Gray, William K; Dotchin, Catherine L; Ternent, Laura; Ogunniyi, Adesola; Kissima, John; Olakehinde, Olaide; Mushi, Declare; Walker, Richard W

    2017-06-01

    In the above article (Paddick, 2017), the corresponding author's details were listed incorrectly. The correct details are: contact number +44 191 293 2709 and email address William.gray@nhct.nhs.uk. The original article has been updated with the correct contact details. The publishers apologise for any inconvenience and confusion this error has caused.

  20. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at scan edges, for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
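
    The sketch below shows a first-order propagation of a few of the sensor sub-system terms named above (ranging, combined scan/attitude angle, horizontal positioning) into vertical error as a function of scan angle, altitude and terrain slope; the per-component uncertainties are illustrative placeholders, not the Optech ALTM 3100 budget.

```python
import numpy as np

def vertical_error(altitude, scan_angle_deg, slope_deg=0.0,
                   sigma_range=0.02, sigma_angle_deg=0.005, sigma_horiz=0.15):
    """First-order 1-sigma vertical error [m] of a LiDAR return on a planar surface.
    altitude: flying height above ground [m]; scan_angle_deg: off-nadir scan angle;
    slope_deg: terrain slope; the sigmas are assumed sub-system uncertainties
    (ranging [m], combined scanner/attitude angle [deg], horizontal position [m])."""
    theta = np.radians(scan_angle_deg)
    sigma_theta = np.radians(sigma_angle_deg)
    slant_range = altitude / np.cos(theta)                     # range to flat ground
    dz_range = np.cos(theta) * sigma_range                     # ranging error, projected to z
    dz_angle = slant_range * np.sin(theta) * sigma_theta       # angular error, projected to z
    dz_slope = sigma_horiz * np.tan(np.radians(slope_deg))     # horizontal error times slope
    return np.sqrt(dz_range**2 + dz_angle**2 + dz_slope**2)

for angle in (0.0, 7.5, 15.0):
    print(f"1200 m, {angle:4.1f} deg, flat terrain : {vertical_error(1200, angle):.3f} m")
print(f"1200 m, 15.0 deg, 40 deg slope    : {vertical_error(1200, 15, slope_deg=40):.3f} m")
```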

  1. Spatial abstraction for autonomous robot navigation.

    PubMed

    Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon

    2015-09-01

    Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.

  2. Detailed analysis of an optimized FPP-based 3D imaging system

    NASA Astrophysics Data System (ADS)

    Tran, Dat; Thai, Anh; Duong, Kiet; Nguyen, Thanh; Nehmetallah, Georges

    2016-05-01

    In this paper, we present a detailed analysis and a step-by-step implementation of an optimized fringe projection profilometry (FPP) based 3D shape measurement system. First, we propose a multi-frequency and multi-phase shifting sinusoidal fringe pattern reconstruction approach to increase the accuracy and sensitivity of the system. Second, phase error compensation caused by the nonlinear transfer function of the projector and camera is performed through polynomial approximation. Third, phase unwrapping is performed using spatial and temporal techniques and the tradeoff between processing speed and high accuracy is discussed in detail. Fourth, generalized camera and system calibration are developed for phase to real world coordinate transformation. The calibration coefficients are estimated accurately using a reference plane and several gauge blocks with precisely known heights and by employing a nonlinear least-squares fitting method. Fifth, a texture is attached to the height profile by registering a 2D real photo to the 3D height map. The last step is to perform 3D image fusion and registration using an iterative closest point (ICP) algorithm for a full field of view reconstruction. The system is experimentally constructed using compact, portable, and low cost off-the-shelf components. A MATLAB®-based GUI is developed to control and synchronize the whole system.
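
    As a compact illustration of the core computation, the sketch below implements N-step phase-shifting phase retrieval and includes a hook for a polynomial phase-error correction of the kind described above; the fringe parameters, gamma distortion and any correction coefficients are assumptions, not the authors' optimized pipeline.

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase-shifting: `images` is an array of N fringe images taken with
    phase shifts 2*pi*k/N; returns the wrapped phase in (-pi, pi]."""
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(deltas), images, axes=(0, 0))
    den = np.tensordot(np.cos(deltas), images, axes=(0, 0))
    return -np.arctan2(num, den)

def compensate_phase_error(phase, coeffs):
    """Subtract a phase-error model previously fitted (e.g. during calibration)
    as a polynomial in the wrapped phase; `coeffs` must come from calibration."""
    return phase - np.polyval(coeffs, phase)

# Synthetic test: 4-step fringes over a linear phase ramp with a mild gamma distortion
x = np.linspace(0, 4 * np.pi, 512)
true_phase = x.copy()
n_steps, gamma = 4, 1.2
imgs = [(0.5 + 0.5 * np.cos(true_phase + 2 * np.pi * k / n_steps)) ** gamma
        for k in range(n_steps)]
phi = wrapped_phase(imgs)
residual = np.angle(np.exp(1j * (phi - true_phase)))   # wrapped phase residual
print("max wrapped-phase deviation caused by the nonlinearity:", np.abs(residual).max())
```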

  3. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications including a number of efforts relating to the uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified) focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals as detailed in a later section.

  4. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
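
    A minimal one-factor random-effects illustration of the ANOVA estimators referred to above, applied to simulated setup errors; the simulated patient numbers, fraction counts and error magnitudes are arbitrary, and the final line shows why the conventional estimate (the SD of patient means) overestimates the systematic component.

```python
import numpy as np

def variance_components(setup_errors):
    """One-factor random-effects ANOVA estimates from balanced data.
    setup_errors: (n_patients, n_fractions) array of setup errors [mm].
    Returns (population mean M, systematic SD Sigma, random SD sigma)."""
    p, f = setup_errors.shape
    patient_means = setup_errors.mean(axis=1)
    grand_mean = setup_errors.mean()
    ms_between = f * np.sum((patient_means - grand_mean) ** 2) / (p - 1)
    ms_within = np.sum((setup_errors - patient_means[:, None]) ** 2) / (p * (f - 1))
    sigma_random = np.sqrt(ms_within)
    sigma_sys = np.sqrt(max((ms_between - ms_within) / f, 0.0))  # ANOVA estimator
    return grand_mean, sigma_sys, sigma_random

# Simulated balanced data: 30 patients x 5 fractions, Sigma = 2 mm, sigma = 3 mm
rng = np.random.default_rng(42)
true_sys = rng.normal(0.5, 2.0, size=(30, 1))
errors = true_sys + rng.normal(0.0, 3.0, size=(30, 5))
M, Sigma, sigma = variance_components(errors)
print(f"M = {M:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")

# The conventional estimate (SD of patient means) includes an extra sigma^2 / f
# term, which is why it overestimates Sigma when the number of fractions is small.
print(f"SD of patient means = {errors.mean(axis=1).std(ddof=1):.2f} mm")
```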

  5. Spelling in adolescents with dyslexia: errors and modes of assessment.

    PubMed

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia. © Hammill Institute on Disabilities 2012.

  6. Assessment of radargrammetric DSMs from TerraSAR-X Stripmap images in a mountainous relief area of the Amazon region

    NASA Astrophysics Data System (ADS)

    de Oliveira, Cleber Gonzales; Paradella, Waldir Renato; da Silva, Arnaldo de Queiroz

    The Brazilian Amazon is a vast territory with an enormous need for mapping and monitoring of renewable and non-renewable resources. Due to the adverse environmental conditions (rain, cloud, dense vegetation) and difficult access, topographic information is still poor, and when available needs to be updated or re-mapped. In this paper, the feasibility of using Digital Surface Models (DSMs) extracted from TerraSAR-X Stripmap stereo-pair images for detailed topographic mapping was investigated for a mountainous area in the Carajás Mineral Province, located on the easternmost border of the Brazilian Amazon. The quality of the radargrammetric DSMs was evaluated against field altimetric measurements. Precise topographic field information acquired from a Global Positioning System (GPS) was used as Ground Control Points (GCPs) for the modeling of the stereoscopic DSMs and as Independent Check Points (ICPs) for the calculation of elevation accuracies. The analysis was performed in two ways: (1) the use of Root Mean Square Error (RMSE) and (2) calculations of systematic error (bias) and precision. The test for significant systematic error was based on the Student's-t distribution and the test of precision was based on the Chi-squared distribution. The investigation has shown that the accuracy of the TerraSAR-X Stripmap DSMs met the requirements for 1:50,000 map (Class A) as requested by the Brazilian Standard for Cartographic Accuracy. Thus, the use of TerraSAR-X Stripmap images can be considered a promising alternative for detailed topographic mapping in similar environments of the Amazon region, where available topographic information is rare or presents low quality.
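
    A schematic of the two-part assessment described above (significant bias via Student's t, precision via Chi-squared), applied to simulated elevation residuals at independent check points; the tolerance value is a stand-in for a map-class vertical standard, not the Brazilian specification itself.

```python
import numpy as np
from scipy import stats

def dsm_accuracy_tests(residuals, tolerance, alpha=0.10):
    """Assess a DSM against check points.
    residuals: DSM minus GPS elevations at ICPs [m]; tolerance: allowed 1-sigma
    standard error for the target map class [m]."""
    n = len(residuals)
    rmse = np.sqrt(np.mean(residuals ** 2))
    mean, sd = residuals.mean(), residuals.std(ddof=1)
    # Student's t test for a systematic error (bias) significantly different from zero
    t_stat = mean / (sd / np.sqrt(n))
    bias_ok = abs(t_stat) < stats.t.ppf(1 - alpha / 2, df=n - 1)
    # Chi-squared test that the sample variance does not exceed the tolerance
    chi2_stat = (n - 1) * sd ** 2 / tolerance ** 2
    precision_ok = chi2_stat < stats.chi2.ppf(1 - alpha, df=n - 1)
    return rmse, bias_ok, precision_ok

rng = np.random.default_rng(7)
residuals = rng.normal(0.3, 4.0, size=50)     # simulated ICP residuals [m]
rmse, bias_ok, precision_ok = dsm_accuracy_tests(residuals, tolerance=6.6)
print(f"RMSE = {rmse:.2f} m, no significant bias: {bias_ok}, "
      f"precision within tolerance: {precision_ok}")
```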

  7. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  8. Stream network analysis from orbital and suborbital imagery, Colorado River Basin, Texas

    NASA Technical Reports Server (NTRS)

    Baker, V. R. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Orbital SL-2 imagery (earth terrain camera S-190B), received September 5, 1973, was subjected to quantitative network analysis and compared to 7.5 minute topographic mapping (scale: 1/24,000) and U.S.D.A. conventional black and white aerial photography (scale: 1/22,200). Results can only be considered suggestive because detail on the SL-2 imagery was badly obscured by heavy cloud cover. The upper Bee Creek basin was chosen for analysis because it appeared in a relatively cloud-free portion of the orbital imagery. Drainage maps were drawn from the three sources digitized into a computer-compatible format, and analyzed by the WATER system computer program. Even at its small scale (1/172,000) and with bad haze the orbital photo showed much drainage detail. The contour-like character of the Glen Rose Formation's resistant limestone units allowed channel definition. The errors in pattern recognition can be attributed to local areas of dense vegetation and to other areas of very high albedo caused by surficial exposure of caliche. The latter effect caused particular difficulty in the determination of drainage divides.

  9. RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error.

    PubMed

    Huang, Chengqiang; Yang, Youchang; Wu, Bo; Yu, Weize

    2018-06-01

    The sub-pixel arrangement of the RGBG panel and the image with RGB format are different, and an algorithm that converts RGB to RGBG is urgently needed to display an image with RGB arrangement on the RGBG panel. However, in published studies of this conversion, the information loss remains large even though color fringing artifacts are weakened. In this paper, an RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error (EDMSE) is proposed. The main points of innovation include the following: (1) edge detection is first proposed to distinguish image details with serious color fringing artifacts from image details which are prone to be lost in the process of RGB-RGBG conversion; (2) for image details with serious color fringing artifacts, the weighting factor 0.5 is applied to weaken the color fringing artifacts; and (3) for image details that are prone to be lost in the process of RGB-RGBG conversion, a special mechanism to minimize square error is proposed. The experiment shows that the color fringing artifacts are slightly reduced by EDMSE, and the MSE values of the processed image are 19.6% and 7% smaller than those of the image processed by the direct assignment and weighting factor algorithms, respectively. The proposed algorithm is implemented on a field programmable gate array to enable image display on the RGBG panel.
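
    A heavily simplified illustration of adaptive weighting driven by an edge map is given below; the assumed RGBG subpixel layout is hypothetical, and the minimal-square-error branch of the published EDMSE algorithm is replaced by plain direct assignment for brevity.

```python
import numpy as np

def sobel_edges(gray, threshold=0.15):
    """Simple gradient-magnitude edge map used to decide where fringing is likely."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[1:-1, 1:-1] = (gray[1:-1, 2:] - gray[1:-1, :-2]) / 2.0
    gy[1:-1, 1:-1] = (gray[2:, 1:-1] - gray[:-2, 1:-1]) / 2.0
    return np.hypot(gx, gy) > threshold

def rgb_to_rgbg(img):
    """Convert an RGB image (H, W, 3, values in [0, 1]) to an assumed RGBG-style
    layout where each pixel keeps G plus either R (even columns) or B (odd columns).
    On edge pixels the horizontal neighbour's value is blended with weight 0.5 to
    suppress fringing; elsewhere the pixel's own value is assigned directly."""
    h, w, _ = img.shape
    edges = sobel_edges(img.mean(axis=2))
    out = np.zeros((h, w, 2))                      # [:, :, 0] = R or B, [:, :, 1] = G
    out[:, :, 1] = img[:, :, 1]                    # green is kept at every pixel
    for x in range(w):
        ch = 0 if x % 2 == 0 else 2                # even columns carry R, odd carry B
        nb = min(x + 1, w - 1)                     # horizontal neighbour for blending
        weight = np.where(edges[:, x], 0.5, 1.0)
        out[:, x, 0] = weight * img[:, x, ch] + (1 - weight) * img[:, nb, ch]
    return out

demo = np.random.default_rng(3).random((8, 8, 3))
print(rgb_to_rgbg(demo).shape)                     # (8, 8, 2) subpixel pairs
```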

  10. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.

  11. Efficient Methods to Assimilate Satellite Retrievals Based on Information Content. Part 2; Suboptimal Retrieval Assimilation

    NASA Technical Reports Server (NTRS)

    Joiner, J.; Dee, D. P.

    1998-01-01

    One of the outstanding problems in data assimilation has been and continues to be how best to utilize satellite data while balancing the tradeoff between accuracy and computational cost. A number of weather prediction centers have recently achieved remarkable success in improving their forecast skill by changing the method by which satellite data are assimilated into the forecast model from the traditional approach of assimilating retrievals to the direct assimilation of radiances in a variational framework. The operational implementation of such a substantial change in methodology involves a great number of technical details, e.g., pertaining to quality control procedures, systematic error correction techniques, and tuning of the statistical parameters in the analysis algorithm. Although there are clear theoretical advantages to the direct radiance assimilation approach, it is not obvious at all to what extent the improvements that have been obtained so far can be attributed to the change in methodology, or to various technical aspects of the implementation. The issue is of interest because retrieval assimilation retains many practical and logistical advantages which may become even more significant in the near future when increasingly high-volume data sources become available. The central question we address here is: how much improvement can we expect from assimilating radiances rather than retrievals, all other things being equal? We compare the two approaches in a simplified one-dimensional theoretical framework, in which problems related to quality control and systematic error correction are conveniently absent. By assuming a perfect radiative transfer model and perfect knowledge of radiance and background error covariances, we are able to formulate a nonlinear local error analysis for each assimilation method. Direct radiance assimilation is optimal in this idealized context, while the traditional method of assimilating retrievals is suboptimal because it ignores the cross-covariances between background errors and retrieval errors. We show that interactive retrieval assimilation (where the same background used for assimilation is also used in the retrieval step) is equivalent to direct assimilation of radiances with suboptimal analysis weights. We illustrate and extend these theoretical arguments with several one-dimensional assimilation experiments, where we estimate vertical atmospheric profiles using simulated data from both the High-resolution InfraRed Sounder 2 (HIRS2) and the future Atmospheric InfraRed Sounder (AIRS).
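
    The suboptimality argument can be illustrated with a one-dimensional scalar example, sketched below: direct assimilation of the "radiance" uses the optimal weight, whereas assimilating an interactive retrieval while ignoring the background-retrieval error cross-covariance yields an inflated analysis error variance. The error variances are arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
B, R, n = 1.0, 0.5, 200_000                 # background and observation error variances

e_b = rng.normal(0, np.sqrt(B), n)          # background errors
e_o = rng.normal(0, np.sqrt(R), n)          # radiance (observation) errors

# Direct radiance assimilation: optimal scalar weight K = B / (B + R)
K = B / (B + R)
err_direct = (1 - K) * e_b + K * e_o

# Interactive retrieval: the retrieval already blends background and radiance with
# weight K, so its error is correlated with the background error.
err_retrieval = (1 - K) * e_b + K * e_o
var_retrieval = (1 - K) ** 2 * B + K ** 2 * R
# Assimilating that retrieval while *ignoring* the cross-covariance uses the
# weight W = B / (B + var_retrieval), which is suboptimal.
W = B / (B + var_retrieval)
err_suboptimal = (1 - W) * e_b + W * err_retrieval

print("analysis error variance, direct radiances :", err_direct.var().round(4))
print("analysis error variance, retrievals       :", err_suboptimal.var().round(4))
print("theoretical optimum B*R/(B+R)             :", round(B * R / (B + R), 4))
```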

  12. Configuration Analysis of the ERS Points in Large-Volume Metrology System

    PubMed Central

    Jin, Zhangjun; Yu, Cijun; Li, Jiangxiong; Ke, Yinglin

    2015-01-01

    In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers’ coordinate systems and the assembly coordinate system are calculated, by measuring the enhanced referring system (ERS) points. This article aims to understand the influence of the configuration of the ERS points that affect the transformation matrix errors, and then optimize the deployment of the ERS points to reduce the transformation matrix errors. To optimize the deployment of the ERS points, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by the experiment implemented in the factory floor. Based on the proposed model, a group of sensitivity coefficients are derived to evaluate the quality of the configuration of the ERS points, and then several typical configurations of the ERS points are analyzed in detail with the sensitivity coefficients. Finally general guidance is established to instruct the deployment of the ERS points in the aspects of the layout, the volume size and the number of the ERS points, as well as the position and orientation of the assembly coordinate system. PMID:26402685

  13. The DiskMass Survey. II. Error Budget

    NASA Astrophysics Data System (ADS)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*^disk), and disk maximality (F*,max^disk ≡ V*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles are reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  14. Extending "Deep Blue" Aerosol Retrieval Coverage to Cases of Absorbing Aerosols Above Clouds: Sensitivity Analysis and First Case Studies

    NASA Technical Reports Server (NTRS)

    Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Lee, J.; Redemann, J.; Schmid, B.; Shinozuka, Y.

    2016-01-01

    Cases of absorbing aerosols above clouds (AACs), such as smoke or mineral dust, are omitted from most routinely processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar sensors, for incorporation into a future version of the "Deep Blue" AOD data product. Detailed retrieval simulations suggest that these sensors should be able to determine AAC AOD with a typical uncertainty of approximately 25-50 percent (with lower uncertainties for more strongly absorbing aerosol types) and COD with an uncertainty of approximately 10-20 percent, if an appropriate aerosol optical model is known beforehand. Errors are larger, particularly if the aerosols are only weakly absorbing, if the aerosol optical properties are not known, and the appropriate model to use must also be retrieved. Actual retrieval errors are also compared to uncertainty envelopes obtained through the optimal estimation (OE) technique; OE-based uncertainties are found to be generally reasonable for COD but larger than actual retrieval errors for AOD, due in part to difficulties in quantifying the degree of spectral correlation of forward model error. The algorithm is also applied to two MODIS scenes (one smoke and one dust) for which near-coincident NASA Ames Airborne Tracking Sun photometer (AATS) data were available to use as a ground truth AOD data source, and found to be in good agreement, demonstrating the validity of the technique with real observations.

  15. Application of GPS to Enable Launch Vehicle Upper Stage Heliocentric Disposal

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan J.; Oliver, T. Emerson

    2017-01-01

    To properly dispose of the upper stage of the Space Launch System, the vehicle must perform a burn in Earth orbit that sets up a close flyby of the Lunar surface, gaining adequate energy to enter heliocentric space. This architecture was selected to meet NASA requirements to limit orbital debris in the Earth-Moon system. The choice of a flyby for heliocentric disposal was driven by mission and vehicle constraints. This paper describes the SLS mission for Exploration Mission-1, a high level overview of the Block 1 vehicle, and the various disposal options considered. The research focuses on this analysis in terms of the mission design and navigation problem, focusing on the vehicle-level requirements that enable a successful mission. An inertial-only system is shown to be insufficient for heliocentric flyby due to large inertial integration errors from launch through the disposal maneuver while on a trans-lunar trajectory. The various options for aiding the navigation system are presented and details are provided on the use of GPS to bound the state errors in orbit to improve the capability for stage disposal. The state estimation algorithm used is described as well as its capability in determination of the vehicle state at the start of the planned maneuver. These data, covering both state dispersions and errors, are then used to develop orbital targets for meeting the required Lunar flyby for entering onto a heliocentric trajectory. The effect of guidance and navigation errors on this capability is described as well as the identified constraints for achieving the disposal requirements. Additionally, discussion is provided on continued analysis and identification of system considerations that can drive the ability to integrate onto a vehicle intended for deep space.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiCostanzo, D; Ayan, A; Woollard, J

    Purpose: To automate the daily verification of each patient’s treatment by utilizing the trajectory log files (TLs) written by the Varian TrueBeam linear accelerator while reducing the number of false positives, including jaw and gantry positioning errors, that are displayed in the Treatment History tab of Varian’s Chart QA module. Methods: Small deviations in treatment parameters are difficult to detect in weekly chart checks, but detecting them may be significant in reducing delivery errors and would be most effective if done daily. Software was developed in house to read TLs. Multiple functions were implemented within the software that allow it to operate via a GUI to analyze TLs, or as a script to run on a regular basis. In order to determine tolerance levels for the scripted analysis, 15,241 TLs from seven TrueBeams were analyzed. The maximum error of each axis for each TL was written to a CSV file and statistically analyzed to determine the tolerance for each axis accessible in the TLs to flag for manual review. The software/scripts developed were tested by varying the tolerance values to ensure veracity. After tolerances were determined, multiple weeks of manual chart checks were performed simultaneously with the automated analysis to ensure validity. Results: The tolerance values for the major axes were determined to be 0.025 degrees for the collimator, 1.0 degree for the gantry, 0.002cm for the y-jaws, 0.01cm for the x-jaws, and 0.5MU for the MU. The automated verification of treatment parameters has been in clinical use for 4 months. During that time, no errors in machine delivery of the patient treatments were found. Conclusion: The process detailed here is a viable and effective alternative to manually checking treatment parameters during weekly chart checks.
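
    A sketch of the kind of scripted tolerance check described above, assuming the trajectory logs have already been parsed into per-axis expected/actual arrays by an external reader; the axis names are placeholders and the tolerances are those quoted in the abstract.

```python
import numpy as np

# Tolerances from the abstract (flag the treatment for manual review if exceeded)
TOLERANCES = {
    "collimator_deg": 0.025,
    "gantry_deg": 1.0,
    "jaw_y_cm": 0.002,
    "jaw_x_cm": 0.01,
    "mu": 0.5,
}

def check_trajectory_log(axes):
    """axes: dict mapping axis name -> (expected, actual) 1-D arrays taken from one
    parsed trajectory log. Returns the axes whose maximum deviation exceeds the
    tolerance, for manual review."""
    flagged = {}
    for name, (expected, actual) in axes.items():
        max_dev = float(np.max(np.abs(np.asarray(actual) - np.asarray(expected))))
        if max_dev > TOLERANCES[name]:
            flagged[name] = max_dev
    return flagged

# Hypothetical parsed log snapshot (values invented for the example)
log = {
    "collimator_deg": (np.full(5, 90.0), np.array([90.0, 90.01, 90.02, 90.01, 90.0])),
    "gantry_deg": (np.linspace(180, 181, 5), np.linspace(180, 181, 5) + 0.2),
    "jaw_y_cm": (np.full(5, 5.0), np.full(5, 5.0) + 0.004),
    "jaw_x_cm": (np.full(5, 7.0), np.full(5, 7.0)),
    "mu": (np.linspace(0, 100, 5), np.linspace(0, 100, 5) + 0.1),
}
print("axes needing manual review:", check_trajectory_log(log))
```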

  17. Recognition Errors Suggest Fast Familiarity and Slow Recollection in Rhesus Monkeys

    ERIC Educational Resources Information Center

    Basile, Benjamin M.; Hampton, Robert R.

    2013-01-01

    One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses…

  18. Operation room tool handling and miscommunication scenarios: an object-process methodology conceptual model.

    PubMed

    Wachs, Juan P; Frenkel, Boaz; Dori, Dov

    2014-11-01

    Errors in the delivery of medical care are the principal cause of inpatient mortality and morbidity, accounting for around 98,000 deaths in the United States of America (USA) annually. Ineffective team communication, especially in the operation room (OR), is a major root of these errors. This miscommunication can be reduced by analyzing and constructing a conceptual model of communication and miscommunication in the OR. We introduce the principles underlying Object-Process Methodology (OPM)-based modeling of the intricate interactions between the surgeon and the surgical technician while handling surgical instruments in the OR. This model is a software- and hardware-independent description of the agents engaged in communication events, their physical activities, and their interactions. The model enables assessing whether the task-related objectives of the surgical procedure were achieved and completed successfully and what errors can occur during the communication. The facts used to construct the model were gathered from observations of various types of miscommunication in the operating room and their outcomes. The model takes advantage of the compact ontology of OPM, which comprises stateful objects - things that exist physically or informatically, and processes - things that transform objects by creating them, consuming them or changing their state. The modeled communication modalities are verbal and non-verbal, and errors are modeled as processes that deviate from the "sunny day" scenario. Using the OPM refinement mechanism of in-zooming, key processes are drilled into and elaborated, along with the objects that are required as agents or instruments, or objects that these processes transform. The model was developed through an iterative process of observation, modeling, group discussions, and simplification. The model faithfully represents the processes related to tool handling that take place in an OR during an operation. The specification is at various levels of detail, each level is depicted in a separate diagram, and all the diagrams are "aware" of each other as part of the whole model. Providing an ontology of verbal and non-verbal modalities of communication in the OR, the resulting conceptual model is a solid basis for analyzing and understanding the source of the large variety of errors occurring in the course of an operation, providing an opportunity to decrease the quantity and severity of mistakes related to the use and misuse of surgical instrumentations. Since the model is event driven, rather than person driven, the focus is on the factors causing the errors, rather than the specific person. This approach advocates searching for technological solutions to alleviate tool-related errors rather than finger-pointing. Concretely, the model was validated through a structured questionnaire and it was found that surgeons agreed that the conceptual model was flexible (3.8 of 5, std=0.69), accurate, and generalizable (3.7 of 5, std=0.37 and 3.7 of 5, std=0.85, respectively). The detailed conceptual model of the tools handling subsystem of the operation performed in an OR focuses on the details of the communication and the interactions taking place between the surgeon and the surgical technician during an operation, with the objective of pinpointing the exact circumstances in which errors can happen.
Exact and concise specification of the communication events in general and the surgical instrument requests in particular is a prerequisite for a methodical analysis of the various modes of errors and the circumstances under which they occur. This has significant potential value both in reducing tool-handling-related errors during an operation and in providing a solid formal basis for designing a cybernetic agent which can replace a surgical technician in routine tool handling activities during an operation, freeing the technician to focus on quality assurance, monitoring and control of the cybernetic agent activities. This is a critical step in designing the next generation of cybernetic OR assistants. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Analysis of the Capability and Limitations of Relativistic Gravity Measurements Using Radio Astronomy Methods

    NASA Technical Reports Server (NTRS)

    Shapiro, I. I.; Counselman, C. C., III

    1975-01-01

    The uses of radar observations of planets and very-long-baseline radio interferometric observations of extragalactic objects to test theories of gravitation are described in detail with special emphasis on sources of error. The accuracy achievable in these tests with data already obtained can be summarized in terms of: retardation of signal propagation (radar), deflection of radio waves (interferometry), advance of planetary perihelia (radar), gravitational quadrupole moment of the Sun (radar), and time variation of the gravitational constant (radar). The analyses completed to date have yielded no significant disagreement with the predictions of general relativity.

  20. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.

  1. Improving designer productivity

    NASA Technical Reports Server (NTRS)

    Hill, Gary C.

    1992-01-01

    Designer and design team productivity improves with skill, experience, and the tools available. The design process involves numerous trials and errors, analyses, refinements, and addition of details. Computerized tools have greatly speeded the analysis, and now new theories and methods, emerging under the label Artificial Intelligence (AI), are being used to automate skill and experience. These tools improve designer productivity by capturing experience, emulating recognized skillful designers, and making the essence of complex programs easier to grasp. This paper outlines the aircraft design process in today's technology and business climate, presenting some of the challenges ahead and some of the promising AI methods for meeting those challenges.

  2. Comparative study of solar optics for paraboloidal concentrators

    NASA Technical Reports Server (NTRS)

    Wen, L.; Poon, P.; Carley, W.; Huang, L.

    1979-01-01

    Different analytical methods for computing the flux distribution on the focal plane of a paraboloidal solar concentrator are reviewed. An analytical solution in algebraic form is also derived for an idealized model. The effects resulting from using different assumptions in the definition of optical parameters used in these methodologies are compared and discussed in detail. These parameters include solar irradiance distribution (limb darkening and circumsolar), reflector surface specular spreading, surface slope error, and concentrator pointing inaccuracy. The type of computational method selected for use depends on the maturity of the design and the data available at the time the analysis is made.

  3. Airborne spectroradiometry: The application of AIS data to detecting subtle mineral absorption features

    NASA Technical Reports Server (NTRS)

    Cocks, T. D.; Green, A. A.

    1986-01-01

    Analysis of Airborne Imaging Spectrometer (AIS) data acquired in Australia has revealed a number of operational problems. Horizontal striping in AIS imagery and spectral distortions due to order overlap were investigated. Horizontal striping, caused by grating position errors, can be removed with little or no effect on spectral details. Order overlap remains a problem that seriously compromises identification of subtle mineral absorption features within AIS spectra. A spectrometric model of the AIS was developed to assist in identifying spurious spectral features, and will be used in efforts to restore the spectral integrity of the data.

  4. Method for a quantitative investigation of the frozen flow hypothesis

    PubMed

    Schock; Spillar

    2000-09-01

    We present a technique to test the frozen flow hypothesis quantitatively, using data from wave-front sensors such as those found in adaptive optics systems. Detailed treatments of the theoretical background of the method and of the error analysis are presented. Analyzing data from the 1.5-m and 3.5-m telescopes at the Starfire Optical Range, we find that the frozen flow hypothesis is an accurate description of the temporal development of atmospheric turbulence on time scales of the order of 1-10 ms but that significant deviations from the frozen flow behavior are present for longer time scales.

  5. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    A preliminary analysis of the Ada implementation of the Advanced Transport Operating System (ATOPS), an experimental computer control system developed at NASA Langley for a modified Boeing 737 aircraft, is presented. The criteria determined for the evaluation of this approach are described. A preliminary version of the requirements for the ATOPS is included. This requirements specification is not a formal document, but rather a description of certain aspects of the ATOPS system at a level of detail that best suits the needs of the research. A survey of backward error recovery techniques is also presented.

  6. A female advantage in the serial production of non-representational learned gestures.

    PubMed

    Chipman, Karen; Hampson, Elizabeth

    2006-01-01

    Clinical research has demonstrated a sex difference in the neuroanatomical organization of the limb praxis system. To test for a corresponding sex difference in the functioning of this system, we compared healthy men and women on a gesture production task modeled after those used in apraxia research. In two separate studies, participants were taught to perform nine non-representational gestures in response to computer-generated color cues. After extensive practice with the gestures, the color cues were placed on a timer and presented in randomized sequences at progressively faster speeds. A detailed videotape analysis revealed that women in both studies committed significantly fewer 'praxic' errors than men (i.e., errors that resembled those seen in limb apraxia). This was true during both the untimed practice trials and the speeded trials of the task, despite equivalent numbers of errors between the sexes in the 'non-praxic' (i.e., executory) error categories. Women in both studies also performed the task at significantly faster speeds than men. This finding was not accounted for by a female advantage in extraneous elements of the task, i.e., speed of color processing, associative retrieval, or motor execution. Together, the two studies provide convergent support for a female advantage in the efficiency of forelimb gesture production. They are consistent with emerging evidence of a sex difference in the anatomical organization of the praxis system.

  7. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    PubMed

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the above-mentioned three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, this algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a certain contact angle error are obtained through a significant amount of computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail; it maintains the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and erroneous judgments in static contact angle measurements are avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop images with different hydrophobicity values and volumes.
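
    A minimal sketch in Python of the circle-fitting step described above, using an algebraic (Kasa) least-squares circle fit and intersecting the fitted circle with the baseline to obtain the contact angle; the function name, the synthetic drop profile, and the noise level are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def circle_fit_contact_angle(x, y, baseline_y=0.0):
            # Fit x^2 + y^2 + D*x + E*y + F = 0 to the profile points (Kasa algebraic fit).
            A = np.column_stack([x, y, np.ones_like(x)])
            b = -(x ** 2 + y ** 2)
            (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
            yc = -E / 2.0
            r = np.sqrt(D ** 2 / 4.0 + E ** 2 / 4.0 - F)
            # Angle between the circle's tangent at the contact point and the baseline.
            return np.degrees(np.arccos(np.clip((yc - baseline_y) / r, -1.0, 1.0)))

        # Synthetic sessile drop: spherical cap with a 70-degree contact angle plus noise.
        rng = np.random.default_rng(0)
        theta_true = np.radians(70.0)
        yc = np.cos(theta_true)                       # circle centre height for unit radius
        phi = np.linspace(np.radians(20.0), np.radians(160.0), 200)
        x, y = np.cos(phi), yc + np.sin(phi)
        y += rng.normal(0.0, 1e-3, y.size)
        print(circle_fit_contact_angle(x, y))         # close to 70 degrees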

  8. Distributions in the error space: goal-directed movements described in time and state-space representations.

    PubMed

    Fisher, Moria E; Huang, Felix C; Wright, Zachary A; Patton, James L

    2014-01-01

    Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaptation and possibly reveal clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state-space, used to quantify reaching errors. We hypothesized that the domain with the lowest amount of variability would lead to a predictive model of reaching error with the highest accuracy. Here we showed that errors represented in a time domain demonstrate the least variance and allow for the most accurate predictive model of reaching errors. These predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation.
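
    As a rough illustration of the idea of comparing ordinate domains, the Python sketch below resamples each trial's error profile onto a normalized-time axis and onto a normalized path-length (state-space-like) axis and compares across-trial variance in the two domains; the simulated reaching data, function names, and grid size are assumptions made only for the example.

        import numpy as np

        def resample_error(t, err, path, n=50):
            """Resample an error profile onto normalized-time and normalized-path-length grids."""
            u_time = (t - t[0]) / (t[-1] - t[0])
            seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
            u_path = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
            grid = np.linspace(0.0, 1.0, n)
            return np.interp(grid, u_time, err), np.interp(grid, u_path, err)

        rng = np.random.default_rng(1)
        time_domain, path_domain = [], []
        for _ in range(30):                                   # 30 simulated reaching trials
            t = np.linspace(0.0, 0.8 + 0.4 * rng.random(), 120)
            ideal_y = np.sin(np.pi * t / t[-1])               # idealized lateral profile of the reach
            y = ideal_y + 0.03 * np.cumsum(rng.standard_normal(t.size)) / np.sqrt(t.size)
            path = np.column_stack([t, y])
            e_t, e_s = resample_error(t, y - ideal_y, path)
            time_domain.append(e_t)
            path_domain.append(e_s)

        print("mean across-trial variance (time domain):", np.mean(np.var(time_domain, axis=0)))
        print("mean across-trial variance (path domain):", np.mean(np.var(path_domain, axis=0)))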

  9. Linear least-squares method for global luminescent oil film skin friction field analysis

    NASA Astrophysics Data System (ADS)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
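
    A much-simplified, one-dimensional Python sketch of the general idea: write the thin-oil-film equation dh/dt + d/dx(tau*h^2/(2*mu)) = 0 at every pixel and time step with the steady skin-friction values as unknowns, stack the resulting linear equations, and solve them in the least-squares sense. This is not the authors' formulation (which is two-dimensional and accounts for image noise and calibration); the upwind differencing and variable names are assumptions, and in practice many frames of an appreciably thinning film are needed for the system to be well conditioned.

        import numpy as np

        def skin_friction_lls_1d(h_frames, dt, dx, mu):
            """Recover a steady 1-D skin-friction distribution tau[x] from successive
            oil-film thickness profiles h_frames[n, i] by linear least squares."""
            n_t, n_x = h_frames.shape
            rows, rhs = [], []
            for n in range(n_t - 1):
                h = h_frames[n]
                for i in range(1, n_x):
                    # (h_i^{n+1} - h_i^n)/dt + [h_i^2*tau_i - h_{i-1}^2*tau_{i-1}]/(2*mu*dx) = 0
                    row = np.zeros(n_x)
                    row[i] = h[i] ** 2 / (2.0 * mu * dx)
                    row[i - 1] = -h[i - 1] ** 2 / (2.0 * mu * dx)
                    rows.append(row)
                    rhs.append(-(h_frames[n + 1, i] - h_frames[n, i]) / dt)
            tau, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
            return tau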

  10. Percent area coverage through image analysis

    NASA Astrophysics Data System (ADS)

    Wong, Chung M.; Hong, Sung M.; Liu, De-Ling

    2016-09-01

    The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has been conventionally calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and, therefore, may introduce error in the PAC calculation. Other errors may also be introduced by using the log-normal surface particle size distribution function, which depends strongly on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallouts from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results showed that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
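
    A minimal Python sketch of the image-analysis route to PAC: binarize the witness-wafer image and report the covered pixel fraction directly, rather than going through size-binned particle counts and the MIL-STD-1246C coefficients. The threshold value, image size, and synthetic particles are assumptions for illustration only.

        import numpy as np

        def pac_from_image(gray, threshold):
            """Percent area coverage: fraction of pixels darker than a threshold,
            assuming particles appear dark on a bright witness-wafer background."""
            particle_mask = gray < threshold
            return 100.0 * particle_mask.sum() / particle_mask.size

        # Synthetic 8-bit-like microscope image: bright background with a few dark particles.
        rng = np.random.default_rng(1)
        img = rng.normal(220.0, 5.0, size=(512, 512))
        for _ in range(40):
            r, c = rng.integers(20, 492, size=2)
            rad = rng.integers(2, 8)
            rr, cc = np.ogrid[:512, :512]
            img[(rr - r) ** 2 + (cc - c) ** 2 <= rad ** 2] = 60.0
        print(f"PAC = {pac_from_image(img, threshold=128):.4f} %")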

  11. A Satellite Mortality Study to Support Space Systems Lifetime Prediction

    NASA Technical Reports Server (NTRS)

    Fox, George; Salazar, Ronald; Habib-Agahi, Hamid; Dubos, Gregory

    2013-01-01

    Estimating the operational lifetime of satellites and spacecraft is a complex process. Operational lifetime can differ from mission design lifetime for a variety of reasons. Unexpected mortality can occur due to human errors in design and fabrication, to human errors in launch and operations, to random anomalies of hardware and software or even satellite function degradation or technology change, leading to unrealized economic or mission return. This study focuses on data collection of public information using, for the first time, a large, publicly available dataset, and preliminary analysis of satellite lifetimes, both operational lifetime and design lifetime. The objective of this study is the illustration of the relationship of design life to actual lifetime for some representative classes of satellites and spacecraft. First, a Weibull and Exponential lifetime analysis comparison is performed on the ratio of mission operating lifetime to design life, accounting for terminated and ongoing missions. Next a Kaplan-Meier survivor function, standard practice for clinical trials analysis, is estimated from operating lifetime. Bootstrap resampling is used to provide uncertainty estimates of selected survival probabilities. This study highlights the need for more detailed databases and engineering reliability models of satellite lifetime that include satellite systems and subsystems, operations procedures and environmental characteristics to support the design of complex, multi-generation, long-lived space systems in Earth orbit.
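
    A small Python sketch of the Kaplan-Meier survivor-function step, treating missions that are still operating as right-censored observations; the lifetime ratios and censoring flags below are invented solely to show the mechanics.

        import numpy as np

        def kaplan_meier(durations, observed):
            """Kaplan-Meier survivor function. `observed` is True where the mission has
            ended (event) and False where it is still operating (right-censored)."""
            durations = np.asarray(durations, dtype=float)
            observed = np.asarray(observed, dtype=bool)
            surv, s = [], 1.0
            for t in np.unique(durations[observed]):
                at_risk = np.sum(durations >= t)
                events = np.sum((durations == t) & observed)
                s *= 1.0 - events / at_risk
                surv.append((t, s))
            return surv

        # Ratio of operating lifetime to design lifetime for eight hypothetical satellites;
        # the last three are still operating (censored).
        ratios = [0.4, 0.9, 1.1, 1.6, 2.0, 0.7, 1.3, 2.5]
        ended = [True, True, True, True, True, False, False, False]
        for t, s in kaplan_meier(ratios, ended):
            print(f"S({t:.1f}) = {s:.3f}")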

  12. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    ERIC Educational Resources Information Center

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…

  13. A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.

    2007-01-01

    Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work attempts to address two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators, using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.

  14. A strategy for reducing gross errors in the generalized Born models of implicit solvation

    PubMed Central

    Onufriev, Alexey V.; Sigalov, Grigori

    2011-01-01

    The “canonical” generalized Born (GB) formula [C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2kBT relative to numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947

  15. Comparative Analysis of Daytime Fire Detection Algorithms, Using AVHRR Data for the 1995 Fire Season in Canada: Perspective for MODIS

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Kaufman, Y. J.; Fraser, R. H.; Jin, J.-Z.; Park, W. M.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    Two fixed-threshold algorithms from the Canada Centre for Remote Sensing and the European Space Agency (CCRS and ESA) and three contextual algorithms (GIGLIO, IGBP - International Geosphere and Biosphere Project, and MODIS - Moderate Resolution Imaging Spectroradiometer) were used for fire detection with Advanced Very High Resolution Radiometer (AVHRR) data acquired over Canada during the 1995 fire season. The CCRS algorithm was developed for the boreal ecosystem, while the other four are for global application. The MODIS algorithm, although developed specifically for use with the MODIS sensor data, was applied to AVHRR in this study for comparative purposes. Fire detection accuracy assessment for the algorithms was based on comparisons with available 1995 burned area ground survey maps covering five Canadian provinces. Overall accuracy estimations in terms of omission (CCRS=46%, ESA=81%, GIGLIO=75%, IGBP=51%, MODIS=81%) and commission (CCRS=0.35%, ESA=0.08%, GIGLIO=0.56%, IGBP=0.75%, MODIS=0.08%) errors over forested areas revealed large differences in performance between the algorithms, with no clear relationship to type (fixed-threshold or contextual). CCRS performed best in detecting real forest fires, with the least omission error, while ESA and MODIS produced the highest omission errors, probably because of their relatively high threshold values designed for global application. The commission error values appear small because the area of pixels falsely identified by each algorithm was expressed as a ratio of the vast unburned forest area. More detailed study shows that most commission errors in all the algorithms were incurred in nonforest agricultural areas, especially on days with very high surface temperatures. The advantage of the high thresholds in ESA and MODIS was that they incurred the least commission errors.
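
    The sketch below illustrates, under simplifying assumptions, how omission and commission errors of the kind quoted above can be computed from a detected fire/burn mask and a burned-area reference mask; the synthetic masks and the choice of denominator are assumptions, not the study's exact protocol.

        import numpy as np

        def omission_commission(detected, reference):
            """Omission error: fraction of reference burned pixels the algorithm missed.
            Commission error: falsely flagged pixels as a fraction of the unburned reference
            area (this large denominator is why the quoted commission values are so small)."""
            detected, reference = np.asarray(detected, bool), np.asarray(reference, bool)
            omission = np.sum(reference & ~detected) / reference.sum()
            commission = np.sum(detected & ~reference) / np.sum(~reference)
            return omission, commission

        rng = np.random.default_rng(2)
        reference = rng.random((200, 200)) < 0.02                # ~2% of pixels burned
        detected = reference & (rng.random((200, 200)) < 0.5)    # algorithm misses about half
        detected |= rng.random((200, 200)) < 0.001               # sparse false alarms
        print(omission_commission(detected, reference))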

  16. Advanced technology development multi-color holography

    NASA Technical Reports Server (NTRS)

    Vikram, Chandra S.

    1994-01-01

    Several key aspects of multi-color holography and some non-conventional ways to study the holographic reconstructions are considered. The error analysis of three-color holography is considered in detail with particular example of a typical triglycine sulfate crystal growth situation. For the numerical analysis of the fringe patterns, a new algorithm is introduced with experimental verification using sugar-water solution. The role of the phase difference among component holograms is also critically considered with examples of several two- and three-color situations. The status of experimentation on two-color holography and fabrication of a small breadboard system is also reported. Finally, some successful demonstrations of unconventional ways to study holographic reconstructions are described. These methods are deflectometry and confocal optical processing using some Spacelab III holograms.

  17. Major strengths and weaknesses of the lod score method.

    PubMed

    Ott, J

    2001-01-01

    Strengths and weaknesses of the lod score method for human genetic linkage analysis are discussed. The main weakness is its requirement for the specification of a detailed inheritance model for the trait. Various strengths are identified. For example, the lod score (likelihood) method has optimality properties when the trait to be studied is known to follow a Mendelian mode of inheritance. The ELOD is a useful measure for information content of the data. The lod score method can emulate various "nonparametric" methods, and this emulation is equivalent to the nonparametric methods. Finally, the possibility of building errors into the analysis will prove to be essential for the large amount of linkage and disequilibrium data expected in the near future.

  18. Further Developments of the Fringe-Imaging Skin Friction Technique

    NASA Technical Reports Server (NTRS)

    Zilliac, Gregory C.

    1996-01-01

    Various aspects and extensions of the Fringe-Imaging Skin Friction technique (FISF) have been explored through the use of several benchtop experiments and modeling. The technique has been extended to handle three-dimensional flow fields with mild shear gradients. The optical and imaging system has been refined and a PC-based application has been written that has made it possible to obtain high resolution skin friction field measurements in a reasonable period of time. The improved method was tested on a wingtip and compared with Navier-Stokes computations. Additionally, a general approach to interferogram-fringe spacing analysis has been developed that should have applications in other areas of interferometry. A detailed error analysis of the FISF technique is also included.

  19. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    PubMed

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
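
    A simplified Python sketch of the kind of computation G theory entails for a fully crossed persons x trials ERP design: estimate the person and residual variance components from ANOVA mean squares and form a generalizability coefficient for an average over a chosen number of trials. This is only an illustration of the idea; it is not the ERA Toolbox's algorithm, and the simulated data, variance values, and function name are assumptions.

        import numpy as np

        def g_coefficient(scores, n_trials_prime=None):
            """Generalizability coefficient for a persons x trials design (one score per cell).
            scores: 2-D array, rows = persons, columns = trials."""
            scores = np.asarray(scores, dtype=float)
            n_p, n_i = scores.shape
            n_trials_prime = n_trials_prime or n_i
            grand = scores.mean()
            ss_p = n_i * np.sum((scores.mean(axis=1) - grand) ** 2)
            ss_i = n_p * np.sum((scores.mean(axis=0) - grand) ** 2)
            ss_res = np.sum((scores - grand) ** 2) - ss_p - ss_i
            ms_p = ss_p / (n_p - 1)
            ms_res = ss_res / ((n_p - 1) * (n_i - 1))
            var_p = max((ms_p - ms_res) / n_i, 0.0)     # person variance component
            var_res = ms_res                            # person x trial (+ error) component
            return var_p / (var_p + var_res / n_trials_prime)

        # 20 simulated subjects x 30 trials of a single-trial ERP score (e.g., an amplitude).
        rng = np.random.default_rng(3)
        true_scores = rng.normal(0.0, 2.0, size=(20, 1))
        data = true_scores + rng.normal(0.0, 4.0, size=(20, 30))
        print(f"Estimated reliability of the 30-trial average: {g_coefficient(data):.2f}")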

  20. From Constraints to Resolution Rules Part II : chains, braids, confluence and T&E

    NASA Astrophysics Data System (ADS)

    Berthier, Denis

    In this Part II, we apply the general theory developed in Part I to a detailed analysis of the Constraint Satisfaction Problem (CSP). We show how specific types of resolution rules can be defined. In particular, we introduce the general notions of a chain and a braid. As in Part I, these notions are illustrated in detail with the Sudoku example - a problem known to be NP-complete and which is therefore typical of a broad class of hard problems. For Sudoku, we also show how far one can go in "approximating" a CSP with a resolution theory and we give an empirical statistical analysis of how the various puzzles, corresponding to different sets of entries, can be classified along a natural scale of complexity. For any CSP, we also prove the confluence property of some Resolution Theories based on braids and we show how it can be used to define different resolution strategies. Finally, we prove that, in any CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no guessing, and we comment on this result in the Sudoku case.

  1. How to minimize perceptual error and maximize expertise in medical imaging

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.

    2007-03-01

    Visual perception is such an intimate part of human experience that we assume that it is entirely accurate. Yet, perception accounts for about half of the errors made by radiologists using adequate imaging technology. The true incidence of errors that directly affect patient well-being is not known, but it is probably at the lower end of the reported values of 3 to 25%. Errors in screening for lung and breast cancer are somewhat better characterized than errors in routine diagnosis. About 25% of cancers actually recorded on the images are missed and cancer is falsely reported in about 5% of normal people. Radiologists must strive to decrease error not only because of the potential impact on patient care but also because substantial variation among observers undermines confidence in the reliability of imaging diagnosis. Observer variation also has a major impact on technology evaluation because the variation between observers is frequently greater than the difference in the technologies being evaluated. This has become particularly important in the evaluation of computer aided diagnosis (CAD). Understanding the basic principles that govern the perception of medical images can provide a rational basis for making recommendations for minimizing perceptual error. It is convenient to organize thinking about perceptual error into five steps. 1) The initial acquisition of the image by the eye-brain (contrast and detail perception). 2) The organization of the retinal image into logical components to produce a literal perception (bottom-up, global, holistic). 3) Conversion of the literal perception into a preferred perception by resolving ambiguities in the literal perception (top-down, simulation, synthesis). 4) Selective visual scanning to acquire details that update the preferred perception. 5) Application of decision criteria to the preferred perception. The five steps are illustrated with examples from radiology with suggestions for minimizing error. The role of perceptual learning in the development of expertise is also considered.

  2. An Iterated Global Mascon Solution with Focus on Land Ice Mass Evolution

    NASA Technical Reports Server (NTRS)

    Luthcke, S. B.; Sabaka, T.; Rowlands, D. D.; Lemoine, F. G.; Loomis, B. D.; Boy, J. P.

    2012-01-01

    Land ice mass evolution is determined from a new GRACE global mascon solution. The solution is estimated directly from the reduction of the inter-satellite K-band range rate observations taking into account the full noise covariance, and formally iterating the solution. The new solution increases signal recovery while reducing the GRACE KBRR observation residuals. The mascons are estimated with 10-day and 1-arc-degree equal area sampling, applying anisotropic constraints for enhanced temporal and spatial resolution of the recovered land ice signal. The details of the solution are presented including error and resolution analysis. An Ensemble Empirical Mode Decomposition (EEMD) adaptive filter is applied to the mascon solution time series to compute timing of balance seasons and annual mass balances. The details and causes of the spatial and temporal variability of the land ice regions studied are discussed.

  3. Identification and Remediation of Phonological and Motor Errors in Acquired Sound Production Impairment

    PubMed Central

    Gagnon, Bernadine; Miozzo, Michele

    2017-01-01

    Purpose This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044

  4. Forward scattering in two-beam laser interferometry

    NASA Astrophysics Data System (ADS)

    Mana, G.; Massa, E.; Sasso, C. P.

    2018-04-01

    A fractional error as large as 25 pm/mm at the zero optical-path difference has been observed in an optical interferometer measuring the displacement of an x-ray interferometer used to determine the lattice parameter of silicon. Detailed investigations have brought to light that the error was caused by light forward-scattered from the beam feeding the interferometer. This paper reports on the impact of forward-scattered light on the accuracy of two-beam optical interferometry applied to length metrology, and supplies a model capable of explaining the observed error.

  5. Technical approaches for measurement of human errors

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.

    1980-01-01

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part or full mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.

  6. An end-to-end X-IFU simulator: constraints on ICM kinematics

    NASA Astrophysics Data System (ADS)

    Roncarelli, M.; Gaspari, M.; Ettori, S.; Brighenti, F.

    2017-10-01

    In the next years, the study of ICM physics will benefit from a completely new type of observations made available by the X-IFU microcalorimeter of the ATHENA X-ray telescope. X-IFU will combine energy and spatial resolution (2.5 eV and 5 arcsec), making it possible to map line emission and, potentially, to characterise the ICM dynamics with unprecedented detail. I will present an end-to-end simulator aimed at describing the ability of X-IFU to characterise ICM velocity features. Starting from hydrodynamical simulations of ICM turbulence (Gaspari et al. 2013), we went through a detailed and realistic spectral analysis of simulated observations to derive mapped quantities of gas density, temperature, metallicity and, most notably, centroid shift and velocity broadening of the emission lines, with relative errors. Our results show that X-IFU will be able to map the ICM velocity features in great detail and provide precise measurements of the broadening power spectrum. This will provide interesting constraints on the characteristics of turbulent motions, both on large and small scales.

  7. Development of X-ray laser media. Measurement of gain and development of cavity resonators for wavelengths near 130 angstroms, volume 3

    NASA Astrophysics Data System (ADS)

    Forsyth, J. M.

    1983-02-01

    In this document the authors summarize their investigation of the reflecting properties of X-ray multilayers. The breadth of this investigation indicates the utility of the difference equation formalism in the analysis of such structures. The formalism is particularly useful in analyzing multilayers whose structure is not a simple periodic bilayer. The complexity in structure can be either intentional, as in multilayers made by in-situ reflectance monitoring, or it can be a consequence of a degradation mechanism, such as random thickness errors or interlayer diffusion. Both the analysis of thickness errors and the analysis of interlayer diffusion are conceptually simple, effectively one-dimensional problems that are straightforward to pose. In their analysis of in-situ reflectance monitoring, the authors provide a quantitative understanding of an experimentally successful process that has not previously been treated theoretically. As X-ray multilayers come into wider use, there will undoubtedly be an increasing need for a more precise understanding of their reflecting properties. Thus, it is expected that in the future more detailed modeling will be undertaken of less easily specified structures than those above. The authors believe that their formalism will continue to prove useful in the modeling of these more complex structures. One such structure that may be of interest is that of a multilayer degraded by interfacial roughness.

  8. Safety Guided Design Based on Stamp/STPA for Manned Vehicle in Concept Design Phase

    NASA Astrophysics Data System (ADS)

    Ujiie, Ryo; Katahira, Masafumi; Miyamoto, Yuko; Umeda, Hiroki; Leveson, Nancy; Hoshino, Nobuyuki

    2013-09-01

    In manned vehicles, such as the Soyuz and the Space Shuttle, the crew and computer system cooperate to succeed in returning to the earth. While computers increase the functionality of the system, they also increase the complexity of the interaction between the controllers (human and computer) and the target dynamics. In some cases, the complexity can produce a serious accident. To prevent such losses, traditional hazard analyses such as FTA have been applied to system development; however, they can only be used after a detailed system has been created because they focus on detailed component failures. As a result, it is more difficult to eliminate hazard causes early in the process, when it is most feasible. STAMP/STPA is a new hazard analysis that can be applied from the early development phase, with the analysis being refined as more detailed decisions are made. In essence, the analysis and design decisions are intertwined and go hand-in-hand. We have applied STAMP/STPA to a concept design of a new JAXA manned vehicle and tried safety guided design of the vehicle. As a result of this trial, it has been shown that STAMP/STPA can be accepted easily by system engineers, and the design has been made more sophisticated from a safety viewpoint. The result also shows that the consequences of human errors on system safety can be analysed in the early development phase and the system designed to prevent them. Finally, the paper discusses an effective way to harmonize this safety guided design approach with the system engineering process, based on the experience gained in this project.

  9. Reduced change blindness suggests enhanced attention to detail in individuals with autism.

    PubMed

    Smith, Hayley; Milne, Elizabeth

    2009-03-01

    The phenomenon of change blindness illustrates that a limited number of items within the visual scene are attended to at any one time. It has been suggested that individuals with autism focus attention on less contextually relevant aspects of the visual scene, show superior perceptual discrimination and notice details which are often ignored by typical observers. In this study we investigated change blindness in autism by asking participants to detect continuity errors deliberately introduced into a short film. Whether the continuity errors involved central/marginal or social/non-social aspects of the visual scene was varied. Thirty adolescent participants, 15 with autistic spectrum disorder (ASD) and 15 typically developing (TD) controls participated. The participants with ASD detected significantly more errors than the TD participants. Both groups identified more errors involving central rather than marginal aspects of the scene, although this effect was larger in the TD participants. There was no difference in the number of social or non-social errors detected by either group of participants. In line with previous data suggesting an abnormally broad attentional spotlight and enhanced perceptual function in individuals with ASD, the results of this study suggest enhanced awareness of the visual scene in ASD. The results of this study could reflect superior top-down control of visual search in autism, enhanced perceptual function, or inefficient filtering of visual information in ASD.

  10. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  11. Delusions and prediction error: clarifying the roles of behavioural and brain responses

    PubMed Central

    Corlett, Philip Robert; Fletcher, Paul Charles

    2015-01-01

    Griffiths and colleagues provided a clear and thoughtful review of the prediction error model of delusion formation [Cognitive Neuropsychiatry, 2014 April 4 (Epub ahead of print)]. As well as reviewing the central ideas and concluding that the existing evidence base is broadly supportive of the model, they provide a detailed critique of some of the experiments that we have performed to study it. Though they conclude that the shortcomings that they identify in these experiments do not fundamentally challenge the prediction error model, we nevertheless respond to these criticisms. We begin by providing a more detailed outline of the model itself as there are certain important aspects of it that were not covered in their review. We then respond to their specific criticisms of the empirical evidence. We defend the neuroimaging contrasts that we used to explore this model of psychosis arguing that, while any single contrast entails some ambiguity, our assumptions have been justified by our extensive background work before and since. PMID:25559871

  12. Who Do Hospital Physicians and Nurses Go to for Advice About Medications? A Social Network Analysis and Examination of Prescribing Error Rates.

    PubMed

    Creswick, Nerida; Westbrook, Johanna Irene

    2015-09-01

    To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by the density of connections, the proportion of reciprocal relationships (reciprocity), the number of colleagues to whom each person provided advice (in-degree), and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% ward A and 12% ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B had higher density and reciprocity and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95%CI, 0.47-0.79] versus 1.81 per admission [95%CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks. Strategies to improve the advice-giving networks between senior and junior physicians may be a fruitful area for intervention to improve medication safety. We found that one ward with stronger networks also had a significantly lower prescribing error rate, suggesting a promising area for further investigation.
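
    A small Python sketch (using networkx) of the network measures named above - density, reciprocity, and in-degree - for a directed advice-seeking network; the node names and edge list are invented for illustration.

        import networkx as nx

        # Directed edges: (seeker, adviser) for weekly medication advice; made-up example.
        edges = [
            ("junior_dr_1", "pharmacist"), ("junior_dr_2", "pharmacist"),
            ("nurse_1", "senior_nurse"), ("nurse_2", "senior_nurse"),
            ("junior_dr_1", "senior_dr"), ("pharmacist", "junior_dr_1"),
            ("nurse_1", "junior_dr_2"),
        ]
        G = nx.DiGraph()
        G.add_nodes_from(["senior_dr", "junior_dr_1", "junior_dr_2",
                          "pharmacist", "senior_nurse", "nurse_1", "nurse_2"])
        G.add_edges_from(edges)

        print("density:", nx.density(G))            # proportion of possible ties that are present
        print("reciprocity:", nx.reciprocity(G))    # proportion of ties that are mutual
        # In-degree: how many colleagues each person provides advice to (edges point seeker -> adviser).
        print(sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True))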

  13. Use of geographic information systems to assess the error associated with the use of place of residence in injury research.

    PubMed

    Amram, Ofer; Schuurman, Nadine; Yanchar, Natalie L; Pike, Ian; Friger, Michael; Griesdale, Donald

    In any spatial research, the use of accurate location data is critical to the reliability of the results. Unfortunately, however, many of the administrative data sets used in injury research do not include the location at which the injury takes place. The aim of this paper is to examine the error associated with using place of residence as opposed to place of injury when identifying injury hotspots and hospital access. Traumatic Brain Injury (TBI) data from the BC Trauma Registry (BCTR) was used to identify all TBI patients admitted to BC hospitals between January 2000 and March 2013. In order to estimate how locational error impacts the identification of injury hotspots, the data was aggregated to the level of dissemination area (DA) and census tract (CT) and a linear regression was performed using place of residence as a predictor for place of injury. In order to assess the impact of locational error in studies examining hospital access, an analysis of the driving time between place of injury and place of residence and the difference in driving time between place of residence and the treatment hospital, and place of injury and the same hospital was conducted. The driving time analysis indicated that 73.3 % of the injuries occurred within 5 min of place of residence, 11.2 % between 5 and 10 min, and 15.5 % over 20 min. Misclassification error occurs at both the DA and CT level. The residual map of the DA clearly shows more detailed misclassification. As expected, the driving time between place of residence and place of injury and the difference between these same two locations and the treatment hospital share a positive relationship. In fact, the larger the distance was between the two locations, the larger the error was when estimating access to hospital. Our results highlight the need for more systematic recording of place of injury as this will allow researchers to more accurately pinpoint where injuries occur. It will also allow researchers to identify the causes of these injuries and to determine how these injuries might be prevented.

  14. Rapid production of optimal-quality reduced-resolution representations of very large databases

    DOEpatents

    Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.

    2001-01-01

    View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed. A database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric is computed for each element at each level of detail that is independent of parameters defining the view space. The multiresolution database and associated strict error metrics are then processed in real time for real time frame representations. View parameters for a view volume comprising a view location and field of view are selected. The error metric, combined with the view parameters, is converted to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. First elements that are at least partially within the view volume are selected from the initial representation data set. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. A determination is made whether the number of first elements in the queue meets or exceeds a predetermined number of elements or the largest error metric is less than or equal to a selected upper error metric bound; if not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting is continued until the determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then outputted as reduced resolution view space data representing the terrain features.
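
    A schematic Python sketch of the queue-driven refinement loop described above: keep a priority queue ordered by view-dependent error, and force-split the head element until either the element budget is reached or the largest error falls below the bound. The function signatures and the handling of unsplittable elements are assumptions, not the patented implementation.

        import heapq

        def refine(initial_elements, view_error, split, max_elements, error_bound):
            """Greedy view-dependent refinement of a multiresolution element set.
            `view_error(e)` returns the view-dependent error of element e;
            `split(e)` returns its children, or an empty list at the finest level."""
            heap = [(-view_error(e), i, e) for i, e in enumerate(initial_elements)]
            heapq.heapify(heap)
            counter = len(heap)
            while heap:
                largest_error = -heap[0][0]
                if len(heap) >= max_elements or largest_error <= error_bound:
                    break                              # determination is positive: stop splitting
                _, _, elem = heapq.heappop(heap)
                children = split(elem) or []
                if not children:                       # finest level reached; retire this element
                    heapq.heappush(heap, (0.0, counter, elem))
                    counter += 1
                else:
                    for c in children:
                        heapq.heappush(heap, (-view_error(c), counter, c))
                        counter += 1
            return [e for _, _, e in heap]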

  15. Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences

    NASA Astrophysics Data System (ADS)

    Wierzbicki, Damian

    2017-12-01

    Currently, UAV photogrammetry is characterized by largely automated and efficient data processing. Imaging from low altitudes is increasingly used in applications such as city mapping, corridor mapping, road and pipeline inspections, and the mapping of large areas, e.g., forests. Additionally, high-resolution video imagery (HD and larger) is increasingly used for low-altitude imaging; on the one hand it delivers many details and characteristics of ground surface features, and on the other hand it presents new challenges in data processing. Therefore, the determination of the elements of external orientation plays a substantial role in the detail of Digital Terrain Models and in artifact-free orthophoto generation. In parallel, research is conducted on the quality of the images acquired from UAVs and on the quality of products such as orthophotos. Despite the rapid development of UAV photogrammetry, it is still necessary to perform Automatic Aerial Triangulation (AAT) on the basis of GPS/INS observations and ground control points. During a low-altitude photogrammetric flight, the approximate elements of external orientation registered by the UAV are affected by shift/bias errors. In this article, methods for determining the shift/bias error are presented. In the process of digital aerial triangulation, two solutions are applied. In the first method, the shift/bias error is determined together with the drift/bias error, the elements of external orientation, and the coordinates of ground control points. In the second method, the shift/bias error is determined together with the elements of external orientation and the coordinates of ground control points, with the drift/bias error set equal to 0. When the two methods are compared, the difference in the shift/bias error is more than ±0.01 m for all terrain coordinates XYZ.

  16. Parametric representation of weld fillets using shell finite elements—a proposal based on minimum stiffness and inertia errors

    NASA Astrophysics Data System (ADS)

    Echer, L.; Marczak, R. J.

    2018-02-01

    The objective of the present work is to introduce a methodology capable of modelling welded components for structural stress analysis. The modelling technique was based on the recommendations of the International Institute of Welding; however, some geometrical features of the weld fillet were used as design parameters in an optimization problem. Namely, the weld leg length and thickness of the shell elements representing the weld fillet were optimized in such a way that the first natural frequencies were not changed significantly when compared to a reference result. Sequential linear programming was performed for T-joint structures corresponding to two different structural details: with and without full penetration weld fillets. Both structural details were tested in scenarios of various plate thicknesses and depths. Once the optimal parameters were found, a modelling procedure was proposed for T-shaped components. Furthermore, the proposed modelling technique was extended for overlapped welded joints. The results obtained were compared to well-established methodologies presented in standards and in the literature. The comparisons included results for natural frequencies, total mass and structural stress. By these comparisons, it was observed that some established practices produce significant errors in the overall stiffness and inertia. The methodology proposed herein does not share this issue and can be easily extended to other types of structure.

  17. Simulation of 7050 Wrought Aluminum Alloy Wheel Die Forging and its Defects Analysis based on DEFORM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang Shiquan; Yi Youping; Zhang Yuxun

    2010-06-15

    Defects such as folding, intercrystalline cracking, and flow lines outcropping are very likely to occur in the forging of aluminum alloys. Moreover, it is difficult to achieve the optimal set of process parameters just by trial and error within an industrial environment. In producing a 7050 wrought aluminum alloy wheel, a rigid-plastic finite element method (FEM) analysis has been performed to optimize the die forging process. Processing parameters were analyzed, focusing on the effects of punch speed, friction factor, and temperature. Meanwhile, the mechanism and evolution of the defects of the wrought wheel were studied in detail. From an analysis of the results, isothermal die forging was proposed for producing 7050 aluminum alloy wheels with good mechanical properties. Finally, a verification experiment was carried out on a hydropress.

  18. Scintillation and bit error rate analysis of a phase-locked partially coherent flat-topped array laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad

    2017-12-01

    In this paper, the performance of underwater wireless optical communication (UWOC) links based on the partially coherent flat-topped (PCFT) array laser beam has been investigated in detail. Providing high power, array laser beams are employed to increase the range of UWOC links. For characterization of the effects of oceanic turbulence on the propagation behavior of the considered beam, using the extended Huygens-Fresnel principle, an analytical expression for cross-spectral density matrix elements and a semi-analytical one for the fourth-order statistical moment have been derived. Then, based on these expressions, the on-axis scintillation index of the mentioned beam propagating through weak oceanic turbulence has been calculated. Furthermore, in order to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of some source factors and turbulent ocean parameters on the propagation behavior of the scintillation index and the BER have been studied in detail. The results of this investigation indicate that in comparison with the Gaussian array beam, when the source size of beamlets is larger than the first Fresnel zone, the PCFT array laser beam with the higher flatness order is found to have a lower scintillation index and hence lower BER. Specifically, in the sense of scintillation index reduction, using the PCFT array laser beams has a considerable benefit in comparison with the single PCFT or Gaussian laser beams and also Gaussian array beams. All the simulation results of this paper are shown in graphs and analyzed in detail.
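
    A generic Python sketch of how a scintillation index maps to an average bit error rate for an intensity-modulated on-off-keyed link under a lognormal (weak-fluctuation) fading model, obtained by averaging the conditional BER over the irradiance distribution. This is a textbook-style calculation used only to illustrate the scintillation-BER link; it is not the paper's semi-analytical expressions for PCFT array beams in oceanic turbulence, and the SNR definition and integration limits are assumptions.

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import erfc

        def average_ber(snr, scint_index):
            """Average BER of an OOK intensity-modulated link under lognormal fading
            with the given on-axis scintillation index (weak-fluctuation model)."""
            sigma2 = np.log(1.0 + scint_index)        # log-irradiance variance
            mu = -0.5 * sigma2                        # chosen so that the mean irradiance is 1

            def integrand(i):
                p = np.exp(-(np.log(i) - mu) ** 2 / (2.0 * sigma2)) \
                    / (i * np.sqrt(2.0 * np.pi * sigma2))
                return p * 0.5 * erfc(snr * i / (2.0 * np.sqrt(2.0)))

            ber, _ = quad(integrand, 1e-6, 20.0)
            return ber

        for si in (0.05, 0.2, 0.5):                   # lower scintillation index -> lower BER
            print(si, average_ber(snr=10.0, scint_index=si))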

  19. Generalized fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains

    NASA Astrophysics Data System (ADS)

    Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.

    2004-07-01

    Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.

  20. Physics-based statistical learning approach to mesoscopic model selection.

    PubMed

    Taverniers, Søren; Haut, Terry S; Barros, Kipton; Alexander, Francis J; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
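
    The core idea of selecting model complexity by held-out predictive error can be sketched in a few lines. The polynomial-regression stand-in below is only a placeholder for the sGLE/Glauber machinery of the record; the data are synthetic.

        import numpy as np
        from sklearn.model_selection import KFold
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        x = rng.uniform(-2, 2, size=(200, 1))
        y = np.sin(2 * x[:, 0]) + 0.3 * rng.normal(size=200)   # surrogate "training" data

        def cv_error(degree, n_splits=5):
            """Mean held-out squared error for a given model complexity."""
            errors = []
            for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=1).split(x):
                model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
                model.fit(x[train], y[train])
                errors.append(mean_squared_error(y[test], model.predict(x[test])))
            return np.mean(errors)

        scores = {d: cv_error(d) for d in range(1, 10)}
        print("cross-validated errors:", scores)
        print("most predictive complexity:", min(scores, key=scores.get))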

  1. Description of data on the Nimbus 7 LIMS map archive tape: Ozone and nitric acid

    NASA Technical Reports Server (NTRS)

    Remsberg, E. E.; Kurzeja, R. J.; Haggard, K. V.; Russell, J. M., III; Gordley, L. L.

    1986-01-01

    The Nimbus 7 Limb Infrared Monitor of the Stratosphere (LIMS) data set has been processed into a Fourier coefficient representation with a Kalman filter algorithm applied to profile data at individual latitudes and pressure levels. The algorithm produces synoptic data at noon Greenwich Mean Time (GMT) from the asynoptic orbital profiles. This form of the data set is easy to use and is appropriate for time series analysis and further data manipulation and display. Ozone and nitric acid results are grouped together in this report because the LIMS vertical fields of view (FOVs) and analysis characteristics for these species are similar. A comparison of the orbital input data with mixing ratios derived from Kalman filter coefficients indicates errors in mixing ratio of generally less than 5 percent, with 15 percent being a maximum error. The high quality of the mapped data was indicated by coherence of both the phases and the amplitudes of waves with latitude and pressure. Examples of the mapped fields are presented, and details are given concerning the importance of diurnal variations, the removal of polar stratospheric cloud signatures, and the interpretation of bias effects in the data near the tops of profiles.
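
    A minimal sketch of the general technique (sequentially updating zonal Fourier coefficients from asynoptic samples with a scalar-observation Kalman filter) is given below. The random-walk state model, noise levels, wave-number cutoff and sample values are illustrative assumptions, not the LIMS production algorithm.

        import numpy as np

        K = 2                      # number of zonal waves retained (assumption)
        nstate = 1 + 2 * K         # state = [zonal mean, a1, b1, a2, b2]

        x = np.zeros(nstate)                    # Fourier-coefficient state
        P = np.eye(nstate) * 1e2                # initial state covariance
        Q = np.eye(nstate) * 1e-3               # random-walk process noise (assumption)
        R = 0.05 ** 2                           # measurement-noise variance (assumption)

        def h_row(lon_rad):
            """Observation operator: value = mean + sum_k a_k cos(k lon) + b_k sin(k lon)."""
            row = [1.0]
            for k in range(1, K + 1):
                row += [np.cos(k * lon_rad), np.sin(k * lon_rad)]
            return np.array(row)

        def kalman_update(x, P, lon_rad, y):
            x, P = x.copy(), P + Q                      # propagate (random walk)
            H = h_row(lon_rad)
            S = H @ P @ H + R                           # innovation variance (scalar)
            gain = P @ H / S
            x = x + gain * (y - H @ x)                  # state update
            P = P - np.outer(gain, H) @ P               # covariance update
            return x, P

        # feed asynoptic profile values (longitude, mixing ratio) one at a time
        for lon_deg, y in [(10.0, 5.2), (120.0, 4.8), (250.0, 5.5), (330.0, 5.1)]:
            x, P = kalman_update(x, P, np.radians(lon_deg), y)
        print("estimated Fourier coefficients:", x)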

  2. Spatio-temporal distribution of Oklahoma earthquakes: Exploring relationships using a nearest-neighbor approach: Nearest-neighbor analysis of Oklahoma

    DOE PAGES

    Vasylkivska, Veronika S.; Huerta, Nicolas J.

    2017-06-24

    Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog’s inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.
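
    A common formulation of the nearest-neighbor approach uses a combined space-time-magnitude distance eta between event pairs; the sketch below follows that general recipe. The specific formula, the fractal dimension and b-value defaults, and the toy catalog are assumptions for illustration and may differ from the implementation used in this study.

        import numpy as np

        def nearest_neighbor_distances(t, x, y, mag, df=1.6, b=1.0):
            """For each event j, find the earlier 'parent' i minimizing
            eta_ij = dt_ij * (r_ij ** df) * 10 ** (-b * mag_i)  (assumed form)."""
            n = len(t)
            parent = np.full(n, -1)
            eta = np.full(n, np.inf)
            for j in range(1, n):
                dt = t[j] - t[:j]                                   # days
                r = np.hypot(x[j] - x[:j], y[j] - y[:j])            # km
                dt[dt <= 0] = np.inf                                # only earlier events qualify
                e = dt * np.power(np.maximum(r, 1e-3), df) * 10.0 ** (-b * mag[:j])
                k = int(np.argmin(e))
                parent[j], eta[j] = k, e[k]
            return parent, eta

        # tiny synthetic catalog: time (days), easting/northing (km), magnitude
        t   = np.array([0.0, 0.5, 0.6, 10.0])
        x   = np.array([0.0, 1.0, 1.2, 50.0])
        y   = np.array([0.0, 0.5, 0.4, 40.0])
        mag = np.array([3.5, 2.0, 2.1, 3.0])
        parent, eta = nearest_neighbor_distances(t, x, y, mag)
        print(parent, eta)   # small eta -> likely clustered; large eta -> background event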

  3. Spatio-temporal distribution of Oklahoma earthquakes: Exploring relationships using a nearest-neighbor approach: Nearest-neighbor analysis of Oklahoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasylkivska, Veronika S.; Huerta, Nicolas J.

    Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog’s inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.

  4. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  5. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  6. Automatic classification of diseases from free-text death certificates for real-time surveillance.

    PubMed

    Koopman, Bevan; Karimi, Sarvnaz; Nguyen, Anthony; McGuire, Rhydwyn; Muscatello, David; Kemp, Madonna; Truran, Donna; Zhang, Ming; Thackway, Sarah

    2015-07-15

    Death certificates provide an invaluable source for mortality statistics which can be used for surveillance and early warnings of increases in disease activity and to support the development and monitoring of prevention or response strategies. However, their value can be realised only if accurate, quantitative data can be extracted from death certificates, an aim hampered by both the volume and variable nature of certificates written in natural language. This study aims to develop a set of machine learning and rule-based methods to automatically classify death certificates according to four high impact diseases of interest: diabetes, influenza, pneumonia and HIV. Two classification methods are presented: i) a machine learning approach, where detailed features (terms, term n-grams and SNOMED CT concepts) are extracted from death certificates and used to train a set of supervised machine learning models (Support Vector Machines); and ii) a set of keyword-matching rules. These methods were used to identify the presence of diabetes, influenza, pneumonia and HIV in a death certificate. An empirical evaluation was conducted using 340,142 death certificates, divided between training and test sets, covering deaths from 2000-2007 in New South Wales, Australia. Precision and recall (positive predictive value and sensitivity) were used as evaluation measures, with F-measure providing a single, overall measure of effectiveness. A detailed error analysis was performed on classification errors. Classification of diabetes, influenza, pneumonia and HIV was highly accurate (F-measure 0.96). More fine-grained ICD-10 classification effectiveness was more variable but still high (F-measure 0.80). The error analysis revealed that word variations as well as certain word combinations adversely affected classification. In addition, anomalies in the ground truth likely led to an underestimation of the effectiveness. The high accuracy and low cost of the classification methods allow for an effective means for automatic and real-time surveillance of diabetes, influenza, pneumonia and HIV deaths. In addition, the methods are generally applicable to other diseases of interest and to other sources of medical free-text besides death certificates.
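
    The machine-learning arm described here (term and n-gram features feeding a linear SVM, scored with precision, recall and F-measure) can be sketched as follows. The toy certificates and labels are made up, and the SNOMED CT concept features used in the study are omitted.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.metrics import precision_recall_fscore_support

        # toy free-text death certificates labelled for a single disease of interest
        train_texts = ["type 2 diabetes mellitus with renal failure",
                       "acute myocardial infarction",
                       "diabetic ketoacidosis, sepsis",
                       "influenza with secondary pneumonia"]
        train_labels = [1, 0, 1, 0]          # 1 = diabetes present as a cause

        test_texts = ["complications of diabetes mellitus", "community acquired pneumonia"]
        test_labels = [1, 0]

        # word unigrams and bigrams as features, linear SVM as the classifier
        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        clf.fit(train_texts, train_labels)
        pred = clf.predict(test_texts)

        p, r, f, _ = precision_recall_fscore_support(test_labels, pred, average="binary")
        print("precision %.2f recall %.2f F-measure %.2f" % (p, r, f))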

  7. iTOUGH2 V6.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan A.

    2010-11-01

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulation for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware Req.: Multi-platform; Related/auxiliary software: PVM (if running in parallel).
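
    The core inverse-modeling step (minimizing a weighted sum of squared differences between simulated output and observations, then deriving parameter uncertainty from the residuals and Jacobian) can be illustrated generically. The exponential "forward model", parameter names and data below are placeholders, not TOUGH2 physics or iTOUGH2 code.

        import numpy as np
        from scipy.optimize import least_squares

        t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
        y_obs = np.array([1.02, 0.73, 0.55, 0.31, 0.10])    # observations (made up)
        sigma = np.full_like(y_obs, 0.05)                    # observation standard errors

        def forward_model(params, t):
            a, k = params
            return a * np.exp(-k * t)                        # placeholder "simulator"

        def weighted_residuals(params):
            return (forward_model(params, t_obs) - y_obs) / sigma

        fit = least_squares(weighted_residuals, x0=[1.0, 0.1])
        # first-order parameter covariance from the Jacobian at the optimum
        cov = np.linalg.inv(fit.jac.T @ fit.jac)
        print("estimates:", fit.x)
        print("approximate standard errors:", np.sqrt(np.diag(cov)))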

  8. Fault-Tolerant Signal Processing Architectures with Distributed Error Control.

    DTIC Science & Technology

    1985-01-01

    Zm, Revisited," Information and Control, Vol. 37, pp. 100-104, 1978. 13. J. Wakerly , Error Detecting Codes. SeIf-Checkino Circuits and Applications ...However, the newer results concerning applications of real codes are still in the publication process. Hence, two very detailed appendices are included to...significant entities to be protected. While the distributed finite field approach afforded adequate protection, its applicability was restricted and

  9. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students’ competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors made. Hence, this study aimed at identifying students’ errors in solving TIMSS mathematical problems on the topic of numbers, which is considered the fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students who made the most errors on the test indicators, drawn from a class of 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving the Applying-level problem, the type of error students made was operational. For the Reasoning-level problem, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.

  10. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  11. Improved parallel image reconstruction using feature refinement.

    PubMed

    Cheng, Jing; Jia, Sen; Ying, Leslie; Liu, Yuanyuan; Wang, Shanshan; Zhu, Yanjie; Li, Ye; Zou, Chao; Liu, Xin; Liang, Dong

    2018-07-01

    The aim of this study was to develop a novel feature-refinement MR reconstruction method for highly undersampled multichannel acquisitions that improves image quality and preserves more detailed information. The feature refinement technique, which uses a feature descriptor to pick up useful features from the residual image discarded by sparsity constraints, is applied to preserve image details in compressed sensing and parallel imaging in MRI (CS-pMRI). A texture descriptor and a structure descriptor, recognizing different types of features, are required to form the feature descriptor. Feasibility of the feature refinement was validated using three different multicoil reconstruction methods on in vivo data. Experimental results show that reconstruction methods with feature refinement improve the quality of the reconstructed image and restore image details more accurately than the original methods, which is also verified by lower values of the root mean square error and the high-frequency error norm. A simple and effective way to preserve more useful detailed information in CS-pMRI is proposed. This technique can effectively improve reconstruction quality and has superior performance in terms of detail preservation compared with the original version without feature refinement. Magn Reson Med 80:211-223, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
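
    The two image-error metrics cited (root mean square error and a high-frequency error norm based on a Laplacian-of-Gaussian filter) can be computed as in the sketch below. The LoG width and the normalization follow a common convention in the MRI reconstruction literature and may not match the paper's exact definition; the images are synthetic.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def rmse(recon, reference):
            return np.sqrt(np.mean(np.abs(recon - reference) ** 2))

        def hfen(recon, reference, sigma=1.5):
            """High-frequency error norm: compare Laplacian-of-Gaussian filtered images."""
            diff = gaussian_laplace(np.abs(recon) - np.abs(reference), sigma=sigma)
            return np.linalg.norm(diff) / np.linalg.norm(gaussian_laplace(np.abs(reference), sigma=sigma))

        rng = np.random.default_rng(0)
        reference = rng.random((128, 128))
        recon = reference + 0.01 * rng.normal(size=reference.shape)
        print("RMSE:", rmse(recon, reference), "HFEN:", hfen(recon, reference))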

  12. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    DOE PAGES

    Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.; ...

    2018-02-16

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.

  13. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.

  14. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    NASA Astrophysics Data System (ADS)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H.-Y.; Ahlgrimm, M.; Bazile, E.; Berg, L. K.; Cheng, A.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Lee, W.-S.; Liu, Y.; Mellul, L.; Merryfield, W. J.; Qian, Y.; Roehrig, R.; Wang, Y.-C.; Xie, S.; Xu, K.-M.; Zhang, C.; Klein, S.; Petch, J.

    2018-03-01

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.
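
    The diagnostic described at the end of this record (compositing the screen-temperature bias by hour of day and correlating each grid point's diurnal cycle with the one at SGP) can be sketched as below. Array shapes, the SGP grid indices and the random "bias" field are placeholders for illustration only.

        import numpy as np

        # bias[t, y, x]: model-minus-observed 2 m temperature every hour over a 5 day hindcast
        hours_per_day, ndays, ny, nx = 24, 5, 40, 60
        rng = np.random.default_rng(1)
        bias = rng.normal(size=(hours_per_day * ndays, ny, nx))

        # composite diurnal cycle of the bias: average over days for each hour of day
        diurnal = bias.reshape(ndays, hours_per_day, ny, nx).mean(axis=0)   # (24, ny, nx)

        # correlate every grid point's diurnal cycle with the cycle at the SGP grid point
        sgp_iy, sgp_ix = 20, 30                                             # placeholder indices
        sgp_cycle = diurnal[:, sgp_iy, sgp_ix]
        anom = diurnal - diurnal.mean(axis=0)
        sgp_anom = sgp_cycle - sgp_cycle.mean()
        corr = (anom * sgp_anom[:, None, None]).sum(axis=0) / (
            np.sqrt((anom ** 2).sum(axis=0)) * np.sqrt((sgp_anom ** 2).sum()))
        print("fraction of points with correlation > 0.8:", (corr > 0.8).mean())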

  15. Implementation and reporting of causal mediation analysis in 2015: a systematic review in epidemiological studies.

    PubMed

    Liu, Shao-Hsien; Ulbricht, Christine M; Chrysanthopoulou, Stavroula A; Lapane, Kate L

    2016-07-20

    Causal mediation analysis is often used to understand the impact of variables along the causal pathway of an occurrence relation. How well studies apply and report the elements of causal mediation analysis remains unknown. We systematically reviewed epidemiological studies published in 2015 that employed causal mediation analysis to estimate direct and indirect effects of observed associations between an exposure and an outcome. We identified potential epidemiological studies through conducting a citation search within Web of Science and a keyword search within PubMed. Two reviewers independently screened studies for eligibility. For eligible studies, one reviewer performed data extraction, and a senior epidemiologist confirmed the extracted information. Empirical application and methodological details of the technique were extracted and summarized. Thirteen studies were eligible for data extraction. While the majority of studies reported and identified the effects of measures, most studies lacked sufficient details on the extent to which identifiability assumptions were satisfied. Although most studies addressed issues of unmeasured confounders either from empirical approaches or sensitivity analyses, the majority did not examine the potential bias arising from the measurement error of the mediator. Some studies allowed for exposure-mediator interaction and only a few presented results from models both with and without interactions. Power calculations were scarce. Reporting of causal mediation analysis is varied and suboptimal. Given that the application of causal mediation analysis will likely continue to increase, developing standards of reporting of causal mediation analysis in epidemiological research would be prudent.
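
    For readers unfamiliar with the decomposition being reviewed, the classical regression-based (product-of-coefficients) special case is sketched below on made-up data. The full counterfactual framework, exposure-mediator interactions and sensitivity analyses assessed in the review go beyond this simple sketch.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        exposure = rng.binomial(1, 0.5, n).astype(float)
        mediator = 0.8 * exposure + rng.normal(size=n)            # a-path (simulated)
        outcome = 0.5 * exposure + 0.6 * mediator + rng.normal(size=n)

        # mediator model: M ~ X
        m_fit = sm.OLS(mediator, sm.add_constant(exposure)).fit()
        a = m_fit.params[1]

        # outcome model: Y ~ X + M
        X = sm.add_constant(np.column_stack([exposure, mediator]))
        y_fit = sm.OLS(outcome, X).fit()
        direct, b = y_fit.params[1], y_fit.params[2]

        indirect = a * b
        print("direct effect %.3f, indirect effect %.3f, total %.3f"
              % (direct, indirect, direct + indirect))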

  16. WE-G-213CD-03: A Dual Complementary Verification Method for Dynamic Tumor Tracking on Vero SBRT.

    PubMed

    Poels, K; Depuydt, T; Verellen, D; De Ridder, M

    2012-06-01

    To use complementary cine EPID and gimbals log file analysis for in-vivo tracking accuracy monitoring. A clinical prototype of dynamic tracking (DT) was installed on the Vero SBRT system. This prototype version allowed tumor tracking by gimballed linac rotations using an internal-external correspondence model. The DT prototype software allowed the detailed logging of all applied gimbals rotations during tracking. The integration of an EPID on the Vero system allowed the acquisition of cine EPID images during DT. We quantified the tracking error on cine EPID (E-EPID) by subtracting the target center (fiducial marker detection) and the field centroid. Dynamic gimbals log file information was combined with orthogonal x-ray verification images to calculate the in-vivo tracking error (E-kVLog). The correlation between E-kVLog and E-EPID was calculated for validation of the gimbals log file. Further, we investigated the sensitivity of the log file tracking error by introducing predefined systematic tracking errors. As an application, we calculated the gimbals log file tracking error for dynamic hidden target tests to investigate gravity effects and the decoupling of gimbals rotation from gantry rotation. Finally, the clinical accuracy of dynamic tracking was evaluated by calculating complementary cine EPID and log file tracking errors. A strong correlation was found between log file and cine EPID tracking error distributions during concurrent measurements (R=0.98). We found sensitivity in the gimbals log files to detect a systematic tracking error up to 0.5 mm. Dynamic hidden target tests showed no gravity influence on tracking performance and a high degree of decoupled gimbals and gantry rotation during dynamic arc dynamic tracking. A submillimetric agreement between clinical complementary tracking error measurements was found. Redundancy of the internal gimbals log file with x-ray verification images with complementary independent cine EPID images was implemented to monitor the accuracy of gimballed tumor tracking on Vero SBRT. Research was financially supported by the Flemish government (FWO), Hercules Foundation and BrainLAB AG. © 2012 American Association of Physicists in Medicine.

  17. RNA Imaging with Multiplexed Error Robust Fluorescence in situ Hybridization

    PubMed Central

    Moffitt, Jeffrey R.; Zhuang, Xiaowei

    2016-01-01

    Quantitative measurements of both the copy number and spatial distribution of large fractions of the transcriptome in single-cells could revolutionize our understanding of a variety of cellular and tissue behaviors in both healthy and diseased states. Single-molecule Fluorescence In Situ Hybridization (smFISH)—an approach where individual RNAs are labeled with fluorescent probes and imaged in their native cellular and tissue context—provides both the copy number and spatial context of RNAs but has been limited in the number of RNA species that can be measured simultaneously. Here we describe Multiplexed Error Robust Fluorescence In Situ Hybridization (MERFISH), a massively parallelized form of smFISH that can image and identify hundreds to thousands of different RNA species simultaneously with high accuracy in individual cells in their native spatial context. We provide detailed protocols on all aspects of MERFISH, including probe design, data collection, and data analysis to allow interested laboratories to perform MERFISH measurements themselves. PMID:27241748
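
    The error-robust decoding idea at the heart of MERFISH (assigning a measured on/off bit vector to the nearest valid barcode, accepting exact matches and single-bit corrections) can be illustrated with a toy codebook. Real MERFISH codebooks use longer, fixed-weight Hamming-distance-4 codes; the 8-bit codes and gene names below are made up.

        import numpy as np

        # toy codebook with pairwise Hamming distance 4, so single-bit errors are correctable
        codebook = {
            "geneA": np.array([1, 1, 1, 1, 0, 0, 0, 0]),
            "geneB": np.array([0, 0, 0, 0, 1, 1, 1, 1]),
            "geneC": np.array([1, 1, 0, 0, 1, 1, 0, 0]),
        }

        def decode(measured, max_errors=1):
            """Assign the measured bit vector to the barcode within max_errors bits, else reject."""
            hits = [(int(np.sum(measured != code)), gene) for gene, code in codebook.items()]
            dist, gene = min(hits)
            return gene if dist <= max_errors else None

        print(decode(np.array([1, 1, 1, 1, 0, 0, 0, 0])))   # exact match     -> geneA
        print(decode(np.array([1, 1, 1, 1, 1, 0, 0, 0])))   # one bit flipped -> geneA
        print(decode(np.array([1, 1, 1, 0, 1, 0, 0, 0])))   # two errors      -> None (rejected)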

  18. The decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.

    1988-01-01

    Reed-Solomon (RS) codes form an important part of the high-rate downlink telemetry system for the Magellan mission, and the RS decoding function for this project will be done by DSN. Although the basic idea behind all Reed-Solomon decoding algorithms was developed by Berlekamp in 1968, there are dozens of variants of Berlekamp's algorithm in current use. An attempt to restore order is made by presenting a mathematical theory which explains the working of almost all known RS decoding algorithms. The key innovation that makes this possible is the unified approach to the solution of the key equation, which simultaneously describes the Berlekamp, Berlekamp-Massey, Euclid, and continued fractions approaches. Additionally, a detailed analysis is made of what can happen to a generic RS decoding algorithm when the number of errors and erasures exceeds the code's designed correction capability, and it is shown that while most published algorithms do not detect as many of these error-erasure patterns as possible, by making a small change in the algorithms, this problem can be overcome.
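
    One of the key-equation solvers unified in this analysis is the Berlekamp-Massey algorithm, which finds the error-locator (connection) polynomial from the syndrome sequence. The sketch below runs over a small prime field purely for readability; practical RS decoders work over GF(2^m) and follow this step with a Chien search and Forney's formula, none of which is shown.

        def berlekamp_massey(syndromes, p):
            """Shortest LFSR (connection polynomial C) reproducing the sequence over GF(p):
            s[n] + C[1]*s[n-1] + ... + C[L]*s[n-L] = 0 for all n >= L."""
            n_syms = len(syndromes)
            C = [1] + [0] * n_syms          # current connection polynomial
            B = [1] + [0] * n_syms          # copy saved at the last length change
            L, m, b = 0, 1, 1
            for n in range(n_syms):
                # discrepancy between predicted and actual syndrome
                d = syndromes[n] % p
                for i in range(1, L + 1):
                    d = (d + C[i] * syndromes[n - i]) % p
                if d == 0:
                    m += 1
                else:
                    coef = d * pow(b, p - 2, p) % p      # d / b in GF(p)
                    T = C[:]
                    for i in range(n_syms + 1 - m):
                        C[i + m] = (C[i + m] - coef * B[i]) % p
                    if 2 * L <= n:                       # length change
                        L, B, b, m = n + 1 - L, T, d, 1
                    else:
                        m += 1
            return C[:L + 1], L

        # sequence satisfying s[n] = 3*s[n-1] + 2*s[n-2] over GF(7)
        sigma, L = berlekamp_massey([1, 1, 5, 3, 5, 0], p=7)
        print(sigma, L)    # [1, 4, 5], i.e. 1 - 3x - 2x^2 (mod 7), degree L = 2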

  19. A novel beamformer design method for medical ultrasound. Part I: Theory.

    PubMed

    Ranganathan, Karthik; Walker, William F

    2003-01-01

    The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.
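
    The linear-algebra formulation can be illustrated for the continuous-wave case: the far-field beam pattern is linear in the aperture weights, so the weights minimizing the sum squared error against a goal pattern come from a single least-squares solve. The array geometry, frequency and goal pattern below are placeholders, not the paper's design cases.

        import numpy as np

        c, f0 = 1540.0, 5e6                      # sound speed (m/s), frequency (Hz)
        lam = c / f0
        n_elem = 32
        x_elem = (np.arange(n_elem) - (n_elem - 1) / 2) * lam / 2     # half-wavelength pitch

        theta = np.radians(np.linspace(-30, 30, 361))                 # field angles
        # beam pattern is linear in the weights: p(theta) = A @ w
        A = np.exp(1j * 2 * np.pi / lam * np.outer(np.sin(theta), x_elem))

        # goal pattern: flat response inside +/- 5 degrees, zero outside (idealized)
        p_goal = (np.abs(np.degrees(theta)) <= 5).astype(complex)

        # minimum sum-squared-error aperture weights
        w, *_ = np.linalg.lstsq(A, p_goal, rcond=None)
        achieved = A @ w
        print("residual SSE:", np.sum(np.abs(achieved - p_goal) ** 2))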

  20. Error Reduction Program. [combustor performance evaluation codes

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.

    1985-01-01

    The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two dimensional and three dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.

  1. Security issues of Internet-based biometric authentication systems: risks of Man-in-the-Middle and BioPhishing on the example of BioWebAuth

    NASA Astrophysics Data System (ADS)

    Zeitz, Christian; Scheidat, Tobias; Dittmann, Jana; Vielhauer, Claus; González Agulla, Elisardo; Otero Muras, Enrique; García Mateo, Carmen; Alba Castro, José L.

    2008-02-01

    Besides the optimization of biometric error rates, the overall security system performance with respect to intentional security attacks plays an important role for biometric-enabled authentication schemes. As most user authentication schemes are traditionally knowledge- and/or possession-based, we first present a methodology for a security analysis of Internet-based biometric authentication systems, enhancing known methodologies such as the CERT attack taxonomy with a more detailed view on the OSI model. Secondly, as proof of concept, the guidelines extracted from this methodology are strictly applied to an open-source Internet-based biometric authentication system (BioWebAuth). As case studies, two exemplary attacks based on the security leaks found are investigated, and the attack performance is presented to show that in biometric authentication schemes, security issues need to be addressed alongside biometric error performance tuning. Finally, some design recommendations are given in order to ensure a minimum security level.

  2. A procedure used for a ground truth study of a land use map of North Alabama generated from LANDSAT data

    NASA Technical Reports Server (NTRS)

    Downs, S. W., Jr.; Sharma, G. C.; Bagwell, C.

    1977-01-01

    A land use map of a five county area in North Alabama was generated from LANDSAT data using a supervised classification algorithm. There was good overall agreement between the land use designated and known conditions, but there were also obvious discrepancies. In ground checking the map, two types of errors were encountered - shift and misclassification - and a method was developed to eliminate or greatly reduce the errors. Randomly selected study areas containing 2,525 pixels were analyzed. Overall, 76.3 percent of the pixels were correctly classified. A contingency coefficient of correlation was calculated to be 0.7 which is significant at the alpha = 0.01 level. The land use maps generated by computers from LANDSAT data are useful for overall land use by regional agencies. However, care must be used when making detailed analysis of small areas. The procedure used for conducting the ground truth study together with data from representative study areas is presented.
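
    The two summary statistics reported here (overall percent correct and Pearson's contingency coefficient C = sqrt(chi2/(chi2 + N))) can be computed from a classification confusion matrix as below; the small matrix of pixel counts is made up for illustration.

        import numpy as np
        from scipy.stats import chi2_contingency

        # rows: classified land use, columns: ground-truth land use (pixel counts, made up)
        confusion = np.array([[820,  40,  15],
                              [ 60, 700,  35],
                              [ 25,  45, 785]])

        n_pixels = confusion.sum()
        percent_correct = 100.0 * np.trace(confusion) / n_pixels

        chi2, p_value, dof, _ = chi2_contingency(confusion)
        contingency_coeff = np.sqrt(chi2 / (chi2 + n_pixels))

        print("overall agreement: %.1f%%" % percent_correct)
        print("contingency coefficient: %.2f (p = %.3g)" % (contingency_coeff, p_value))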

  3. Error Analyses of the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.

  4. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.

  5. Measurements of the toroidal torque balance of error field penetration locked modes

    DOE PAGES

    Shiraki, Daisuke; Paz-Soldan, Carlos; Hanson, Jeremy M.; ...

    2015-01-05

    Here, detailed measurements from the DIII-D tokamak of the toroidal dynamics of error field penetration locked modes under the influence of slowly evolving external fields enable study of the toroidal torques on the mode, including interaction with the intrinsic error field. The error field in these low-density Ohmic discharges is well known based on the mode penetration threshold, allowing resonant and non-resonant torque effects to be distinguished. These m/n = 2/1 locked modes are found to be well described by a toroidal torque balance between the resonant interaction with n = 1 error fields and a viscous torque in the electron diamagnetic drift direction, which is observed to scale as the square of the perturbed field due to the island. Fitting to this empirical torque balance allows a time-resolved measurement of the intrinsic error field of the device, providing evidence for a time-dependent error field in DIII-D due to ramping of the Ohmic coil current.

  6. Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains

    NASA Astrophysics Data System (ADS)

    Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.

    2004-07-01

    This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the effects of grid-induced anisotropy observed in the non-dimensional phase speed, and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
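
    As a minimal one-dimensional example of the kind of Fourier (von Neumann) analysis described, the sketch below evaluates the modified wavenumber of second-order central differencing, and hence the non-dimensional phase-speed and diffusivity ratios, across resolvable wavenumbers. This simple unstabilized scheme is chosen only for brevity; it is not one of the stabilized methods compared in the paper.

        import numpy as np

        h = 1.0                                    # grid spacing
        c = 1.0                                    # advection speed
        kh = np.linspace(1e-3, np.pi, 200)         # non-dimensional wavenumber k*h

        # central difference of du/dx -> modified wavenumber k* = sin(kh)/h
        numerical_phase_speed = c * np.sin(kh) / kh
        phase_speed_ratio = numerical_phase_speed / c          # 1 means exact propagation

        # central difference of d2u/dx2 -> discrete-to-exact diffusivity ratio
        diffusivity_ratio = 2.0 * (1.0 - np.cos(kh)) / kh**2

        for i in (9, 99, 199):
            print("kh = %.2f  phase-speed ratio = %.3f  diffusivity ratio = %.3f"
                  % (kh[i], phase_speed_ratio[i], diffusivity_ratio[i]))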

  7. Erratum: ``Spectroscopic Survey of M Dwarfs within 100 Parsecs of the Sun'' (AJ, 130, 1871 [2005])

    NASA Astrophysics Data System (ADS)

    Bochanski, John J.; Hawley, Suzanne L.; Reid, I. Neill; Covey, Kevin R.; West, Andrew A.; Tinney, C. G.; Gizis, John E.

    2006-06-01

    In Table 2 of the recent paper titled ``Spectroscopic Survey of M Dwarfs within 100 Parsecs of the Sun'' by Bochanski et al., the authors presented UVW space velocities, proper motions, radial velocities, and distances to the 574 M dwarfs within their sample. The UVW motions were then examined as a function of vertical distance from the Galactic plane, with a discussion on the significance of the results and their application to dynamic heating models. The authors have discovered an error in the calculation of the UVW motions. During the preparation of the manuscript, the computed space motions were not accurately recorded for a given star, resulting in sporadic errors throughout Table 2 and the subsequent analysis. In addition, the authors want to explicitly state that the UVW motions, corrected to the local standard of rest, are in a right-handed system, with a positive U-velocity in the direction of the Galactic center. The new space velocities for the M dwarfs within this sample affect Tables 2 and 4-6 and Figures 8 and 9. The new values are included below, but the authors stress that the original conclusions presented in § 6 of the original paper remain valid. In the new version of Figure 9, the general decrease in velocity dispersion of the broad component (circles) with distance from the plane is preserved, along with the mostly constant dispersion of the narrow velocity dispersion component (squares). For completeness, a new illustrative demonstration of our kinematic analysis is shown, along with updated versions of Tables 4-6, which present the details of the kinematic analysis for UVW. The authors sincerely regret any confusion introduced by this error and wish to thank Francesca Figueras for helpful discussion.

  8. Linear regression analysis: part 14 of a series on evaluation of scientific publications.

    PubMed

    Schneider, Astrid; Hommel, Gerhard; Blettner, Maria

    2010-11-01

    Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.
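
    A tiny worked example of the univariable case discussed in the article is shown below, reporting the fitted coefficients, their confidence intervals and R^2; the blood-pressure data are made up for illustration.

        import numpy as np
        import statsmodels.api as sm

        # made-up data: systolic blood pressure (mmHg) vs. age (years)
        age = np.array([25, 32, 41, 47, 55, 61, 68, 74])
        sbp = np.array([118, 121, 127, 130, 138, 142, 149, 155])

        model = sm.OLS(sbp, sm.add_constant(age)).fit()
        print(model.params)          # intercept and slope (mmHg per year)
        print(model.conf_int())      # 95% confidence intervals
        print("R^2 =", model.rsquared)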

  9. SU-E-T-261: Plan Quality Assurance of VMAT Using Fluence Images Reconstituted From Log-Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsuta, Y; Shimizu, E; Matsunaga, K

    2014-06-01

    Purpose: A successful VMAT plan delivery includes precise modulation of dose rate, gantry rotation and multi-leaf collimator (MLC) shapes. One of the main problems in plan quality assurance is that dosimetric errors associated with leaf-positional errors are difficult to analyze because they vary with the MU delivered and the leaf number. In this study, we calculated an integrated fluence error image (IFEI) from log-files and evaluated plan quality in the area scanned by all MLC leaves and by individual leaves. Methods: The log-file reported the expected and actual positions of the inner 20 MLC leaves and the dose fraction every 0.25 seconds during prostate VMAT on an Elekta Synergy. These data were imported into in-house software developed to calculate expected and actual fluence images from the difference of opposing leaf trajectories and the dose fraction at each time. The IFEI was obtained by summing the absolute values of the differences between corresponding expected and actual fluence images. Results: In the area scanned by all MLC leaves, the average and root mean square (rms) of the IFEI were 2.5 and 3.6 MU, the areas with errors below 10, 5 and 3 MU were 98.5, 86.7 and 68.1%, and 95% of the area was covered with an error of less than 7.1 MU. In the areas scanned by individual MLC leaves, the average and rms values were 2.1-3.0 and 3.1-4.0 MU, the areas with errors below 10, 5 and 3 MU were 97.6-99.5, 81.7-89.5 and 51.2-72.8%, and 95% of the area was covered with an error of less than 6.6-8.2 MU. Conclusion: The analysis of the IFEI reconstituted from the log-file provided detailed information about the delivery in the area scanned by all MLC leaves and by individual leaves.
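
    At the array level, accumulating an integrated fluence error image from per-sample expected and actual fluence maps reduces to a sum of absolute differences, as in the sketch below; the shapes and values are placeholders, not the clinical log-file format.

        import numpy as np

        n_samples, ny, nx = 400, 40, 120            # log-file samples every 0.25 s (placeholder)
        rng = np.random.default_rng(2)
        expected = rng.random((n_samples, ny, nx))  # expected fluence per sample (MU)
        actual = expected + 0.02 * rng.normal(size=expected.shape)   # delivered fluence

        # integrated fluence error image: sum of absolute per-sample differences
        ifei = np.sum(np.abs(expected - actual), axis=0)

        print("mean error %.2f MU, RMS %.2f MU" % (ifei.mean(), np.sqrt((ifei**2).mean())))
        print("area with error below 5 MU: %.1f%%" % (100.0 * (ifei < 5).mean()))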

  10. Digital Mirror Device Application in Reduction of Wave-front Phase Errors

    PubMed Central

    Zhang, Yaping; Liu, Yan; Wang, Shuxue

    2009-01-01

    In order to correct the image distortion created by the mixing/shear layer, creative and effective correction methods are necessary. First, a method combining adaptive optics (AO) correction with a digital micro-mirror device (DMD) is presented. Second, the performance of an AO system using the Phase Diverse Speckle (PDS) principle is characterized in detail. Through combining the DMD method with PDS, a significant reduction in wavefront phase error is achieved in simulations and experiments. This kind of complex correction principle can be used to recover degraded images caused by unforeseen error sources.

  11. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    PubMed

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.

  12. Advanced Information Processing System - Fault detection and error handling

    NASA Technical Reports Server (NTRS)

    Lala, J. H.

    1985-01-01

    The Advanced Information Processing System (AIPS) is designed to provide a fault tolerant and damage tolerant data processing architecture for a broad range of aerospace vehicles, including tactical and transport aircraft, and manned and autonomous spacecraft. A proof-of-concept (POC) system is now in the detailed design and fabrication phase. This paper gives an overview of a preliminary fault detection and error handling philosophy in AIPS.

  13. [Preclinical and clinical management after mass disaster : Experiences from the train collision in Bad Aibling on 9 February 2016].

    PubMed

    Regel, G; Bracht, M; Huth, M; Maier, K J; Böcker, W

    2016-06-01

    Mass casualty incidents (MCI) in this day and age represent a special challenge, which initially require on-site coordination and logistics and then a professional distribution of victims (triage) to surrounding hospitals. Technical, logistical and even specialist errors can impair this flow of events. It therefore seems advisable to make a detailed analysis of every MCI. In this article the railway incident from 9 February 2016 is analyzed taking the preclinical and clinical circumstances into consideration, and conclusions for future management are drawn. As a special entity it could be determined that fixed table units in passenger trains represent a particularly dangerous hazard and in many instances in this analysis led to characteristic abdominal and thoracic injuries.

  14. Digital Droplet PCR: CNV Analysis and Other Applications.

    PubMed

    Mazaika, Erica; Homsy, Jason

    2014-07-14

    Digital droplet PCR (ddPCR) is an assay that combines state-of-the-art microfluidics technology with TaqMan-based PCR to achieve precise target DNA quantification at high levels of sensitivity and specificity. Because quantification is achieved without the need for standard assays in an easy to interpret, unambiguous digital readout, ddPCR is far simpler, faster, and less error prone than real-time qPCR. The basic protocol can be modified with minor adjustments to suit a wide range of applications, such as CNV analysis, rare variant detection, SNP genotyping, and transcript quantification. This unit describes the ddPCR workflow in detail for the Bio-Rad QX100 system, but the theory and data interpretation are generalizable to any ddPCR system. Copyright © 2014 John Wiley & Sons, Inc.
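
    The quantification step behind ddPCR applications such as CNV analysis is a Poisson correction of the positive-droplet fraction: the mean copies per droplet is lambda = -ln(1 - k/n), and concentration follows from the droplet volume. The ~0.85 nL droplet volume below is a commonly quoted nominal value and is stated here only as an assumption; counts are made up.

        import numpy as np

        def ddpcr_concentration(positive, total, droplet_volume_nl=0.85):
            """Copies per microlitre from droplet counts via the Poisson correction."""
            frac_pos = positive / total
            lam = -np.log(1.0 - frac_pos)                       # mean copies per droplet
            copies_per_ul = lam / (droplet_volume_nl * 1e-3)    # nL -> uL
            return copies_per_ul

        # example: copy-number estimate from target and reference assays in the same sample
        conc_target = ddpcr_concentration(positive=4200, total=15000)
        conc_ref = ddpcr_concentration(positive=2300, total=15000)
        print("target: %.1f copies/uL, reference: %.1f copies/uL" % (conc_target, conc_ref))
        print("copy number (per diploid genome): %.2f" % (2.0 * conc_target / conc_ref))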

  15. A topological multilayer model of the human body.

    PubMed

    Barbeito, Antonio; Painho, Marco; Cabral, Pedro; O'Neill, João

    2015-11-04

    Geographical information systems deal with spatial databases in which topological models are described with alphanumeric information. Its graphical interfaces implement the multilayer concept and provide powerful interaction tools. In this study, we apply these concepts to the human body creating a representation that would allow an interactive, precise, and detailed anatomical study. A vector surface component of the human body is built using a three-dimensional (3-D) reconstruction methodology. This multilayer concept is implemented by associating raster components with the corresponding vector surfaces, which include neighbourhood topology enabling spatial analysis. A root mean square error of 0.18 mm validated the three-dimensional reconstruction technique of internal anatomical structures. The expansion of the identification and the development of a neighbourhood analysis function are the new tools provided in this model.

  16. 42 CFR 431.992 - Corrective action plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...

  17. 42 CFR 431.992 - Corrective action plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...

  18. Computed tomography of x-ray images using neural networks

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.; Jones, Martin H.; Sheats, Matthew J.; Davis, Anthony W.

    2000-03-01

    Traditional CT reconstruction is done using the technique of filtered backprojection (FB). While this technique is widely employed in industrial and medical applications, it is not generally understood that FB has a fundamental flaw. The Gibbs phenomenon states that any Fourier reconstruction will produce errors in the vicinity of all discontinuities, and that the error will equal 28 percent of the discontinuity. A number of years back, one of the authors proposed a biological perception model whereby biological neural networks perceive 3D images from stereo vision. The perception model purports an internal hard-wired neural network which emulates the external physical process. A process is repeated whereby erroneous unknown internal values are used to generate an emulated signal which is compared to externally sensed data, generating an error signal. Feedback from the error signal is then used to update the erroneous internal values. The process is repeated until the error signal no longer decreases. It was soon realized that the same method could be used to obtain CT from x-rays without having to do Fourier transforms. Neural networks have the additional potential for handling non-linearities and missing data. The technique has been applied to some coral images collected at the Los Alamos high-energy x-ray facility. The initial images show considerable promise, in some instances showing more detail than the FB images obtained from the same data. Although routine production using this new method would require a massively parallel computer, the method shows promise, especially where refined detail is required.

  19. A Lightning Channel Retrieval Algorithm for the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, William; Arnold, James E. (Technical Monitor)

    2002-01-01

    A new multi-station VHF time-of-arrival (TOA) antenna network is, at the time of this writing, coming on-line in Northern Alabama. The network, called the Lightning Mapping Array (LMA), employs GPS timing and detects VHF radiation from discrete segments (effectively point emitters) that comprise the channel of lightning strokes within cloud and ground flashes. The network will support on-going ground validation activities of the low Earth orbiting Lightning Imaging Sensor (LIS) satellite developed at NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama. It will also provide for many interesting and detailed studies of the distribution and evolution of thunderstorms and lightning in the Tennessee Valley, and will offer many interesting comparisons with other meteorological/geophysical data sets associated with lightning and thunderstorms. In order to take full advantage of these benefits, it is essential that the LMA channel mapping accuracy (in both space and time) be fully characterized and optimized. In this study, a new revised channel mapping retrieval algorithm is introduced. The algorithm is an extension of earlier work provided in Koshak and Solakiewicz (1996) in the analysis of the NASA Kennedy Space Center (KSC) Lightning Detection and Ranging (LDAR) system. As in the 1996 study, direct algebraic solutions are obtained by inverting a simple linear system of equations, thereby making computer searches through a multi-dimensional parameter domain of a Chi-Squared function unnecessary. However, the new algorithm is developed completely in spherical Earth-centered coordinates (longitude, latitude, altitude), rather than in the (x, y, z) cartesian coordinates employed in the 1996 study. Hence, no mathematical transformations from (x, y, z) into spherical coordinates are required (such transformations involve more numerical error propagation, more computer program coding, and slightly more CPU computing time). The new algorithm also has a more realistic definition of source altitude that accounts for Earth oblateness (this can become important for sources that are hundreds of kilometers away from the network). In addition, the new algorithm is being applied to analyze computer simulated LMA datasets in order to obtain detailed location/time retrieval error maps for sources in and around the LMA network. These maps will provide a more comprehensive analysis of retrieval errors for LMA than the 1996 study did of LDAR retrieval errors. Finally, we note that the new algorithm can be applied to LDAR, and essentially any other multi-station TOA network that depends on direct line-of-sight antenna excitation.
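
    For orientation, the generic TOA retrieval problem the record describes can be posed as fitting a source position and emission time to the measured arrival times; the sketch below solves it by nonlinear least squares in a local Cartesian frame. This is not the paper's direct algebraic spherical-coordinate solution, and the station layout, timing noise and source are synthetic placeholders.

        import numpy as np
        from scipy.optimize import least_squares

        C_LIGHT = 2.998e8                            # speed of light (m/s)

        # station coordinates in a local Cartesian frame (metres) - placeholders
        stations = np.array([[0.0, 0.0, 0.0], [12e3, 2e3, 100.0], [3e3, 15e3, 200.0],
                             [-8e3, 9e3, 50.0], [6e3, -11e3, 150.0]])

        # synthetic truth and noisy arrival times (50 ns rms timing error)
        true_src, true_t0 = np.array([4e3, 5e3, 7e3]), 1.0e-3
        ranges = np.linalg.norm(stations - true_src, axis=1)
        t_meas = true_t0 + ranges / C_LIGHT + 50e-9 * np.random.default_rng(3).normal(size=len(stations))

        def residuals(p):
            """Arrival-time residuals t_i - t0 - r_i/c, in microseconds for better scaling."""
            xyz, t0 = p[:3], p[3]
            r = np.linalg.norm(stations - xyz, axis=1)
            return (t_meas - t0 - r / C_LIGHT) * 1e6

        fit = least_squares(residuals, x0=[0.0, 0.0, 5e3, 0.0])
        print("retrieved source (m):", fit.x[:3], "emission time (s):", fit.x[3])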

  20. Robust Tracking Control for a Piezoelectric Actuator

    DTIC Science & Technology

    2006-01-01

    ... − (1/ε) ρ(‖z‖)² ‖z‖² r  (31), where k_r ∈ R+ is a constant gain, ε ∈ R+ is a small constant, and ρ(‖z‖) ∈ R is a function of the norm of z(t) ∈ R². The ... inequality can be developed (see Appendix 3 for further details): |Ñ| ≤ ρ(‖z‖) ‖z‖  (33). After substituting (31) into (27), the following ... closed-loop error system can be obtained: m ṙ = Ñ + N_d − e + (T_em/C_c) s − k_r r − (1/ε) ρ(‖z‖)² ‖z‖² r  (34). 3.3 Stability Analysis. Theorem 1: The controller

  1. Swift/XRT detection of the hard X-ray source IGR J14549-6459

    NASA Astrophysics Data System (ADS)

    Fiocchi, M.; Bazzano, A.; Landi, R.; Bassani, L.; Gehrels, N.; Kennea, J.; Bird, A. J.

    2010-04-01

    We report the result of a short (900 sec) Swift/XRT observation of the field containing IGR J14549-6459, a new INTEGRAL source recently reported in the 4th IBIS catalogue (Bird et al. 2010, ApJS, 186, 1). The XRT data analysis is performed using the standard procedure described in detail in Landi et al. 2010 (MNRAS, 403, 945). The XRT observation locates the X-ray counterpart of IGR J14549-6459 at RA(J2000)= 14h 55m 23.9s, Dec(J2000)= -65d 00m 03.2s with an error of 6".

  2. Mapping cumulative noise from shipping to inform marine spatial planning.

    PubMed

    Erbe, Christine; MacGillivray, Alexander; Williams, Rob

    2012-11-01

    Including ocean noise in marine spatial planning requires predictions of noise levels on large spatiotemporal scales. Based on a simple sound transmission model and ship track data (Automatic Identification System, AIS), cumulative underwater acoustic energy from shipping was mapped throughout 2008 in the west Canadian Exclusive Economic Zone, showing high noise levels in critical habitats for endangered resident killer whales, exceeding limits of "good conservation status" under the EU Marine Strategy Framework Directive. Error analysis showed that relatively coarse calculations of noise occurrence and propagation can form a basis for management processes; spending resources on unnecessary detail is wasteful and delays remedial action.
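
    A bare-bones sketch of the mapping step is shown below: accumulate acoustic energy from each ship position onto a grid using a simple spreading-loss model, summing energy (not decibels) before converting back to dB. The 20*log10(r) spherical-spreading loss, the grid, and the source levels are stand-in assumptions, not the transmission model used in the paper.

      import numpy as np

      def cumulative_noise_map(ship_xy, source_levels_db, grid_x, grid_y):
          """Accumulate relative acoustic energy from ships onto a map grid.

          ship_xy          : (N, 2) ship positions in metres
          source_levels_db : (N,)   broadband source levels in dB
          """
          gx, gy = np.meshgrid(grid_x, grid_y)
          energy = np.zeros_like(gx, dtype=float)
          for (sx, sy), sl in zip(ship_xy, source_levels_db):
              r = np.hypot(gx - sx, gy - sy)
              r = np.maximum(r, 1.0)                  # avoid log(0) at the source cell
              received_db = sl - 20.0 * np.log10(r)   # simple spherical-spreading loss
              energy += 10.0 ** (received_db / 10.0)  # sum energy, not dB
          return 10.0 * np.log10(energy)              # back to dB for mapping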

  3. James Dunlop's historical catalogue of southern nebulae and clusters

    NASA Astrophysics Data System (ADS)

    Cozens, Glen; Walsh, Andrew; Orchiston, Wayne

    2010-03-01

    In 1826 James Dunlop compiled the second ever catalogue of southern star clusters, nebulae and galaxies from Parramatta (NSW, Australia) using a 23-cm reflecting telescope. Initially acclaimed, the catalogue and its author were later criticised and condemned by others - including Sir John Herschel - and both the catalogue and author are now largely unknown. The criticism of the catalogue centred on the large number of fictitious or ‘missing’ objects, yet detailed analysis reveals the remarkable completeness of the catalogue, despite its inherent errors. We believe that James Dunlop was an important early Australian astronomer, and his catalogue should be esteemed as the southern equivalent of Messier's famous northern catalogue.

  4. Improving designer productivity. [artificial intelligence

    NASA Technical Reports Server (NTRS)

    Hill, Gary C.

    1992-01-01

    Designer and design team productivity improves with skill, experience, and the tools available. The design process involves numerous trials and errors, analyses, refinements, and addition of details. Computerized tools have greatly speeded the analysis, and now new theories and methods, emerging under the label Artificial Intelligence (AI), are being used to automate skill and experience. These tools improve designer productivity by capturing experience, emulating recognized skillful designers, and making the essence of complex programs easier to grasp. This paper outlines the aircraft design process in today's technology and business climate, presenting some of the challenges ahead and some of the promising AI methods for meeting these challenges.

  5. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
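
    For the linear, time-independent setting described above, the steady-state analysis error covariance can be obtained by iterating the forecast/analysis (Kalman filter) covariance cycle to convergence and then examining its leading eigenvectors. The sketch below does that for a generic linear model M, observation operator H, and noise covariances Q and R; it is an illustration of the procedure only, not the advection or baroclinic wave examples of the paper.

      import numpy as np

      def steady_state_analysis_covariance(M, H, Q, R, n_cycles=500):
          """Iterate the forecast/analysis error-covariance cycle to steady state.

          M : (n, n) linear model (dynamics)
          H : (p, n) observation operator
          Q : (n, n) model error covariance
          R : (p, p) observation error covariance
          Returns the steady-state analysis error covariance Pa.
          """
          n = M.shape[0]
          Pa = np.eye(n)
          for _ in range(n_cycles):
              Pf = M @ Pa @ M.T + Q                                # forecast step
              K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)       # Kalman gain
              Pa = (np.eye(n) - K @ H) @ Pf                        # analysis step
          return Pa

      # Leading eigenvectors reveal whether a few modes dominate the analysis error:
      # w, V = np.linalg.eigh(Pa); dominant = V[:, np.argsort(w)[::-1][:3]]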

  6. Grid convergence errors in hemodynamic solution of patient-specific cerebral aneurysms.

    PubMed

    Hodis, Simona; Uthamaraj, Susheil; Smith, Andrea L; Dennis, Kendall D; Kallmes, David F; Dragomir-Daescu, Dan

    2012-11-15

    Computational fluid dynamics (CFD) has become a cutting-edge tool for investigating hemodynamic dysfunctions in the body. It has the potential to help physicians quantify in more detail the phenomena difficult to capture with in vivo imaging techniques. CFD simulations in anatomically realistic geometries pose challenges in generating accurate solutions due to the grid distortion that may occur when the grid is aligned with complex geometries. In addition, results obtained with computational methods should be trusted only after the solution has been verified on multiple high-quality grids. The objective of this study was to present a comprehensive solution verification of the intra-aneurysmal flow results obtained on different morphologies of patient-specific cerebral aneurysms. We chose five patient-specific brain aneurysm models with different dome morphologies and estimated the grid convergence errors for each model. The grid convergence errors were estimated with respect to an extrapolated solution based on the Richardson extrapolation method, which accounts for the degree of grid refinement. For four of the five models, calculated velocity, pressure, and wall shear stress values at six different spatial locations converged monotonically, with maximum uncertainty magnitudes ranging from 12% to 16% on the finest grids. Due to the geometric complexity of the fifth model, the grid convergence errors showed oscillatory behavior; therefore, each patient-specific model required its own grid convergence study to establish the accuracy of the analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
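
    The grid convergence errors described here follow the usual Richardson-extrapolation recipe: from solutions on three systematically refined grids one can estimate the observed order of convergence, an extrapolated "grid-independent" value, and a grid convergence index (GCI) for the finest grid. A minimal sketch is below; the refinement ratio, safety factor, and example values are generic assumptions, not numbers from the study.

      import numpy as np

      def richardson_error(f_fine, f_med, f_coarse, r=2.0, safety=1.25):
          """Grid-convergence error estimate from three systematically refined grids.

          r : constant grid refinement ratio between successive grids
          Returns (observed order p, extrapolated value, GCI of the fine grid in %).
          """
          p = np.log((f_coarse - f_med) / (f_med - f_fine)) / np.log(r)  # observed order
          f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)             # Richardson extrapolation
          rel_err = abs((f_fine - f_med) / f_fine)
          gci = 100.0 * safety * rel_err / (r**p - 1.0)                  # grid convergence index
          return p, f_exact, gci

      # Made-up wall-shear-stress values on coarse/medium/fine grids:
      # richardson_error(4.05, 4.20, 4.50) -> order ~1, extrapolated ~3.90, GCI ~4.6%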

  7. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
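
    The common ground of the four estimation techniques discussed here is the general linear model y = X*beta + e with a non-spherical error covariance V. As a point of reference only, the sketch below shows the generalized least squares step that such algorithms build on, with V treated as known; the VB/VML/ReML free-energy iterations that estimate V are not reproduced.

      import numpy as np

      def gls_estimate(X, y, V):
          """Generalized least squares for y = X beta + eps, eps ~ N(0, V).

          X : (n, p) design matrix
          y : (n,)   data vector
          V : (n, n) non-spherical error covariance, assumed known here
          Returns (beta_hat, covariance of beta_hat up to the noise scale).
          """
          Vi = np.linalg.inv(V)
          XtVi = X.T @ Vi
          cov_beta = np.linalg.inv(XtVi @ X)
          beta_hat = cov_beta @ (XtVi @ y)
          return beta_hat, cov_beta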

  8. Error-Analysis for Correctness, Effectiveness, and Composing Procedure.

    ERIC Educational Resources Information Center

    Ewald, Helen Rothschild

    The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…

  9. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
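
    As a rough illustration of the interval idea (using a tiny hand-rolled interval type rather than INTLAB, which is a MATLAB toolbox), the sketch below propagates a measurement uncertainty through a simple formula and returns guaranteed bounds on the result. The example function and tolerances are assumptions for illustration.

      class Interval:
          """Tiny closed-interval type for automatic error-bound propagation."""
          def __init__(self, lo, hi):
              self.lo, self.hi = min(lo, hi), max(lo, hi)
          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)
          def __sub__(self, other):
              return Interval(self.lo - other.hi, self.hi - other.lo)
          def __mul__(self, other):
              ps = [self.lo*other.lo, self.lo*other.hi, self.hi*other.lo, self.hi*other.hi]
              return Interval(min(ps), max(ps))
          def __truediv__(self, other):
              if other.lo <= 0.0 <= other.hi:
                  raise ZeroDivisionError("divisor interval contains zero")
              return self * Interval(1.0/other.hi, 1.0/other.lo)
          def __repr__(self):
              return f"[{self.lo:.6g}, {self.hi:.6g}]"

      # A measured value 2.0 +/- 0.01 propagated through f(x) = x / (x + 1):
      x = Interval(1.99, 2.01)
      print(x / (x + Interval(1.0, 1.0)))   # an enclosure of every possible result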

  10. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  11. The impact of response measurement error on the analysis of designed experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Autobiographical memory conjunction errors in younger and older adults: Evidence for a role of inhibitory ability

    PubMed Central

    Devitt, Aleea L.; Tippett, Lynette; Schacter, Daniel L.; Addis, Donna Rose

    2016-01-01

    Because of its reconstructive nature, autobiographical memory (AM) is subject to a range of distortions. One distortion involves the erroneous incorporation of features from one episodic memory into another, forming what are known as memory conjunction errors. Healthy aging has been associated with an enhanced susceptibility to conjunction errors for laboratory stimuli, yet it is unclear whether these findings translate to the autobiographical domain. We investigated the impact of aging on vulnerability to AM conjunction errors, and explored potential cognitive processes underlying the formation of these errors. An imagination recombination paradigm was used to elicit AM conjunction errors in young and older adults. Participants also completed a battery of neuropsychological tests targeting relational memory and inhibition ability. Consistent with findings using laboratory stimuli, older adults were more susceptible to AM conjunction errors than younger adults. However, older adults were not differentially vulnerable to the inflating effects of imagination. Individual variation in AM conjunction error vulnerability was attributable to inhibitory capacity. An inability to suppress the cumulative familiarity of individual AM details appears to contribute to the heightened formation of AM conjunction errors with age. PMID:27929343

  13. Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology

    NASA Astrophysics Data System (ADS)

    Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya

    2017-09-01

    Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Many review articles introduce thermal error research on CNC machine tools, but they mainly focus on thermal issues in small and medium-sized machines and seldom cover thermal error monitoring technologies. This paper gives an overview of research on the thermal error of CNC machine tools and emphasizes the study of thermal error in heavy-duty CNC machine tools in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology, "fiber Bragg grating (FBG) distributed sensing technology" for heavy-duty CNC machine tools, is introduced in detail; it forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. The paper fills a gap in the review literature, guiding the development of this industrial field and opening up new areas of research on heavy-duty CNC machine tool thermal error.

  14. Clinical use and misuse of automated semen analysis.

    PubMed

    Sherins, R J

    1991-01-01

    During the past six years, there has been an explosion of technology which allows automated machine-vision for sperm analysis. CASA clearly provides an opportunity for objective, systematic assessment of sperm motion. But there are many caveats in using this type of equipment. CASA requires a disciplined and standardized approach to semen collection, specimen preparation, machine settings, calibration and avoidance of sampling bias. Potential sources of error can be minimized. Unfortunately, the rapid commercialization of this technology preceded detailed statistical analysis of such data to allow equally rapid comparisons of data between different CASA machines and among different laboratories. Thus, it is now imperative that we standardize use of this technology and obtain more detailed biological insights into sperm motion parameters in semen and after capacitation before we empirically employ CASA for studies of fertility prediction. In the basic science arena, CASA technology will likely evolve to provide new algorithms for accurate sperm motion analysis and give us an opportunity to address the biophysics of sperm movement. In the clinical arena, CASA instruments provide the opportunity to share and compare sperm motion data among laboratories by virtue of its objectivity, assuming standardized conditions of utilization. Identification of men with specific sperm motion disorders is certain, but the biological relevance of motility dysfunction to actual fertilization remains uncertain and surely the subject for further study.

  15. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Wilson, A. M.; Barros, A. P.

    2014-10-01

    A diagnostic analysis of the space-time structure of error in Quantitative Precipitation Estimates (QPE) from the Precipitation Radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the Southern Appalachian Mountains, USA since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 V7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA, and missed detection, MD) and magnitude errors (underestimation, UND, and overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total consistent with regional hydrometeorology. The 2A25 V7 product underestimates low level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to under-catch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground clutter correction.

  16. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Wilson, A. M.; Barros, A. P.

    2015-03-01

    A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the topography mask used to remove ground-clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to undercatch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and a local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground-clutter correction.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smentkowski, Vincent S., E-mail: smentkow@ge.com

    Changes in the oxidation state of an element can result in significant changes in the ionization efficiency and hence signal intensity during secondary ion mass spectrometry (SIMS) analysis; this is referred to as the SIMS matrix effect [Secondary Ion Mass Spectrometry: A Practical Handbook for Depth Profiling and Bulk Impurity Analysis, edited by R. G. Wilson, F. A. Stevie, and C. W. Magee (Wiley, New York, 1990)]. The SIMS matrix effect complicates quantitative analysis. Quantification of SIMS data requires the determination of relative sensitivity factors (RSFs), which can be used to convert the as measured intensity into concentration units [Secondary Ion Mass Spectrometry: A Practical Handbook for Depth Profiling and Bulk Impurity Analysis, edited by R. G. Wilson, F. A. Stevie, and C. W. Magee (Wiley, New York, 1990)]. In this manuscript, the authors report both: RSFs which were determined for quantification of B in Si and SiO{sub 2} matrices using a dual beam time of flight secondary ion mass spectrometry (ToF-SIMS) instrument and the protocol they are using to provide quantitative ToF-SIMS images and line scan traces. The authors also compare RSF values that were determined using oxygen and Ar ion beams for erosion, discuss the problems that can be encountered when bulk calibration samples are used to determine RSFs, and remind the reader that errors in molecular details of the matrix (density, volume, etc.) that are used to convert from atoms/cm{sup 3} to other concentration units will propagate into errors in the determined concentrations.
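
    The RSF approach mentioned in this record reduces, in its simplest form, to the textbook relation C_impurity = RSF x (I_impurity / I_matrix), with the RSF valid only for the matrix in which it was determined. A small sketch is below; the boron and silicon count rates and the RSF value are placeholders for illustration, not data from the record.

      def impurity_concentration(i_impurity, i_matrix, rsf_atoms_per_cm3):
          """Convert a measured SIMS impurity intensity to concentration.

          Uses the standard relation C = RSF * (I_impurity / I_matrix),
          which is only valid within the matrix the RSF was determined for
          (the SIMS matrix effect described above).
          """
          return rsf_atoms_per_cm3 * (i_impurity / i_matrix)

      # Placeholder numbers for illustration only: boron counts over silicon
      # matrix counts with an assumed RSF of 1e22 atoms/cm^3.
      print(impurity_concentration(i_impurity=3.2e3, i_matrix=4.0e6,
                                   rsf_atoms_per_cm3=1e22))   # -> 8e18 atoms/cm^3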

  18. Smart Annotation of Cyclic Data Using Hierarchical Hidden Markov Models.

    PubMed

    Martindale, Christine F; Hoenig, Florian; Strohrmann, Christina; Eskofier, Bjoern M

    2017-10-13

    Cyclic signals are an intrinsic part of daily life, such as human motion and heart activity. The detailed analysis of them is important for clinical applications such as pathological gait analysis and for sports applications such as performance analysis. Labeled training data for algorithms that analyze these cyclic data come at a high annotation cost due to only limited annotations available under laboratory conditions or requiring manual segmentation of the data under less restricted conditions. This paper presents a smart annotation method that reduces this cost of labeling for sensor-based data, which is applicable to data collected outside of strict laboratory conditions. The method uses semi-supervised learning of sections of cyclic data with a known cycle number. A hierarchical hidden Markov model (hHMM) is used, achieving a mean absolute error of 0.041 ± 0.020 s relative to a manually-annotated reference. The resulting model was also used to simultaneously segment and classify continuous, 'in the wild' data, demonstrating the applicability of using hHMM, trained on limited data sections, to label a complete dataset. This technique achieved comparable results to its fully-supervised equivalent. Our semi-supervised method has the significant advantage of reduced annotation cost. Furthermore, it reduces the opportunity for human error in the labeling process normally required for training of segmentation algorithms. It also lowers the annotation cost of training a model capable of continuous monitoring of cycle characteristics such as those employed to analyze the progress of movement disorders or analysis of running technique.

  19. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
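
    One standard way psychometric error feeds into power is through attenuation: unreliability shrinks the observable standardized effect by the square root of the score reliability, and the shrunken effect is what drives the power calculation. The sketch below applies that textbook relation with a two-sample z-approximation; the effect size, sample size, and reliability values are illustrative, not taken from the paper.

      from math import sqrt
      from scipy.stats import norm

      def power_two_sample(d_true, n_per_group, reliability=1.0, alpha=0.05):
          """Approximate power of a two-sample z-test with measurement error.

          Classical attenuation: the observable effect size is
          d_obs = d_true * sqrt(reliability).
          """
          d_obs = d_true * sqrt(reliability)
          ncp = d_obs * sqrt(n_per_group / 2.0)   # noncentrality of the z statistic
          z_crit = norm.ppf(1.0 - alpha / 2.0)
          return norm.cdf(ncp - z_crit)           # upper-tail rejection only

      # Illustrative numbers: a true d = 0.5 with 64 participants per group.
      print(power_two_sample(0.5, 64, reliability=1.0))   # ~0.81 with perfect measurement
      print(power_two_sample(0.5, 64, reliability=0.7))   # ~0.66 once error attenuates d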

  20. Individual pore and interconnection size analysis of macroporous ceramic scaffolds using high-resolution X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerban, Saeed, E-mail: saeed.jerban@usherbrooke.ca

    2016-08-15

    The pore interconnection size of β-tricalcium phosphate scaffolds plays an essential role in the bone repair process. Although the μCT technique is widely used in the biomaterial community, it is rarely used to measure the interconnection size because of the lack of algorithms. In addition, the discrete nature of the μCT introduces large systematic errors due to the convex geometry of interconnections. We proposed, verified and validated a novel pore-level algorithm to accurately characterize the individual pores and interconnections. Specifically, pores and interconnections were isolated, labeled, and individually analyzed with high accuracy. The technique was verified thoroughly by visually inspecting and verifying over 3474 properties of randomly selected pores. This extensive verification process has passed a one-percent accuracy criterion. Scanning errors inherent in the discretization, which lead to both dummy and significantly overestimated interconnections, have been examined using computer-based simulations and additional high-resolution scanning. Then accurate correction charts were developed and used to reduce the scanning errors. Only after these corrections did both the μCT- and SEM-based results converge, validating the novel algorithm. Material scientists with access to all geometrical properties of individual pores and interconnections, using the novel algorithm, will have a more detailed and accurate description of the substitute architecture and a potentially deeper understanding of the link between the geometric and biological interaction. - Highlights: •An algorithm is developed to analyze individually all pores and interconnections. •After pore isolating, the discretization errors in interconnections were corrected. •Dummy interconnections and overestimated sizes were due to thin material walls. •The isolating algorithm was verified through visual inspection (99% accurate). •After correcting for the systematic errors, the algorithm was validated successfully.
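
    The "isolate, label, and individually analyze" step in this record is, at its core, connected-component labeling of the binary pore space followed by per-pore measurements. A rough sketch of that first step is shown below using scipy.ndimage; the interconnection handling and the correction charts described in the record are not reproduced, and the array and variable names are illustrative.

      import numpy as np
      from scipy import ndimage

      def label_pores(pore_mask, voxel_size_um):
          """Label individual pores in a binary micro-CT volume and report sizes.

          pore_mask     : 3-D boolean array, True where voxels belong to pore space
          voxel_size_um : edge length of a voxel in micrometres
          Returns (labelled volume, equivalent-sphere diameters in um).
          """
          labels, n_pores = ndimage.label(pore_mask)      # face connectivity by default
          voxel_counts = ndimage.sum(pore_mask, labels, index=range(1, n_pores + 1))
          volumes_um3 = np.asarray(voxel_counts) * voxel_size_um**3
          diameters = (6.0 * volumes_um3 / np.pi) ** (1.0 / 3.0)   # equivalent-sphere diameter
          return labels, diameters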

  1. Impairment of perception and recognition of faces, mimic expression and gestures in schizophrenic patients.

    PubMed

    Berndl, K; von Cranach, M; Grüsser, O J

    1986-01-01

    The perception and recognition of faces, mimic expression and gestures were investigated in normal subjects and schizophrenic patients by means of a movie test described in a previous report (Berndl et al. 1986). The error scores were compared with results from a semi-quantitative evaluation of psychopathological symptoms and with some data from the case histories. The overall error scores found in the three groups of schizophrenic patients (paranoic, hebephrenic, schizo-affective) were significantly increased (7-fold) over those of normals. No significant difference in the distribution of the error scores in the three different patient groups was found. In 10 different sub-tests following the movie the deficiencies found in the schizophrenic patients were analysed in detail. The error score for the averbal test was on average higher in paranoic patients than in the two other groups of patients, while the opposite was true for the error scores found in the verbal tests. Age and sex had some impact on the test results. In normals, female subjects were somewhat better than male. In schizophrenic patients the reverse was true. Thus female patients were more affected by the disease than male patients with respect to the task performance. The correlation between duration of the disease and error score was small; less than 10% of the error scores could be attributed to factors related to the duration of illness. Evaluation of psychopathological symptoms indicated that the stronger the schizophrenic defect, the higher the error score, but again this relationship was responsible for not more than 10% of the errors. The estimated degree of acute psychosis and overall sum of psychopathological abnormalities as scored in a semi-quantitative exploration did not correlate with the error score, but with each other. Similarly, treatment with psychopharmaceuticals, previous misuse of drugs or of alcohol had practically no effect on the outcome of the test data. The analysis of performance and test data of schizophrenic patients indicated that our findings are most likely not due to a "non-specific" impairment of cognitive function in schizophrenia, but point to a fairly selective defect in elementary cognitive visual functions necessary for averbal social communication. Some possible explanations of the data are discussed in relation to neuropsychological and neurophysiological findings on "face-specific" cortical areas located in the primate temporal lobe.

  2. Performance and evaluation of real-time multicomputer control systems

    NASA Technical Reports Server (NTRS)

    Shin, K. G.

    1983-01-01

    New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.

  3. Marine Corps Body Composition Program: The Flawed Measurement System

    DTIC Science & Technology

    2006-02-07

    fitness expert and writer for ABC Bodybuilding, an error of 3% in a body fat evaluation is extreme and methods that have this margin of error should not...most other methods. In fact, bodybuilders use a seven to nine point skin fold measurement weekly during their training to monitor body fat...19.95 and recommended and endorsed by “Body-For-Life” and the World Natural Bodybuilding Federation. The caliper comes with detailed instructions

  4. SU-F-T-243: Major Risks in Radiotherapy. A Review Based On Risk Analysis Literature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    López-Tarjuelo, J; Guasp-Tortajada, M; Iglesias-Montenegro, N

    Purpose: We present a literature review of risk analyses in radiotherapy to highlight the most reported risks and facilitate the spread of this valuable information, so that professionals can be aware of these major threats before performing their own studies. Methods: We considered studies with at least an estimation of the probability of occurrence of an adverse event (O) and its associated severity (S). They cover external beam radiotherapy, brachytherapy, intraoperative radiotherapy, and stereotactic techniques. We selected only the works containing a detailed ranked series of elements or failure modes and focused on the first fully reported quartile as much as possible. Afterward, we sorted the risk elements according to a regular radiotherapy procedure, so that the resulting groups were cited in several works and could be ranked in this way. Results: 29 references published between 2007 and February 2016 were studied. The publication trend has been generally rising. The most employed analysis has been Failure Mode and Effects Analysis (FMEA). Among the references, we selected 20 works listing 258 ranked risk elements. They were sorted into 31 groups appearing in at least two different works. 11 groups appeared in at least 5 references and 5 groups did so in 7 or more papers. These last sets of risks were: choosing another set of images or another plan for planning or treating, errors related to contours, errors in patient positioning for treatment, human mistakes when programming treatments, and planning errors. Conclusion: There is a sufficient amount and variety of references for identifying which failure modes or elements should be addressed in a radiotherapy department before attempting a specific analysis. FMEA prevailed, but other studies such as “risk matrix” or “occurrence × severity” analyses can also guide professionals’ efforts. Risks associated with human actions rank very high; therefore, the corresponding actions should be automated or at least peer-reviewed.

  5. Evaluation of arctic multibeam sonar data quality using nadir crossover error analysis and compilation of a full-resolution data product

    NASA Astrophysics Data System (ADS)

    Flinders, Ashton F.; Mayer, Larry A.; Calder, Brian A.; Armstrong, Andrew A.

    2014-05-01

    We document a new high-resolution multibeam bathymetry compilation for the Canada Basin and Chukchi Borderland in the Arctic Ocean - United States Arctic Multibeam Compilation (USAMBC Version 1.0). The compilation preserves the highest native resolution of the bathymetric data, allowing for more detailed interpretation of seafloor morphology than has been previously possible. The compilation was created from multibeam bathymetry data available through openly accessible government and academic repositories. Much of the new data was collected during dedicated mapping cruises in support of the United States effort to map extended continental shelf regions beyond the 200 nm Exclusive Economic Zone. Data quality was evaluated using nadir-beam crossover-error statistics, making it possible to assess the precision of multibeam depth soundings collected from a wide range of vessels and sonar systems. Data were compiled into a single high-resolution grid through a vertical stacking method, preserving the highest quality data source in any specific grid cell. The crossover-error analysis and method of data compilation can be applied to other multi-source multibeam data sets, and is particularly useful for government agencies targeting extended continental shelf regions but with limited hydrographic capabilities. Both the gridded compilation and an easily distributed geospatial PDF map are freely available through the University of New Hampshire's Center for Coastal and Ocean Mapping (ccom.unh.edu/theme/law-sea). The geospatial pdf is a full resolution, small file-size product that supports interpretation of Arctic seafloor morphology without the need for specialized gridding/visualization software.

  6. Error Estimate of the Ares I Vehicle Longitudinal Aerodynamic Characteristics Based on Turbulent Navier-Stokes Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2011-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on the unstructured grid, Reynolds-averaged Navier-Stokes flow solver USM3D, with an assumption that the flow is fully turbulent over the entire vehicle. This effort was designed to complement the prior computational activities conducted over the past five years in support of the Ares I Project with the emphasis on the vehicle's last design cycle designated as the A106 configuration. Due to a lack of flight data for this particular design's outer mold line, the initial vehicle's aerodynamic predictions and the associated error estimates were first assessed and validated against the available experimental data at representative wind tunnel flow conditions pertinent to the ascent phase of the trajectory without including any propulsion effects. Subsequently, the established procedures were then applied to obtain the longitudinal aerodynamic predictions at the selected flight flow conditions. Sample computed results and the correlations with the experimental measurements are presented. In addition, the present analysis includes the relevant data to highlight the balance between the prediction accuracy against the grid size and, thus, the corresponding computer resource requirements for the computations at both wind tunnel and flight flow conditions. NOTE: Some details have been removed from selected plots and figures in compliance with the sensitive but unclassified (SBU) restrictions. However, the content still conveys the merits of the technical approach and the relevant results.

  7. Altimetry, Orbits and Tides

    NASA Technical Reports Server (NTRS)

    Colombo, O. L.

    1984-01-01

    The nature of the orbit error and its effect on the sea surface heights calculated with satellite altimetry are explained. The elementary concepts of celestial mechanics required to follow a general discussion of the problem are included. Errors in the orbits of satellites with precisely repeating ground tracks (SEASAT, TOPEX, ERS-1, POSEIDON, amongst past and future altimeter satellites) are considered in detail. The theoretical conclusions are illustrated with the numerical results of computer simulations. The nature of the errors in this type of orbit is such that the error can be filtered out by using height differences along repeating (overlapping) passes. This makes these orbits particularly valuable for the study and monitoring of changes in the sea surface, such as tides. Elements of tidal theory, showing how these principles can be combined with those pertinent to the orbit error to make direct maps of the tides using altimetry, are presented.

  8. Online Deviation Detection for Medical Processes

    PubMed Central

    Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.

    2014-01-01

    Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343

  9. Design implementation in model-reference adaptive systems. [application and implementation on space shuttle

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1973-01-01

    The derivation of an approximate error characteristic equation describing the transient system error response is given, along with a procedure for selecting adaptive gain parameters so as to relate to the transient error response. A detailed example of the application and implementation of these methods for a space shuttle type vehicle is included. An extension of the characteristic equation technique is used to provide an estimate of the magnitude of the maximum system error and an estimate of the time of occurrence of this maximum after a plant parameter disturbance. Techniques for relaxing certain stability requirements and the conditions under which this can be done and still guarantee asymptotic stability of the system error are discussed. Such conditions are possible because the Lyapunov methods used in the stability derivation allow for overconstraining a problem in the process of insuring stability.

  10. Database Design to Ensure Anonymous Study of Medical Errors: A Report from the ASIPS collaborative

    PubMed Central

    Pace, Wilson D.; Staton, Elizabeth W.; Higgins, Gregory S.; Main, Deborah S.; West, David R.; Harris, Daniel M.

    2003-01-01

    Medical error reporting systems are important information sources for designing strategies to improve the safety of health care. Applied Strategies for Improving Patient Safety (ASIPS) is a multi-institutional, practice-based research project that collects and analyzes data on primary care medical errors and develops interventions to reduce error. The voluntary ASIPS Patient Safety Reporting System captures anonymous and confidential reports of medical errors. Confidential reports, which are quickly de-identified, provide better detail than do anonymous reports; however, concerns exist about the confidentiality of those reports should the database be subject to legal discovery or other security breaches. Standard database elements, for example, serial ID numbers, date/time stamps, and backups, could enable an outsider to link an ASIPS report to a specific medical error. The authors present the design and implementation of a database and administrative system that reduce this risk, facilitate research, and maintain near anonymity of the events, practices, and clinicians. PMID:12925548

  11. Understanding product cost vs. performance through an in-depth system Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Sanson, Mark C.

    2017-08-01

    The manner in which an optical system is toleranced and compensated greatly affects the cost to build it. By having a detailed understanding of different tolerance and compensation methods, the end user can decide on the balance of cost and performance. A detailed phased approach Monte Carlo analysis can be used to demonstrate the tradeoffs between cost and performance. In complex high performance optical systems, performance is fine-tuned by making adjustments to the optical systems after they are initially built. This process enables the overall best system performance, without the need for fabricating components to stringent tolerance levels that often can be outside of a fabricator's manufacturing capabilities. A good performance simulation of as built performance can interrogate different steps of the fabrication and build process. Such a simulation may aid the evaluation of whether the measured parameters are within the acceptable range of system performance at that stage of the build process. Finding errors before an optical system progresses further into the build process saves both time and money. Having the appropriate tolerances and compensation strategy tied to a specific performance level will optimize the overall product cost.
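
    The phased Monte Carlo analysis described here amounts to drawing each toleranced parameter from its assumed distribution, applying any compensator adjustments, and examining the resulting distribution of a performance metric. The sketch below illustrates the idea with a toy merit function, made-up tolerances, and an assumed refocus compensator; none of the numbers or parameter names come from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def as_built_performance(n_trials=10000):
          """Monte Carlo tolerance study: performance distribution with and
          without a focus compensator (all numbers are illustrative)."""
          tilt    = rng.normal(0.0, 0.05, n_trials)   # element tilt, degrees
          decent  = rng.normal(0.0, 0.02, n_trials)   # decenter, mm
          spacing = rng.normal(0.0, 0.10, n_trials)   # air-space error, mm

          # Toy merit function: RMS wavefront error grows with each perturbation.
          wfe_uncomp = np.sqrt((0.8*tilt)**2 + (1.5*decent)**2 + (0.4*spacing)**2)

          # A refocus compensator is assumed to remove most of the spacing term.
          wfe_comp = np.sqrt((0.8*tilt)**2 + (1.5*decent)**2 + (0.05*spacing)**2)

          for name, wfe in (("uncompensated", wfe_uncomp), ("compensated", wfe_comp)):
              print(f"{name}: median {np.median(wfe):.3f}, 95th pct {np.percentile(wfe, 95):.3f}")

      as_built_performance()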

  12. Transportation of radionuclides in urban environs: draft environmental assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finley, N.C.; Aldrich, D.C.; Daniel, S.L.

    1980-07-01

    This report assesses the environmental consequences of the transportation of radioactive materials in densely populated urban areas, including estimates of the radiological, nonradiological, and social impacts arising from this process. The chapters of the report and the appendices which follow detail the methodology and results for each of four causative event categories: incident free transport, vehicular accidents, human errors or deviations from accepted quality assurance practices, and sabotage or malevolent acts. The numerical results are expressed in terms of the expected radiological and economic impacts from each. Following these discussions, alternatives to the current transport practice are considered. Then, the detailed analysis is extended from a limited area of New York city to other urban areas. The appendices contain the data bases and specific models used to evaluate these impacts, as well as discussions of chemical toxicity and the social impacts of radioactive material transport in urban areas. The latter are evaluated for each causative event category in terms of psychological, sociological, political, legal, and organizational impacts. The report is followed by an extensive bibliography covering the many fields of study which were required in performing the analysis.

  13. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
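
    The ranking machinery behind this study is TOPSIS; the fuzzy variant used in the paper applies the same steps to fuzzy numbers. A compact sketch of the crisp version is below: normalize the decision matrix, weight it, measure each alternative's distance to the ideal and anti-ideal solutions, and rank by closeness coefficient. The example scores, weights, and criterion directions are placeholders, not the paper's data.

      import numpy as np

      def topsis(scores, weights, benefit):
          """Rank alternatives by the TOPSIS closeness coefficient.

          scores  : (m, n) decision matrix, alternatives x criteria
          weights : (n,)   criterion weights summing to 1
          benefit : (n,)   True if larger is better for that criterion
          """
          norm = scores / np.linalg.norm(scores, axis=0)           # vector normalisation
          v = norm * weights                                       # weighted matrix
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive ideal solution
          anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))  # negative ideal solution
          d_pos = np.linalg.norm(v - ideal, axis=1)
          d_neg = np.linalg.norm(v - anti, axis=1)
          closeness = d_neg / (d_pos + d_neg)
          return np.argsort(-closeness), closeness

      # Placeholder error factors scored on four hypothetical criteria (e.g. severity,
      # frequency, detectability, cost-to-fix), the last two treated as cost criteria:
      scores = np.array([[7, 5, 3, 4.0],
                         [6, 6, 5, 2.0],
                         [8, 3, 2, 5.0]])
      order, cc = topsis(scores, np.array([0.4, 0.3, 0.2, 0.1]),
                         np.array([True, True, False, False]))
      print(order, np.round(cc, 3))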

  14. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  15. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationship between different error class and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.

  16. LASER BIOLOGY AND MEDICINE: Visualisation of details of a complicated inner structure of model objects by the method of diffusion optical tomography

    NASA Astrophysics Data System (ADS)

    Tret'yakov, Evgeniy V.; Shuvalov, Vladimir V.; Shutov, I. V.

    2002-11-01

    An approximate algorithm is tested for solving the problem of diffusion optical tomography in experiments on the visualisation of details of the inner structure of strongly scattering model objects containing scattering and semitransparent inclusions, as well as absorbing inclusions located inside other optical inhomogeneities. The stability of the algorithm to errors is demonstrated, which allows its use for a rapid (2 — 3 min) image reconstruction of the details of objects with a complicated inner structure.

  17. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere correlated errors will increasingly become more important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  18. Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.

    PubMed

    Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M

    2018-01-01

    Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate (p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.

  19. Sustainable Design Approach: A case study of BIM use

    NASA Astrophysics Data System (ADS)

    Abdelhameed, Wael

    2017-11-01

    Achieving sustainable design in areas such as energy-efficient design depends largely on the accuracy of the analysis performed after the design is completed with all its components and material details. There are different analysis approaches and methods that predict relevant values and metrics such as U value, energy use and energy savings. Although certain differences in the accuracy of these approaches and methods have been recorded, this research paper does not focus on that issue, since determining the reason for discrepancies between approaches is difficult when all error sources act simultaneously. Instead, the paper introduces an approach through which BIM (building information modelling) can be utilised during the initial phases of the design process, by analysing the values and metrics of sustainable design before going into the design details of a building. Managing all of the project drawings in a single file, BIM is well known as a digital platform that offers a multidisciplinary detailed design - the AEC model (Barison and Santos, 2010; Welle et al., 2011). The paper first presents BIM use in the early phases of the design process in general, in order to address required areas of sustainable design. It then introduces BIM use in specific areas such as site selection, wind velocity, and building orientation, in order to reach the most sustainable solution possible. In the initial phases of design, material details and building components are not yet fully specified or selected; the designer usually focuses on zoning, topology, circulation, and other design requirements. The proposed approach employs BIM strategies and analysis during those initial design phases in order to obtain analysis results for each solution or alternative design. Stakeholders and designers would thus have a more effective decision-making process, with full clarity about each alternative's consequences. The architect would then proceed with the alternative whose sustainability analysis is best. In later design stages, using sustainable materials such as insulation and cladding, and applying sustainable building components such as doors and windows, would add further improvements in reaching better values and metrics. The paper describes the methodology of this design approach through BIM strategies adopted in design creation. Case studies of architectural designs are used to highlight the details and benefits of this proposed approach.

  20. North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
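    As a rough illustration of the Monte Carlo approach described above, the sketch below perturbs simulated arrival times at a handful of stations with 50 ns rms timing noise and refits the source by least squares; the station layout, source location, and solver are illustrative assumptions, not the North Alabama LMA geometry or the MSFC retrieval algorithm.

      import numpy as np
      from scipy.optimize import least_squares

      C = 2.998e8          # speed of light, m/s
      SIGMA_T = 50e-9      # assumed rms station timing error, s

      # Hypothetical station layout (x, y, z) in metres -- not the real LMA network
      stations = np.array([
          [0.0, 0.0, 200.0], [20e3, 5e3, 150.0], [-15e3, 18e3, 300.0],
          [8e3, -22e3, 250.0], [-25e3, -10e3, 100.0], [12e3, 25e3, 180.0],
      ])

      def arrival_times(src):
          # src = (x, y, z, t): emission point and emission time
          return src[3] + np.linalg.norm(stations - src[:3], axis=1) / C

      true_src = np.array([10e3, 10e3, 8e3, 0.0])   # test source at 8 km altitude
      rng = np.random.default_rng(0)

      errors = []
      for _ in range(1000):
          noisy = arrival_times(true_src) + rng.normal(0.0, SIGMA_T, len(stations))
          fit = least_squares(lambda s: arrival_times(s) - noisy,
                              x0=true_src + [100.0, 100.0, 100.0, 1e-7])
          errors.append(np.linalg.norm(fit.x[:3] - true_src[:3]))

      print(f"rms 3-D location error ~ {np.sqrt(np.mean(np.square(errors))):.1f} m")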

  1. ATM QoS Experiments Using TCP Applications: Performance of TCP/IP Over ATM in a Variety of Errored Links

    NASA Technical Reports Server (NTRS)

    Frantz, Brian D.; Ivancic, William D.

    2001-01-01

    Asynchronous Transfer Mode (ATM) Quality of Service (QoS) experiments using the Transmission Control Protocol/Internet Protocol (TCP/IP) were performed for various link delays. The link delay was set to emulate a Wide Area Network (WAN) and a Satellite Link. The purpose of these experiments was to evaluate the ATM QoS requirements for applications that utilize advanced TCP/IP protocols implemented with large windows and Selective ACKnowledgements (SACK). The effects of cell error, cell loss, and random bit errors on throughput are reported. The detailed test plan and test results are presented herein.

  2. Mars Exploration Rover Potentiometer Problems, Failures and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Balzer, Mark

    2006-01-01

    During qualification testing of three types of non-wire-wound precision potentiometers for the Mars Exploration Rover, a variety of problems and failures were encountered. This paper will describe some of the more interesting problems, detail their investigations and present their final solutions. The failures were found to be caused by design errors, manufacturing errors, improper handling, test errors, and carelessness. A trend of decreasing total resistance was noted, and a resistance histogram was used to identify an outlier. A gang fixture is described for simultaneously testing multiple pots, and real time X-ray imaging was used extensively to assist in the failure analyses. Lessons learned are provided.

  3. Mars Exploration Rover potentiometer problems, failures and lessons learned

    NASA Technical Reports Server (NTRS)

    Balzer, Mark A.

    2006-01-01

    During qualification testing of three types of nonwire-wound precision potentiometers for the Mars Exploration Rover, a variety of problems and failures were encountered. This paper will describe some of the more interesting problems, detail their investigations and present their final solutions. The failures were found to be caused by design errors, manufacturing errors, improper handling, test errors, and carelessness. A trend of decreasing total resistance was noted, and a resistance histogram was used to identify an outlier. A gang fixture is described for simultaneously testing multiple pots, and real time X-ray imaging was used extensively to assist in the failure analyses. Lessons learned are provided.

  4. Using the Abstraction Network in Complement to Description Logics for Quality Assurance in Biomedical Terminologies - A Case Study in SNOMED CT

    PubMed Central

    Wei, Duo; Bodenreider, Olivier

    2015-01-01

    Objectives: To investigate errors identified in SNOMED CT by human reviewers with help from the Abstraction Network methodology and examine why they had escaped detection by the Description Logic (DL) classifier. Case study; two examples of errors are presented in detail (one missing IS-A relation and one duplicate concept). After correction, SNOMED CT is reclassified to ensure that no new inconsistency was introduced. Conclusions: DL-based auditing techniques built in terminology development environments ensure the logical consistency of the terminology. However, complementary approaches are needed for identifying and addressing other types of errors. PMID:20841848

  5. Using the abstraction network in complement to description logics for quality assurance in biomedical terminologies - a case study in SNOMED CT.

    PubMed

    Wei, Duo; Bodenreider, Olivier

    2010-01-01

    To investigate errors identified in SNOMED CT by human reviewers with help from the Abstraction Network methodology and examine why they had escaped detection by the Description Logic (DL) classifier. Case study; Two examples of errors are presented in detail (one missing IS-A relation and one duplicate concept). After correction, SNOMED CT is reclassified to ensure that no new inconsistency was introduced. DL-based auditing techniques built in terminology development environments ensure the logical consistency of the terminology. However, complementary approaches are needed for identifying and addressing other types of errors.

  6. Derivation of error sources for experimentally derived heliostat shapes

    NASA Astrophysics Data System (ADS)

    Cumpston, Jeff; Coventry, Joe

    2017-06-01

    Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.
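    The sketch below illustrates the general idea of fitting paraboloidal and lower-order terms to photogrammetry points and examining the residual surface; the synthetic data, the basis functions, and the summary statistic are assumptions for illustration and do not reproduce the paper's four-component error decomposition.

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.uniform(-0.6, 0.6, 500)      # hypothetical panel coordinates, m
      y = rng.uniform(-0.4, 0.4, 500)
      # synthetic surface heights: paraboloid plus a small twist term plus noise
      z = x**2 / (4 * 20.0) + y**2 / (4 * 20.0) + 0.002 * x * y + rng.normal(0, 5e-5, 500)

      # design matrix: paraboloidal terms first, then twist, tilt, and offset terms
      A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
      coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
      residual = z - A @ coeffs

      print("fit coefficients:", np.round(coeffs, 6))
      print(f"rms residual height error: {residual.std() * 1e3:.3f} mm")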

  7. Bidirectional Retroviral Integration Site PCR Methodology and Quantitative Data Analysis Workflow.

    PubMed

    Suryawanshi, Gajendra W; Xu, Song; Xie, Yiming; Chou, Tom; Kim, Namshin; Chen, Irvin S Y; Kim, Sanggu

    2017-06-14

    Integration Site (IS) assays are a critical component of the study of retroviral integration sites and their biological significance. In recent retroviral gene therapy studies, IS assays, in combination with next-generation sequencing, have been used as a cell-tracking tool to characterize clonal stem cell populations sharing the same IS. For the accurate comparison of repopulating stem cell clones within and across different samples, the detection sensitivity, data reproducibility, and high-throughput capacity of the assay are among the most important assay qualities. This work provides a detailed protocol and data analysis workflow for bidirectional IS analysis. The bidirectional assay can simultaneously sequence both upstream and downstream vector-host junctions. Compared to conventional unidirectional IS sequencing approaches, the bidirectional approach significantly improves IS detection rates and the characterization of integration events at both ends of the target DNA. The data analysis pipeline described here accurately identifies and enumerates identical IS sequences through multiple steps of comparison that map IS sequences onto the reference genome and determine sequencing errors. Using an optimized assay procedure, we have recently published the detailed repopulation patterns of thousands of Hematopoietic Stem Cell (HSC) clones following transplant in rhesus macaques, demonstrating for the first time the precise time point of HSC repopulation and the functional heterogeneity of HSCs in the primate system. The following protocol describes the step-by-step experimental procedure and data analysis workflow that accurately identifies and quantifies identical IS sequences.
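    The final enumeration step described above amounts to collapsing mapped vector-host junction reads that share a genomic coordinate and strand into per-clone counts. The minimal sketch below uses invented coordinates; real pipelines additionally merge sites that differ by a few bases to absorb sequencing and alignment errors.

      from collections import Counter

      # (chromosome, position, strand) of each mapped vector-host junction read (invented)
      mapped_reads = [
          ("chr2", 1_234_567, "+"), ("chr2", 1_234_567, "+"), ("chr2", 1_234_567, "+"),
          ("chr7", 89_001_230, "-"), ("chr7", 89_001_230, "-"),
      ]

      # identical integration sites collapse to one clone; the count is its read support
      clone_counts = Counter(mapped_reads)
      for site, n in clone_counts.most_common():
          print(site, n)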

  8. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
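    The propagation-of-error idea referred to above can be sketched as follows for a net balance formed as a signed sum of measured terms; the terms, standard deviations, and covariance value are illustrative assumptions, not Skylab data.

      import numpy as np

      # balance = intake - urine - fecal - evaporative losses (hypothetical daily terms, g)
      coeffs = np.array([1.0, -1.0, -1.0, -1.0])
      sigmas = np.array([30.0, 15.0, 10.0, 40.0])           # assumed measurement sd's, g
      cov = np.diag(sigmas**2)                               # start from independent terms
      cov[0, 3] = cov[3, 0] = 0.1 * sigmas[0] * sigmas[3]    # small assumed covariance term

      # variance of the balance: c' * Cov * c, covariances included explicitly
      var_balance = coeffs @ cov @ coeffs
      print(f"sd of net balance ~ {np.sqrt(var_balance):.1f} g")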

  9. Adaptive graph-based multiple testing procedures

    PubMed Central

    Klinglmueller, Florian; Posch, Martin; Koenig, Franz

    2016-01-01

    Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, it is not required to prespecify the adaptation rule in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations. PMID:25319733
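    For context, the non-adaptive graphical procedure that the paper generalizes can be sketched as below (a Bretz-style weighted-graph test); the weights, transition matrix, p-values, and alpha are hypothetical, and the adaptive extension itself is not shown.

      import numpy as np

      def graphical_test(p, w, G, alpha=0.025):
          """Sequentially rejective graphical test; returns the set of rejected indices."""
          p = np.asarray(p, float)
          w = np.asarray(w, float).copy()
          G = np.asarray(G, float).copy()
          active, rejected = set(range(len(p))), set()
          while True:
              cand = [i for i in active if p[i] <= alpha * w[i]]
              if not cand:
                  return rejected
              i = cand[0]
              active.remove(i)
              rejected.add(i)
              new_w, new_G = w.copy(), np.zeros_like(G)
              for l in active:
                  # pass the rejected hypothesis' weight along its outgoing edges
                  new_w[l] = w[l] + w[i] * G[i, l]
                  for k in active:
                      if k == l:
                          continue
                      denom = 1.0 - G[l, i] * G[i, l]
                      new_G[l, k] = (G[l, k] + G[l, i] * G[i, k]) / denom if denom > 0 else 0.0
              w, G = new_w, new_G

      # hypothetical graph: two primary hypotheses sharing alpha, one secondary hypothesis
      w0 = [0.5, 0.5, 0.0]
      G0 = [[0.0, 0.5, 0.5],
            [0.5, 0.0, 0.5],
            [0.0, 0.0, 0.0]]
      print(graphical_test([0.01, 0.04, 0.02], w0, G0))   # rejects hypothesis 0 only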

  10. Analysis of shockless dynamic compression data on solids to multi-megabar pressures: Application to tantalum

    DOE PAGES

    Davis, Jean -Paul; Brown, Justin L.; Knudson, Marcus D.; ...

    2014-11-26

    In this research, magnetically-driven, planar shockless-compression experiments to multi-megabar pressures were performed on tantalum samples using a stripline target geometry. Free-surface velocity waveforms were measured in 15 cases; nine of these in a dual-sample configuration with two samples of different thicknesses on opposing electrodes, and six in a single-sample configuration with a bare electrode opposite the sample. Details are given on the application of inverse Lagrangian analysis (ILA) to these data, including potential sources of error. The most significant source of systematic error, particularly for single-sample experiments, was found to arise from the pulse-shape dependent free-surface reflected wave interactions with the deviatoric-stress response of tantalum. This could cause local, possibly temporary, unloading of material from a ramp compressed state, and thus multi-value response in wave speed that invalidates the free-surface to in-material velocity mapping step of ILA. By averaging all 15 data sets, a final result for the principal quasi-isentrope of tantalum in stress-strain was obtained to a peak longitudinal stress of 330 GPa with conservative uncertainty bounds of ±4.5% in stress. The result agrees well with a tabular equation of state developed at Los Alamos National Laboratory.

  11. Experimental Evaluation of Verification and Validation Tools on Martian Rover Software

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Giannakopoulou, Dimitra; Goldberg, Allen; Havelund, Klaus; Lowry, Mike; Pasareanu, Corina; Venet, Arnaud; Visser, Willem; Washington, Rich

    2003-01-01

    We report on a study to determine the maturity of different verification and validation technologies (V&V) on a representative example of NASA flight software. The study consisted of a controlled experiment where three technologies (static analysis, runtime analysis and model checking) were compared to traditional testing with respect to their ability to find seeded errors in a prototype Mars Rover. What makes this study unique is that it is the first (to the best of our knowledge) to do a controlled experiment to compare formal methods based tools to testing on a realistic industrial-size example where the emphasis was on collecting as much data on the performance of the tools and the participants as possible. The paper includes a description of the Rover code that was analyzed, the tools used as well as a detailed description of the experimental setup and the results. Due to the complexity of setting up the experiment, our results can not be generalized, but we believe it can still serve as a valuable point of reference for future studies of this kind. It did confirm the belief we had that advanced tools can outperform testing when trying to locate concurrency errors. Furthermore the results of the experiment inspired a novel framework for testing the next generation of the Rover.

  12. Downscaling Aerosols and the Impact of Neglected Subgrid Processes on Direct Aerosol Radiative Forcing for a Representative Global Climate Model Grid Spacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Qian, Yun; Fast, Jerome D.

    2011-07-13

    Recent improvements to many global climate models include detailed, prognostic aerosol calculations intended to better reproduce the observed climate. However, the trace gas and aerosol fields are treated at the grid-cell scale with no attempt to account for sub-grid impacts on the aerosol fields. This paper begins to quantify the error introduced by the neglected sub-grid variability for the shortwave aerosol radiative forcing for a representative climate model grid spacing of 75 km. An analysis of the value added in downscaling aerosol fields is also presented to give context to the WRF-Chem simulations used for the sub-grid analysis. We found that 1) the impact of neglected sub-grid variability on the aerosol radiative forcing is strongest in regions of complex topography and complicated flow patterns, and 2) scale-induced differences in emissions contribute strongly to the impact of neglected sub-grid processes on the aerosol radiative forcing. These two effects together, when simulated at 75 km vs. 3 km in WRF-Chem, result in an average daytime mean bias of over 30% error in top-of-atmosphere shortwave aerosol radiative forcing for a large percentage of central Mexico during the MILAGRO field campaign.

  13. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Goddard Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  14. Horizontal plane localization in single-sided deaf adults fitted with a bone-anchored hearing aid (Baha).

    PubMed

    Grantham, D Wesley; Ashmead, Daniel H; Haynes, David S; Hornsby, Benjamin W Y; Labadie, Robert F; Ricketts, Todd A

    2012-01-01

    One purpose of this investigation was to evaluate the effect of a unilateral bone-anchored hearing aid (Baha) on horizontal plane localization performance in single-sided deaf adults who had either a conductive or sensorineural hearing loss in their impaired ear. The use of a 33-loudspeaker array allowed for a finer response measure than has previously been used to investigate localization in this population. In addition, a detailed analysis of error patterns allowed an evaluation of the contribution of random error and bias error to the total rms error computed in the various conditions studied. A second purpose was to investigate the effect of stimulus duration and head-turning on localization performance. Two groups of single-sided deaf adults were tested in a localization task in which they had to identify the direction of a spoken phrase on each trial. One group had a sensorineural hearing loss (SNHL group; N = 7), and the other group had a conductive hearing loss (CHL group; N = 5). In addition, a control group of four normal-hearing adults was tested. The spoken phrase was either 1250 msec in duration (a male saying "Where am I coming from now?") or 341 msec in duration (the same male saying "Where?"). For the longer-duration phrase, subjects were tested in conditions in which they either were or were not allowed to move their heads before the termination of the phrase. The source came from one of nine positions in the front horizontal plane (from -79° to +79°). The response range included 33 choices (from -90° to +90°, separated by 5.6°). Subjects were tested in all stimulus conditions, both with and without the Baha device. Overall rms error was computed for each condition. Contributions of random error and bias error to the overall error were also computed. There was considerable intersubject variability in all conditions. However, for the CHL group, the average overall error was significantly smaller when the Baha was on than when it was off. Further analysis of error patterns indicated that this improvement was primarily based on reduced response bias when the device was on; that is, the average response azimuth was nearer to the source azimuth when the device was on than when it was off. The SNHL group, on the other hand, had significantly greater overall error when the Baha was on than when it was off. Collapsed across listening conditions and groups, localization performance was significantly better with the 1250 msec stimulus than with the 341 msec stimulus. However, for the longer-duration stimulus, there was no significant beneficial effect of head-turning. Error scores in all conditions for both groups were considerably larger than those in the normal-hearing control group. On average, single-sided deaf adults with CHL showed improved localization ability when using the Baha, whereas single-sided deaf adults with SNHL showed a decrement in performance when using the device. These results may have implications for clinical counseling for patients with unilateral hearing impairment.
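    The decomposition of overall rms localization error into bias and random components can be sketched as below; the source azimuth and response azimuths are invented numbers, not data from the study.

      import numpy as np

      source = 45.0                                      # source azimuth, degrees
      responses = np.array([50, 56, 39, 61, 44, 52.0])   # hypothetical response azimuths

      overall_rms = np.sqrt(np.mean((responses - source) ** 2))
      bias = responses.mean() - source                   # signed constant (bias) error
      random_err = responses.std(ddof=0)                 # spread about the mean response

      print(f"overall rms = {overall_rms:.1f} deg, bias = {bias:+.1f} deg, "
            f"random = {random_err:.1f} deg")
      # note: overall_rms**2 == bias**2 + random_err**2 (up to rounding)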

  15. Maintaining data integrity in a rural clinical trial.

    PubMed

    Van den Broeck, Jan; Mackay, Melanie; Mpontshane, Nontobeko; Kany Kany Luabeya, Angelique; Chhagan, Meera; Bennish, Michael L

    2007-01-01

    Clinical trials conducted in rural resource-poor settings face special challenges in ensuring quality of data collection and handling. The variable nature of these challenges, ways to overcome them, and the resulting data quality are rarely reported in the literature. Our aims were to provide a detailed example of establishing local data handling capacity for a clinical trial conducted in a rural area, to highlight challenges and solutions in establishing such capacity, and to report the data quality obtained by the trial. We provide a descriptive case study of a data system for biological samples and questionnaire data, and the problems encountered during its implementation. To determine the quality of data, we analyzed test-retest studies using Kappa statistics of inter- and intra-observer agreement on categorical data. We calculated Technical Errors of Measurement for anthropometric measurements, performed audit trail analysis to assess error correction rates, and calculated residual error rates by database-to-source document comparison. Initial difficulties included the unavailability of experienced research nurses, programmers and data managers in this rural area and the difficulty of designing new software tools and a complex database while making them error-free. National and international collaboration and external monitoring helped ensure good data handling and implementation of good clinical practice. Data collection, fieldwork supervision and query handling depended on streamlined transport over large distances. The involvement of a community advisory board was helpful in addressing cultural issues and establishing community acceptability of data collection methods. Data accessibility for safety monitoring required special attention. Kappa values and Technical Errors of Measurement showed acceptable values. Residual error rates in key variables were low. The article describes the experience of a single-site trial and does not address challenges particular to multi-site trials. Obtaining and maintaining data integrity in rural clinical trials is feasible, can result in acceptable data quality and can be used to develop capacity in developing country sites. It does, however, involve special challenges and requirements.
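    The two data-quality metrics mentioned above can be sketched as follows: Cohen's kappa for test-retest agreement on a categorical item, and the Technical Error of Measurement for duplicate anthropometric measurements. All values are invented, and the kappa call assumes scikit-learn is available.

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      # test-retest ratings of one categorical item (hypothetical)
      rating_1 = ["yes", "no", "yes", "yes", "no", "no", "yes", "no"]
      rating_2 = ["yes", "no", "yes", "no", "no", "no", "yes", "yes"]
      print("kappa:", round(cohen_kappa_score(rating_1, rating_2), 2))

      # Technical Error of Measurement for duplicate measurements: sqrt(sum(d^2) / (2 n))
      m1 = np.array([10.2, 11.5, 9.8, 12.0, 10.9])   # first measurement, cm (hypothetical)
      m2 = np.array([10.4, 11.3, 9.9, 12.3, 10.8])   # repeat measurement, cm (hypothetical)
      tem = np.sqrt(np.sum((m1 - m2) ** 2) / (2 * len(m1)))
      print(f"TEM: {tem:.2f} cm")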

  16. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
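    A purely illustrative sketch of the risk-evaluation step is given below, scoring each identified potential error as likelihood times severity; the scales, actions, and threshold are invented and are not taken from the presentation.

      # Hypothetical entries: (human action, potential error, likelihood 1-5, severity 1-5)
      potential_errors = [
          ("open supply valve", "valve opened out of sequence", 2, 5),
          ("record pump pressure", "value transcribed incorrectly", 4, 2),
          ("verify LOX level", "verification step skipped", 1, 4),
      ]

      for action, error, likelihood, severity in potential_errors:
          risk = likelihood * severity            # simple risk score for ranking
          flag = "mitigate" if risk >= 10 else "accept/monitor"
          print(f"{action}: {error} -> risk {risk} ({flag})")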

  17. The concept of antifragility and its implications for the practice of risk analysis.

    PubMed

    Aven, Terje

    2015-03-01

    Nassim Taleb's antifragile concept has attracted considerable interest in the media and on the Internet recently. For Taleb, the antifragile concept is a blueprint for living in a black swan world (where surprising extreme events may occur), the key being to love variation and uncertainty to some degree, and thus also errors. The antonym of "fragile" is not robustness or resilience, but "please mishandle" or "please handle carelessly," using an example from Taleb when referring to sending a package full of glasses by post. In this article, we perform a detailed analysis of this concept, having a special focus on how the antifragile concept relates to common ideas and principles of risk management. The article argues that Taleb's antifragile concept adds an important contribution to the current practice of risk analysis by its focus on the dynamic aspects of risk and performance, and the necessity of some variation, uncertainties, and risk to achieve improvements and high performance at later stages. © 2014 Society for Risk Analysis.

  18. Chemical Differentiation of Osseous, Dental, and Non-skeletal Materials in Forensic Anthropology using Elemental Analysis.

    PubMed

    Zimmerman, Heather A; Meizel-Lambert, Cayli J; Schultz, John J; Sigman, Michael E

    2015-03-01

    Forensic anthropologists are generally able to identify skeletal materials (bone and tooth) using gross anatomical features; however, highly fragmented or taphonomically altered materials may be problematic to identify. Several chemical analysis techniques have been shown to be reliable laboratory methods that can be used to determine if questionable fragments are osseous, dental, or non-skeletal in nature. The purpose of this review is to provide a detailed background of chemical analysis techniques focusing on elemental compositions that have been assessed for use in differentiating osseous, dental, and non-skeletal materials. More recently, chemical analysis studies have also focused on using the elemental composition of osseous/dental materials to evaluate species and provide individual discrimination, but have generally been successful only in small, closed groups, limiting their use forensically. Despite significant advances incorporating a variety of instruments, including handheld devices, further research is necessary to address issues in standardization, error rates, and sample size/diversity. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  19. Cobble cam: Grain-size measurements of sand to boulder from digital photographs and autocorrelation analyses

    USGS Publications Warehouse

    Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.

    2009-01-01

    A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1.2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r2 > 0.98), but poor estimates of the short axes (r2 = 0.68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied resulting in total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge and object-detection results, it is noted that the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r2 of 0.69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r2 = 0.97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure mean grain size and sorting properties of coarse sediments. © 2009 John Wiley & Sons, Ltd.
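    The core autocorrelation idea, that spatial correlation in a sediment photograph decays over a distance related to mean grain size, can be sketched generically as below; the synthetic image, the one-dimensional correlogram, and the 0.5 threshold are illustrative choices and not the calibrated method of the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      # synthetic "sediment" image: smoothed noise so that blob size mimics grain size
      img = rng.normal(size=(256, 256))
      for _ in range(8):                        # crude smoothing to create blobs
          img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3.0
      img -= img.mean()

      def correlogram(a, max_lag=40):
          """Normalized autocorrelation versus horizontal pixel lag."""
          var = np.mean(a * a)
          return np.array([np.mean(a[:, :-k or None] * a[:, k:]) / var for k in range(max_lag)])

      r = correlogram(img)
      # correlation length: first lag where the autocorrelation drops below 0.5
      corr_length = int(np.argmax(r < 0.5))
      print(f"correlation length ~ {corr_length} pixels (proxy for mean grain size)")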

  20. Magnetic Resonance Imaging–Guided versus Surrogate-Based Motion Tracking in Liver Radiation Therapy: A Prospective Comparative Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paganelli, Chiara, E-mail: chiara.paganelli@polimi.it; Seregni, Matteo; Fattori, Giovanni

    Purpose: This study applied automatic feature detection on cine–magnetic resonance imaging (MRI) liver images in order to provide a prospective comparison between MRI-guided and surrogate-based tracking methods for motion-compensated liver radiation therapy. Methods and Materials: In a population of 30 subjects (5 volunteers plus 25 patients), 2 oblique sagittal slices were acquired across the liver at high temporal resolution. An algorithm based on scale invariant feature transform (SIFT) was used to extract and track multiple features throughout the image sequence. The position of abdominal markers was also measured directly from the image series, and the internal motion of each feature was quantified through multiparametric analysis. Surrogate-based tumor tracking with a state-of-the-art external/internal correlation model was simulated. The geometrical tracking error was measured, and its correlation with external motion parameters was also investigated. Finally, the potential gain in tracking accuracy relying on MRI guidance was quantified as a function of the maximum allowed tracking error. Results: An average of 45 features was extracted for each subject across the whole liver. The multiparametric motion analysis reported relevant inter- and intrasubject variability, highlighting the value of patient-specific and spatially-distributed measurements. Surrogate-based tracking errors (relative to the motion amplitude) were in the range 7% to 23% (1.02-3.57 mm) and were significantly influenced by external motion parameters. The gain of MRI guidance compared to surrogate-based motion tracking was larger than 30% in 50% of the subjects when considering a 1.5-mm tracking error tolerance. Conclusions: Automatic feature detection applied to cine-MRI allows detailed liver motion description to be obtained. Such information was used to quantify the performance of surrogate-based tracking methods and to provide a prospective comparison with respect to MRI-guided radiation therapy, which could support the definition of patient-specific optimal treatment strategies.
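    The SIFT-based feature extraction step can be sketched with OpenCV as below; the synthetic frames, the brute-force matcher, and the ratio test are stand-ins for the study's cine-MRI data and tracking pipeline.

      import cv2
      import numpy as np

      # two consecutive "frames": synthetic textured images standing in for cine-MRI slices,
      # with frame_b a shifted copy of frame_a so the expected motion is known
      rng = np.random.default_rng(3)
      texture = (rng.random((256, 256)) * 255).astype(np.uint8)
      frame_a = cv2.GaussianBlur(texture, (0, 0), 3)
      frame_b = np.roll(frame_a, (4, 2), axis=(0, 1))

      sift = cv2.SIFT_create()
      kp_a, desc_a = sift.detectAndCompute(frame_a, None)
      kp_b, desc_b = sift.detectAndCompute(frame_b, None)

      matcher = cv2.BFMatcher(cv2.NORM_L2)
      matches = matcher.knnMatch(desc_a, desc_b, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test

      # per-feature displacement between the two frames (pixels)
      dx = [kp_b[m.trainIdx].pt[0] - kp_a[m.queryIdx].pt[0] for m in good]
      dy = [kp_b[m.trainIdx].pt[1] - kp_a[m.queryIdx].pt[1] for m in good]
      print(f"{len(good)} features tracked; median displacement "
            f"({np.median(dx):.1f}, {np.median(dy):.1f}) px")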
