Sample records for release error analysis

  1. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    NASA Technical Reports Server (NTRS)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  2. Evidence for aversive withdrawal response to own errors.

    PubMed

    Hochman, Eldad Yitzhak; Milman, Valery; Tal, Liron

    2017-10-01

    A recent model suggests that error detection gives rise to defensive motivation, prompting protective behavior. Models of active avoidance behavior predict that it should grow larger with threat imminence and avoidance. We hypothesized that in a task requiring left or right key strikes, error detection would drive an avoidance reflex manifested by rapid withdrawal of an erring finger, growing larger with threat imminence and avoidance. In experiment 1, three groups differing by error-related threat imminence and avoidance performed a flanker task requiring left or right force-sensitive key strikes. As predicted, errors were followed by rapid force release growing faster with threat imminence and opportunity to evade threat. In experiment 2, we established a link between error key release time (KRT) and the subjective sense of inner threat. In a simultaneous multiple regression analysis of three error-related compensatory mechanisms (error KRT, flanker effect, error correction RT), only error KRT was significantly associated with increased compulsive checking tendencies. We propose that error response withdrawal reflects an error-withdrawal reflex. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    Barrier methods for critical exponent problems in geometric analysis and mathematical physics, J. Erway and M. Holst, Submitted for publication ... TR-14-33 A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics ... Problems Approved for public release, distribution is unlimited. April 2014 HDTRA1-09-1-0036 Donald Estep and Michael

  4. 76 FR 42715 - Quarantine Release Errors in Blood Establishments; Public Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Committee on Blood Safety and Availability (the Committee) met to discuss the current FDA blood donor...] Quarantine Release Errors in Blood Establishments; Public Workshop AGENCY: Food and Drug Administration, HHS... entitled: ``Quarantine Release Errors in Blood Establishments.'' The purpose of this public workshop is to...

  5. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
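
    The accuracy comparison described in this abstract can be mimicked in miniature with off-the-shelf stiff solvers. The sketch below is not the paper's EPISODE/LSODE/CHEMEQ/CREK1D/GCKP4 study: it integrates the standard Robertson stiff kinetics test problem with three SciPy methods and scores each against a tight-tolerance reference using a root-mean-square error, analogous in spirit to the paper's mean integrated root-mean-square error.

    ```python
    # Minimal sketch, not the paper's codes: integrate the classic Robertson stiff
    # kinetics problem with three SciPy solvers and score each against a
    # tight-tolerance reference with a root-mean-square (RMS) error.
    import numpy as np
    from scipy.integrate import solve_ivp

    def robertson(t, y):
        # Three-species stiff chemical kinetics test problem.
        y1, y2, y3 = y
        return [-0.04 * y1 + 1.0e4 * y2 * y3,
                0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
                3.0e7 * y2 ** 2]

    y0 = [1.0, 0.0, 0.0]
    t_eval = np.logspace(-5, 3, 200)

    # Reference solution: BDF at very tight tolerances stands in for "exact".
    ref = solve_ivp(robertson, (0.0, 1.0e3), y0, method="BDF",
                    t_eval=t_eval, rtol=1e-10, atol=1e-12)

    for method in ("BDF", "Radau", "LSODA"):  # all stiff-capable; explicit RK45 would crawl here
        sol = solve_ivp(robertson, (0.0, 1.0e3), y0, method=method,
                        t_eval=t_eval, rtol=1e-6, atol=1e-10)
        rms = np.sqrt(np.mean((sol.y - ref.y) ** 2))
        print(f"{method:5s}: RMS error {rms:.2e} with {sol.nfev} RHS evaluations")
    ```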

  6. Orbit Determination Error Analysis Results for the Triana Sun-Earth L2 Libration Point Mission

    NASA Technical Reports Server (NTRS)

    Marr, G.

    2003-01-01

    Using the NASA Goddard Space Flight Center's Orbit Determination Error Analysis System (ODEAS), orbit determination error analysis results are presented for all phases of the Triana Sun-Earth L1 libration point mission and for the science data collection phase of a future Sun-Earth L2 libration point mission. The Triana spacecraft was nominally to be released by the Space Shuttle in a low Earth orbit, and this analysis focuses on that scenario. From the release orbit, a transfer trajectory insertion (TTI) maneuver performed using a solid stage would increase the velocity by approximately 3.1 km/sec, sending Triana on a direct trajectory to its mission orbit. The Triana mission orbit is a Sun-Earth L1 Lissajous orbit with a Sun-Earth-vehicle (SEV) angle between 4.0 and 15.0 degrees, which would be achieved after a Lissajous orbit insertion (LOI) maneuver at approximately launch plus 6 months. Because Triana was to be launched by the Space Shuttle, TTI could potentially occur over a 16-orbit range from low Earth orbit. This analysis was performed assuming TTI was performed from a low Earth orbit with an inclination of 28.5 degrees and assuming support from a combination of three Deep Space Network (DSN) stations (Goldstone, Canberra, and Madrid) and four commercial Universal Space Network (USN) stations (Alaska, Hawaii, Perth, and Santiago). These ground stations would provide coherent two-way range and range rate tracking data usable for orbit determination. Larger range and range rate errors were assumed for the USN stations. Nominally, DSN support would end at TTI+144 hours assuming there were no USN problems. Post-TTI coverage for a range of TTI longitudes for a given nominal trajectory case was analyzed. The orbit determination error analysis after the first correction maneuver would be generally applicable to any libration point mission utilizing a direct trajectory.

  7. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
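
    The abstract's central point about weighting can be illustrated numerically. The sketch below is a minimal demonstration, not the paper's h-statistic derivation: for skewed "synaptic amplitude" samples, the normal-theory formula for the variance of the sample variance is a poor approximation, while the general moment expression is not.

    ```python
    # Minimal sketch of the abstract's point: for skewed "synaptic amplitude"
    # samples, the normal-theory variance of the sample variance, 2*sigma^4/(n-1),
    # is a poor approximation, while the general moment expression
    # Var(s^2) = mu4/n - sigma^4*(n-3)/(n*(n-1)) is not. The h-statistic
    # estimators derived in the paper are not reproduced here.
    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 20, 200_000

    # Exponentially distributed amplitudes stand in for non-normal synaptic data.
    samples = rng.exponential(scale=1.0, size=(trials, n))
    s2 = samples.var(axis=1, ddof=1)

    sigma2, mu4 = 1.0, 9.0  # true variance and fourth central moment of Exp(1)
    print(f"empirical Var(s^2):      {s2.var():.4f}")
    print(f"normal-theory formula:   {2 * sigma2**2 / (n - 1):.4f}")
    print(f"general moment formula:  {mu4 / n - sigma2**2 * (n - 3) / (n * (n - 1)):.4f}")
    ```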

  8. Gaia Data Release 1. Validation of the photometry

    NASA Astrophysics Data System (ADS)

    Evans, D. W.; Riello, M.; De Angeli, F.; Busso, G.; van Leeuwen, F.; Jordi, C.; Fabricius, C.; Brown, A. G. A.; Carrasco, J. M.; Voss, H.; Weiler, M.; Montegriffo, P.; Cacciari, C.; Burgess, P.; Osborne, P.

    2017-04-01

    Aims: The photometric validation of the Gaia DR1 release of the ESA Gaia mission is described and the quality of the data is shown. Methods: This is carried out via an internal analysis of the photometry using the most constant sources. Comparisons with external photometric catalogues are also made, but are limited by the accuracies and systematics present in these catalogues. An analysis of the quoted errors is also described. Investigations of the calibration coefficients reveal some of the systematic effects that affect the fluxes. Results: The analysis of the constant sources shows that the early-stage photometric calibrations can reach an accuracy as low as 3 mmag.

  9. MINESTRONE

    DTIC Science & Technology

    2015-03-01

    release; distribution unlimited. Integration of pmalloc to enhance the tool and enable continued execution of overflow and underflow errors was ... of IARPA, AFRL, or the U.S. Government. Report contains color. 14. ABSTRACT MINESTRONE is an architecture that integrates static analysis ... Integration ... 30 4.1.16 Miscellaneous Items

  10. In search of periodic signatures in IGS REPRO1 solution

    NASA Astrophysics Data System (ADS)

    Mtamakaya, J. D.; Santos, M. C.; Craymer, M. R.

    2010-12-01

    We have been looking for periodic signatures in the REPRO1 solution recently released by the IGS. At this stage, a selected sub-set of IGS station time series in the position and residual domains is under harmonic analysis. We can learn different things from this analysis. From the position domain, we can learn more about actual station motions. From the residual domain, we can learn more about mis-modelled or un-modelled errors. As far as error sources are concerned, we have investigated effects that may be due to tides, atmospheric loading, definition of the position of the figure axis and GPS constellation geometry. This poster presents and discusses our findings and offers insights on errors that need to be modelled or have their models improved.
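
    A common way to search for such periodic signatures in gappy daily position series is a Lomb-Scargle periodogram. The sketch below is illustrative only, using a synthetic residual series rather than the authors' REPRO1 data or processing chain.

    ```python
    # Illustrative sketch, not the authors' pipeline: harmonic analysis of an
    # unevenly sampled station residual series with a Lomb-Scargle periodogram,
    # which tolerates the data gaps typical of daily position solutions. The
    # series and its periods are synthetic.
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(1)
    days = np.sort(rng.choice(np.arange(3650), size=3000, replace=False)).astype(float)

    # Synthetic residuals: an annual term, a draconitic-like (~351 d) term, noise.
    resid = (2.0 * np.sin(2 * np.pi * days / 365.25)
             + 1.0 * np.sin(2 * np.pi * days / 351.2)
             + rng.normal(scale=1.5, size=days.size))

    periods = np.linspace(20.0, 500.0, 2000)   # days
    ang_freq = 2.0 * np.pi / periods           # lombscargle expects rad/day
    power = lombscargle(days, resid - resid.mean(), ang_freq, normalize=True)

    for p in sorted(periods[np.argsort(power)[-3:]]):
        print(f"strong periodic signature near {p:.1f} days")
    ```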

  11. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
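
    The observation-simulation idea at the core of such a testbed can be reduced to a few lines. The toy sketch below is not BEATBOX itself: it only shows the flavor of an OSSE-style cycle in which a synthetic observation, with a prescribed observation error, corrects a perturbed ensemble toward a known truth.

    ```python
    # Toy observing-system sketch, not BEATBOX itself: a scalar "chemical" state
    # is represented by an ensemble, a synthetic observation is drawn from the
    # truth with a prescribed observation error, and a scalar ensemble Kalman
    # update pulls the ensemble toward the truth. All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(6)
    truth, obs_err, ens_size = 40.0, 2.0, 50  # e.g. ppb of some tracer

    ensemble = truth + 5.0 + rng.normal(0.0, 8.0, ens_size)  # biased, spread-out prior
    obs = truth + rng.normal(0.0, obs_err)                   # simulated observation

    prior_var = ensemble.var(ddof=1)
    gain = prior_var / (prior_var + obs_err ** 2)            # scalar Kalman gain
    analysis = ensemble + gain * (obs - ensemble)

    print(f"prior mean {ensemble.mean():.1f}, analysis mean {analysis.mean():.1f}, truth {truth}")
    ```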

  12. Analysis of the Efficiency of an A-Posteriori Error Estimator for Linear Triangular Finite Elements

    DTIC Science & Technology

    1991-06-01

    Release 1.0, NOETIC Tech. Corp., St. Louis, Missouri, 1985. [28] R. VERFURTH, FEMFLOW-user guide. Version 1, Report, Universität Zürich, 1989. [29] R ... study and research for foreign students in numerical mathematics who are supported by foreign governments or exchange agencies (Fulbright, etc

  13. Mass-balance measurements in Alaska and suggestions for simplified observation programs

    USGS Publications Warehouse

    Trabant, D.C.; March, R.S.

    1999-01-01

    US Geological Survey glacier fieldwork in Alaska includes repetitious measurements, corrections for leaning or bending stakes, an ability to reliably measure seasonal snow as deep as 10 m, absolute identification of summer surfaces in the accumulation area, and annual evaluation of internal accumulation, internal ablation, and glacier-thickness changes. Prescribed field measurement and note-taking techniques help eliminate field errors and expedite the interpretative process. In the office, field notes are transferred to computerized spread-sheets for analysis, release on the World Wide Web, and archival storage. The spreadsheets have error traps to help eliminate note-taking and transcription errors. Rigorous error analysis ends when mass-balance measurements are extrapolated and integrated with area to determine glacier and basin mass balances. Unassessable errors in the glacier and basin mass-balance data reduce the value of the data set for correlations with climate change indices. The minimum glacier mass-balance program has at least three measurement sites on a glacier and the measurements must include the seasonal components of mass balance as well as the annual balance.

  14. Prediction of error rates in dose-imprinted memories on board CRRES by two different methods. [Combined Release and Radiation Effects Satellite

    NASA Technical Reports Server (NTRS)

    Brucker, G. J.; Stassinopoulos, E. G.

    1991-01-01

    An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.
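
    The comparison the abstract recommends, integrating the cross-section curve against the LET spectrum instead of using a single threshold/asymptotic pair, can be sketched with purely synthetic inputs as below; none of the numbers are CRRES or device data.

    ```python
    # Illustrative sketch with synthetic numbers, not CRRES data: compare an SEU
    # rate from a single threshold-LET / saturation cross-section pair with one
    # obtained by integrating the cross-section curve over the differential LET
    # spectrum, which is the approach the abstract recommends.
    import numpy as np

    LET = np.linspace(1.0, 100.0, 1000)          # MeV*cm^2/mg

    # Assumed Weibull fit to a sigma-vs-LET curve (cm^2 per device).
    L0, W, s, sigma_sat = 3.0, 20.0, 1.5, 1e-5
    x = np.clip(LET - L0, 0.0, None)
    sigma = sigma_sat * (1.0 - np.exp(-(x / W) ** s))

    # Assumed power-law integral LET flux (particles/cm^2/day above a given LET).
    integral_flux = 1e3 * LET ** -2.5
    diff_flux = -np.gradient(integral_flux, LET)  # differential spectrum

    integrand = sigma * diff_flux
    rate_integrated = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(LET))
    rate_single_pair = sigma_sat * np.interp(L0, LET, integral_flux)

    print(f"integrated over sigma(LET):  {rate_integrated:.3e} upsets/device/day")
    print(f"single (L_th, sigma_sat):    {rate_single_pair:.3e} upsets/device/day")
    ```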

  15. New features added to EVALIDator: ratio estimation and county choropleth maps

    Treesearch

    Patrick D. Miles; Mark H. Hansen

    2012-01-01

    The EVALIDator Web application, developed in 2007, provides estimates and sampling errors for many user selected forest statistics from the Forest Inventory and Analysis Database (FIADB). Among the statistics estimated are forest area, number of trees, biomass, volume, growth, removals, and mortality. A new release of EVALIDator, developed in 2012, has an option to...

  16. Adding uncertainty to forest inventory plot locations: effects on analyses using geospatial data

    Treesearch

    Alexia A. Sabor; Volker C. Radeloff; Ronald E. McRoberts; Murray Clayton; Susan I. Stewart

    2007-01-01

    The Forest Inventory and Analysis (FIA) program of the USDA Forest Service alters plot locations before releasing data to the public to ensure landowner confidentiality and sample integrity, but using data with altered plot locations in conjunction with other spatially explicit data layers produces analytical results with unknown amounts of error. We calculated the...

  17. Reduction of prostate intrafraction motion using gas-release rectal balloons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su Zhong; Zhao Tianyu; Li Zuofeng

    2012-10-15

    Purpose: To analyze prostate intrafraction motion using both non-gas-release (NGR) and gas-release (GR) rectal balloons and to evaluate the ability of GR rectal balloons to reduce prostate intrafraction motion. Methods: Twenty-nine patients with NGR rectal balloons and 29 patients with GR balloons were randomly selected from prostate patients treated with proton therapy at University of Florida Proton Therapy Institute (Jacksonville, FL). Their pretreatment and post-treatment orthogonal radiographs were analyzed, and both pretreatment setup residual error and intrafraction-motion data were obtained. Population histograms of intrafraction motion were plotted for both types of balloons. Population planning target-volume (PTV) margins were calculated with the van Herk formula of 2.5Σ + 0.7σ to account for setup residual errors and intrafraction motion errors. Results: Pretreatment and post-treatment radiographs indicated that the use of gas-release rectal balloons reduced prostate intrafraction motion along the superior-inferior (SI) and anterior-posterior (AP) directions. Similar patient setup residual errors were exhibited for both types of balloons. Gas-release rectal balloons resulted in PTV margin reductions from 3.9 to 2.8 mm in the SI direction, 3.1 to 1.8 mm in the AP direction, and an increase from 1.9 to 2.1 mm in the left-right direction. Conclusions: Prostate intrafraction motion is an important uncertainty source in radiotherapy after image-guided patient setup with online corrections. Compared to non-gas-release rectal balloons, gas-release balloons can reduce prostate intrafraction motion in the SI and AP directions caused by gas buildup.
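
    The margin recipe quoted in the abstract is simple enough to spell out. The sketch below only restates the van Herk formula; the example systematic and random error values are hypothetical, not the study's measured data.

    ```python
    # Minimal sketch of the van Herk population margin recipe quoted above:
    # PTV margin = 2.5*Sigma + 0.7*sigma, with Sigma the standard deviation of
    # systematic errors and sigma that of random errors. The numbers below are
    # hypothetical, not the study's measured values.
    def van_herk_margin(Sigma_mm: float, sigma_mm: float) -> float:
        """Population PTV margin in mm from systematic (Sigma) and random (sigma) error SDs."""
        return 2.5 * Sigma_mm + 0.7 * sigma_mm

    for axis, (Sigma, sigma) in {"SI": (1.2, 1.5), "AP": (0.9, 1.1)}.items():
        print(f"{axis}: margin = {van_herk_margin(Sigma, sigma):.1f} mm")
    ```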

  18. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  19. Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies

    USGS Publications Warehouse

    Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.

    2017-01-01

    Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
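
    The single- versus paired-release trade-off can be illustrated with a stripped-down simulation. The sketch below is not the authors' mark-recapture model: detection is assumed perfect, so the detection-probability component and its variance inflation are deliberately left out, and all survival values and sample sizes are illustrative.

    ```python
    # Stripped-down sketch, not the authors' mark-recapture model: a Monte Carlo
    # comparison of single- vs paired-release estimates of dam passage survival,
    # scored by root mean squared error (RMSE). Detection is assumed perfect.
    import numpy as np

    rng = np.random.default_rng(2)
    true_dam, background, n, trials = 0.90, 0.95, 200, 20_000

    # Treatment fish experience background and dam mortality; controls only background.
    treat = rng.binomial(n, background * true_dam, size=trials) / n
    control = rng.binomial(n, background, size=trials) / n

    single = treat             # single release: dam and background mortality confounded
    paired = treat / control   # paired release: background cancels in expectation

    for name, est in (("single", single), ("paired", paired)):
        rmse = np.sqrt(np.mean((est - true_dam) ** 2))
        print(f"{name:6s} release: mean estimate {est.mean():.3f}, RMSE {rmse:.4f}")
    ```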

  20. Calibration Of Partial-Pressure-Of-Oxygen Sensors

    NASA Technical Reports Server (NTRS)

    Yount, David W.; Heronimus, Kevin

    1995-01-01

    Report released presenting analysis of, and discussion of improvements in, procedure for calibrating partial-pressure-of-oxygen sensors to satisfy Spacelab calibration requirements. Sensors exhibit fast drift, which results in short calibration period not suitable for Spacelab. By assessing complete process of determining total drift range available, calibration procedure modified to eliminate errors and still satisfy requirements without compromising integrity of system.

  1. Reconstruction of Atmospheric Tracer Releases with Optimal Resolution Features: Concentration Data Assimilation

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali

    2015-04-01

    The fast growing urbanization, industrialization and military developments increase the risk towards the human environment and ecology. This has been realized in several past mortality incidents, for instance, the Chernobyl nuclear explosion (Ukraine), the Bhopal gas leak (India), the Fukushima-Daiichi radionuclide release (Japan), etc. To reduce the threat and exposure to the hazardous contaminants, a fast, preliminary identification of unknown releases is required by the responsible authorities for emergency preparedness and air quality analysis. Often, an early detection of such contaminants is pursued by a distributed sensor network. However, identifying the origin and strength of unknown releases from the sensor-reported concentrations is a challenging task. This requires an optimal strategy to integrate the measured concentrations with the predictions given by the atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient and atmospheric dispersion models suffer from inaccuracy due to the lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus affect the resolution, stability and uniqueness of the retrieved source. An additional well-known issue is the numerical artifact arising at the measurement locations due to the strong concentration gradient and dissipative nature of the concentration. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within the Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements. It is based on an adjoint representation of the source-receptor relationship and the utilization of a weight function which exhibits a priori information about the unknown releases apparent to the monitoring network. The properties of the weight function provide an optimal data resolution and model resolution to the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against random measurement errors and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended to the identification of point-type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with several diffusion experiments, such as the Idaho low-wind diffusion experiment (1974), the IIT Delhi tracer experiment (1991), the European Tracer Experiment (1994), the Fusion Field Trials (2007), etc. In the case of point release experiments, the source parameters are mostly retrieved close to the true source parameters with the least error. Primarily, the proposed technique overcomes two major difficulties incurred in source reconstruction: (i) the initialization of the source parameters required by optimization-based techniques, on which the converged solution depends; and (ii) the statistical knowledge about the measurement and background errors required by Bayesian-inference-based techniques, which must be assumed hypothetically when no prior knowledge is available.
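
    The linear source-retrieval setting described here can be sketched in a few lines. The example below is a highly simplified stand-in: the source-receptor matrix, noise level, and regularisation strength are all invented for illustration, and the paper's weight-function construction is not reproduced.

    ```python
    # Highly simplified sketch of the linear source-retrieval setting described
    # above: measurements mu = A @ s with a source-receptor matrix A supplied by a
    # dispersion model, solved here by Tikhonov-regularised least squares.
    import numpy as np

    rng = np.random.default_rng(9)
    n_obs, n_grid = 15, 100

    A = np.abs(rng.normal(size=(n_obs, n_grid)))    # stand-in source-receptor matrix
    s_true = np.zeros(n_grid)
    s_true[42] = 5.0                                # single point release
    mu = A @ s_true + rng.normal(0.0, 0.05, n_obs)  # noisy concentration measurements

    lam = 0.1                                       # assumed regularisation strength
    s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_grid), A.T @ mu)

    print("true source cell:", int(np.argmax(s_true)),
          "| retrieved peak cell:", int(np.argmax(s_hat)))
    ```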

  2. Energy dependence of SEP electron and proton onset times

    NASA Astrophysics Data System (ADS)

    Xie, H.; Mäkelä, P.; Gopalswamy, N.; St. Cyr, O. C.

    2016-07-01

    We study the large solar energetic particle (SEP) events that were detected by GOES in the >10 MeV energy channel during December 2006 to March 2014. We derive and compare solar particle release (SPR) times for the 0.25-10.4 MeV electrons and 10-100 MeV protons for the 28 SEP events. In the study, the electron SPR times are derived with the time-shifting analysis (TSA) and the proton SPR times are derived using both the TSA and the velocity dispersion analysis (VDA). Electron anisotropies are computed to evaluate the amount of scattering for the events under study. Our main results include: (1) near-relativistic electrons and high-energy protons are released at the same time within 8 min for most (16 of 23) SEP events. (2) There exists a good correlation between electron and proton acceleration, peak intensity, and intensity time profiles. (3) The TSA SPR times for 90.5 MeV and 57.4 MeV protons have maximum errors of 6 min and 10 min compared to the proton VDA release times, respectively, while the maximum error for 15.4 MeV protons can reach 32 min. (4) For 7 low-intensity events of the 23, large delays occurred for 6.5 MeV electrons and 90.5 MeV protons relative to 0.5 MeV electrons. Whether these delays are due to the time needed for the evolving shock to be strengthened or due to particle transport effects remains unsolved.
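
    The velocity dispersion analysis mentioned above amounts to a straight-line fit of onset time against inverse particle speed. The sketch below uses synthetic onset times, not the GOES measurements, and illustrative energy channels.

    ```python
    # Minimal sketch of the velocity dispersion analysis (VDA) idea used above:
    # proton onset times at 1 AU scale linearly with inverse speed,
    # t_onset(E) = t_SPR + L / v(E), so a straight-line fit of onset time versus
    # 1/v returns the release time (intercept) and path length (slope).
    import numpy as np

    MP_MEV = 938.272       # proton rest energy, MeV
    C_AU_PER_MIN = 0.1202  # speed of light in AU per minute

    def speed_au_per_min(kinetic_mev):
        gamma = 1.0 + np.asarray(kinetic_mev) / MP_MEV
        return C_AU_PER_MIN * np.sqrt(1.0 - 1.0 / gamma ** 2)

    energies = np.array([15.4, 26.3, 57.4, 90.5])  # MeV (illustrative channels)
    t_spr_true, path_true = 0.0, 1.2               # minutes after release; AU
    rng = np.random.default_rng(3)
    onsets = t_spr_true + path_true / speed_au_per_min(energies) + rng.normal(0.0, 1.0, 4)

    slope, intercept = np.polyfit(1.0 / speed_au_per_min(energies), onsets, 1)
    print(f"fitted path length {slope:.2f} AU, release time {intercept:+.1f} min")
    ```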

  3. A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zhang, Xubin; Tan, Zhe-Min

    2017-04-01

    The construction of background error covariances is a key component of three-dimensional variational data assimilation. There are background errors at different scales, and interactions among them, in numerical weather prediction. However, the influence of these errors and their interactions cannot be represented in the background error covariance statistics when estimated by the leading methods. So, it is necessary to construct background error covariances influenced by multi-scale interactions among errors. With the NMC method, this article first estimates the background error covariances at given model-resolution scales. Then the information of errors whose scales are larger and smaller than the given ones is introduced, respectively, using different nesting techniques, to estimate the corresponding covariances. The comparisons of the three background error covariance statistics influenced by information of errors at different scales reveal that the background error variances increase, particularly at large scales and higher levels, when the information of larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances decrease at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when the information of smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information of larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) with the information of larger- (smaller-) scale errors included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained in the above work are used in a data assimilation and model forecast system, respectively, and analysis-forecast cycles for a period of 1 month are conducted. Through the comparison of both analyses and forecasts from this system, it is found that the trends for variation in analysis increments with information of different scale errors introduced are consistent with those for variation in variances and correlations of background errors. In particular, the introduction of smaller-scale errors leads to a larger amplitude of analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts for winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity increase significantly at large scales at lower levels, moistening the southern part of the analyses. This humidification helps correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales due to the amplification (diminution) of intensity and area in the precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
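
    The NMC method referred to in this abstract can be summarised in a few lines: background error covariances are approximated from differences between forecasts of different lead times valid at the same moment. The sketch below uses random stand-in "forecasts", not model output.

    ```python
    # Bare-bones sketch of the NMC method mentioned above: approximate the
    # background error covariance B from differences between 48 h and 24 h
    # forecasts valid at the same time, averaged over many cases.
    import numpy as np

    rng = np.random.default_rng(7)
    n_cases, n_state = 120, 40

    f24 = rng.normal(size=(n_cases, n_state))
    f48 = f24 + rng.multivariate_normal(np.zeros(n_state), 0.2 * np.eye(n_state), size=n_cases)

    diffs = f48 - f24                          # proxies for background error samples
    centered = diffs - diffs.mean(axis=0)
    B = centered.T @ centered / (n_cases - 1)  # sample covariance estimate of B

    print("estimated background error variances (first 5):", np.diag(B)[:5].round(3))
    ```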

  4. Testing the Equivalence Principle in an Einstein Elevator: Detector Dynamics and Gravity Perturbations

    NASA Technical Reports Server (NTRS)

    Hubbard, Dorthy (Technical Monitor); Lorenzini, E. C.; Shapiro, I. I.; Cosmo, M. L.; Ashenberg, J.; Parzianello, G.; Iafolla, V.; Nozzoli, S.

    2003-01-01

    We discuss specific, recent advances in the analysis of an experiment to test the Equivalence Principle (EP) in free fall. A differential accelerometer detector with two proof masses of different materials free falls inside an evacuated capsule previously released from a stratospheric balloon. The detector spins slowly about its horizontal axis during the fall. An EP violation signal (if present) will manifest itself at the rotational frequency of the detector. The detector operates in a quiet environment as it slowly moves with respect to the co-moving capsule. There are, however, gravitational and dynamical noise contributions that need to be evaluated in order to define key requirements for this experiment. Specifically, higher-order mass moments of the capsule contribute errors to the differential acceleration output with components at the spin frequency which need to be minimized. The dynamics of the free falling detector (in its present design) has been simulated in order to estimate the tolerable errors at release which, in turn, define the release mechanism requirements. Moreover, the study of the higher-order mass moments for a worst-case position of the detector package relative to the cryostat has led to the definition of requirements on the shape and size of the proof masses.

  5. A Systematic Approach to Error Free Telemetry

    DTIC Science & Technology

    2017-06-28

    A SYSTEMATIC APPROACH TO ERROR FREE TELEMETRY 412TW-TIM-17-03 DISTRIBUTION A: Approved for public release. Distribution is ... Systematic Approach to Error-Free Telemetry) was submitted by the Commander, 412th Test Wing, Edwards AFB, California 93524. Prepared by ... Technical Information Memorandum 3. DATES COVERED (From - Through) February 2016 4. TITLE AND SUBTITLE A Systematic Approach to Error-Free

  6. Engineering the electronic health record for safety: a multi-level video-based approach to diagnosing and preventing technology-induced error arising from usability problems.

    PubMed

    Borycki, Elizabeth M; Kushniruk, Andre W; Kuwata, Shigeki; Kannry, Joseph

    2011-01-01

    Electronic health records (EHRs) promise to improve and streamline healthcare through electronic entry and retrieval of patient data. Furthermore, based on a number of studies showing their positive benefits, they promise to reduce medical error and make healthcare safer. However, a growing body of literature has clearly documented that if EHRs are not designed properly, with usability as an important goal in their design, then rather than reducing error, EHR deployment has the potential to actually increase medical error. In this paper we describe our approach to engineering (and reengineering) EHRs in order to increase their beneficial potential while at the same time improving their safety. The approach described in this paper involves an integration of the methods of usability analysis with video analysis of end users interacting with EHR systems and extends the evaluation of the usability of EHRs to include the assessment of the impact of these systems on work practices. Using clinical simulations, we analyze human-computer interaction in real healthcare settings (in a portable, low-cost and high-fidelity manner) and include both artificial and naturalistic data collection to identify potential usability problems and sources of technology-induced error prior to widespread system release. Two case studies where the methods we have developed and refined have been applied at different levels of user-computer interaction are described.

  7. System Error Budgets, Target Distributions and Hitting Performance Estimates for General-Purpose Rifles and Sniper Rifles of 7.62 x 51 mm and Larger Calibers

    DTIC Science & Technology

    1990-05-01

    CLASSIFICATION AUTHORITY 3. DISTRIBUTION/AVAILABILITY OF REPORT 2b. DECLASSIFICATION/DOWNGRADING SCHEDULE Approved for public release; distribution 4 ... in the Red Book should obtain a copy of the Engineering Design Handbook, Army Weapon System Analysis, Part One, DARCOM-P 706-101, November 1977; a ... companion volume: Army Weapon System Analysis, Part Two, DARCOM-P 706-102, October 1979, also makes worthwhile study. Both of these documents, written by

  8. An alternative approach based on artificial neural networks to study controlled drug release.

    PubMed

    Reis, Marcus A A; Sinisterra, Rubén D; Belchior, Jadson C

    2004-02-01

    An alternative methodology based on artificial neural networks is proposed to be a complementary tool to other conventional methods to study controlled drug release. Two systems are used to test the approach; namely, hydrocortisone in a biodegradable matrix and rhodium (II) butyrate complexes in a bioceramic matrix. Two well-established mathematical models are used to simulate different release profiles as a function of fundamental properties; namely, diffusion coefficient (D), saturation solubility (C(s)), drug loading (A), and the height of the device (h). The models were tested, and the results show that these fundamental properties can be predicted after learning the experimental or model data for controlled drug release systems. The neural network results obtained after the learning stage can be considered to quantitatively predict ideal experimental conditions. Overall, the proposed methodology was shown to be efficient for ideal experiments, with a relative average error of <1% in both tests. This approach can be useful for the experimental analysis to simulate and design efficient controlled drug-release systems. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association
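
    The mapping the abstract describes, from release profiles to fundamental properties, can be illustrated with a toy surrogate. The sketch below is not the authors' network, data, or release models: it trains a small neural network on a simplified square-root-of-time release law to recover an assumed diffusion coefficient D.

    ```python
    # Hedged sketch of the idea, not the authors' network or data: train a small
    # neural network to map a simulated Higuchi-like release profile back to the
    # underlying diffusion coefficient D.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(8)
    t = np.linspace(0.1, 24.0, 20)  # hours

    def release_profile(D):
        # Simplified square-root-of-time release law, truncated at 100 % released.
        return np.minimum(1.0, np.sqrt(D * t))

    D_train = rng.uniform(0.01, 0.04, 500)
    X = np.array([release_profile(D) for D in D_train])

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
    model.fit(X, D_train)

    D_test = 0.025
    D_pred = model.predict(release_profile(D_test).reshape(1, -1))[0]
    print(f"true D = {D_test}, predicted D = {D_pred:.4f}")
    ```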

  9. A Comparison of Three Algorithms for Orion Drogue Parachute Release

    NASA Technical Reports Server (NTRS)

    Matz, Daniel A.; Braun, Robert D.

    2015-01-01

    The Orion Multi-Purpose Crew Vehicle is susceptible to flipping apex forward between drogue parachute release and main parachute inflation. A smart drogue release algorithm is required to select a drogue release condition that will not result in an apex forward main parachute deployment. The baseline algorithm is simple and elegant, but does not perform as well as desired in drogue failure cases. A simple modification to the baseline algorithm can improve performance, but can also sometimes fail to identify a good release condition. A new algorithm employing simplified rotational dynamics and a numeric predictor to minimize a rotational energy metric is proposed. A Monte Carlo analysis of a drogue failure scenario is used to compare the performance of the algorithms. The numeric predictor prevents more of the cases from flipping apex forward, and also results in an improvement in the capsule attitude at main bag extraction. The sensitivity of the numeric predictor to aerodynamic dispersions, errors in the navigated state, and execution rate is investigated, showing little degradation in performance.

  10. Patients with chronic insomnia have selective impairments in memory that are modulated by cortisol.

    PubMed

    Chen, Gui-Hai; Xia, Lan; Wang, Fang; Li, Xue-Wei; Jiao, Chuan-An

    2016-10-01

    Memory impairment is a frequent complaint in insomniacs; however, it is not consistently demonstrated. It is unknown whether memory impairment in insomniacs involves neuroendocrine dysfunction. The participants in this study were selected from the clinical setting and included 21 patients with chronic insomnia disorder (CID), 25 patients with insomnia and comorbid depressive disorder (CDD), and 20 control participants without insomnia. We evaluated spatial working and reference memory, object working and reference memory, and object recognition memory using the Nine Box Maze Test. We also evaluated serum neuroendocrine hormone levels. Compared to the controls, the CID patients made significantly more errors in spatial working and object recognition memory (p < .05), whereas the CDD patients performed poorly in all the assessed memory types (p < .05). In addition, the CID patients had higher levels (mean difference [95% CI]) of corticotrophin-releasing hormone, cortisol (31.98 [23.97, 39.98] μg/l), total triiodothyronine (667.58 [505.71, 829.45] μg/l), and total thyroxine (41.49 [33.23, 49.74] μg/l) (p < .05), and lower levels of thyrotropin-releasing hormone (-35.93 [-38.83, -33.02] ng/l), gonadotropin-releasing hormone (-4.50 [-5.02, -3.98] ng/l) (p < .05), and adrenocorticotropic hormone compared to the CDD patients. After controlling for confounding variables, the partial correlation analysis revealed that the levels of cortisol positively correlated with the errors in object working memory (r = .534, p = .033) and negatively correlated with the errors in object recognition memory (r = -.659, p = .006) in the CID patients. The results suggest that the CID patients had selective memory impairment, which may be mediated by increased cortisol levels. © 2016 Society for Psychophysiological Research.

  11. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    NASA Astrophysics Data System (ADS)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

    Numerous numerical models are developed to predict long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model manages to predict the measured puff evolution concerning shape and time of arrival to a fairly high extent, up to 60 h after the start of the release. The modeled puff is still too narrow in the advection direction.

  12. Quality control in the year 2000.

    PubMed

    Schade, B

    1992-01-01

    'Just-in-time' production is a prerequisite for a company to meet the challenges of competition. Manufacturing cycles have been so successfully optimized that release time now has become a significant factor. A vision for a major quality-control (QC) contribution to profitability in this decade seems to be the just-in-time release. Benefits will go beyond cost savings for lower inventory. The earlier detection of problems will reduce rejections and scrap. In addition, problem analysis and problem-solving will be easier. To achieve just-in-time release, advanced automated systems like robots will become the workhorses in QC for high volume pharmaceutical production. The requirements for these systems are extremely high in terms of quality, reliability and ruggedness. Crucial for the success might be advances in use of microelectronics for error checks, system recording, trouble shooting, etc. as well as creative new approaches (for example the use of redundant assay systems).

  13. Quality control in the year 2000

    PubMed Central

    Schade, Bernd

    1992-01-01

    ‘Just-in-time’ production is a prerequisite for a company to meet the challenges of competition. Manufacturing cycles have been so successfully optimized that release time now has become a significant factor. A vision for a major quality-control (QC) contribution to profitability in this decade seems to be the just-in-time release. Benefits will go beyond cost savings for lower inventory. The earlier detection of problems will reduce rejections and scrap. In addition, problem analysis and problem-solving will be easier. To achieve just-in-time release, advanced automated systems like robots will become the workhorses in QC for high volume pharmaceutical production. The requirements for these systems are extremely high in terms of quality, reliability and ruggedness. Crucial for the success might be advances in use of microelectronics for error checks, system recording, trouble shooting, etc. as well as creative new approaches (for example the use of redundant assay systems). PMID:18924930

  14. A Transient Dopamine Signal Represents Avoidance Value and Causally Influences the Demand to Avoid

    PubMed Central

    Pultorak, Katherine J.; Schelp, Scott A.; Isaacs, Dominic P.; Krzystyniak, Gregory

    2018-01-01

    Abstract While an extensive literature supports the notion that mesocorticolimbic dopamine plays a role in negative reinforcement, recent evidence suggests that dopamine exclusively encodes the value of positive reinforcement. In the present study, we employed a behavioral economics approach to investigate whether dopamine plays a role in the valuation of negative reinforcement. Using rats as subjects, we first applied fast-scan cyclic voltammetry (FSCV) to determine that dopamine concentration decreases with the number of lever presses required to avoid electrical footshock (i.e., the economic price of avoidance). Analysis of the rate of decay of avoidance demand curves, which depict an inverse relationship between avoidance and increasing price, allows for inference of the worth an animal places on avoidance outcomes. Rapidly decaying demand curves indicate increased price sensitivity, or low worth placed on avoidance outcomes, while slow rates of decay indicate reduced price sensitivity, or greater worth placed on avoidance outcomes. We therefore used optogenetics to assess how inducing dopamine release causally modifies the demand to avoid electrical footshock in an economic setting. Increasing release at an avoidance predictive cue made animals more sensitive to price, consistent with a negative reward prediction error (i.e., the animal perceives they received a worse outcome than expected). Increasing release at avoidance made animals less sensitive to price, consistent with a positive reward prediction error (i.e., the animal perceives they received a better outcome than expected). These data demonstrate that transient dopamine release events represent the value of avoidance outcomes and can predictably modify the demand to avoid. PMID:29766047

  15. Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields

    NASA Astrophysics Data System (ADS)

    Bettadpur, S.

    2012-04-01

    The next-generation Release-05 GRACE gravity field data products are the result of extensive effort applied to the improvements to the GRACE Level-1 (tracking) data products, and to improvements in the background gravity models and processing methodology. As a result, the squared-error upper-bound in RL05 fields is half or less than the squared-error upper-bound in RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse Hydrologic, Oceanographic and Cryospheric processes.

  16. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms . DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  17. Automatic Recognition of Phonemes Using a Syntactic Processor for Error Correction.

    DTIC Science & Technology

    1980-12-01

    OF PHONEMES USING A SYNTACTIC PROCESSOR FOR ERROR CORRECTION THESIS AFIT/GE/EE/80D-45 Robert B. Taylor 2Lt USAF Approved for public release ... distribution unlimited. AFIT/GE/EE/80D-45 AUTOMATIC RECOGNITION OF PHONEMES USING A SYNTACTIC PROCESSOR FOR ERROR CORRECTION THESIS Presented to the ... Testing ... 37 Bayes Decision Rule for Minimum Error ... 37 Bayes Decision Rule for Minimum Risk ... 39 Mini Max Test

  18. Predicting morphological changes DS New Naga-Hammadi Barrage for extreme Nile flood flows: A Monte Carlo analysis

    PubMed Central

    Sattar, Ahmed M.A.; Raslan, Yasser M.

    2013-01-01

    While construction of the Aswan High Dam (AHD) has stopped concurrent flooding events, the River Nile is still subject to low-intensity flood waves resulting from controlled release of water from the dam reservoir. Analysis of flow released from the New Naga-Hammadi Barrage, which is located 3460 km downstream of AHD, indicated an increase in the magnitude of floods released from the barrage in the past 10 years. A 2D numerical mobile-bed model is utilized to investigate the possible morphological changes downstream of the Naga-Hammadi Barrage from possible higher flood releases. Monte Carlo simulation analysis (MCS) is applied to the deterministic results of the 2D model to account for and assess the uncertainty of sediment parameters and formulations in addition to the scarcity of field measurements. Results showed that the predicted volume of erosion yielded the highest uncertainty and variation from the deterministic run, while navigation velocity yielded the least uncertainty. Furthermore, the error budget method is used to rank various sediment parameters for their contribution to the total prediction uncertainty. It is found that the suspended sediment contributed to output uncertainty more than other sediment parameters, followed by bed load with a contribution one order of magnitude lower. PMID:25685476

  19. Predicting morphological changes DS New Naga-Hammadi Barrage for extreme Nile flood flows: A Monte Carlo analysis.

    PubMed

    Sattar, Ahmed M A; Raslan, Yasser M

    2014-01-01

    While construction of the Aswan High Dam (AHD) has stopped concurrent flooding events, the River Nile is still subject to low-intensity flood waves resulting from controlled release of water from the dam reservoir. Analysis of flow released from the New Naga-Hammadi Barrage, which is located 3460 km downstream of AHD, indicated an increase in the magnitude of floods released from the barrage in the past 10 years. A 2D numerical mobile-bed model is utilized to investigate the possible morphological changes downstream of the Naga-Hammadi Barrage from possible higher flood releases. Monte Carlo simulation analysis (MCS) is applied to the deterministic results of the 2D model to account for and assess the uncertainty of sediment parameters and formulations in addition to the scarcity of field measurements. Results showed that the predicted volume of erosion yielded the highest uncertainty and variation from the deterministic run, while navigation velocity yielded the least uncertainty. Furthermore, the error budget method is used to rank various sediment parameters for their contribution to the total prediction uncertainty. It is found that the suspended sediment contributed to output uncertainty more than other sediment parameters, followed by bed load with a contribution one order of magnitude lower.
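
    The Monte Carlo and error-budget steps described in the two records above can be sketched generically. The example below uses a hypothetical linear surrogate in place of the 2D mobile-bed model, with invented input uncertainties, purely to show how output variance can be apportioned among inputs.

    ```python
    # Very rough sketch of the Monte Carlo step described above: perturb uncertain
    # sediment inputs, push them through a surrogate erosion-volume function
    # (standing in for the 2D mobile-bed model, which is not reproduced here), and
    # apportion the output variance to each input as a crude error budget.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 50_000
    suspended = rng.normal(1.0, 0.25, n)  # relative suspended-sediment concentration
    bed_load = rng.normal(1.0, 0.10, n)   # relative bed-load transport rate
    d50 = rng.normal(1.0, 0.05, n)        # relative median grain size

    # Hypothetical linear surrogate for predicted erosion volume.
    erosion = 2.0 * suspended + 0.8 * bed_load - 0.5 * d50 + rng.normal(0.0, 0.05, n)

    total_var = erosion.var()
    for name, x in (("suspended", suspended), ("bed load", bed_load), ("d50", d50)):
        share = np.cov(x, erosion)[0, 1] ** 2 / (x.var() * total_var)
        print(f"{name:10s} contributes ~{100 * share:.0f}% of the output variance")
    ```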

  20. Near-infrared spectroscopic analysis of the breaking force of extended-release matrix tablets prepared by roller-compaction: influence of plasticizer levels and sintering temperature.

    PubMed

    Dave, Vivek S; Fahmy, Raafat M; Hoag, Stephen W

    2015-06-01

    The aim of this study was to investigate the feasibility of near-infrared (NIR) spectroscopy for the determination of the influence of sintering temperature and plasticizer levels on the breaking force of extended-release matrix tablets prepared via roller-compaction. Six formulations using theophylline as a model drug, Eudragit® RL PO or Eudragit® RS PO as a matrix former and three levels of TEC (triethyl citrate) as a plasticizer were prepared. The powder blend was roller compacted using a fixed roll-gap of 1.5 mm, feed screw speed to roller speed ratio of 5:1 and roll pressure of 4 MPa. The granules, after removing fines, were compacted into tablets on a Stokes B2 rotary tablet press at a compression force of 7 kN. The tablets were thermally treated at different temperatures (room temperature, 50, 75 and 100 °C) for 5 h. These tablets were scanned in reflectance mode in the wavelength range of 400-2500 nm and were evaluated for breaking force. Tablet breaking force significantly increased with increasing plasticizer levels and with increases in the sintering temperature. An increase in tablet hardness produced an upward shift (increase in absorbance) in the NIR spectra. The principal component analysis (PCA) of the spectra was able to distinguish samples with different plasticizer levels and sintering temperatures. In addition, a 9-factor partial least squares (PLS) regression model for tablets containing Eudragit® RL PO had an r² of 0.9797, a standard error of calibration of 0.6255 and a standard error of cross validation (SECV) of 0.7594. Similar analysis of tablets containing Eudragit® RS PO showed an r² of 0.9831, a standard error of calibration of 0.9711 and an SECV of 1.192.
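
    The calibration statistics quoted above (r², SEC, SECV for a 9-factor PLS model) can be reproduced in form, though not in value, with a generic sketch. The spectra and breaking-force values below are synthetic, not the study's data.

    ```python
    # Minimal sketch with synthetic spectra, not the study's data: a 9-factor PLS
    # calibration of tablet breaking force against NIR absorbance, reporting r^2,
    # a standard error of calibration (SEC) and of cross validation (SECV).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(5)
    n_tablets, n_wavelengths = 60, 200
    spectra = rng.normal(size=(n_tablets, n_wavelengths))
    breaking_force = spectra[:, :10].sum(axis=1) + rng.normal(0.0, 0.5, n_tablets)

    pls = PLSRegression(n_components=9)
    pls.fit(spectra, breaking_force)
    fitted = pls.predict(spectra).ravel()
    cv_pred = cross_val_predict(pls, spectra, breaking_force, cv=10).ravel()

    r2 = np.corrcoef(fitted, breaking_force)[0, 1] ** 2
    sec = np.sqrt(np.mean((fitted - breaking_force) ** 2))
    secv = np.sqrt(np.mean((cv_pred - breaking_force) ** 2))
    print(f"r^2 = {r2:.3f}, SEC = {sec:.3f}, SECV = {secv:.3f}")
    ```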

  1. Using snowball sampling method with nurses to understand medication administration errors.

    PubMed

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operational procedures for known high-alert situations.

  2. The Impact of Atmospheric Modeling Errors on GRACE Estimates of Mass Loss in Greenland and Antarctica

    NASA Astrophysics Data System (ADS)

    Hardy, Ryan A.; Nerem, R. Steven; Wiese, David N.

    2017-12-01

    Systematic errors in Gravity Recovery and Climate Experiment (GRACE) monthly mass estimates over the Greenland and Antarctic ice sheets can originate from low-frequency biases in the European Centre for Medium-Range Weather Forecasts (ECMWF) Operational Analysis model, the atmospheric component of the Atmosphere and Ocean De-aliasing Level-1B (AOD1B) product used to forward model atmospheric and ocean gravity signals in GRACE processing. These biases are revealed in differences in surface pressure between the ECMWF Operational Analysis model, state-of-the-art reanalyses, and in situ surface pressure measurements. While some of these errors are attributable to well-understood discrete model changes and have published corrections, we examine errors these corrections do not address. We compare multiple models and in situ data in Antarctica and Greenland to determine which models have the most skill relative to monthly averages of the dealiasing model. We also evaluate linear combinations of these models and synthetic pressure fields generated from direct interpolation of pressure observations. These models consistently reveal drifts in the dealiasing model that cause the acceleration of Antarctica's mass loss between April 2002 and August 2016 to be underestimated by approximately 4 Gt yr^-2. We find similar results after attempting to solve the inverse problem, recovering pressure biases directly from the GRACE Jet Propulsion Laboratory RL05.1M mascon solutions. Over Greenland, we find a 2 Gt yr^-1 bias in mass trend. While our analysis focuses on errors in Release 05 of AOD1B, we also evaluate the new AOD1B RL06 product. We find that this new product mitigates some of the aforementioned biases.

  3. Global Erratum for Kepler Q0-Q17 and K2 C0-C5 Short Cadence Data

    NASA Technical Reports Server (NTRS)

    Caldwell, Douglas; Van Cleve, Jeffrey E.

    2016-01-01

    An accounting error has scrambled much of the short-cadence collateral smear data used to correct for the effects of Kepler's shutterless readout. This error has been present since launch and affects approximately half of all short-cadence targets observed by Kepler and K2 to date. The resulting calibration errors are present in both the short-cadence target pixel files and the short-cadence light curves for Kepler Data Releases 1-24 and K2 Data Releases 1-7. This error does not affect long-cadence data. Since it will take some time to correct this error and reprocess all Kepler and K2 data, a list of affected targets is provided. Even though the affected targets are readily identified, the science impact for any particular target may be difficult to assess. Since the smear signal is often small compared to the target signal, the effect is negligible for many targets. However, the smear signal is scene-dependent, so time-varying signals can be introduced into any target by the other stars falling on the same CCD column. Some tips on how to assess the severity of the calibration error are provided in this document.

  4. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA® terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article describes the key concepts of the EU good practice guidance for defining, classifying, coding, reporting, evaluating and preventing medication errors. This guidance should contribute to the safe and effective use of medicines for the benefit of patients and public health.

  5. VizieR Online Data Catalog: 5 Galactic GC proper motions from Gaia DR1 (Watkins+, 2017)

    NASA Astrophysics Data System (ADS)

    Watkins, L. L.; van der Marel, R. P.

    2017-11-01

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories. (4 data files).
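
    The cluster parallax and PM come from a weighted average over member stars that accounts for the correlations between parameters. As a hedged sketch of that kind of estimator (not the authors' code, and with invented per-star values rather than TGAS data), an inverse-covariance-weighted mean over (parallax, pmRA, pmDec) vectors looks like this:

    ```python
    # Hedged sketch: combine per-star astrometry with full 3x3 covariances via
    # an inverse-covariance (maximum-likelihood) weighted mean.
    import numpy as np

    def cov3(s_plx, s_ra, s_dec, rho_radec):
        """Toy covariance with a pmRA-pmDec correlation; parallax left uncorrelated."""
        c = np.diag([s_plx**2, s_ra**2, s_dec**2])
        c[1, 2] = c[2, 1] = rho_radec * s_ra * s_dec
        return c

    stars = [  # ([parallax mas, pmRA mas/yr, pmDec mas/yr], covariance) -- invented
        (np.array([0.22, 5.30, -2.60]), cov3(0.30, 0.60, 0.60, 0.3)),
        (np.array([0.30, 5.10, -2.45]), cov3(0.25, 0.50, 0.55, -0.2)),
        (np.array([0.18, 5.45, -2.70]), cov3(0.35, 0.70, 0.65, 0.4)),
    ]

    info = np.zeros((3, 3))          # summed inverse covariances (information matrix)
    weighted = np.zeros(3)
    for x, cov in stars:
        cinv = np.linalg.inv(cov)
        info += cinv
        weighted += cinv @ x

    combined_cov = np.linalg.inv(info)
    combined = combined_cov @ weighted
    print("combined (plx, pmRA, pmDec):", np.round(combined, 3))
    print("1-sigma errors             :", np.round(np.sqrt(np.diag(combined_cov)), 3))
    ```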

  6. Tycho-Gaia Astrometric Solution Parallaxes and Proper Motions for Five Galactic Globular Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Laura L.; Van der Marel, Roeland P., E-mail: lwatkins@stsci.edu

    2017-04-20

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories.

  7. nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kusina, A.; Kovarik, Karol; Jezo, T.

    2015-09-01

    We present the first official release of the nCTEQ nuclear parton distribution functions with errors. The main addition to the previous nCTEQ PDFs is the introduction of PDF uncertainties based on the Hessian method. Another important addition is the inclusion of pion production data from RHIC that give us a handle on constraining the gluon PDF. This contribution summarizes our results from arXiv:1509.00792 and concentrates on the comparison with other groups providing nuclear parton distributions.
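
    For readers unfamiliar with Hessian PDF uncertainties, the sketch below evaluates the standard symmetric and asymmetric "master formulas" for an observable computed on plus/minus eigenvector sets. The toy numbers are invented, and whether nCTEQ15 quotes the symmetric or the asymmetric variant is not stated in this summary.

    ```python
    # Hedged sketch of Hessian PDF uncertainty master formulas, given an
    # observable's central value X_0 and its values on plus/minus eigenvector
    # sets (X_k^+, X_k^-).
    import numpy as np

    def hessian_uncertainty(x0, x_plus, x_minus):
        x_plus, x_minus = np.asarray(x_plus, float), np.asarray(x_minus, float)
        zeros = np.zeros_like(x_plus)
        sym = 0.5 * np.sqrt(np.sum((x_plus - x_minus) ** 2))
        up = np.sqrt(np.sum(np.maximum.reduce([x_plus - x0, x_minus - x0, zeros]) ** 2))
        down = np.sqrt(np.sum(np.maximum.reduce([x0 - x_plus, x0 - x_minus, zeros]) ** 2))
        return sym, up, down

    # Toy observable with three eigenvector directions (values are made up).
    sym, up, down = hessian_uncertainty(1.00, [1.05, 0.98, 1.02], [0.96, 1.01, 0.99])
    print(f"symmetric: +/-{sym:.3f}   asymmetric: +{up:.3f} / -{down:.3f}")
    ```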

  8. nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kusina, A.; Kovarik, K.; Jezo, T.

    2015-09-04

    We present the first official release of the nCTEQ nuclear parton distribution functions with errors. The main addition to the previous nCTEQ PDFs is the introduction of PDF uncertainties based on the Hessian method. Another important addition is the inclusion of pion production data from RHIC that give us a handle on constraining the gluon PDF. This contribution summarizes our results from arXiv:1509.00792, and concentrates on the comparison with other groups providing nuclear parton distributions.

  9. Evaluating a medical error taxonomy.

    PubMed

    Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication errors to MedWatch reports of medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting.

  10. Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation

    PubMed Central

    Cacciapaglia, Fabio; Wightman, R. Mark; Carelli, Regina M.

    2015-01-01

    Mesolimbic dopamine (DA) is phasically released during appetitive behaviors, though there is substantive disagreement about the specific purpose of these DA signals. For example, prediction error (PE) models suggest a role of learning, while incentive salience (IS) models argue that the DA signal imbues stimuli with value and thereby stimulates motivated behavior. However, within the nucleus accumbens (NAc) patterns of DA release can strikingly differ between subregions, and as such, it is possible that these patterns differentially contribute to aspects of PE and IS. To assess this, we measured DA release in subregions of the NAc during a behavioral task that spatiotemporally separated sequential goal-directed stimuli. Electrochemical methods were used to measure subsecond NAc dopamine release in the core and shell during a well learned instrumental chain schedule in which rats were trained to press one lever (seeking; SL) to gain access to a second lever (taking; TL) linked with food delivery, and again during extinction. In the core, phasic DA release was greatest following initial SL presentation, but minimal for the subsequent TL and reward events. In contrast, phasic shell DA showed robust release at all task events. Signaling decreased between the beginning and end of sessions in the shell, but not core. During extinction, peak DA release in the core showed a graded decrease for the SL and pauses in release during omitted expected rewards, whereas shell DA release decreased predominantly during the TL. These release dynamics suggest parallel DA signals capable of supporting distinct theories of appetitive behavior. SIGNIFICANCE STATEMENT Dopamine signaling in the brain is important for a variety of cognitive functions, such as learning and motivation. Typically, it is assumed that a single dopamine signal is sufficient to support these cognitive functions, though competing theories disagree on how dopamine contributes to reward-based behaviors. Here, we have found that real-time dopamine release within the nucleus accumbens (a primary target of midbrain dopamine neurons) strikingly varies between core and shell subregions. In the core, dopamine dynamics are consistent with learning-based theories (such as reward prediction error) whereas in the shell, dopamine is consistent with motivation-based theories (e.g., incentive salience). These findings demonstrate that dopamine plays multiple and complementary roles based on discrete circuits that help animals optimize rewarding behaviors. PMID:26290234

  11. A model for the prediction of latent errors using data obtained during the development process

    NASA Technical Reports Server (NTRS)

    Gaffney, J. E., Jr.; Martello, S. J.

    1984-01-01

    A model, implemented in a program that runs on the IBM PC, for estimating the latent (or post-ship) error content of a body of software upon its initial release to the user is presented. The model employs the count of errors discovered at one or more of the error discovery processes during development, such as a design inspection, as the input data for a process which provides estimates of the total life-time (injected) error content and of the latent (or post-ship) error content--the errors remaining at delivery. The model presumes that these activities cover all of the opportunities for error discovery (and removal) during the software development process.
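
    The abstract does not give the model's functional form, so the following is only a hedged illustration of the general approach: fit a two-parameter, Rayleigh-type discovery profile to per-phase error counts, estimate the total lifetime (injected) error content, and take the latent content as what remains after the observed discoveries. The phase counts and the profile choice are assumptions, not the authors' exact formulation.

    ```python
    # Hedged sketch (not the paper's exact model): estimate total injected and
    # latent error content from per-phase discovery counts.
    import numpy as np
    from scipy.optimize import curve_fit

    def phase_discoveries(t, E, B):
        """Errors expected in phase t (1 = design insp., 2 = code insp., ...)."""
        return E * (np.exp(-B * (t - 1) ** 2) - np.exp(-B * t ** 2))

    phases = np.array([1, 2, 3, 4], dtype=float)
    found = np.array([40, 90, 70, 40], dtype=float)   # invented per-phase error counts

    (E_total, B), _ = curve_fit(phase_discoveries, phases, found, p0=[250.0, 0.3])
    latent = E_total - found.sum()                    # injected minus already discovered
    print(f"estimated lifetime error content : {E_total:.0f}")
    print(f"estimated latent (post-ship) errors: {latent:.0f}")
    ```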

  12. Chromium release from new stainless steel, recycled and nickel-free orthodontic brackets.

    PubMed

    Sfondrini, Maria Francesca; Cacciafesta, Vittorio; Maffia, Elena; Massironi, Sarah; Scribante, Andrea; Alberti, Giancarla; Biesuz, Raffaela; Klersy, Catherine

    2009-03-01

    To test the hypothesis that there is no difference in the amounts of chromium released from new stainless steel brackets, recycled stainless steel brackets, and nickel-free (Ni-free) orthodontic brackets. This in vitro study was performed using a classic batch procedure by immersion of the samples in artificial saliva at various acidities (pH 4.2, 6.5, and 7.6) over an extended time interval (t(1) = 0.25 h, t(2) = 1 h, t(3) = 24 h, t(4) = 48 h, t(5) = 120 h). The amount of chromium release was determined using an atomic absorption spectrophotometer and an inductively coupled plasma atomic emission spectrometer. Statistical analysis included a linear regression model for repeated measures, with calculation of Huber-White robust standard errors to account for intrabracket correlation of data. For post hoc comparisons the Bonferroni correction was applied. The greatest amount of chromium was released from new stainless steel brackets (0.52 +/- 1.083 microg/g), whereas the recycled brackets released 0.27 +/- 0.38 microg/g. The smallest release was measured with Ni-free brackets (0.21 +/- 0.51 microg/g). The difference between recycled brackets and Ni-free brackets was not statistically significant (P = .13). For all brackets, the greatest release (P = .000) was measured at pH 4.2, and a significant increase was reported between all time intervals (P < .002). The hypothesis is rejected, but the amount of chromium released in all test solutions was well below the daily dietary intake level.
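
    As a hedged sketch of the kind of repeated-measures regression with Huber-White (cluster-robust) standard errors described above, the following uses statsmodels on a synthetic data set; the column names, model terms and numbers are assumptions, not the study's data.

    ```python
    # Hedged sketch: linear model for chromium release with standard errors
    # clustered by bracket, to allow for intra-bracket correlation.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_brackets, n_times = 12, 5
    df = pd.DataFrame({
        "bracket_id": np.repeat(np.arange(n_brackets), n_times),
        "bracket_type": np.repeat(rng.choice(["new", "recycled", "ni_free"], n_brackets),
                                  n_times),
        "time_h": np.tile([0.25, 1.0, 24.0, 48.0, 120.0], n_brackets),
    })
    df["chromium_ug_g"] = 0.2 + 0.001 * df["time_h"] + rng.normal(0, 0.05, len(df))

    model = smf.ols("chromium_ug_g ~ C(bracket_type) + np.log(time_h)", data=df)
    fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["bracket_id"]})
    print(fit.summary().tables[1])
    ```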

  13. Unmanned Systems Safety Guide for DoD Acquisition

    DTIC Science & Technology

    2007-06-27

    Weapons release authorization validation. • Weapons release verification. • Weapons release abort/back-out, including clean-up or reset of weapons... conditions, clean room, stress) and other environments (e.g., software engineering environment, electromagnetic) related to system utilization. ... A solid or liquid energetic substance (or a mixture of substances) which is in itself capable... OUSD (AT&L) Systems and Software Engineering

  14. Risk management: correct patient and specimen identification in a surgical pathology laboratory. The experience of Infermi Hospital, Rimini, Italy.

    PubMed

    Fabbretti, G

    2010-06-01

    Because of its complex nature, surgical pathology practice is prone to error. In this report, we describe our methods for reducing error as much as possible during the pre-analytical and analytical phases. This was achieved by revising procedures, and by using computer technology and automation. Most mistakes are the result of human error in the identification and matching of patients and samples. To avoid faulty data interpretation, we employed a new comprehensive computer system that acquires all patient ID information directly from the hospital's database with a remote order entry; it also provides label and request forms via the Web, where clinical information is required before sending the sample. Both patient and sample are identified directly and immediately at the site where the surgical procedures are performed. Barcode technology is used to input information at every step, and automation is used for sample blocks and slides to avoid errors that occur when information is recorded or transferred by hand. Quality control checks occur at every step of the process to ensure that none of the steps are left to chance and that no phase is dependent on a single operator. The system also provides statistical analysis of errors so that new strategies can be implemented to avoid repetition. In addition, the staff receives frequent training on avoiding errors and new developments. The results have been promising, with a very low error rate (0.27%). None of the errors compromised patient health, and all were detected before release of the diagnostic report.

  15. Lost in Translation? A Comparison of Cancer-Genetics Reporting in the Press Release and its Subsequent Coverage in Lay Press

    PubMed Central

    Brechman, Jean M.; Lee, Chul-joo; Cappella, Joseph N.

    2014-01-01

    Understanding how genetic science is communicated to the lay public is of great import, given that media coverage of genetics is increasing exponentially and that the ways in which discoveries are presented in the news can have significant effects on a variety of health outcomes. To address this issue, this study examines the presentation of genetic research relating to cancer outcomes and behaviors (i.e., prostate cancer, breast cancer, colon cancer, smoking and obesity) in both the press release (N = 23) and its subsequent news coverage (N = 71) by using both quantitative content analysis and qualitative textual analysis. In contrast to earlier studies reporting that news stories often misrepresent genetics by presenting biologically deterministic and simplified portrayals (e.g., Mountcastle-Shah et al., 2003; Ten Eych & Williment, 2003), our data shows no clear trends in the direction of distortion toward deterministic claims in news articles. Also, other errors commonly attributed to science journalism, such as lack of qualifying details and use of oversimplified language (e.g., “fat gene”) are observed in press releases. These findings suggest that the intermediary press release rather than news coverage may serve as a source of distortion in the dissemination of science to the lay public. The implications of this study for future research in this area are discussed. PMID:25568611

  16. Verification of experimental dynamic strength methods with atomistic ramp-release simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Alexander P.; Brown, Justin L.; Lim, Hojun

    Material strength and moduli can be determined from dynamic high-pressure ramp-release experiments using an indirect method of Lagrangian wave profile analysis of surface velocities. This method, termed self-consistent Lagrangian analysis (SCLA), has been difficult to calibrate and corroborate with other experimental methods. Using nonequilibrium molecular dynamics, we validate the SCLA technique by demonstrating that it accurately predicts the same bulk modulus, shear modulus, and strength as those calculated from the full stress tensor data, especially where strain rate induced relaxation effects and wave attenuation are small. We show here that introducing a hold in the loading profile at peak pressure gives improved accuracy in the shear moduli and relaxation-adjusted strength by reducing the effect of wave attenuation. When rate-dependent effects coupled with wave attenuation are large, we find that Lagrangian analysis overpredicts the maximum unload wavespeed, leading to increased error in the measured dynamic shear modulus. Furthermore, these simulations provide insight into the definition of dynamic strength, as well as a plausible explanation for experimental disagreement in reported dynamic strength values.

  17. Verification of experimental dynamic strength methods with atomistic ramp-release simulations

    NASA Astrophysics Data System (ADS)

    Moore, Alexander P.; Brown, Justin L.; Lim, Hojun; Lane, J. Matthew D.

    2018-05-01

    Material strength and moduli can be determined from dynamic high-pressure ramp-release experiments using an indirect method of Lagrangian wave profile analysis of surface velocities. This method, termed self-consistent Lagrangian analysis (SCLA), has been difficult to calibrate and corroborate with other experimental methods. Using nonequilibrium molecular dynamics, we validate the SCLA technique by demonstrating that it accurately predicts the same bulk modulus, shear modulus, and strength as those calculated from the full stress tensor data, especially where strain rate induced relaxation effects and wave attenuation are small. We show here that introducing a hold in the loading profile at peak pressure gives improved accuracy in the shear moduli and relaxation-adjusted strength by reducing the effect of wave attenuation. When rate-dependent effects coupled with wave attenuation are large, we find that Lagrangian analysis overpredicts the maximum unload wavespeed, leading to increased error in the measured dynamic shear modulus. These simulations provide insight into the definition of dynamic strength, as well as a plausible explanation for experimental disagreement in reported dynamic strength values.

  18. Verification of experimental dynamic strength methods with atomistic ramp-release simulations

    DOE PAGES

    Moore, Alexander P.; Brown, Justin L.; Lim, Hojun; ...

    2018-05-04

    Material strength and moduli can be determined from dynamic high-pressure ramp-release experiments using an indirect method of Lagrangian wave profile analysis of surface velocities. This method, termed self-consistent Lagrangian analysis (SCLA), has been difficult to calibrate and corroborate with other experimental methods. Using nonequilibrium molecular dynamics, we validate the SCLA technique by demonstrating that it accurately predicts the same bulk modulus, shear modulus, and strength as those calculated from the full stress tensor data, especially where strain rate induced relaxation effects and wave attenuation are small. We show here that introducing a hold in the loading profile at peak pressure gives improved accuracy in the shear moduli and relaxation-adjusted strength by reducing the effect of wave attenuation. When rate-dependent effects coupled with wave attenuation are large, we find that Lagrangian analysis overpredicts the maximum unload wavespeed, leading to increased error in the measured dynamic shear modulus. Furthermore, these simulations provide insight into the definition of dynamic strength, as well as a plausible explanation for experimental disagreement in reported dynamic strength values.

  19. Towards the operational estimation of a radiological plume using data assimilation after a radiological accidental atmospheric release

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Vira, Julius; Bocquet, Marc; Sofiev, Mikhail; Saunier, Olivier

    2011-06-01

    In the event of an accidental atmospheric release of radionuclides from a nuclear power plant, accurate real-time forecasting of the activity concentrations of radionuclides is required by the decision makers for the preparation of adequate countermeasures. The accuracy of the forecast plume is highly dependent on the source term estimation. On several academic test cases, including real data, inverse modelling and data assimilation techniques were proven to help in the assessment of the source term. In this paper, a semi-automatic method is proposed for the sequential reconstruction of the plume, by implementing a sequential data assimilation algorithm based on inverse modelling, with care taken to develop realistic methods for operational risk agencies. The performance of the assimilation scheme has been assessed through the intercomparison between French and Finnish frameworks. Two dispersion models have been used: Polair3D and Silam, developed in two different research centres. Different release locations, as well as different meteorological situations, are tested. The existing and newly planned surveillance networks are used and realistically large multiplicative observational errors are assumed. The inverse modelling scheme accounts for the strong error bias encountered with such errors. The efficiency of the data assimilation system is tested via statistical indicators. For France and Finland, the average performance of the data assimilation system is strong. However, there are outlying situations where the inversion fails because observability is too poor. In addition, in the case where the power plant responsible for the accidental release is not known, robust statistical tools are developed and tested to discriminate candidate release sites.
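
    The core inverse-modelling step can be illustrated with a hedged sketch: recover a time-resolved source term from activity concentrations given a source-receptor matrix from a dispersion model. The matrix, the "true" release and the Tikhonov regularisation below are synthetic stand-ins, not the Polair3D/Silam setup or the paper's algorithm.

    ```python
    # Hedged sketch of a regularised least-squares source-term inversion:
    # observations y = H @ sigma, with H from a dispersion model.
    import numpy as np

    rng = np.random.default_rng(2)
    n_obs, n_src = 60, 24                         # observations, hourly source segments
    H = rng.lognormal(mean=-2.0, sigma=1.0, size=(n_obs, n_src))  # synthetic kernel
    true_sigma = np.zeros(n_src)
    true_sigma[6:12] = 1.0e10                     # a 6-hour release (Bq/h), invented

    y = H @ true_sigma
    y *= rng.lognormal(mean=0.0, sigma=0.3, size=n_obs)   # multiplicative obs. error

    lam = 1.0e-3 * np.trace(H.T @ H) / n_src      # ad hoc Tikhonov regularisation strength
    sigma_hat = np.linalg.solve(H.T @ H + lam * np.eye(n_src), H.T @ y)
    sigma_hat = np.clip(sigma_hat, 0.0, None)     # enforce non-negative release rates

    print("recovered / true total release:",
          round(float(sigma_hat.sum() / true_sigma.sum()), 2))
    ```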

  20. Application of Uniform Measurement Error Distribution

    DTIC Science & Technology

    2016-03-18

    Point of Contact: subrata.sanyal@navy.mil, Measurement Science & Engineering Department Operations (Code: MS02), P.O. Box 5000, Corona, CA 92878-5000. March 18, 2016. DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. (NSWC Corona Public Release Control Number 16-005; NSWCCORDIV/RDTR-2016-005)

  1. Landmark-Based Navigation of an Unmanned Ground Vehicle (UGV)

    DTIC Science & Technology

    2009-03-01

    against large measurement errors. RELEASE LIMITATION: Approved for public release. Published by Weapons Systems Division... achieved as numerous low-cost gyroscopes in the market meet this requirement. (DSTO-TR-2260, Section 3.5.4, Sensitivity to Vehicle Speed) In this subsection

  2. Soil Moisture Active Passive Mission L4_SM Data Product Assessment (Version 2 Validated Release)

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf Helmut; De Lannoy, Gabrielle J. M.; Liu, Qing; Ardizzone, Joseph V.; Chen, Fan; Colliander, Andreas; Conaty, Austin; Crow, Wade; Jackson, Thomas; Kimball, John; hide

    2016-01-01

    During the post-launch SMAP calibration and validation (Cal/Val) phase there are two objectives for each science data product team: 1) calibrate, verify, and improve the performance of the science algorithm, and 2) validate the accuracy of the science data product as specified in the science requirements and according to the Cal/Val schedule. This report provides an assessment of the SMAP Level 4 Surface and Root Zone Soil Moisture Passive (L4_SM) product specifically for the product's public Version 2 validated release scheduled for 29 April 2016. The assessment of the Version 2 L4_SM data product includes comparisons of SMAP L4_SM soil moisture estimates with in situ soil moisture observations from core validation sites and sparse networks. The assessment further includes a global evaluation of the internal diagnostics from the ensemble-based data assimilation system that is used to generate the L4_SM product. This evaluation focuses on the statistics of the observation-minus-forecast (O-F) residuals and the analysis increments. Together, the core validation site comparisons and the statistics of the assimilation diagnostics are considered primary validation methodologies for the L4_SM product. Comparisons against in situ measurements from regional-scale sparse networks are considered a secondary validation methodology because such in situ measurements are subject to up-scaling errors from the point-scale to the grid cell scale of the data product. Based on the limited set of core validation sites, the wide geographic range of the sparse network sites, and the global assessment of the assimilation diagnostics, the assessment presented here meets the criteria established by the Committee on Earth Observing Satellites for Stage 2 validation and supports the validated release of the data. An analysis of the time average surface and root zone soil moisture shows that the global pattern of arid and humid regions is captured by the L4_SM estimates. Results from the core validation site comparisons indicate that "Version 2" of the L4_SM data product meets the self-imposed L4_SM accuracy requirement, which is formulated in terms of the ubRMSE: the RMSE (Root Mean Square Error) after removal of the long-term mean difference. The overall ubRMSE of the 3-hourly L4_SM surface soil moisture at the 9 km scale is 0.035 cubic meters per cubic meter. The corresponding ubRMSE for L4_SM root zone soil moisture is 0.024 cubic meters per cubic meter. Both of these metrics are comfortably below the 0.04 cubic meters per cubic meter requirement. The L4_SM estimates are an improvement over estimates from a model-only SMAP Nature Run version 4 (NRv4), which demonstrates the beneficial impact of the SMAP brightness temperature data. L4_SM surface soil moisture estimates are consistently more skillful than NRv4 estimates, although not by a statistically significant margin. The lack of statistical significance is not surprising given the limited data record available to date. Root zone soil moisture estimates from L4_SM and NRv4 have similar skill. Results from comparisons of the L4_SM product to in situ measurements from nearly 400 sparse network sites corroborate the core validation site results. The instantaneous soil moisture and soil temperature analysis increments are within a reasonable range and result in spatially smooth soil moisture analyses.
The O-F residuals exhibit only small biases on the order of 1-3 degrees Kelvin between the (re-scaled) SMAP brightness temperature observations and the L4_SM model forecast, which indicates that the assimilation system is largely unbiased. The spatially averaged time series standard deviation of the O-F residuals is 5.9 degrees Kelvin, which reduces to 4.0 degrees Kelvin for the observation-minus-analysis (O-A) residuals, reflecting the impact of the SMAP observations on the L4_SM system. Averaged globally, the time series standard deviation of the normalized O-F residuals is close to unity, which would suggest that the magnitude of the modeled errors approximately reflects that of the actual errors. The assessment report also notes several limitations of the "Version 2" L4_SM data product and science algorithm calibration that will be addressed in future releases. Regionally, the time series standard deviation of the normalized O-F residuals deviates considerably from unity, which indicates that the L4_SM assimilation algorithm either over- or under-estimates the actual errors that are present in the system. Planned improvements include revised land model parameters, revised error parameters for the land model and the assimilated SMAP observations, and revised surface meteorological forcing data for the operational period and underlying climatological data. Moreover, a refined analysis of the impact of SMAP observations will be facilitated by the construction of additional variants of the model-only reference data. Nevertheless, the “Version 2” validated release of the L4_SM product is sufficiently mature and of adequate quality for distribution to and use by the larger science and application communities.
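
    The assimilation diagnostics quoted above (O-F and O-A residual spreads, normalised O-F residuals) can be computed with a few lines; the sketch below uses synthetic brightness-temperature-like numbers and an assumed error budget purely to show the mechanics, not the L4_SM system's values.

    ```python
    # Hedged sketch of O-F / O-A residual diagnostics for an assimilation system.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 5000
    obs      = 250.0 + rng.normal(0.0, 8.0, n)        # (re-scaled) Tb observations, K
    forecast = obs - rng.normal(1.5, 5.9, n)          # forecast: small bias, ~6 K spread
    analysis = forecast + 0.55 * (obs - forecast)     # analysis pulled toward observations

    o_minus_f = obs - forecast
    o_minus_a = obs - analysis

    # Expected O-F spread = sqrt(obs error var + forecast error var); values assumed.
    expected_std = np.sqrt(4.0**2 + 4.5**2)
    normalized_of = o_minus_f / expected_std

    print("O-F bias / std:", round(o_minus_f.mean(), 2), "/", round(o_minus_f.std(), 2), "K")
    print("O-A std       :", round(o_minus_a.std(), 2), "K")
    print("normalized O-F std (want ~1):", round(normalized_of.std(), 2))
    ```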

  3. A method for the automated processing and analysis of images of ULVWF-platelet strings.

    PubMed

    Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V

    2013-01-01

    We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
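
    The authors' software is custom-written and not described in detail here, so the following is only a hedged sketch of one generic way to automate such measurements with scikit-image: denoise a frame, segment bright string-like objects and report approximate lengths. The synthetic frame and all thresholds are assumptions.

    ```python
    # Hedged sketch: segment bright string-like structures in a noisy frame and
    # estimate their lengths from skeleton pixel counts.
    import numpy as np
    from skimage import filters, measure, morphology

    rng = np.random.default_rng(4)
    frame = rng.normal(0.1, 0.05, (256, 256))       # synthetic noisy background
    frame[100, 40:200] += 0.6                        # fake horizontal "string"
    frame[150:220, 180] += 0.6                       # fake vertical "string"

    smoothed = filters.gaussian(frame, sigma=1.5)    # denoise / enhance
    binary = smoothed > smoothed.mean() + 5 * smoothed.std()
    binary = morphology.remove_small_objects(binary, min_size=20)

    labels = measure.label(binary)
    for region in measure.regionprops(labels):
        # Skeleton length is a rough proxy for string length in pixels.
        skeleton = morphology.skeletonize(labels == region.label)
        print(f"string {region.label}: ~{int(skeleton.sum())} px long, {region.area} px area")
    ```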

  4. Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation.

    PubMed

    Saddoris, Michael P; Cacciapaglia, Fabio; Wightman, R Mark; Carelli, Regina M

    2015-08-19

    Mesolimbic dopamine (DA) is phasically released during appetitive behaviors, though there is substantive disagreement about the specific purpose of these DA signals. For example, prediction error (PE) models suggest a role of learning, while incentive salience (IS) models argue that the DA signal imbues stimuli with value and thereby stimulates motivated behavior. However, within the nucleus accumbens (NAc) patterns of DA release can strikingly differ between subregions, and as such, it is possible that these patterns differentially contribute to aspects of PE and IS. To assess this, we measured DA release in subregions of the NAc during a behavioral task that spatiotemporally separated sequential goal-directed stimuli. Electrochemical methods were used to measure subsecond NAc dopamine release in the core and shell during a well learned instrumental chain schedule in which rats were trained to press one lever (seeking; SL) to gain access to a second lever (taking; TL) linked with food delivery, and again during extinction. In the core, phasic DA release was greatest following initial SL presentation, but minimal for the subsequent TL and reward events. In contrast, phasic shell DA showed robust release at all task events. Signaling decreased between the beginning and end of sessions in the shell, but not core. During extinction, peak DA release in the core showed a graded decrease for the SL and pauses in release during omitted expected rewards, whereas shell DA release decreased predominantly during the TL. These release dynamics suggest parallel DA signals capable of supporting distinct theories of appetitive behavior. Dopamine signaling in the brain is important for a variety of cognitive functions, such as learning and motivation. Typically, it is assumed that a single dopamine signal is sufficient to support these cognitive functions, though competing theories disagree on how dopamine contributes to reward-based behaviors. Here, we have found that real-time dopamine release within the nucleus accumbens (a primary target of midbrain dopamine neurons) strikingly varies between core and shell subregions. In the core, dopamine dynamics are consistent with learning-based theories (such as reward prediction error) whereas in the shell, dopamine is consistent with motivation-based theories (e.g., incentive salience). These findings demonstrate that dopamine plays multiple and complementary roles based on discrete circuits that help animals optimize rewarding behaviors. Copyright © 2015 the authors 0270-6474/15/3511572-11$15.00/0.

  5. Innovation of novel 'Tab in Tab' system for release modulation of milnacipran HCl: optimization, formulation and in vitro investigations.

    PubMed

    Parejiya, Punit B; Barot, Bhavesh S; Patel, Hetal K; Shelat, Pragna K; Shukla, Arunkumar

    2013-11-01

    The study was aimed toward the development of a modified-release oral drug delivery system for the highly water-soluble drug Milnacipran HCl (MH). A novel Tablet in Tablet system (TITs) comprising immediate- and extended-release doses of MH in different parts was fabricated. The outer shell was composed of an admixture of MH, lactose and a novel herbal disintegrant obtained from seeds of Lepidium sativum. In the inner core, MH was matrixed with a blend of hydrophilic (Benecel®) and hydrophobic (Compritol®) polymers. A 3² full factorial design and an artificial neural network (ANN) were employed to correlate the effects of the independent variables on the dependent variables. The TITs were characterized for pharmacopoeial specifications, in vitro drug release, SEM, drug release kinetics and FTIR. The release pattern of MH from batch A10, containing 25.17% w/w Benecel® and 8.21% w/w Compritol®, was in close proximity to the ideal theoretical profile (t(50%) = 5.92 h, t(75%) = 11.9 h, t(90%) = 18.11 h). The phenomenon of drug release was further explained by the concept of percolation, and the roles of Benecel® and Compritol® in drug release retardation were studied. The normalized error obtained from the ANN was lower than that from the multiple regression analysis, indicating higher accuracy in prediction. The results of a short-term stability study revealed stable characteristics of the TITs. SEM study of the TITs at different dissolution time points confirmed both diffusion and erosion mechanisms to be operative during drug release from batch A10. The novel TITs can be a successful once-a-day delivery system for highly water-soluble drugs.
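
    As a hedged illustration of how release-profile metrics such as t(50%), t(75%) and t(90%) can be read off a dissolution curve by interpolation, and how a profile can be scored against an ideal target, the sketch below uses an invented profile rather than the batch A10 data.

    ```python
    # Hedged sketch: extract t50/t75/t90 from a dissolution profile and score it
    # against an assumed ideal target profile.
    import numpy as np

    time_h   = np.array([0, 1, 2, 4, 6, 8, 12, 16, 20, 24], dtype=float)
    released = np.array([0, 14, 24, 40, 51, 60, 76, 86, 92, 96], dtype=float)  # % released

    def t_at(pct, t, rel):
        """Time at which 'pct' percent is released, by linear interpolation."""
        return float(np.interp(pct, rel, t))   # rel must be monotonically increasing

    for pct in (50, 75, 90):
        print(f"t{pct}% = {t_at(pct, time_h, released):.2f} h")

    # Mean absolute deviation from an ideal zero-order-style target (100% at 20 h).
    ideal = np.clip(100 * time_h / 20.0, 0, 100)
    print("mean abs. error vs ideal profile:",
          round(float(np.mean(np.abs(released - ideal))), 1), "%")
    ```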

  6. Real-Time Identification of Wheel Terrain Interaction Models for Enhanced Autonomous Vehicle Mobility

    DTIC Science & Technology

    2014-04-24

    [Figure residue: position estimation error (cm) plots for global pose, comparing a Color Statistics metric (Angelova...) against Color_Statistics_Error / Average_Slip_Error.] ...get some kind of clearance for releasing pose and odometry data) collected at the following sites – Taylor, Gascola, Somerset, Fort Bliss and

  7. Brain negativity as an indicator of predictive error processing: the contribution of visual action effect monitoring.

    PubMed

    Joch, Michael; Hegele, Mathias; Maurer, Heiko; Müller, Hermann; Maurer, Lisa Katharina

    2017-07-01

    The error (related) negativity (Ne/ERN) is an event-related potential in the electroencephalogram (EEG) correlating with error processing. Its conditions of appearance before terminal external error information suggest that the Ne/ERN is indicative of predictive processes in the evaluation of errors. The aim of the present study was to specifically examine the Ne/ERN in a complex motor task and to particularly rule out other explaining sources of the Ne/ERN aside from error prediction processes. To this end, we focused on the dependency of the Ne/ERN on visual monitoring about the action outcome after movement termination but before result feedback (action effect monitoring). Participants performed a semi-virtual throwing task by using a manipulandum to throw a virtual ball displayed on a computer screen to hit a target object. Visual feedback about the ball flying to the target was masked to prevent action effect monitoring. Participants received a static feedback about the action outcome (850 ms) after each trial. We found a significant negative deflection in the average EEG curves of the error trials peaking at ~250 ms after ball release, i.e., before error feedback. Furthermore, this Ne/ERN signal did not depend on visual ball-flight monitoring after release. We conclude that the Ne/ERN has the potential to indicate error prediction in motor tasks and that it exists even in the absence of action effect monitoring. NEW & NOTEWORTHY In this study, we are separating different kinds of possible contributors to an electroencephalogram (EEG) error correlate (Ne/ERN) in a throwing task. We tested the influence of action effect monitoring on the Ne/ERN amplitude in the EEG. We used a task that allows us to restrict movement correction and action effect monitoring and to control the onset of result feedback. We ascribe the Ne/ERN to predictive error processing where a conscious feeling of failure is not a prerequisite. Copyright © 2017 the American Physiological Society.
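
    A hedged sketch of the kind of ERP computation described (averaging epochs time-locked to ball release and locating the negative peak in a post-release window) is shown below with simulated signals; the sampling rate, window and amplitudes are assumptions, not the study's parameters.

    ```python
    # Hedged sketch: average error vs. correct epochs and find the negative peak
    # in a post-release window (Ne/ERN-like component).
    import numpy as np

    fs = 500                                        # sampling rate (Hz), assumed
    t = np.arange(-0.2, 0.8, 1 / fs)                # epoch: -200 ms to +800 ms
    rng = np.random.default_rng(5)

    def simulate_trials(n, ne_amplitude):
        """Noisy epochs with a negative deflection peaking ~250 ms after release."""
        ne = ne_amplitude * np.exp(-((t - 0.25) ** 2) / (2 * 0.04 ** 2))
        return ne + rng.normal(0.0, 4.0, (n, t.size))

    error_epochs   = simulate_trials(60,  -6.0)     # microvolts
    correct_epochs = simulate_trials(240,  0.0)

    diff = error_epochs.mean(axis=0) - correct_epochs.mean(axis=0)  # difference wave

    window = (t >= 0.15) & (t <= 0.35)              # search window after release
    peak_idx = np.argmin(diff[window])
    peak_time_ms = t[window][peak_idx] * 1000
    print(f"Ne/ERN-like peak: {diff[window][peak_idx]:.1f} uV at {peak_time_ms:.0f} ms")
    ```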

  8. A new way of analyzing occlusion 3 dimensionally.

    PubMed

    Hayasaki, Haruaki; Martins, Renato Parsekian; Gandini, Luiz Gonzaga; Saitoh, Issei; Nonaka, Kazuaki

    2005-07-01

    This article introduces a new method for 3-dimensional dental cast analysis, by using a mechanical 3-dimensional digitizer, MicroScribe 3DX (Immersion, San Jose, Calif), and TIGARO software (not yet released, but available from the author at hayasaki@dent.kyushu-u.ac.jp ). By digitizing points on the model, multiple measurements can be made, including tooth dimensions; arch length, width, and perimeter; curve of Spee; overjet and overbite; and anteroposterior discrepancy. The bias of the system can be evaluated by comparing the distance between 2 points as determined by the new system and as measured with digital calipers. Fifteen pairs of models were measured digitally and manually, and the bias was evaluated by comparing the variances of both methods and checking for the type of error obtained by each method. No systematic errors were found. The results showed that the method is accurate, and it can be applied to both clinical practice and research.

  9. Runtime Detection of C-Style Errors in UPC Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirkelbauer, P; Liao, C; Panas, T

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  10. An analysis of the potential for Glen Canyon Dam releases to inundate archaeological sites in the Grand Canyon, Arizona

    USGS Publications Warehouse

    Sondossi, Hoda A.; Fairley, Helen C.

    2014-01-01

    The development of a one-dimensional flow-routing model for the Colorado River between Lees Ferry and Diamond Creek, Arizona in 2008 provided a potentially useful tool for assessing the degree to which varying discharges from Glen Canyon Dam may inundate terrestrial environments and potentially affect resources located within the zone of inundation. Using outputs from the model, a geographic information system analysis was completed to evaluate the degree to which flows from Glen Canyon Dam might inundate archaeological sites located along the Colorado River in the Grand Canyon. The analysis indicates that between 4 and 19 sites could be partially inundated by flows released from Glen Canyon Dam under current (2014) operating guidelines, and as many as 82 archaeological sites may have been inundated to varying degrees by uncontrolled high flows released in June 1983. Additionally, the analysis indicates that more of the sites currently (2014) proposed for active management by the National Park Service are located at low elevations and, therefore, tend to be more susceptible to potential inundation effects than sites not currently (2014) targeted for management actions, although the potential for inundation occurs in both groups of sites. Because of several potential sources of error and uncertainty associated with the model and with limitations of the archaeological data used in this analysis, the results are not unequivocal. These caveats, along with the fact that dam-related impacts can involve more than surface-inundation effects, suggest that the results of this analysis should be used with caution to infer potential effects of Glen Canyon Dam on archaeological sites in the Grand Canyon.

  11. LH-independent testosterone secretion is mediated by the interaction between GNRH2 and its receptor within porcine testes

    USDA-ARS?s Scientific Manuscript database

    Unlike the classical gonadotropin-releasing hormone (GNRH1), the second mammalian isoform (GNRH2) is an ineffective stimulant of gonadotropin release. Species that produce GNRH2 may not maintain a functional GNRH2 receptor (GNRHR2) due to coding errors. A full length GNRHR2 gene has been identified ...

  12. Fricative-stop coarticulation: acoustic and perceptual evidence.

    PubMed

    Repp, B H; Mann, V A

    1982-06-01

    Eight native speakers of American English each produced ten tokens of all possible CV, FCV, and VFCV utterances with V = [a] or [u], F = [s] or [ʃ], and C = [t] or [k]. Acoustic analysis showed that the formant transition onsets following the stop consonant release were systematically influenced by the preceding fricative, although there were large individual differences. In particular, F3 and F4 tended to be higher following [s] than following [ʃ]. The coarticulatory effects were equally large in FCV (e.g., /sta/) and VFCV (e.g., /asda/) utterances; that is, they were not reduced when a syllable boundary intervened between fricative and stop. In a parallel perceptual study, the CV portions of these utterances (with release bursts removed to provoke errors) were presented to listeners for identification of the stop consonant. The pattern of place-of-articulation confusions, too, revealed coarticulatory effects due to the excised fricative context.

  13. Fluxgate magnetorelaxometry: a new approach to study the release properties of hydrogel cylinders and microspheres.

    PubMed

    Wöhl-Bruhn, S; Heim, E; Schwoerer, A; Bertz, A; Harling, S; Menzel, H; Schilling, M; Ludwig, F; Bunjes, H

    2012-10-15

    Hydrogels are under investigation as long-term delivery systems for biomacromolecules as active pharmaceutical ingredients. The release behavior of hydrogels can be tailored during the fabrication process. This study investigates the applicability of fluxgate magnetorelaxometry (MRX) as a tool to characterize the release properties of such long-term drug delivery depots. MRX is based on the use of superparamagnetic core-shell nanoparticles as model substances. The feasibility of using superparamagnetic nanoparticles to study the degradation of and the associated release from hydrogel cylinders and hydrogel microspheres was a major point of interest. Gels prepared from two types of photo-crosslinkable polymers based on modified hydroxyethyl starch, specifically hydroxyethyl starch-hydroxyethyl methacrylate (HES-HEMA) and hydroxyethyl starch-polyethylene glycol methacrylate (HES-P(EG)(6)MA), were analyzed. MRX analysis of the incorporated nanoparticles allowed us to evaluate the influence of different crosslinking conditions during hydrogel production as well as to follow the increase in nanoparticle mobility as a result of hydrogel degradation during release studies. Conventional release studies with fluorescent markers (half-change method) were performed for comparison. MRX with superparamagnetic nanoparticles as model substances is a promising method to analyze pharmaceutically relevant processes such as the degradation of hydrogel drug carrier systems. In contrast to conventional release experiments, MRX allows measurements in closed vials (reducing loss of sample and sampling errors), in opaque media and at low magnetic nanoparticle concentrations. Magnetic markers possess better long-term stability than fluorescent ones and are thus also promising for use in in vivo studies. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. Volume 1; Overviews (subsystem 0)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Cess, Robert D.; Charlock, Thomas P.; Coakley, James A.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 1 provides both summarized and detailed overviews of the CERES Release 1 data analysis system. CERES will produce global shortwave and longwave radiative fluxes at the top of the atmosphere, at the surface, and within the atmosphere by using a combination of a large variety of measurements and models. The CERES processing system includes radiance observations from CERES scanning radiometers, cloud properties derived from coincident satellite imaging radiometers, temperature and humidity fields from meteorological analysis models, and high-temporal-resolution geostationary satellite radiances to account for unobserved times. CERES will provide a continuation of the ERBE record and the lowest-error climatology of consistent cloud properties and radiation fields. CERES will also substantially improve our knowledge of the Earth's surface radiation budget.

  15. Comparison of real-time BTEX flux measurements to reported emission inventories in the Upper Green River Basin, Wyoming.

    NASA Astrophysics Data System (ADS)

    Edie, R.; Robertson, A.; Murphy, S. M.; Soltis, J.; Field, R. A.; Zimmerle, D.; Bell, C.

    2017-12-01

    Other Test Method 33a (OTM-33a) is an EPA-developed near-source measurement technique that utilizes a Gaussian plume inversion to calculate the flux of a point source 20 to 200 meters away. In 2014, the University of Wyoming mobile laboratory—equipped with a Picarro methane analyzer and an Ionicon Proton Transfer Reaction Time of Flight Mass Spectrometer—measured methane and BTEX fluxes from oil and gas operations in the Upper Green River Basin (UGRB), Wyoming. In this study, OTM-33a BTEX flux measurements are compared to BTEX emissions reported by operators in the Wyoming Department of Environmental Quality (WY-DEQ) emission inventory. On average, OTM-33a measured BTEX fluxes are almost twice as high as those reported in the emission inventory. To further constrain errors in the OTM-33a method, methane test releases were performed at the Colorado State University Methane Emissions Test and Evaluation Center (METEC) in June of 2017. The METEC facility contains decommissioned oil and gas equipment arranged in realistic well pad layouts. Each piece of equipment has a multitude of possible emission points. A Gaussian fit of measurement error from these 29 test releases indicates that the median OTM-33a measurement quantified 55% of the metered flow rate. BTEX results from the UGRB campaign and inventory analysis will be presented, along with a discussion of errors associated with the OTM-33a measurement technique. Real-time BTEX and methane mixing ratios at the measurement locations (which show a lack of correlation between VOC and methane sources in 20% of sites sampled) will also be discussed.
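
    The flux step can be illustrated with a hedged sketch of a textbook ground-level point-source Gaussian inversion; the exact OTM-33a formulation, its rotation into the mean-wind frame and its stability-class lookup are not reproduced here, and all inputs are invented.

    ```python
    # Hedged sketch: infer an emission rate from a fitted peak concentration,
    # mean wind speed, and lateral/vertical plume spreads, assuming a ground-level
    # source, centreline receptor and full ground reflection:
    #   C = Q / (pi * u * sigma_y * sigma_z)
    import math

    def gaussian_point_source_rate(c_peak_gm3, wind_ms, sigma_y_m, sigma_z_m):
        """Emission rate Q (g/s) for the simple ground-level Gaussian form above."""
        return c_peak_gm3 * math.pi * wind_ms * sigma_y_m * sigma_z_m

    # Example: a benzene-like enhancement of 12 ug/m^3 measured ~80 m downwind.
    c_peak = 12e-6                  # g/m^3 above background
    u = 3.2                         # m/s mean wind speed
    sigma_y, sigma_z = 8.0, 5.0     # m, plume spreads at the measurement distance (assumed)

    q = gaussian_point_source_rate(c_peak, u, sigma_y, sigma_z)
    print(f"estimated emission rate: {q * 1000:.2f} mg/s ({q * 3600:.1f} g/h)")
    ```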

  16. Sensitivity Analysis of Kinetic Rate-Law Parameters Used to Simulate Long-Term Weathering of ILAW Glass. Erratum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Gary L.

    2016-09-06

    This report refers to or contains Kg values for glasses LAWA44, LAWB45 and LAWC22 that were affected by calculation errors, as identified by Papathanassiu et al. (2011). The corrected Kg values are reported in an erratum included in the revised version of the original report. The revised report can be referenced as follows: Pierce E. M. et al. (2004) Waste Form Release Data Package for the 2005 Integrated Disposal Facility Performance Assessment. PNNL-14805 Rev. 0 Erratum. Pacific Northwest National Laboratory, Richland, WA, USA.

  17. 78 FR 2302 - Self-Regulatory Organizations; C2 Options Exchange, Incorporated; Order Approving a Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-10

    ... Authority To Cancel Orders When a Technical or Systems Issue Occurs and To Describe the Operation of Routing...,\\2\\ a proposed rule change to (i) address the authority of the Exchange to cancel orders (or release... that make it necessary to cancel orders (or release routing-related orders),\\6\\ and to resolve error...

  18. Estimation of Release History of Pollutant Source and Dispersion Coefficient of Aquifer Using Trained ANN Model

    NASA Astrophysics Data System (ADS)

    Srivastava, R.; Ayaz, M.; Jain, A.

    2013-12-01

    Knowledge of the release history of a groundwater pollutant source is critical in the prediction of the future trend of the pollutant movement and in choosing an effective remediation strategy. Moreover, for source sites which have undergone an ownership change, the estimated release history can be utilized for appropriate allocation of the costs of remediation among different parties who may be responsible for the contamination. Estimation of the release history with the help of concentration data is an inverse problem that becomes ill-posed because of the irreversible nature of the dispersion process. Breakthrough curves represent the temporal variation of pollutant concentration at a particular location, and contain significant information about the source and the release history. Several methodologies have been developed to solve the inverse problem of estimating the source and/or porous medium properties using the breakthrough curves as a known input. A common problem in the use of the breakthrough curves for this purpose is that, in most field situations, we have little or no information about the time of measurement of the breakthrough curve with respect to the time when the pollutant source becomes active. We develop an Artificial Neural Network (ANN) model to estimate the release history of a groundwater pollutant source through the use of breakthrough curves. It is assumed that the source location is known but the time dependent contaminant source strength is unknown. This temporal variation of the strength of the pollutant source is the output of the ANN model that is trained using the Levenberg-Marquardt algorithm utilizing synthetically generated breakthrough curves as inputs. A single hidden layer was used in the neural network and, to utilize just sufficient information and reduce the required sampling duration, only the upper half of the curve is used as the input pattern. The second objective of this work was to identify the aquifer parameters. An ANN model was developed to estimate the longitudinal and transverse dispersion coefficients following a philosophy similar to the one used earlier. Performance of the trained ANN model is evaluated for a 3-Dimensional case, first with perfect data and then with erroneous data with an error level up to 10 percent. Since the solution is highly sensitive to the errors in the input data, instead of using the raw data, we smoothen the upper half of the erroneous breakthrough curve by approximating it with a fourth order polynomial which is used as the input pattern for the ANN model. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and, in addition to minimizing the effect of uncertainties in the tail ends of the breakthrough curve, is capable of estimating both the release history and aquifer parameters reasonably well. Results for the case with erroneous data having different error levels demonstrate the practical applicability and robustness of the ANN models. It is observed that with increase in the error level, the correlation coefficient of the training, testing and validation regressions tends to decrease, although the value stays within acceptable limits even for reasonably large error levels.
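
    The preprocessing step described above (smoothing the upper part of an erroneous breakthrough curve with a fourth-order polynomial before it is used as the ANN input pattern) can be sketched as follows; the curve is synthetic, "upper half" is read here as the rising limb up to the observed peak, and the Levenberg-Marquardt network training itself is not reproduced.

    ```python
    # Hedged sketch: smooth the rising limb of a noisy breakthrough curve with a
    # fourth-order polynomial and build a normalised ANN input pattern from it.
    import numpy as np

    rng = np.random.default_rng(6)
    t = np.linspace(0, 100, 101)                        # days
    clean = np.exp(-((t - 60) ** 2) / (2 * 15 ** 2))    # synthetic breakthrough curve (C/C0)
    noisy = clean * (1 + rng.normal(0, 0.10, t.size))   # ~10% multiplicative error

    peak = np.argmax(noisy)
    t_rise, c_rise = t[: peak + 1], noisy[: peak + 1]   # rising limb up to the observed peak

    coef = np.polyfit(t_rise, c_rise, deg=4)            # fourth-order polynomial fit
    smoothed = np.polyval(coef, t_rise)

    ann_input = smoothed / smoothed.max()               # normalised input pattern for the ANN
    print("input pattern length:", ann_input.size)
    print("rms of smoothing residual:",
          round(float(np.sqrt(np.mean((smoothed - c_rise) ** 2))), 4))
    ```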

  19. How much swamp are we talking here?: Propagating uncertainty about the area of coastal wetlands into the U.S. greenhouse gas inventory

    NASA Astrophysics Data System (ADS)

    Holmquist, J. R.; Crooks, S.; Windham-Myers, L.; Megonigal, P.; Weller, D.; Lu, M.; Bernal, B.; Byrd, K. B.; Morris, J. T.; Troxler, T.; McCombs, J.; Herold, N.

    2017-12-01

    Stable coastal wetlands can store substantial amounts of carbon (C) that can be released when they are degraded or eroded. The EPA recently incorporated coastal wetland net-storage and emissions within the Agricultural Forested and Other Land Uses category of the U.S. National Greenhouse Gas Inventory (NGGI). This was a seminal analysis, but its quantification of uncertainty needs improvement. We provide a value-added analysis by estimating that uncertainty, focusing initially on the most basic assumption, the area of coastal wetlands. We considered three sources: uncertainty in the areas of vegetation and salinity subclasses, uncertainty in the areas of changing or stable wetlands, and uncertainty in the inland extent of coastal wetlands. The areas of vegetation and salinity subtypes, as well as of stable and changing wetlands, were estimated from 2006 and 2010 maps derived from Landsat imagery by the Coastal Change Analysis Program (C-CAP). We generated unbiased area estimates and confidence intervals for C-CAP, taking into account mapped area, proportional areas of commission and omission errors, as well as the number of observations. We defined the inland extent of wetlands as all land below the current elevation of twice monthly highest tides. We generated probabilistic inundation maps integrating wetland-specific bias and random error in light detection and ranging (lidar) elevation maps, with the spatially explicit random error in tidal surfaces generated from tide gauges. This initial uncertainty analysis will be extended to calculate total propagated uncertainty in the NGGI by including the uncertainties in the amount of C lost from eroded and degraded wetlands, stored annually in stable wetlands, and emitted in the form of methane by tidal freshwater wetlands.
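
    As an illustration of the kind of design-based area estimation mentioned above, the sketch below applies a stratified (Olofsson-style) estimator to an invented error matrix: mapped areas are combined with the proportions of commission and omission errors and the sample sizes to give an unbiased area estimate and a 95% confidence interval per class. The class names, sample counts, and areas are hypothetical, not C-CAP values.

        import numpy as np

        # Hypothetical error matrix (rows = mapped class, cols = reference class) from an
        # accuracy-assessment sample, plus the mapped area of each class in hectares.
        # Classes: 0 = stable wetland, 1 = changed wetland, 2 = other.
        n = np.array([[80.0,  5.0,  5.0],
                      [ 6.0, 40.0,  4.0],
                      [ 4.0,  6.0, 90.0]])
        mapped_area = np.array([120000.0, 15000.0, 500000.0])   # ha (invented)

        W = mapped_area / mapped_area.sum()          # mapped area proportions
        n_i = n.sum(axis=1)                          # sample size per mapped class
        p = (W[:, None] / n_i[:, None]) * n          # estimated cell proportions

        area_prop = p.sum(axis=0)                    # unbiased proportion of each reference class
        area_est = area_prop * mapped_area.sum()     # unbiased area estimate (ha)

        # Standard error of the estimated proportions (stratified random sampling formula).
        se_prop = np.sqrt(np.sum(W[:, None] ** 2
                                 * (n / n_i[:, None]) * (1.0 - n / n_i[:, None])
                                 / (n_i[:, None] - 1.0), axis=0))
        ci95 = 1.96 * se_prop * mapped_area.sum()

        for k, name in enumerate(["stable wetland", "changed wetland", "other"]):
            print(f"{name}: {area_est[k]:,.0f} ha ± {ci95[k]:,.0f} ha (95% CI)")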

  20. SeqLib: a C++ API for rapid BAM manipulation, sequence alignment and sequence assembly

    PubMed Central

    Wala, Jeremiah; Beroukhim, Rameen

    2017-01-01

    We present SeqLib, a C++ API and command line tool that provides a rapid and user-friendly interface to BAM/SAM/CRAM files, global sequence alignment operations and sequence assembly. Four C libraries perform core operations in SeqLib: HTSlib for BAM access, BWA-MEM and BLAT for sequence alignment and Fermi for error correction and sequence assembly. Benchmarking indicates that SeqLib has lower CPU and memory requirements than leading C++ sequence analysis APIs. We demonstrate an example of how minimal SeqLib code can extract, error-correct and assemble reads from a CRAM file and then align with BWA-MEM. SeqLib also provides additional capabilities, including chromosome-aware interval queries and read plotting. Command line tools are available for performing integrated error correction, micro-assemblies and alignment. Availability and Implementation: SeqLib is available on Linux and OSX for the C++98 standard and later at github.com/walaj/SeqLib. SeqLib is released under the Apache2 license. Additional capabilities for BLAT alignment are available under the BLAT license. Contact: jwala@broadinstitute.org; rameen@broadinstitute.org PMID:28011768

  1. Objective Assessment of Patient Inhaler User Technique Using an Audio-Based Classification Approach.

    PubMed

    Taylor, Terence E; Zigel, Yaniv; Egan, Clarice; Hughes, Fintan; Costello, Richard W; Reilly, Richard B

    2018-02-01

    Many patients make critical user technique errors when using pressurised metered dose inhalers (pMDIs) which reduce the clinical efficacy of respiratory medication. Such critical errors include poor actuation coordination (poor timing of medication release during inhalation) and inhaling too fast (peak inspiratory flow rate over 90 L/min). Here, we present a novel audio-based method that objectively assesses patient pMDI user technique. The Inhaler Compliance Assessment device was employed to record inhaler audio signals from 62 respiratory patients as they used a pMDI with an In-Check Flo-Tone device attached to the inhaler mouthpiece. Using a quadratic discriminant analysis approach, the audio-based method generated a total frame-by-frame accuracy of 88.2% in classifying sound events (actuation, inhalation and exhalation). The audio-based method estimated the peak inspiratory flow rate and volume of inhalations with an accuracy of 88.2% and 83.94% respectively. It was detected that 89% of patients made at least one critical user technique error even after tuition from an expert clinical reviewer. This method provides a more clinically accurate assessment of patient inhaler user technique than standard checklist methods.
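
    The classification stage described above can be illustrated with a quadratic discriminant analysis on per-frame features. The sketch below is a generic stand-in, not the study's pipeline: the "audio features" are synthetic vectors rather than features extracted from real inhaler recordings, and the three classes simply mimic the actuation/inhalation/exhalation labels.

        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        # Synthetic stand-in for per-frame audio features (e.g. MFCC-like vectors).
        rng = np.random.default_rng(1)
        n_per_class, n_features = 300, 12
        means = [np.full(n_features, m) for m in (0.0, 1.5, -1.5)]
        X = np.vstack([rng.normal(mu, 1.0, size=(n_per_class, n_features)) for mu in means])
        y = np.repeat([0, 1, 2], n_per_class)        # 0=actuation, 1=inhalation, 2=exhalation

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
        qda = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
        print("frame-by-frame accuracy:", round(accuracy_score(y_te, qda.predict(X_te)), 3))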

  2. SeqLib: a C++ API for rapid BAM manipulation, sequence alignment and sequence assembly.

    PubMed

    Wala, Jeremiah; Beroukhim, Rameen

    2017-03-01

    We present SeqLib, a C++ API and command line tool that provides a rapid and user-friendly interface to BAM/SAM/CRAM files, global sequence alignment operations and sequence assembly. Four C libraries perform core operations in SeqLib: HTSlib for BAM access, BWA-MEM and BLAT for sequence alignment and Fermi for error correction and sequence assembly. Benchmarking indicates that SeqLib has lower CPU and memory requirements than leading C++ sequence analysis APIs. We demonstrate an example of how minimal SeqLib code can extract, error-correct and assemble reads from a CRAM file and then align with BWA-MEM. SeqLib also provides additional capabilities, including chromosome-aware interval queries and read plotting. Command line tools are available for performing integrated error correction, micro-assemblies and alignment. SeqLib is available on Linux and OSX for the C++98 standard and later at github.com/walaj/SeqLib. SeqLib is released under the Apache2 license. Additional capabilities for BLAT alignment are available under the BLAT license. jwala@broadinstitute.org ; rameen@broadinstitute.org. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  3. Status of CSR RL06 GRACE reprocessing and preliminary results

    NASA Astrophysics Data System (ADS)

    Save, H.

    2017-12-01

    The GRACE project plans to reprocess the GRACE mission data in order to be consistent with the first gravity products released by the GRACE-FO project. The RL06 reprocessing will harmonize the GRACE time series with the first release of GRACE-FO. This paper catalogues the changes in the upcoming RL06 release and discusses the quality improvements as compared to the current RL05 release. The processing and parameterization changes as compared to the current release are also discussed. This paper discusses the evolution of the quality of the GRACE solutions and characterizes the errors over the past few years. The possible challenges associated with connecting the GRACE time series with that from GRACE-FO are also discussed.

  4. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10¹⁶ Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10¹⁷ Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
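
    The hyperparameter-estimation idea can be illustrated on a toy linear problem. The sketch below is not the authors' algorithm: it drops their semi-Gaussian (positivity-preserving) prior and uses a plain Gaussian prior so that the marginal likelihood of the observations has a closed form, which is then maximized over the observation-error and prior-error amplitudes before retrieving the source. The observation operator and all numbers are invented.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)

        # Toy linear observation model y = H x + noise (H is NOT a real dispersion model).
        n_obs, n_src = 40, 15
        H = np.abs(rng.normal(size=(n_obs, n_src)))           # stand-in source-receptor matrix
        x_true = np.abs(rng.normal(0.0, 1.0, n_src))          # non-negative "release rates"
        r_true, m_true = 0.3, 1.0                             # true obs / prior error std devs
        y = H @ x_true + r_true * rng.standard_normal(n_obs)

        def neg_log_marginal_likelihood(log_params):
            """-log p(y | r, m) for a Gaussian prior x ~ N(0, m^2 I) and obs error N(0, r^2 I)."""
            r, m = np.exp(log_params)
            S = m**2 * (H @ H.T) + r**2 * np.eye(n_obs)       # marginal covariance of y
            _, logdet = np.linalg.slogdet(S)
            return 0.5 * (logdet + y @ np.linalg.solve(S, y))

        res = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
        r_hat, m_hat = np.exp(res.x)
        print(f"estimated obs error std: {r_hat:.2f} (true {r_true}), prior std: {m_hat:.2f} (true {m_true})")

        # With the amplitudes fixed, the posterior mean source is the usual Tikhonov solution.
        x_hat = np.linalg.solve(H.T @ H / r_hat**2 + np.eye(n_src) / m_hat**2, H.T @ y / r_hat**2)
        print("retrieved total release:", round(float(x_hat.sum()), 2),
              " true:", round(float(x_true.sum()), 2))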

  5. Refining Field Measurements of Methane Flux Rates from Abandoned Oil and Gas Wells

    NASA Astrophysics Data System (ADS)

    Lagron, C. S.; Kang, M.; Riqueros, N. S.; Jackson, R. B.

    2015-12-01

    Recent studies in Pennsylvania demonstrate the potential for significant methane emissions from abandoned oil and gas wells. A subset of tested wells was high emitting, with methane flux rates up to seven orders of magnitude greater than natural fluxes (up to 10⁵ mg CH4/hour, or about 2.5 LPM). These wells contribute disproportionately to the total methane emissions from abandoned oil and gas wells. The principles guiding the chamber design have been developed for lower flux rates, typically found in natural environments, and chamber design modifications may reduce uncertainty in flux rates associated with high-emitting wells. Kang et al. estimate errors of a factor of two in measured values based on previous studies. We conduct controlled releases of methane to refine error estimates and improve chamber design with a focus on high-emitters. Controlled releases of methane are conducted at 0.05 LPM, 0.50 LPM, 1.0 LPM, 2.0 LPM, 3.0 LPM, and 5.0 LPM, and at two chamber dimensions typically used in field measurement studies of abandoned wells. As most sources of error tabulated by Kang et al. tend to bias the results toward underreporting of methane emissions, a flux-targeted chamber design modification can reduce error margins and/or provide grounds for a potential upward revision of emission estimates.
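
    For context, a static-chamber flux is typically recovered from the rate of concentration rise inside the chamber. The sketch below is a generic illustration, not the protocol of the study above: it fits a line to an invented CH4 time series, converts the slope from ppm/s to mg/m³/s, and scales by chamber volume over footprint area. The chamber dimensions and readings are hypothetical.

        import numpy as np

        # Hypothetical static-chamber measurement: CH4 mole fraction rising over a well.
        rng = np.random.default_rng(3)
        t_s = np.arange(0, 300, 30)                            # seconds since chamber closure
        c_ppm = 2.0 + 0.8 * t_s + rng.normal(0.0, 5.0, t_s.size)

        chamber_volume_m3 = 0.030
        chamber_area_m2 = 0.071

        # Slope of the mole-fraction rise (ppm/s) from a linear fit.
        slope_ppm_per_s, _ = np.polyfit(t_s, c_ppm, 1)

        # 1 ppm CH4 is roughly 0.667 mg/m^3 at 20 degC and 1 atm (16.04 g/mol over 24.055 L/mol).
        mg_per_m3_per_ppm = 16.04 / 24.055
        rate_mg_per_m3_s = slope_ppm_per_s * mg_per_m3_per_ppm

        # Flux = concentration rise rate * chamber volume / footprint area, converted to per hour.
        flux_mg_per_m2_hr = rate_mg_per_m3_s * chamber_volume_m3 / chamber_area_m2 * 3600.0
        print(f"estimated flux: {flux_mg_per_m2_hr:.1f} mg CH4 m^-2 hr^-1")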

  6. Global carbon - nitrogen - phosphorus cycle interactions: A key to solving the atmospheric CO2 balance problem?

    NASA Technical Reports Server (NTRS)

    Peterson, B. J.; Mellillo, J. M.

    1984-01-01

    If all reported biotic sinks of atmospheric CO2 were added, a value of about 0.4 Gt C/yr would be found. For each category, a very high (non-conservative) estimate was used. This still does not provide a sufficient basis for achieving a balance between the sources and sinks of atmospheric CO2. The bulk of the discrepancy lies in a combination of errors in the major terms, the greatest being in the net biotic release and ocean uptake segments, but smaller errors or biases may exist in calculations of the rate of atmospheric CO2 increase and total fossil fuel use as well. The reason why biotic sinks are not capable of balancing the CO2 increase via nutrient-matching in the short term is apparent from a comparison of the stoichiometry of the sources and sinks. The burning of fossil fuels and forest biomass releases much more CO2-carbon than is sequestered as organic carbon.

  7. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

    ... magnitude reduction in estimator error required to make solving the exact optimal design problem tractable. Instead of using a naive... for designing a sequence of experiments uses suboptimal approaches: batch design that has no feedback, or greedy (myopic) design that optimally... Equation 1 is difficult to solve directly, but can be expressed in an equivalent form using the principle of dynamic programming

  8. Keeping patients safe: Institute of Medicine looks at transforming nurses' work environment.

    PubMed

    2004-01-01

    In November 1999, the Institute of Medicine (IOM) released To Err Is Human: Building a Safer Health System, which brought to the public's attention the serious--and sometimes deadly--dangers posed by medical errors occurring in healthcare organizations. Exactly 4 years later, an IOM committee released a new report that focuses on the need to reinforce patient safety defenses in the nurses' working environments.

  9. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
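
    The benefit of a full observation covariance can be shown on a toy regression. The sketch below is an illustration, not the CSR processing chain: it generates observations with AR(1)-correlated (colored) noise, then compares ordinary least squares, which assumes independent errors, against generalized least squares that whitens with the full covariance, whose formal errors are the ones consistent with the correlated noise. The design matrix, correlation length, and noise level are invented.

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy estimation problem: recover parameters x from observations with correlated errors.
        n_obs, n_par = 200, 3
        A = rng.normal(size=(n_obs, n_par))                   # design (partials) matrix
        x_true = np.array([1.0, -0.5, 0.25])

        # Long-period (AR(1)-like) error covariance to mimic colored noise.
        rho, sigma = 0.9, 0.02
        idx = np.arange(n_obs)
        R = sigma**2 * rho ** np.abs(idx[:, None] - idx[None, :])
        noise = np.linalg.cholesky(R) @ rng.standard_normal(n_obs)
        y = A @ x_true + noise

        # Ordinary least squares: assumes independent errors (diagonal covariance).
        x_ols = np.linalg.solve(A.T @ A, A.T @ y)
        cov_ols = sigma**2 * np.linalg.inv(A.T @ A)           # formal errors under the wrong assumption

        # Generalized least squares: whiten with the Cholesky factor of the full covariance.
        L = np.linalg.cholesky(R)
        Aw, yw = np.linalg.solve(L, A), np.linalg.solve(L, y)
        x_gls = np.linalg.solve(Aw.T @ Aw, Aw.T @ yw)
        cov_gls = np.linalg.inv(Aw.T @ Aw)                    # formal errors consistent with colored noise

        print("OLS estimate:", x_ols.round(3), " formal sigma:", np.sqrt(np.diag(cov_ols)).round(4))
        print("GLS estimate:", x_gls.round(3), " formal sigma:", np.sqrt(np.diag(cov_gls)).round(4))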

  10. HANSF 1.3 Users Manual FAI/98-40-R2 Hanford Spent Nuclear Fuel (SNF) Safety Analysis Model [SEC 1 and 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DUNCAN, D.R.

    The HANSF analysis tool is an integrated model considering phenomena inside a multi-canister overpack (MCO) spent nuclear fuel container such as fuel oxidation, convective and radiative heat transfer, and the potential for fission product release. This manual reflects the HANSF version 1.3.2, a revised version of 1.3.1. HANSF 1.3.2 was written to correct minor errors and to allow modeling of condensate flow on the MCO inner surface. HANSF 1.3.2 is intended for use on personal computers such as IBM-compatible machines with Intel processors running under Lahey TI or Digital Visual FORTRAN, Version 6.0, but this does not preclude operation in other environments.

  11. MultiNest: Efficient and Robust Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Feroz, F.; Hobson, M. P.; Bridges, M.

    2011-09-01

    We present further development and the first public release of our multimodal nested sampling algorithm, called MultiNest. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson (2008), which itself significantly outperformed existing MCMC techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MultiNest algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla ΛCDM model to include spatial curvature and a varying equation of state for dark energy. The MultiNest software is fully parallelized using MPI and includes an interface to CosmoMC. It will also be released as part of the SuperBayeS package, for the analysis of supersymmetric theories of particle physics, at this http URL.
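
    The core of nested sampling, which underlies MultiNest, is compact enough to sketch. The toy implementation below is purely illustrative and omits everything that makes MultiNest efficient (ellipsoidal decomposition of the live-point set, mode separation, and the reported uncertainty on ln Z, roughly sqrt(H/N_live)); it replaces constrained sampling with naive rejection from the prior. The prior box, likelihood, and tuning constants are invented, and the analytic evidence for this setup is about 0.01.

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy problem: uniform prior on the square [-5, 5]^2 and a unit-width Gaussian
        # likelihood at the origin; the analytic evidence is ~1/100 (prior area = 100).
        def log_like(theta):
            return -0.5 * np.sum(theta ** 2) - np.log(2.0 * np.pi)

        n_live, n_iter = 300, 1800
        live = rng.uniform(-5.0, 5.0, size=(n_live, 2))
        live_logl = np.array([log_like(p) for p in live])

        evidence, x_prev = 0.0, 1.0
        for i in range(1, n_iter + 1):
            worst = int(np.argmin(live_logl))
            x_i = np.exp(-i / n_live)                     # expected remaining prior volume
            evidence += np.exp(live_logl[worst]) * (x_prev - x_i)
            x_prev = x_i
            # Replace the worst point by a prior draw above the current likelihood bound
            # (naive rejection sampling; MultiNest uses ellipsoidal decomposition instead).
            while True:
                cand = rng.uniform(-5.0, 5.0, size=2)
                if log_like(cand) > live_logl[worst]:
                    live[worst], live_logl[worst] = cand, log_like(cand)
                    break

        evidence += np.exp(live_logl).mean() * x_prev     # contribution of the final live points
        print(f"nested-sampling evidence: {evidence:.4f}   analytic: {0.01:.4f}")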

  12. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

    Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled parameters of released mass are not dependent on prior information, which improves the identification efficiency. A hypothetical case study with a different number of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to a problem with model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA. Copyright © 2018 Elsevier Ltd. All rights reserved.
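
    A minimal version of the optimization step can be sketched with SciPy's differential evolution. The code below is not the LR-BPM itself: it stands in a simple analytic 1-D advection-dispersion solution for the river transport model, generates noisy breakthrough curves at two hypothetical gauging sections, and searches for the source location, release time, and released mass that minimize the misfit. All hydraulic parameters and observations are invented.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Cross-section-averaged 1-D advection-dispersion solution for an instantaneous
        # point release; a simple stand-in for the transport model used in the paper.
        U, D, A = 0.4, 5.0, 50.0            # velocity (m/s), dispersion (m^2/s), cross-section (m^2)

        def concentration(x_obs, t_obs, x0, t0, mass):
            tau = np.asarray(t_obs, dtype=float) - t0
            c = np.zeros_like(tau)
            ok = tau > 0
            c[ok] = (mass / (A * np.sqrt(4.0 * np.pi * D * tau[ok]))
                     * np.exp(-(x_obs - x0 - U * tau[ok]) ** 2 / (4.0 * D * tau[ok])))
            return c

        # Synthetic "observations" at two hypothetical gauging sections, with 5% noise.
        rng = np.random.default_rng(6)
        true = dict(x0=1000.0, t0=600.0, mass=8.0e4)          # location (m), release time (s), mass (g)
        t = np.arange(0.0, 3.0e4, 300.0)
        obs = [(x_g, concentration(x_g, t, **true) * (1.0 + 0.05 * rng.standard_normal(t.size)))
               for x_g in (6000.0, 9000.0)]

        def misfit(params):
            x0, t0, mass = params
            return sum(np.sum((concentration(x_g, t, x0, t0, mass) - c_obs) ** 2)
                       for x_g, c_obs in obs)

        bounds = [(0.0, 5000.0), (0.0, 5000.0), (1.0e3, 5.0e5)]   # location, release time, mass
        result = differential_evolution(misfit, bounds, seed=0)
        print("identified (x0, t0, M):", np.round(result.x, 1), "  true:", [1000.0, 600.0, 8.0e4])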

  13. Experimental design for the formulation and optimization of novel cross-linked oilispheres developed for in vitro site-specific release of Mentha piperita oil.

    PubMed

    Sibanda, Wilbert; Pillay, Viness; Danckwerts, Michael P; Viljoen, Alvaro M; van Vuuren, Sandy; Khan, Riaz A

    2004-03-12

    A Plackett-Burman design was employed to develop and optimize a novel crosslinked calcium-aluminum-alginate-pectinate oilisphere complex as a potential system for the in vitro site-specific release of Mentha piperita, an essential oil used for the treatment of irritable bowel syndrome. The physicochemical and textural properties (dependent variables) of this complex were found to be highly sensitive to changes in the concentration of the polymers (0%-1.5% wt/vol), crosslinkers (0%-4% wt/vol), and crosslinking reaction times (0.5-6 hours) (independent variables). Particle size analysis indicated both unimodal and bimodal populations with the highest frequency of 2 mm oilispheres. Oil encapsulation ranged from 6 to 35 mg/100 mg oilispheres. Gravimetric changes of the crosslinked matrix indicated significant ion sequestration and loss in an exponential manner, while matrix erosion followed Higuchi's cube root law. Among the various measured responses, the total fracture energy was the most suitable optimization objective (R² = 0.88, Durbin-Watson Index = 1.21%, Coefficient of Variation (CV) = 33.21%). The Lagrangian technique produced no significant differences (P > .05) between the experimental and predicted total fracture energy values (0.0150 vs 0.0107 J). An Artificial Neural Network, used as an alternative predictive tool for the total fracture energy, was highly accurate (final mean square error of optimal network epoch approximately 0.02). Fused-coated optimized oilispheres produced a 4-hour lag phase followed by zero-order kinetics (n > 0.99), whereby analysis of release data indicated that diffusion (Fickian constant k1 = 0.74 vs relaxation constant k2 = 0.02) was the predominant release mechanism.

  14. Methodology for prediction and estimation of consequences of possible atmospheric releases of hazardous matter: "Kursk" submarine study

    NASA Astrophysics Data System (ADS)

    Baklanov, A.; Mahura, A.; Sørensen, J. H.

    2003-06-01

    There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested, and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second during the operation stage. The suggested methodology is applied to an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. The temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of ¹³⁷Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators allowed identification of the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release (e.g. 1 Bq) are performed. The analysis showed that possible deposition fractions of 10⁻¹¹ (Bq/m²) over the Kola Peninsula, and 10⁻¹²-10⁻¹³ (Bq/m²) for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.

  15. Methodology for prediction and estimation of consequences of possible atmospheric releases of hazardous matter: "Kursk" submarine study

    NASA Astrophysics Data System (ADS)

    Baklanov, A.; Mahura, A.; Sørensen, J. H.

    2003-03-01

    There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested, and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second during the operation stage. The suggested methodology is applied to an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. The temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of ¹³⁷Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators allowed identification of the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release are performed. The analysis showed that possible deposition fractions of 10⁻¹¹ (Bq/m²) over the Kola Peninsula, and 10⁻¹²-10⁻¹³ (Bq/m²) for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.

  16. Extended release dosage form of glipizide: development and validation of a level A in vitro-in vivo correlation.

    PubMed

    Ghosh, Animesh; Bhaumik, Uttam Kumar; Bose, Anirbandeep; Mandal, Uttam; Gowda, Veeran; Chatterjee, Bappaditya; Chakrabarty, Uday Sankar; Pal, Tapan Kumar

    2008-10-01

    Defining a quantitative and reliable relationship between in vitro drug release and in vivo absorption is highly desired for rational development, optimization, and evaluation of controlled-release dosage forms and manufacturing processes. During the development of a once-daily extended-release (ER) tablet of glipizide, a predictive in vitro drug release method was designed and statistically evaluated using three formulations with varying release rates. In order to establish internally and externally validated level A in vitro-in vivo correlation (IVIVC), a total of three different ER formulations of glipizide were used to evaluate a linear IVIVC model based on the in vitro test method. For internal validation, a single-dose four-way crossover study (n=6) was performed using fast-, moderate-, and slow-releasing ER formulations and an immediate-release (IR) formulation of glipizide as reference. In vitro release rate data were obtained for each formulation using the United States Pharmacopeia (USP) apparatus II, paddle stirrer at 50 and 100 rev min⁻¹ in 0.1 M hydrochloric acid (HCl) and pH 6.8 phosphate buffer. The f₂ metric (similarity factor) was used to analyze the dissolution data. The formulations were compared using area under the plasma concentration-time curve, AUC(0-∞), time to reach peak plasma concentration, Tmax, and peak plasma concentration, Cmax, while correlation was determined between in vitro release and in vivo absorption. A linear correlation model was developed using percent absorbed data versus percent dissolved from the three formulations. Predicted glipizide concentrations were obtained by convolution of the in vivo absorption rates. Prediction errors were estimated for Cmax and AUC(0-∞) to determine the validity of the correlation. Apparatus II, pH 6.8 at 100 rev min⁻¹ was found to be the most discriminating dissolution method. Linear regression analysis of the mean percentage of dose absorbed versus the mean percentage of in vitro release resulted in a significant correlation (r² ≥ 0.9) for the three formulations.
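
    The f₂ similarity factor mentioned above has a simple closed form, f₂ = 50·log₁₀(100 / √(1 + MSD)), where MSD is the mean squared difference between the reference and test percent-dissolved values at matched time points; f₂ ≥ 50 is the conventional similarity criterion. A short illustrative computation with invented dissolution profiles is sketched below.

        import numpy as np

        def f2_similarity(reference, test):
            """Similarity factor f2 between two dissolution profiles (percent dissolved
            at the same time points); f2 >= 50 is conventionally taken as 'similar'."""
            r, t = np.asarray(reference, float), np.asarray(test, float)
            msd = np.mean((r - t) ** 2)                  # mean squared difference
            return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

        # Invented example profiles (% released at 1, 2, 4, 8, 12, 16, 20, 24 h).
        fast = [18, 32, 55, 78, 90, 96, 99, 100]
        slow = [10, 20, 38, 60, 75, 86, 93, 97]
        print("f2(fast vs slow) =", round(f2_similarity(fast, slow), 1))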

  17. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    NASA Astrophysics Data System (ADS)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data is confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of Google Earth platform and Remote Sensing imagery; and (c) removing the positive biases of the raised segment in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of four different DEMs and favorable results have been obtained on the corrected DEM.

  18. Dissolution assessment of allopurinol immediate release tablets by near infrared spectroscopy.

    PubMed

    Smetiško, Jelena; Miljanić, Snežana

    2017-10-25

    The purpose of this study was to develop a NIR spectroscopic method for assessment of drug dissolution from allopurinol immediate release tablets. Thirty-three different batches of allopurinol immediate release tablets containing a constant amount of the active ingredient but varying in excipient content and physical properties were introduced into a PLS calibration model. Correlating allopurinol dissolution reference values measured by the routinely used UV/Vis method, with the data extracted from the NIR spectra, values of correlation coefficient, bias, slope, residual prediction determination and root mean square error of prediction (0.9632, 0.328%, 1.001, 3.58, 3.75%) were evaluated. The obtained values implied that NIR diffuse reflectance spectroscopy could serve as a faster and simpler alternative to the conventional dissolution procedure, even for tablets with a very fast dissolution rate (>85% in 15 minutes). Apart from the possibility of prediction of the allopurinol dissolution rate, the other multivariate technique, PCA, provided additional data on the non-chemical characteristics of the product, which could not be obtained from the reference dissolution values. Analysis of an independent set of samples confirmed that the difference between the UV/Vis reference method and the proposed NIR method was not significant. According to the presented results, the proposed NIR method may be suitable for practical application in routine analysis and for continuously monitoring the product's chemical and physical properties responsible for expected quality. Copyright © 2017 Elsevier B.V. All rights reserved.
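
    A bare-bones version of such a calibration can be put together with a PLS regression, as sketched below. The spectra here are synthetic stand-ins (the real NIR spectra and UV/Vis reference values are not reproduced), so only the workflow (calibrate, predict, and report RMSEP, bias, and correlation) is meaningful.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for NIR spectra whose shape is loosely driven by a
        # "dissolution" value; real spectra and reference values are not shown.
        rng = np.random.default_rng(7)
        n_samples, n_wavelengths = 120, 500
        dissolution = rng.uniform(80.0, 100.0, n_samples)          # % released at 15 min
        basis = np.sin(np.linspace(0.0, 3.0 * np.pi, n_wavelengths))
        X = (dissolution[:, None] * basis[None, :] / 100.0
             + 0.02 * rng.standard_normal((n_samples, n_wavelengths)))

        X_cal, X_val, y_cal, y_val = train_test_split(X, dissolution, test_size=0.3, random_state=0)
        pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
        y_pred = pls.predict(X_val).ravel()

        rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))            # root mean square error of prediction
        bias = np.mean(y_pred - y_val)
        r = np.corrcoef(y_val, y_pred)[0, 1]
        print(f"RMSEP = {rmsep:.2f} %, bias = {bias:.3f} %, r = {r:.4f}")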

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kicker, Dwayne Curtis; Herrick, Courtney G; Zeitler, Todd

    The numerical code DRSPALL (from direct release spallings) is written to calculate the volume of Waste Isolation Pilot Plant solid waste subject to material failure and transport to the surface (i.e., spallings) as a result of a hypothetical future inadvertent drilling intrusion into the repository. An error in the implementation of the DRSPALL finite difference equations was discovered and documented in a software problem report in accordance with the quality assurance procedure for software requirements. This paper describes the corrections to DRSPALL and documents the impact of the new spallings data from the modified DRSPALL on previous performance assessment calculations. Updated performance assessments result in more simulations with spallings, which generally translates to an increase in spallings releases to the accessible environment. Total normalized radionuclide releases using the modified DRSPALL data were determined by forming the summation of releases across each potential release pathway, namely borehole cuttings and cavings releases, spallings releases, direct brine releases, and transport releases. Because spallings releases are not a major contributor to the total releases, the updated performance assessment calculations of overall mean complementary cumulative distribution functions for total releases are virtually unchanged. Therefore, the corrections to the spallings volume calculation did not impact Waste Isolation Pilot Plant performance assessment calculation results.

  20. CARE 3, Version 4 enhancements

    NASA Technical Reports Server (NTRS)

    Bryant, L. A.; Stiffler, J. J.

    1985-01-01

    The enhancements and error corrections to CARE III Version 4 are listed. All changes to Version 4 with the exception of the internal redundancy model were implemented in Version 5. Version 4 is the first public release version for execution on the CDC Cyber 170 series computers. Version 5 is the second release version and it is written in ANSI standard FORTRAN 77 for execution on the DEC VAX 11/700 series computers and many others.

  1. Bacterial dissolution of fluorapatite as a possible source of elevated dissolved phosphate in the environment

    NASA Astrophysics Data System (ADS)

    Feng, Mu-hua; Ngwenya, Bryne T.; Wang, Lin; Li, Wenchao; Olive, Valerie; Ellam, Robert M.

    2011-10-01

    In order to understand the contribution of geogenic phosphorus to lake eutrophication, we have investigated the rate and extent of fluorapatite dissolution in the presence of two common soil bacteria (Pantoea agglomerans and Bacillus megaterium) at T = 25 °C for 26 days. The release of calcium (Ca), phosphorus (P), and rare earth elements (REE) under biotic and abiotic conditions was compared to investigate the effect of microorganisms on apatite dissolution. The release of Ca and P was enhanced under the influence of bacteria. Apatite dissolution rates obtained from solution Ca concentration in the biotic reactors increased above error compared with the abiotic controls. Chemical analysis of biomass showed that bacteria scavenged Ca, P, and REE during their growth, which lowered their fluid concentrations, leading to apparently lower release rates. The temporal evolution of pH in the reactors reflected the balance of apatite weathering, solution reactions, bacterial metabolism, and potentially secondary precipitation, which was reflected in the variety of REE patterns in the biotic and abiotic reactors. Light rare earth elements (LREE) were preferentially adsorbed to cell surfaces, whereas heavy rare earth elements (HREE) were retained in the fluid phase. Decoupling of LREE and HREE could possibly be due to preferential release of HREE from apatite or selective secondary precipitation of LREE-enriched phosphates, especially in the presence of bacteria. When corrected for intracellular concentrations, both biotic reactors showed high P and REE release compared with the abiotic control. We speculate that lack of this correction explains the conflicting findings about the role of bacteria in mineral weathering rates. The observation that bacteria enhance the release rates of P and REE from apatite could account for some of the phosphorus burden and metal pollution in aquatic environments.

  2. Improvements and Advances to the Cross-Calibrated Multi-Platform (CCMP) Ocean Vector Wind Analysis (V2.0 release)

    NASA Astrophysics Data System (ADS)

    Scott, J. P.; Wentz, F. J.; Hoffman, R. N.; Atlas, R. M.

    2016-02-01

    Ocean vector wind is a valuable climate data record (CDR) useful in observing and monitoring changes in climate and air-sea interactions. Ocean surface wind stress influences such processes as heat, moisture, and momentum fluxes between the atmosphere and ocean, driving ocean currents and forcing ocean circulation. The Cross-Calibrated Multi-Platform (CCMP) ocean vector wind analysis is a quarter-degree, six-hourly global ocean wind analysis product created using the variational analysis method (VAM) [Atlas et al., 1996; Hoffman et al., 2003]. The CCMP V1.1 wind product is a highly-esteemed, widely-used data set containing the longest gap-free record of satellite-based ocean vector wind data (July 1987 to June 2012). CCMP V1.1 was considered a "first-look" data set that used the most-timely, albeit preliminary, releases of satellite, in situ, and modeled ECMWF-Operational wind background fields. The authors have been working with the original producers of CCMP V1.1 to create an updated, improved, and consistently-reprocessed CCMP V2.0 ocean vector wind analysis data set. With Remote Sensing Systems (RSS) having recently updated all passive microwave satellite instrument calibrations and retrievals to the RSS Version-7 RTM standard, the reprocessing of the CCMP data set into a higher-quality CDR using inter-calibrated satellite inputs became feasible. In addition to the use of SSM/I, SSMIS, TRMM TMI, QuikSCAT, AMSRE, and WindSat instruments, AMSR2, GMI, and ASCAT have been also included in the CCMP V2.0 data set release, which has now been extended to the beginning of 2015. Additionally, the background field has been updated to use six-hourly, quarter-degree ERA-Interim wind vector inputs, and the quality-checks on the in situ data have been carefully reviewed and improved. The goal of the release of the CCMP V2.0 ocean wind vector analysis product is to serve as a merged ocean wind vector data set for climate studies. Diligent effort has been made by the authors to minimize systematic and spurious sources of error. The authors will present a complete discussion of upgrades made to the CCMP V2.0 data set, as well as present validation work that has been completed on the CCMP V2.0 wind analysis product.

  3. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP)Observations: Beam Maps and Window Functions

    NASA Technical Reports Server (NTRS)

    Hill, R.S.; Weiland, J.L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C.L.; Halpern, M.; Kogut, A.; Page, L.; et al.

    2008-01-01

    Cosmology and other scientific results from the WMAP mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of approximately 2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of approximately 1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of approximately 2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are approximately 0.5%.

  4. Antenna Deployment for the Localization of Partial Discharges in Open-Air Substations

    PubMed Central

    Robles, Guillermo; Fresno, José Manuel; Sánchez-Fernández, Matilde; Martínez-Tarifa, Juan Manuel

    2016-01-01

    Partial discharges are ionization processes inside or on the surface of dielectrics that can unveil insulation problems in electrical equipment. The charge accumulated is released under certain environmental and voltage conditions attacking the insulation both physically and chemically. The final consequence of a continuous occurrence of these events is the breakdown of the dielectric. The electron avalanche provokes a derivative of the electric field with respect to time, creating an electromagnetic impulse that can be detected with antennas. The localization of the source helps in the identification of the piece of equipment that has to be decommissioned. This can be done by deploying antennas and calculating the time difference of arrival (TDOA) of the electromagnetic pulses. However, small errors in this parameter can lead to great displacements of the calculated position of the source. Usually, four antennas are used to find the source but the array geometry has to be correctly deployed to have minimal errors in the localization. This paper demonstrates, by an analysis based on simulation and also experimentally, that the most common layouts are not always the best options and proposes a simple antenna layout to reduce the systematic error in the TDOA calculation due to the positions of the antennas in the array. PMID:27092501
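
    To make the TDOA localization step concrete, the sketch below sets up a hypothetical four-antenna layout, simulates time differences of arrival from an assumed partial-discharge position with a small timing error, and recovers the source by non-linear least squares. It is a simplified 2-D illustration, not the layout analysis performed in the paper; positions, timing noise, and the starting guess are all invented.

        import numpy as np
        from scipy.optimize import least_squares

        C = 3.0e8                                            # propagation speed (m/s)

        # Hypothetical antenna layout (metres) and a partial-discharge source position.
        antennas = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
        source_true = np.array([3.2, 1.7])

        def tdoa(source, sensors):
            """Time difference of arrival at each sensor relative to sensor 0."""
            d = np.linalg.norm(sensors - source, axis=1)
            return (d[1:] - d[0]) / C

        # Simulated measured TDOAs with ~50 ps timing error.
        rng = np.random.default_rng(8)
        measured = tdoa(source_true, antennas) + 50e-12 * rng.standard_normal(3)

        # Non-linear least squares on the TDOA residuals to localise the source.
        fit = least_squares(lambda s: tdoa(s, antennas) - measured, x0=np.array([2.5, 2.5]))
        print("estimated source:", fit.x.round(3), " true:", source_true)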

  5. Reduction of multi-dimensional laboratory data to a two-dimensional plot: a novel technique for the identification of laboratory error.

    PubMed

    Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A

    2007-01-01

    The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.

  6. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released in the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such critical context where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.

  7. A test of Gaia Data Release 1 parallaxes: implications for the local distance scale

    NASA Astrophysics Data System (ADS)

    Casertano, Stefano; Riess, Adam G.; Bucciarelli, Beatrice; Lattanzi, Mario G.

    2017-03-01

    Aims: We present a comparison of Gaia Data Release 1 (DR1) parallaxes with photometric parallaxes for a sample of 212 Galactic Cepheids at a median distance of 2 kpc, and explore their implications on the distance scale and the local value of the Hubble constant H0. Methods: The Cepheid distances are estimated from a recent calibration of the near-infrared period-luminosity (P-L) relation. The comparison is carried out in parallax space, where the DR1 parallax errors, with a median value of half the median parallax, are expected to be well-behaved. Results: With the exception of one outlier, the DR1 parallaxes are in very good global agreement with the predictions from a well-established P-L relation, with a possible indication that the published errors may be conservatively overestimated by about 20%. This confirms that the quality of DR1 parallaxes for the Cepheids in our sample is well within their stated errors. We find that the parallaxes of 9 Cepheids brighter than G = 6 may be systematically underestimated. If interpreted as an independent calibration of the Cepheid luminosities and assumed to be otherwise free of systematic uncertainties, DR1 parallaxes are in very good agreement (within 0.3%) with the current estimate of the local Hubble constant, and in conflict at the level of 2.5σ (3.5σ if the errors are scaled) with the value inferred from Planck cosmic microwave background data used in conjunction with ΛCDM. We also test for a zeropoint error in Gaia parallaxes and find none to a precision of 20 μas. We caution however that with this early release, the complete systematic properties of the measurements may not be fully understood at the statistical level of the Cepheid sample mean, a level an order of magnitude below the individual uncertainties. The early results from DR1 demonstrate again the enormous impact that the full mission will likely have on fundamental questions in astrophysics and cosmology.

  8. A U.S. Partnership with India and Poland to Track Acute Chemical Releases to Serve Public Health

    PubMed Central

    Ruckart, Perri Zeitz; Orr, Maureen; Pałaszewska-Tkacz, Anna; Dewan, Aruna; Kapil, Vikas

    2009-01-01

    We describe a collaborative effort between the U.S., India, and Poland to track acute chemical releases during 2005–2007. In all three countries, fixed facility events were more common than transportation-related events; manufacturing and transportation/warehousing were the most frequently involved industries; and equipment failure and human error were the primary contributing factors. The most commonly released nonpetroleum substances were ammonia (India), carbon monoxide (U.S.) and mercury (Poland). More events in India (54%) resulted in victims compared with Poland (15%) and the U.S. (9%). The pilot program showed it is possible to successfully conduct international surveillance of acute hazardous substances releases with careful interpretation of the findings. PMID:19826549

  9. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
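
    For a balanced one-factor random-effects model, the estimator referred to above reduces to two mean squares. The sketch below is a generic illustration, not the authors' code: it simulates setup errors for a group of patients, computes the between-patient and within-patient mean squares, and converts them into the systematic (Sigma) and random (sigma) components. The patient numbers and the 'true' standard deviations are invented.

        import numpy as np

        # Simulated balanced setup-error data: m patients, n fractions each (values in mm).
        rng = np.random.default_rng(9)
        m_patients, n_fractions = 20, 5
        sigma_systematic, sigma_random = 2.0, 3.0            # "true" values used to simulate
        patient_means = sigma_systematic * rng.standard_normal(m_patients)
        errors = patient_means[:, None] + sigma_random * rng.standard_normal((m_patients, n_fractions))

        # One-factor random-effects ANOVA (balanced design).
        grand_mean = errors.mean()
        row_means = errors.mean(axis=1)
        ms_between = n_fractions * np.sum((row_means - grand_mean) ** 2) / (m_patients - 1)
        ms_within = np.sum((errors - row_means[:, None]) ** 2) / (m_patients * (n_fractions - 1))

        random_sd = np.sqrt(ms_within)                                           # random component
        systematic_sd = np.sqrt(max(ms_between - ms_within, 0.0) / n_fractions)  # systematic component

        print(f"estimated Sigma = {systematic_sd:.2f} mm (true 2.0), sigma = {random_sd:.2f} mm (true 3.0)")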

  10. Learning-based Wind Estimation using Distant Soundings for Unguided Aerial Delivery

    NASA Astrophysics Data System (ADS)

    Plyler, M.; Cahoy, K.; Angermueller, K.; Chen, D.; Markuzon, N.

    2016-12-01

    Delivering unguided, parachuted payloads from aircraft requires accurate knowledge of the wind field inside an operational zone. Usually, a dropsonde released from the aircraft over the drop zone gives a more accurate wind estimate than a forecast. Mission objectives occasionally demand releasing the dropsonde away from the drop zone, but still require accuracy and precision. Barnes interpolation and many other assimilation methods do poorly when the forecast error is inconsistent in a forecast grid. A machine learning approach can better leverage non-linear relations between different weather patterns and thus provide a better wind estimate at the target drop zone when using data collected up to 100 km away. This study uses the 13 km resolution Rapid Refresh (RAP) dataset available through NOAA and subsamples to an area around Yuma, AZ and up to approximately 10km AMSL. RAP forecast grids are updated with simulated dropsondes taken from analysis (historical weather maps). We train models using different data mining and machine learning techniques, most notably boosted regression trees, that can accurately assimilate the distant dropsonde. The model takes a forecast grid and simulated remote dropsonde data as input and produces an estimate of the wind stick over the drop zone. Using ballistic winds as a defining metric, we show our data driven approach does better than Barnes interpolation under some conditions, most notably when the forecast error is different between the two locations, on test data previously unseen by the model. We study and evaluate the model's performance depending on the size, the time lag, the drop altitude, and the geographic location of the training set, and identify parameters most contributing to the accuracy of the wind estimation. This study demonstrates a new approach for assimilating remotely released dropsondes, based on boosted regression trees, and shows improvement in wind estimation over currently used methods.
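
    As a generic illustration of the learning setup described above (not the study's actual features or data), the sketch below trains gradient-boosted regression trees to predict a drop-zone wind component from a forecast value, a distant-sounding value, and a few placeholder features, and compares its error against simply using the forecast. All relationships and noise levels are invented.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        # Synthetic stand-in for the learning task; none of this is the RAP data itself.
        rng = np.random.default_rng(10)
        n = 5000
        forecast_dz = rng.normal(0.0, 8.0, n)         # forecast u-wind at drop zone (m/s)
        sonde_remote = forecast_dz + rng.normal(0.0, 3.0, n) + 2.0 * np.sin(forecast_dz / 5.0)
        other_features = rng.normal(size=(n, 4))      # placeholders: shear, stability, distance, lag
        X = np.column_stack([forecast_dz, sonde_remote, other_features])
        y = forecast_dz + 0.6 * (sonde_remote - forecast_dz) + rng.normal(0.0, 1.0, n)  # "analysis" wind

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        gbrt = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
        gbrt.fit(X_tr, y_tr)

        print("GBRT MAE:", round(mean_absolute_error(y_te, gbrt.predict(X_te)), 2), "m/s")
        print("forecast-only MAE:", round(mean_absolute_error(y_te, X_te[:, 0]), 2), "m/s")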

  11. Clinical validation of a new control-oriented model of insulin and glucose dynamics in subjects with type 1 diabetes.

    PubMed

    Fabietti, Pier Giorgio; Canonico, Valentina; Orsini-Federici, Marco; Sarti, Eugenio; Massi-Benedetti, Massimo

    2007-08-01

    The development of an artificial pancreas requires an accurate representation of diabetes pathophysiology to create effective and safe control systems for automatic insulin infusion regulation. The aim of the present study is the assessment of a previously developed mathematical model of insulin and glucose metabolism in type 1 diabetes and the evaluation of its effectiveness for the development and testing of control algorithms. Based on the already existing "minimal model" a new mathematical model was developed composed of glucose and insulin submodels. The glucose model includes the representation of peripheral uptake, hepatic uptake and release, and renal clearance. The insulin model describes the kinetics of exogenous insulin injected either subcutaneously or intravenously. The estimation of insulin sensitivity allows the model to personalize parameters to each subject. Data sets from two different clinical trials were used here for model validation through simulation studies. The first set had subcutaneous insulin injection, while the second set had intravenous insulin injection. The root mean square error between simulated and real blood glucose profiles (G(rms)) and the Clarke error grid analysis were used to evaluate the system efficacy. Results from our study demonstrated the model's capability in identifying individual characteristics even under different experimental conditions. This was reflected by an effective simulation as indicated by G(rms), and clinical acceptability by the Clarke error grid analysis, in both clinical data series. Simulation results confirmed the capacity of the model to faithfully represent the glucose-insulin relationship in type 1 diabetes in different circumstances.

  12. Discovering body site and severity modifiers in clinical texts

    PubMed Central

    Dligach, Dmitriy; Bethard, Steven; Becker, Lee; Miller, Timothy; Savova, Guergana K

    2014-01-01

    Objective To research computational methods for discovering body site and severity modifiers in clinical texts. Methods We cast the task of discovering body site and severity modifiers as a relation extraction problem in the context of a supervised machine learning framework. We utilize rich linguistic features to represent the pairs of relation arguments and delegate the decision about the nature of the relationship between them to a support vector machine model. We evaluate our models using two corpora that annotate body site and severity modifiers. We also compare the model performance to a number of rule-based baselines. We conduct cross-domain portability experiments. In addition, we carry out feature ablation experiments to determine the contribution of various feature groups. Finally, we perform error analysis and report the sources of errors. Results The performance of our method for discovering body site modifiers achieves F1 of 0.740–0.908 and our method for discovering severity modifiers achieves F1 of 0.905–0.929. Discussion Results indicate that both methods perform well on both in-domain and out-domain data, approaching the performance of human annotators. The most salient features are token and named entity features, although syntactic dependency features also contribute to the overall performance. The dominant sources of errors are infrequent patterns in the data and inability of the system to discern deeper semantic structures. Conclusions We investigated computational methods for discovering body site and severity modifiers in clinical texts. Our best system is released open source as part of the clinical Text Analysis and Knowledge Extraction System (cTAKES). PMID:24091648

  13. Discovering body site and severity modifiers in clinical texts.

    PubMed

    Dligach, Dmitriy; Bethard, Steven; Becker, Lee; Miller, Timothy; Savova, Guergana K

    2014-01-01

    To research computational methods for discovering body site and severity modifiers in clinical texts. We cast the task of discovering body site and severity modifiers as a relation extraction problem in the context of a supervised machine learning framework. We utilize rich linguistic features to represent the pairs of relation arguments and delegate the decision about the nature of the relationship between them to a support vector machine model. We evaluate our models using two corpora that annotate body site and severity modifiers. We also compare the model performance to a number of rule-based baselines. We conduct cross-domain portability experiments. In addition, we carry out feature ablation experiments to determine the contribution of various feature groups. Finally, we perform error analysis and report the sources of errors. The performance of our method for discovering body site modifiers achieves F1 of 0.740-0.908 and our method for discovering severity modifiers achieves F1 of 0.905-0.929. Results indicate that both methods perform well on both in-domain and out-domain data, approaching the performance of human annotators. The most salient features are token and named entity features, although syntactic dependency features also contribute to the overall performance. The dominant sources of errors are infrequent patterns in the data and inability of the system to discern deeper semantic structures. We investigated computational methods for discovering body site and severity modifiers in clinical texts. Our best system is released open source as part of the clinical Text Analysis and Knowledge Extraction System (cTAKES).

  14. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast increased observation error yields only a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.

  15. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
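
    The quality-score-based masking idea can be illustrated with a toy function that replaces low-quality bases with 'N'. This is only a conceptual sketch under an assumed Phred threshold; the actual error-mitigation methods also exploit read-coverage structure and cross-species comparisons, which are omitted here.

        # Toy sketch of quality-score-based error masking in a low-coverage assembly.
        # The threshold and inputs are illustrative; real mitigation also uses
        # read-coverage and cross-species alignment information.
        def mask_low_quality(sequence: str, phred_scores: list[int], min_q: int = 20) -> str:
            """Replace bases with quality below min_q by 'N' so downstream
            analyses can ignore likely sequencing errors."""
            return "".join(
                base if q >= min_q else "N"
                for base, q in zip(sequence, phred_scores)
            )

        print(mask_low_quality("ACGTACGT", [35, 12, 40, 8, 30, 30, 5, 38]))  # ANGNACNT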

  16. Evolution of the ATLAS Nightly Build System

    NASA Astrophysics Data System (ADS)

    Undrus, A.

    2012-12-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code, which currently contains 2200 packages with 4 million lines of C++ and 1.4 million lines of Python written by about 1000 developers. Recent development was focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  17. Impact of Corrections to the Spallings Volume Calculation on Waste Isolation Pilot Plant Performance Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kicker, Dwayne Curtis; Herrick, Courtney G; Zeitler, Todd

    2015-11-01

    The numerical code DRSPALL (from direct release spallings) is written to calculate the volume of Waste Isolation Pilot Plant solid waste subject to material failure and transport to the surface (i.e., spallings) as a result of a hypothetical future inadvertent drilling intrusion into the repository. An error in the implementation of the DRSPALL finite difference equations was discovered and documented in a software problem report in accordance with the quality assurance procedure for software requirements. This paper describes the corrections to DRSPALL and documents the impact of the new spallings data from the modified DRSPALL on previous performance assessment calculations. Updated performance assessments result in more simulations with spallings, which generally translates to an increase in spallings releases to the accessible environment. Total normalized radionuclide releases using the modified DRSPALL data were determined by forming the summation of releases across each potential release pathway, namely borehole cuttings and cavings releases, spallings releases, direct brine releases, and transport releases. Because spallings releases are not a major contributor to the total releases, the updated performance assessment calculations of overall mean complementary cumulative distribution functions for total releases are virtually unchanged. Therefore, the corrections to the spallings volume calculation did not impact Waste Isolation Pilot Plant performance assessment calculation results.
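
    The total normalized release described above is simply a sum over the individual release pathways; the following minimal sketch uses placeholder numbers, not WIPP performance-assessment data, to make the bookkeeping explicit.

        # Minimal sketch: total normalized release as the sum over release pathways.
        # The pathway values below are placeholders, not WIPP performance-assessment data.
        pathways = {
            "cuttings_and_cavings": 0.012,
            "spallings": 0.003,    # small contributor, so corrections barely move the total
            "direct_brine": 0.020,
            "transport": 0.001,
        }
        total_normalized_release = sum(pathways.values())
        print(f"total normalized release: {total_normalized_release:.3f}")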

  18. Random and independent sampling of endogenous tryptic peptides from normal human EDTA plasma by liquid chromatography micro electrospray ionization and tandem mass spectrometry.

    PubMed

    Dufresne, Jaimie; Florentinus-Mefailoski, Angelique; Ajambo, Juliet; Ferwa, Ammara; Bowden, Peter; Marshall, John

    2017-01-01

    Normal human EDTA plasma samples were collected on ice, processed ice cold, and stored in a freezer at −80 °C prior to experiments. Plasma test samples from the −80 °C freezer were thawed on ice or intentionally warmed to room temperature. Protein content was measured by CBBR binding and the release of alcohol soluble amines by the Cd ninhydrin assay. Plasma peptides released over time were collected over C18 for random and independent sampling by liquid chromatography micro electrospray ionization and tandem mass spectrometry (LC-ESI-MS/MS) and correlated with X!TANDEM. Fully tryptic correlation with X!TANDEM returned a similar set of proteins, but was more computationally efficient than "no enzyme" correlations. Plasma samples maintained on ice, or on ice with a cocktail of protease inhibitors, showed lower background amounts of plasma peptides compared to samples incubated at room temperature. Regression analysis indicated that warming plasma to room temperature, versus keeping it ice cold, resulted in a ~twofold increase in the frequency of peptide identification over hours to days of incubation at room temperature. The type I error rate of the protein identifications from the X!TANDEM algorithm was estimated to be low compared to a null model of computer-generated random MS/MS spectra. The peptides of human plasma were identified and quantified with low error rates by random and independent sampling, which revealed thousands of peptides from hundreds of human plasma proteins from endogenous tryptic peptides.

  19. Effect of anatomical fractionation on the enzymatic hydrolysis of acid and alkaline pretreated corn stover.

    PubMed

    Duguid, K B; Montross, M D; Radtke, C W; Crofcheck, C L; Wendt, L M; Shearer, S A

    2009-11-01

    Due to concerns with biomass collection systems and soil sustainability there are opportunities to investigate the optimal plant fractions to collect for conversion. An ideal feedstock would require a low severity pretreatment to release a maximum amount of sugar during enzymatic hydrolysis. Corn stover fractions were separated manually and analyzed for glucan, xylan, acid soluble lignin, acid insoluble lignin, and ash composition. The stover fractions were also pretreated with either 0%, 0.4%, or 0.8% NaOH for 2 h at room temperature, washed, autoclaved and saccharified. In addition, dilute sulfuric acid pretreated samples underwent simultaneous saccharification and fermentation (SSF) to ethanol. In general, the two pretreatments produced similar trends with cobs, husks, and leaves responding best to the pretreatments, the tops of stalks responding slightly less, and the bottom of the stalks responding the least. For example, corn husks pretreated with 0.8% NaOH released over 90% (standard error of 3.8%) of the available glucan, while only 45% (standard error of 1.1%) of the glucan was produced from identically treated stalk bottoms. Estimates of the theoretical ethanol yield using acid pretreatment followed by SSF were 65% (standard error of 15.9%) for husks and 29% (standard error of 1.8%) for stalk bottoms. This suggests that integration of biomass collection systems to remove sustainable feedstocks could be integrated with the processes within a biorefinery to minimize overall ethanol production costs.

  20. Impact of Uncertainties in Exposure Assessment on Thyroid Cancer Risk among Persons in Belarus Exposed as Children or Adolescents Due to the Chernobyl Accident.

    PubMed

    Little, Mark P; Kwon, Deukwoo; Zablotska, Lydia B; Brenner, Alina V; Cahoon, Elizabeth K; Rozhko, Alexander V; Polyanskaya, Olga N; Minenko, Victor F; Golovanov, Ivan; Bouville, André; Drozdovitch, Vladimir

    2015-01-01

    The excess incidence of thyroid cancer in Ukraine and Belarus observed a few years after the Chernobyl accident is considered to be largely the result of 131I released from the reactor. Although the Belarus thyroid cancer prevalence data have been previously analyzed, no account was taken of dose measurement error. We examined dose-response patterns in a thyroid screening prevalence cohort of 11,732 persons aged under 18 at the time of the accident, diagnosed during 1996-2004, who had direct thyroid 131I activity measurement, and were resident in the most radioactively contaminated regions of Belarus. Three methods of dose-error correction (regression calibration, Monte Carlo maximum likelihood, Bayesian Markov Chain Monte Carlo) were applied. There was a statistically significant (p<0.001) increasing dose-response for prevalent thyroid cancer, irrespective of the regression-adjustment method used. Without adjustment for dose errors the excess odds ratio was 1.51 Gy⁻¹ (95% CI 0.53, 3.86), which was reduced by 13% when regression-calibration adjustment was used, to 1.31 Gy⁻¹ (95% CI 0.47, 3.31). A Monte Carlo maximum likelihood method yielded an excess odds ratio of 1.48 Gy⁻¹ (95% CI 0.53, 3.87), about 2% lower than the unadjusted analysis. The Bayesian method yielded a maximum posterior excess odds ratio of 1.16 Gy⁻¹ (95% BCI 0.20, 4.32), 23% lower than the unadjusted analysis. There were borderline significant (p = 0.053-0.078) indications of downward curvature in the dose response, depending on the adjustment method used. There were also borderline significant (p = 0.102) modifying effects of gender on the radiation dose trend, but no significant modification of the dose response by age at the time of the accident or age at screening (p>0.2). In summary, the relatively small contribution of unshared classical dose error in the current study results in comparatively modest effects on the regression parameters.
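
    The regression-calibration idea referred to above can be sketched as replacing each error-prone dose with an estimate of the expected true dose given the measurement before fitting the dose-response model. The following is an illustrative simulation, not the Belarus cohort analysis; the error model, the dosimetry shortcut, and the logistic dose-response form are all assumptions.

        # Illustrative sketch of regression calibration for classical dose error:
        # replace each measured dose by an estimate of E[true dose | measured dose]
        # before fitting the dose-response model.  All numbers are simulated.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 5000
        true_dose = rng.lognormal(mean=-1.5, sigma=1.0, size=n)                 # Gy
        measured = true_dose * rng.lognormal(mean=0.0, sigma=0.4, size=n)       # multiplicative error
        p = np.clip(0.02 * (1.0 + 1.5 * true_dose), 0, 1)                       # toy dose-response
        case = rng.binomial(1, p)

        # Calibration step: here we cheat and regress the simulated truth on the
        # measurement; in practice a dosimetry error model supplies this step.
        calib = sm.OLS(true_dose, sm.add_constant(measured)).fit()
        dose_calibrated = calib.predict(sm.add_constant(measured))

        naive = sm.Logit(case, sm.add_constant(measured)).fit(disp=0)
        adjusted = sm.Logit(case, sm.add_constant(dose_calibrated)).fit(disp=0)
        print("naive dose coefficient:     ", naive.params[1])
        print("calibrated dose coefficient:", adjusted.params[1])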

  1. Identification of the release history of a groundwater contaminant in non-uniform flow field through the minimum relative entropy method

    NASA Astrophysics Data System (ADS)

    Cupola, F.; Tanda, M. G.; Zanini, A.

    2014-12-01

    The interest in approaches that allow the estimation of pollutant source release in groundwater has increased exponentially over the last decades. This is due to the large number of groundwater reclamation procedures that have been carried out: the remediation is expensive and the costs can be easily shared among the different actors if the release history is known. Moreover, a reliable release history can be a useful tool for predicting the plume evolution and for minimizing the harmful effects of the contamination. In this framework, Woodbury and Ulrych (1993, 1996) adopted and improved the minimum relative entropy (MRE) method to solve linear inverse problems for the recovery of the pollutant release history in an aquifer. In this work, the MRE method has been improved to detect the source release history in a 2-D aquifer characterized by a non-uniform flow field. The approach has been tested on two cases: a 2-D homogeneous conductivity field and a strongly heterogeneous one (the hydraulic conductivity varies over three orders of magnitude). In the latter case the transfer function could not be described with an analytical formulation; thus, the transfer functions were estimated by means of the method developed by Butera et al. (2006). In order to demonstrate its scope, the method was applied with two different datasets: observations collected at the same time at 20 different monitoring points, and observations collected at 2 monitoring points at different times (15-25 monitoring points). The observations were considered to be affected by a random error. These study cases have been carried out considering a Boxcar and a Gaussian function as the expected value of the prior distribution of the release history. The agreement between the true and the estimated release history has been evaluated through the calculation of the normalized root mean square error (nRMSE), which has shown the ability of the method to recover the release history even in the most severe cases. Finally, a forward simulation has been carried out using the estimated release history in order to compare the true data with the estimated data: the best agreement has been obtained in the homogeneous case, although the nRMSE is acceptable in the heterogeneous case as well.
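
    A possible form of the normalized root mean square error used to score the recovered release history is sketched below; the normalization by the range of the true series is an assumption, since other conventions (peak or mean normalization) are also common.

        # Sketch of a normalized RMSE for scoring a recovered release history.
        # The range normalization is an assumed convention.
        import numpy as np

        def nrmse(true_history: np.ndarray, estimated_history: np.ndarray) -> float:
            rmse = np.sqrt(np.mean((true_history - estimated_history) ** 2))
            return rmse / (true_history.max() - true_history.min())

        t = np.linspace(0, 10, 200)
        true_release = np.exp(-0.5 * ((t - 4.0) / 1.0) ** 2)       # Gaussian pulse
        estimated_release = np.exp(-0.5 * ((t - 4.2) / 1.1) ** 2)  # slightly shifted estimate
        print(f"nRMSE = {nrmse(true_release, estimated_release):.3f}")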

  2. Estimation of the Cesium-137 Source Term from the Fukushima Daiichi Power Plant Using Air Concentration and Deposition Data

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2013-04-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed an estimation method for the error amplitudes based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimated released activities, we provided the related uncertainties (12 PBq with a standard deviation of 15-20% for cesium-137 and 190-380 PBq with a standard deviation of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), and even though the orders of magnitude were consistent, the reconstructed activities depended significantly on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region extending approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.
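
    The full method estimates the prior error amplitudes by maximum likelihood, which is beyond a short snippet, but the core linear inversion with a positivity constraint can be sketched with non-negative least squares, assuming a source-receptor (transfer) matrix H from a dispersion model. Everything below is synthetic and illustrative.

        # Toy sketch of the core inversion step: recover a non-negative source time
        # series sigma from observations y = H @ sigma + noise.  H and the data are
        # synthetic stand-ins; the prior-error estimation step is not shown.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        n_obs, n_times = 300, 40
        H = rng.gamma(shape=2.0, scale=0.05, size=(n_obs, n_times))  # stand-in transfer matrix
        true_source = np.zeros(n_times)
        true_source[10:20] = 5.0                                     # release over a time window
        y = H @ true_source + rng.normal(scale=0.5, size=n_obs)

        estimated_source, residual_norm = nnls(H, y)
        print("recovered total release:", estimated_source.sum(), "true:", true_source.sum())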

  3. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students' competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors students make. Hence, this study aimed at identifying students' errors in solving TIMSS mathematical problems on the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students with the most errors on the test indicators, selected from 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving Applying-level problems, students made operational errors. In addition, for Reasoning-level problems, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

  4. Development of a press and drag method for hyperlink selection on smartphones.

    PubMed

    Chang, Joonho; Jung, Kihyo

    2017-11-01

    The present study developed a novel touch method for hyperlink selection on smartphones consisting of two sequential finger interactions: press and drag motions. The novel method requires a user to press a target hyperlink, and if a touch error occurs he/she can immediately correct the touch error by dragging the finger without releasing it in the middle. The method was compared with two existing methods in terms of completion time, error rate, and subjective rating. Forty college students participated in the experiments with different hyperlink sizes (4-pt, 6-pt, 8-pt, and 10-pt) on a touch-screen device. When hyperlink size was small (4-pt and 6-pt), the novel method (time: 826 msec; error: 0.6%) demonstrated better completion time and error rate than the current method (time: 1194 msec; error: 22%). In addition, the novel method (1.15, slightly satisfied, on a 7-pt bipolar scale) had significantly higher satisfaction scores than the two existing methods (0.06, neutral). Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  6. Consequence analysis in LPG installation using an integrated computer package.

    PubMed

    Ditali, S; Colombi, M; Moreschini, G; Senni, S

    2000-01-07

    This paper presents the prototype of the computer code Atlantide, developed to assess the consequences associated with accidental events that can occur in an LPG storage plant. Atlantide is designed to be simple and yet adequate to cope with the consequence analysis required by Italian legislation in fulfilling the Seveso Directive. The application of Atlantide is appropriate for LPG storage/transfer installations. The models and correlations implemented in the code are relevant to flashing liquid releases, heavy gas dispersion and other typical phenomena such as BLEVE/Fireball. The computer code allows, on the basis of the operating/design characteristics, the study of the relevant accidental events from the evaluation of the release rate (liquid, gaseous and two-phase) in the unit involved, to the analysis of the subsequent evaporation and dispersion, up to the assessment of the final phenomena of fire and explosion. This is done taking as reference simplified Event Trees which describe the evolution of accidental scenarios, taking into account the most likely meteorological conditions, the different release situations and other features typical of an LPG installation. The limited input data required and the automatic linking between the individual models, which are activated in a defined sequence depending on the accidental event selected, minimize both the time required for the risk analysis and the possibility of errors. Models and equations implemented in Atlantide have been selected from the public literature or in-house developed software and tailored with the aim of being easy to use and fast to run but, nevertheless, able to provide realistic simulation of the accidental event as well as reliable results, in terms of physical effects and hazardous areas. The results have been compared with those of other internationally recognized codes and with the criteria adopted by Italian authorities to verify the Safety Reports for LPG installations. A summary of the theoretical basis of each model implemented in Atlantide and an example of its application are included in the paper.

  7. Ecological and toxicological aspects of the partial meltdown of the Chernobyl nuclear power plant reactor

    USGS Publications Warehouse

    Eisler, Ronald; Hoffman, David J.; Rattner, Barnett A.; Burton, G. Allen; Cairns, John

    1995-01-01

    The partial meltdown of the 1000-MW reactor at Chernobyl, Ukraine, on April 26, 1986, released large amounts of radiocesium and other radionuclides into the environment, causing widespread radioactive contamination of Europe and the former Soviet Union.1-7 At least 3,000,000 trillion becquerels (TBq) were released from the fuel during the accident (Table 24.1), dwarfing, by orders of magnitude, the radiation released from other highly publicized reactor accidents at Windscale (U.K.) and Three Mile Island (U.S.).3,8 The Chernobyl accident happened while a test was being conducted during a normal scheduled shutdown and is attributed mainly to human error.3

  8. Attentional sensitivity and asymmetries of vertical saccade generation in monkey

    NASA Technical Reports Server (NTRS)

    Zhou, Wu; King, W. M.; Shelhamer, M. J. (Principal Investigator)

    2002-01-01

    The first goal of this study was to systematically document asymmetries in vertical saccade generation. We found that visually guided upward saccades have not only shorter latencies, but higher peak velocities, shorter durations and smaller errors. The second goal was to identify possible mechanisms underlying the asymmetry in vertical saccade latencies. Based on a recent model of saccade generation, three stages of saccade generation were investigated using specific behavioral paradigms: attention shift to a visual target (CUED paradigm), initiation of saccade generation (GAP paradigm) and release of the motor command to execute the saccade (DELAY paradigm). Our results suggest that initiation of a saccade (or "ocular disengagement") and its motor release contribute little to the asymmetry in vertical saccade latency. However, analysis of saccades made in the CUED paradigm indicated that it took less time to shift attention to a target in the upper visual field than to a target in the lower visual field. These data suggest that higher attentional sensitivity to targets in the upper visual field may contribute to shorter latencies of upward saccades.

  9. The Space-Wise Global Gravity Model from GOCE Nominal Mission Data

    NASA Astrophysics Data System (ADS)

    Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.

    2011-12-01

    In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between the local orbital and gradiometer reference frames. Error covariances are computed by Monte Carlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all the dependencies on external gravity information were removed, thus giving rise to a GOCE-only space-wise model. However, this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of intermediate solutions (based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way as to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of the potential and of its second derivatives at mean satellite altitude. These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed by a unique global covariance function, they could yield more information at the local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals on possible future improvements. A test to compare the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.

  10. Design and analysis of a torsion braid pendulum displacement transducer

    NASA Technical Reports Server (NTRS)

    Rind, E.; Bryant, E. L.

    1981-01-01

    The dynamic properties at various temperatures of braids impregnated with polymer can be measured by using the braid as the suspension of a torsion pendulum. This report describes the electronic and mechanical design of a torsional braid pendulum displacement transducer which is an advance in the state of the art. The transducer uses a unique optical design consisting of refracting quartz windows used in conjunction with a differential photocell to produce a null signal. The release mechanism for initiating free torsional oscillation of the pendulum has also been improved. Analysis of the precision and accuracy of the transducer indicated that the maximum relative error in measuring torsional amplitude was approximately 0. A serious problem inherent in all instruments which use a torsional suspension was analyzed: misalignment of the physical and torsional axes of the torsional member which results in modulation of the amplitude of the free oscillation.

  11. Determination of stores pointing error due to wing flexibility under flight load

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.

    1995-01-01

    The in-flight elastic wing twist of a fighter-type aircraft was studied to provide for an improved on-board real-time computed prediction of pointing variations of three wing store stations. This is an important capability to correct sensor pod alignment variation or to establish initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System (FDMS) measured the deformed wing shape in flight under maneuver loads to provide a higher resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent, repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.

  12. Study of the location of testing area in residual stress measurement by Moiré interferometry combined with hole-drilling method

    NASA Astrophysics Data System (ADS)

    Qin, Le; Xie, HuiMin; Zhu, RongHua; Wu, Dan; Che, ZhiGang; Zou, ShiKun

    2014-04-01

    This paper investigates the effect of the location of the testing area in residual stress measurement by Moiré interferometry combined with the hole-drilling method. The selection of the location of the testing area is analyzed from theory and experiment. In the theoretical study, the factors which affect the surface released radial strain ε_r were analyzed on the basis of the formulae of the hole-drilling method, and the relations between those factors and ε_r were established. By combining Moiré interferometry with the hole-drilling method, the residual stress of an interference-fit specimen was measured to verify the theoretical analysis. According to the analysis results, the testing area for minimizing the error of strain measurement is determined. Moreover, if the orientation of the maximum principal stress is known, the value of strain can be measured with higher precision by the Moiré interferometry method.

  13. Statistical inference for template aging

    NASA Astrophysics Data System (ADS)

    Schuckers, Michael E.

    2006-04-01

    A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
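
    The first approach described in this record, a generalized linear model for error rates as a function of time, can be sketched as a binomial GLM whose time coefficient is tested for significance. The data below are simulated, not the NIST Biometric Score Set, and the logit-linear form is an assumption.

        # Sketch of testing for template aging with a binomial GLM: regress
        # match-error counts on elapsed time and inspect the time coefficient.
        # The data are simulated stand-ins.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        months = np.arange(1, 25)
        trials = np.full_like(months, 500)
        p_error = 1 / (1 + np.exp(-(-3.0 + 0.04 * months)))   # error rate drifting upward
        errors = rng.binomial(trials, p_error)

        X = sm.add_constant(months.astype(float))
        glm = sm.GLM(np.column_stack([errors, trials - errors]), X,
                     family=sm.families.Binomial()).fit()
        print(glm.summary().tables[1])   # a significant time slope indicates aging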

  14. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres-Focus on Feature Selection.

    PubMed

    Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.
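
    The inner evaluation step implied by this formulation, scoring a candidate feature subset by the cross-validated NRMSE of a predictive model, can be sketched as follows. The data are random stand-ins for the PLGA attributes, the random forest is just one of the model families mentioned, and the range-based NRMSE normalization is an assumption.

        # Sketch of scoring one candidate feature subset by cross-validated NRMSE,
        # the kind of inner evaluation used during bio-inspired feature selection.
        # The data are random stand-ins for the ~300 formulation attributes.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(3)
        X = rng.normal(size=(200, 300))                 # 300 candidate attributes
        y = 20 + 5 * X[:, 0] - 3 * X[:, 7] + rng.normal(scale=1.0, size=200)  # dissolution %

        def nrmse_for_subset(feature_idx):
            preds = cross_val_predict(
                RandomForestRegressor(n_estimators=100, random_state=0),
                X[:, feature_idx], y, cv=5)
            rmse = np.sqrt(np.mean((y - preds) ** 2))
            return 100 * rmse / (y.max() - y.min())     # NRMSE in percent

        print("informative subset:", nrmse_for_subset([0, 7]))
        print("random subset:     ", nrmse_for_subset([11, 42, 99]))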

  15. Atomoxetine could improve intra-individual variability in drug-naïve adults with attention-deficit/hyperactivity disorder comparably with methylphenidate: A head-to-head randomized clinical trial.

    PubMed

    Ni, Hsing-Chang; Hwang Gu, Shoou-Lian; Lin, Hsiang-Yuan; Lin, Yu-Ju; Yang, Li-Kuang; Huang, Hui-Chun; Gau, Susan Shur-Fen

    2016-05-01

    Intra-individual variability in reaction time (IIV-RT) is common in individuals with attention-deficit/hyperactivity disorder (ADHD). It can be improved by stimulants. However, the effects of atomoxetine on IIV-RT are inconclusive. We aimed to investigate the effects of atomoxetine on IIV-RT, and directly compared its efficacy with methylphenidate in adults with ADHD. An 8-10 week, open-label, head-to-head, randomized clinical trial was conducted in 52 drug-naïve adults with ADHD, who were randomly assigned to two treatment groups: immediate-release methylphenidate (n=26) thrice daily (10-20 mg per dose) and atomoxetine once daily (n=26) (0.5-1.2 mg/kg/day). IIV-RT, derived from the Conners' continuous performance test (CCPT), was represented by the Gaussian (reaction time standard error, RTSE) and ex-Gaussian models (sigma and tau). Other neuropsychological functions, including response errors and mean reaction time, were also measured. Participants received CCPT assessments at baseline and at week 8-10 (60.4±6.3 days). We found comparable improvements in CCPT performance between the immediate-release methylphenidate- and atomoxetine-treated groups. Both medications significantly improved IIV-RT in terms of reducing tau values, with comparable efficacy. In addition, both medications significantly improved inhibitory control by reducing commission errors. Our results provide evidence that atomoxetine can improve IIV-RT and inhibitory control, with efficacy comparable to immediate-release methylphenidate, in drug-naïve adults with ADHD. The shared and unique mechanisms underpinning these medication effects on IIV-RT await further investigation. © The Author(s) 2016.
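
    The Gaussian and ex-Gaussian variability measures mentioned above can be illustrated on a simulated reaction-time series; scipy's exponentially modified normal distribution is used for the ex-Gaussian fit, with tau taken as K times the scale parameter. This is a conceptual sketch, not the trial-level CCPT analysis.

        # Sketch of extracting Gaussian and ex-Gaussian variability measures from a
        # reaction-time series.  scipy's exponnorm is the exponentially modified
        # normal: in its (K, loc, scale) parametrization, sigma ~= scale and
        # tau ~= K * scale.  Data are simulated, not CCPT trials.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        rt = rng.normal(400, 40, size=500) + rng.exponential(120, size=500)  # ms

        rt_sd = rt.std(ddof=1)             # overall RT variability (RTSE is a related measure)
        K, loc, scale = stats.exponnorm.fit(rt)
        sigma, tau = scale, K * scale      # ex-Gaussian sigma and tau
        print(f"SD={rt_sd:.1f} ms, sigma={sigma:.1f} ms, tau={tau:.1f} ms")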

  16. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres—Focus on Feature Selection

    PubMed Central

    Zawbaa, Hossam M.; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven. PMID:27315205

  17. Toward accurate prediction of pKa values for internal protein residues: the importance of conformational relaxation and desolvation energy.

    PubMed

    Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K

    2011-12-01

    Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pK(a) prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pK(a) predictions but also inform about the physical origins of pK(a) shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pK(a) prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pK(a) values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pK(a)'s with errors greater than 3.5 pK units. Analysis of the conformational fluctuation of titrating side-chains in the context of the errors of calculated pK(a) values indicates that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of errors suggests that more accurate pK(a) predictions can be obtained for the most deeply buried residues by improving the accuracy in calculating desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment such that the inclusion of an explicit-solvent representation may offer improvement of accuracy. Copyright © 2011 Wiley-Liss, Inc.

  18. 42 CFR 431.992 - Corrective action plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...

  19. 42 CFR 431.992 - Corrective action plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...

  20. Characterization of Mode 1 and Mode 2 delamination growth and thresholds in graphite/PEEK composites

    NASA Technical Reports Server (NTRS)

    Martin, Roderick H.; Murri, Gretchen B.

    1988-01-01

    Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic matrix composite, was characterized for mode 1 and mode 2 loadings, using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be adequately used as a life prediction tool. A small error in the estimated applied loads could lead to large errors in the delamination growth rates. Hence strain energy release rate thresholds, G_th, below which no delamination would occur were also measured. Mode 1 and 2 threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred until at least 1,000,000 cycles was considered the threshold strain energy release rate. Comments are given on how testing effects, facial interference or delamination front damage, may invalidate the experimental determination of the constants in the expression.
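
    The sensitivity noted above, that a small load error produces a large error in predicted growth rate, follows from the power law da/dN = c*G^n together with G scaling as the square of the applied load, so a relative load error is amplified by roughly a factor of 2n. The constants in the sketch below are hypothetical, not the measured AS4/PEEK values.

        # Worked sketch of the load-error amplification implied by the power law
        # da/dN = c * G**n with G proportional to load squared.  The constants c
        # and n are illustrative placeholders.
        c, n = 1.0e-10, 8.0          # hypothetical power-law constants

        def growth_rate(load, compliance_slope=1.0, width=1.0):
            G = load**2 * compliance_slope / (2.0 * width)   # strain energy release rate
            return c * G**n

        nominal = growth_rate(load=100.0)
        with_error = growth_rate(load=105.0)                  # 5% error in the applied load
        print(f"5% load error -> growth rate off by a factor of {with_error / nominal:.1f}")
        # 1.05**(2*n) ~= 2.2, i.e. more than a factor-of-two error in da/dN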

  1. Characterization of Mode I and Mode II delamination growth and thresholds in AS4/PEEK composites

    NASA Technical Reports Server (NTRS)

    Martin, Roderick H.; Murri, Gretchen Bostaph

    1990-01-01

    Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic matrix composite, was characterized for mode 1 and mode 2 loadings, using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be adequately used as a life prediction tool. A small error in the estimated applied loads could lead to large errors in the delamination growth rates. Hence strain energy release rate thresholds, G_th, below which no delamination would occur were also measured. Mode 1 and 2 threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred until at least 1,000,000 cycles was considered the threshold strain energy release rate. Comments are given on how testing effects, facial interference or delamination front damage, may invalidate the experimental determination of the constants in the expression.

  2. Numerical Study of Buoyancy and Different Diffusion Effects on the Structure and Dynamics of Triple Flames

    NASA Technical Reports Server (NTRS)

    Chen, Jyh-Yuan; Echekki, Tarek

    2001-01-01

    Numerical simulations of 2-D triple flames under gravity force have been implemented to identify the effects of gravity on triple flame structure and propagation properties and to understand the mechanisms of instabilities resulting from both heat release and buoyancy effects. A wide range of gravity conditions, heat release, and mixing widths for a scalar mixing layer are computed for downward-propagating (in the same direction as the gravity vector) and upward-propagating (in the opposite direction of the gravity vector) triple flames. Results of the numerical simulations show that gravity strongly affects the triple flame speed through its contribution to the overall flow field. A simple analytical model for the triple flame speed, which accounts for both buoyancy and heat release, is developed. Comparisons of the proposed model with the numerical results for a wide range of gravity, heat release and mixing width conditions yield very good agreement. The analysis shows that under neutral diffusion, downward propagation reduces the triple flame speed, while upward propagation enhances it. For the former condition, a critical Froude number may be evaluated, which corresponds to a vanishing triple flame speed. Downward-propagating triple flames at relatively strong gravity effects have exhibited instabilities. These instabilities are generated without any artificial forcing of the flow. Instead, disturbances are initiated by minute round-off errors in the numerical simulations, and subsequently amplified by instabilities. A linear stability analysis on mean profiles of stable triple flame configurations has been performed to identify the most amplified frequency in spatially developing flows. The eigenfunction equations obtained from the linearized disturbance equations are solved using the shooting method. The linear stability analysis yields reasonably good agreement with the observed frequencies of the unstable triple flames. The frequencies and amplitudes of disturbances increase with the magnitude of the gravity vector. Moreover, disturbances appear to be most amplified just downstream of the premixed branches. The effects of mixing width and differential diffusion are investigated and their roles in flame stability are studied.

  3. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
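
    The steady-state behaviour described above can be reproduced in miniature by iterating a linear Kalman filter forecast/analysis cycle and examining the spectrum of the analysis error covariance. The dynamics, observation operator, and error covariances below are simple stand-ins, not the advection or baroclinic wave models used in the paper.

        # Toy sketch: iterate a linear forecast/analysis (Kalman filter) cycle to
        # steady state and inspect the spectrum of the analysis error covariance.
        import numpy as np

        n, p = 50, 10
        rng = np.random.default_rng(5)
        M = 0.9 * np.eye(n) + 0.05 * rng.normal(size=(n, n)) / np.sqrt(n)  # stable toy dynamics
        H = np.eye(p, n)                                                   # observe first p states
        Q = 0.01 * np.eye(n)                                               # model error covariance
        R = 0.1 * np.eye(p)                                                # observation error covariance

        P = np.eye(n)
        for _ in range(500):                                  # forecast/analysis cycle
            Pf = M @ P @ M.T + Q                              # forecast error covariance
            K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
            P = (np.eye(n) - K @ H) @ Pf                      # analysis error covariance
            P = 0.5 * (P + P.T)                               # keep numerically symmetric

        eigvals = np.sort(np.linalg.eigvalsh(P))[::-1]
        print("fraction of analysis error variance in 5 leading modes:",
              eigvals[:5].sum() / eigvals.sum())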

  4. Mesolimbic Dopamine Signals the Value of Work

    PubMed Central

    Hamid, Arif A.; Pettibone, Jeffrey R.; Mabrouk, Omar S.; Hetrick, Vaughn L.; Schmidt, Robert; Vander Weele, Caitlin M.; Kennedy, Robert T.; Aragona, Brandon J.; Berke, Joshua D.

    2015-01-01

    Dopamine cell firing can encode errors in reward prediction, providing a learning signal to guide future behavior. Yet dopamine is also a key modulator of motivation, invigorating current behavior. Existing theories propose that fast (“phasic”) dopamine fluctuations support learning, while much slower (“tonic”) dopamine changes are involved in motivation. We examined dopamine release in the nucleus accumbens across multiple time scales, using complementary microdialysis and voltammetric methods during adaptive decision-making. We first show that minute-by-minute dopamine levels covary with reward rate and motivational vigor. We then show that second-by-second dopamine release encodes an estimate of temporally-discounted future reward (a value function). We demonstrate that changing dopamine immediately alters willingness to work, and reinforces preceding action choices by encoding temporal-difference reward prediction errors. Our results indicate that dopamine conveys a single, rapidly-evolving decision variable, the available reward for investment of effort, that is employed for both learning and motivational functions. PMID:26595651
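
    The temporal-difference reward prediction error invoked above has the standard form delta = r + gamma*V(next state) - V(state); the tabular sketch below is purely illustrative and is not a model fit to the recorded dopamine data.

        # Minimal tabular sketch of a temporal-difference reward prediction error.
        # States, rewards, and parameters are hypothetical illustrations.
        import random

        states = ["approach", "lever", "reward_port"]
        V = {s: 0.0 for s in states}
        gamma, alpha = 0.95, 0.1

        def step(state):
            """Hypothetical transition: reward is delivered only at the reward port."""
            nxt = {"approach": "lever", "lever": "reward_port", "reward_port": "approach"}[state]
            reward = 1.0 if nxt == "reward_port" and random.random() < 0.9 else 0.0
            return nxt, reward

        state = "approach"
        for _ in range(2000):
            nxt, r = step(state)
            delta = r + gamma * V[nxt] - V[state]   # dopamine-like prediction error signal
            V[state] += alpha * delta
            state = nxt
        print(V)   # learned values ramp toward the rewarded state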

  5. Error-Analysis for Correctness, Effectiveness, and Composing Procedure.

    ERIC Educational Resources Information Center

    Ewald, Helen Rothschild

    The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…

  6. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
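
    The interval idea can be conveyed with a tiny self-contained interval class: evaluating a formula on intervals returns an enclosure of all possible results, which serves as an automatic error bound. INTLAB itself is a MATLAB toolbox; the Python sketch below is only conceptual and ignores outward rounding of the floating-point endpoints.

        # Tiny self-contained interval class illustrating interval-based error analysis.
        from dataclasses import dataclass

        @dataclass
        class Interval:
            lo: float
            hi: float
            def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
            def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
            def __mul__(self, o):
                products = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
                return Interval(min(products), max(products))
            def __truediv__(self, o):
                assert o.lo > 0 or o.hi < 0, "division by an interval containing zero"
                return self * Interval(1.0 / o.hi, 1.0 / o.lo)

        # Measured quantities with uncertainties, e.g. R = V / I:
        V = Interval(11.9, 12.1)     # volts
        I = Interval(1.95, 2.05)     # amperes
        R = V / I
        print(f"R in [{R.lo:.3f}, {R.hi:.3f}] ohms")   # enclosure of all possible values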

  7. Waste Form Release Data Package for the 2005 Integrated Disposal Facility Performance Assessment. Erratum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Gary L.

    2016-09-06

    This report refers to or contains K_g values for glasses LAWA44, LAWB45 and LAWC22 affected by calculation errors as identified by Papathanassiu et al. (2011). The corrected K_g values are reported in an erratum included in the revised version of the original report. The revised report can be referenced as follows: Pierce E. M. et al. (2004) Waste Form Release Data Package for the 2005 Integrated Disposal Facility Performance Assessment. PNNL-14805 Rev. 0 Erratum. Pacific Northwest National Laboratory, Richland, WA, USA.

  8. Waste Form Release Calculations for the 2005 Integrated Disposal Facility Performance Assessment. Erratum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Gary L.

    2016-09-06

    This report refers to or contains K_g values for glasses LAWA44, LAWB45 and LAWC22 affected by calculation errors as identified by Papathanassiu et al. (2011). The corrected K_g values are reported in an erratum included in the revised version of the original report. The revised report can be referenced as follows: Pierce E. M. et al. (2004) Waste Form Release Data Package for the 2005 Integrated Disposal Facility Performance Assessment. PNNL-14805 Rev. 0 Erratum. Pacific Northwest National Laboratory, Richland, WA, USA.

  9. Micro Computer Feedback Report for the Strategic Leader Development Inventory; Source Code

    DTIC Science & Technology

    1994-03-01

    [The abstract field of this record contains only an OCR-garbled excerpt of the report's assembly-language source listing (a file-selection routine with labels SEL2-SEL5 and comments such as "exit if error", "display select screen", and "release memory"); the code itself is not recoverable from this excerpt.]

  10. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
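
    In the spirit of the simulation study described above, the sketch below compares the rejection rate of a standard two-sample t-test with and without additive response measurement error, and shows how averaging repeat measurements recovers part of the lost power. Effect sizes, variances, and sample sizes are arbitrary choices, not those of the paper.

        # Simulation sketch: effect of additive response measurement error and of
        # repeat measurements on a standard t-test analysis of a two-level factor.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        def rejection_rate(me_sd=0.0, repeats=1, n=8, effect=1.0, n_sim=2000):
            hits = 0
            for _ in range(n_sim):
                def observed(mean):
                    true_runs = rng.normal(mean, 1.0, size=n)                       # run-to-run variation
                    meas = true_runs[:, None] + rng.normal(0, me_sd, (n, repeats))  # repeat measurements
                    return meas.mean(axis=1)
                _, p = stats.ttest_ind(observed(0.0), observed(effect))
                hits += p < 0.05
            return hits / n_sim

        print("no measurement error:      ", rejection_rate(me_sd=0.0))
        print("measurement error, 1 rep:  ", rejection_rate(me_sd=1.0))
        print("measurement error, 3 reps: ", rejection_rate(me_sd=1.0, repeats=3))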

  11. The impact of response measurement error on the analysis of designed experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Modulation of precipitation by conditional symmetric instability release

    NASA Astrophysics Data System (ADS)

    Glinton, Michael R.; Gray, Suzanne L.; Chagnon, Jeffrey M.; Morcrette, Cyril J.

    2017-03-01

    Although many theoretical and observational studies have investigated the mechanism of conditional symmetric instability (CSI) release and associated it with mesoscale atmospheric phenomena such as frontal precipitation bands, cloud heads in rapidly developing extratropical cyclones and sting jets, its climatology and contribution to precipitation have not been extensively documented. The aim of this paper is to quantify the contribution of CSI release, yielding slantwise convection, to climatological precipitation accumulations for the North Atlantic and western Europe. Case studies reveal that CSI release could be common along cold fronts of mature extratropical cyclones, and the North Atlantic storm track is found to be a region with large CSI according to two independent CSI metrics. Correlations of CSI with accumulated precipitation are also large in this region, and CSI release is inferred to be occurring about 20% of the total time over depths exceeding 1 km. We conclude that the inability of current global weather forecast and climate prediction models to represent CSI release (due to insufficient resolution and the lack of subgrid parametrization schemes) may lead to errors in precipitation distributions, particularly in the region of the North Atlantic storm track.

  13. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  14. Program Instrumentation and Trace Analysis

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)

    2002-01-01

    Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This can be seen as part of a current trend toward analyzing real software systems instead of just their designs. This includes our own effort to develop a model checker for Java, the Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary, scalable technique to handle such large programs. Our interest is in the observation part of the equation: how much information can be extracted about a program from observing a single execution trace? It is our intention to develop a technology that can be applied automatically to large, full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness and is directed towards detecting errors in programs, not towards proving correctness. One core element in JPaX is an instrumentation package that allows Java bytecode files to be instrumented so that they log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions. The instrumentation package can hence be seen as implementing Aspect Oriented Programming for Java, in the sense that one can add functionality to a Java program without explicitly changing the code of the original program; rather, one writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting semantics is extremely efficient and can handle event streams of hundreds of millions of events in a few minutes. Furthermore, the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in some different executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a deadlock or data race does not need to actually occur in order for its potential to be detected with these algorithms. This is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java. The tool is currently being applied to a code base for controlling a spacecraft by the developers of that software, in order to evaluate its applicability.
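    To make the error pattern analysis concrete, the following is a minimal Python sketch of an Eraser-style lockset check over a logged event stream. It is only an illustration of the idea described above, not JPaX's actual API: the event tuple format, function name and trace are invented, and the full Eraser state machine (initialization and read-shared states) is omitted.

      # Eraser-style lockset analysis over a logged event stream (illustrative only).
      def find_race_potentials(events):
          held = {}        # thread -> set of locks currently held
          lockset = {}     # shared variable -> candidate set of protecting locks
          races = set()
          for thread, op, target in events:
              if op == "acquire":
                  held.setdefault(thread, set()).add(target)
              elif op == "release":
                  held.get(thread, set()).discard(target)
              elif op == "access":                       # read or write of a shared variable
                  current = held.get(thread, set())
                  if target not in lockset:
                      lockset[target] = set(current)     # first access initializes the candidate set
                  else:
                      lockset[target] &= current         # refine by intersection
                  if not lockset[target]:
                      races.add(target)                  # no lock consistently protects this variable
          return races

      # Two threads touch "x" under different locks, so "x" is flagged without any race occurring.
      trace = [("T1", "acquire", "L1"), ("T1", "access", "x"), ("T1", "release", "L1"),
               ("T2", "acquire", "L2"), ("T2", "access", "x"), ("T2", "release", "L2")]
      print(find_race_potentials(trace))                 # {'x'}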

  15. Error management in blood establishments: results of eight years of experience (2003–2010) at the Croatian Institute of Transfusion Medicine

    PubMed Central

    Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena

    2012-01-01

    Background. Continuous and efficient error management, including procedures from error detection to their resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion, and in none of these were there harmful consequences for the patients. All errors were, therefore, evaluated as "near miss" and "no harm" events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regard to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regard to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID:22395352

  16. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings in existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
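    For readers unfamiliar with the ranking step, the sketch below shows the arithmetic of a basic (crisp) TOPSIS ranking; fuzzy TOPSIS extends it by replacing the matrix entries and weights with fuzzy numbers. The decision matrix, weights and criteria here are invented for illustration and are not the values used in the study.

      import numpy as np

      def topsis(matrix, weights, benefit):
          """Rank alternatives (rows) against criteria (columns) with classic TOPSIS;
          benefit[j] is True when larger values of criterion j are better."""
          m = np.asarray(matrix, dtype=float)
          norm = m / np.sqrt((m ** 2).sum(axis=0))          # vector-normalize each criterion
          v = norm * np.asarray(weights, dtype=float)       # apply criterion weights
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))   # distance to the ideal solution
          d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))    # distance to the anti-ideal solution
          return d_neg / (d_pos + d_neg)                    # closeness coefficient, higher is better

      # Hypothetical error factors scored on four criteria (e.g., severity, frequency,
      # detectability, remediation cost); the last two are treated as cost criteria.
      scores = topsis([[7, 5, 3, 4], [6, 7, 5, 3], [4, 6, 7, 6]],
                      weights=[0.3, 0.3, 0.2, 0.2],
                      benefit=[True, True, False, False])
      print(scores.argsort()[::-1])   # factor indices from highest to lowest priority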

  17. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  18. Drug release and swelling kinetics of directly compressed glipizide sustained-release matrices: establishment of level A IVIVC.

    PubMed

    Sankalia, Jolly M; Sankalia, Mayur G; Mashru, Rajashree C

    2008-07-02

    The purpose of this study was to examine a level A in vitro-in vivo correlation (IVIVC) for glipizide hydrophilic sustained-release matrices, with an acceptable internal predictability, in the presence of a range of formulation/manufacturing changes. The effect of polymeric blends of ethylcellulose, microcrystalline cellulose, hydroxypropylmethylcellulose, xanthan gum, guar gum, Starch 1500, and lactose on in vitro release profiles was studied and fitted to various release kinetics models. Water uptake kinetics with scanning electron microscopy (SEM) was carried out to support the drug release mechanism. An IVIVC was established by comparing the pharmacokinetic parameters of optimized (M-24) and marketed (Glytop-2.5 SR) formulations after single oral dose studies on white albino rabbits. The matrices M-19 (xanthan:MCC PH301 at 70:40) and M-24 (xanthan:HPMC K4M:Starch 1500 at 70:25:15) showed glipizide release within the predetermined constraints at all time points, with Korsmeyer-Peppas and zero-order release mechanisms, respectively. The Kopcha model revealed that xanthan gum is the major excipient responsible for the diffusional release profile, which was further supported by SEM and swelling studies. A significant level A IVIVC with acceptable limits of prediction errors (below 15%) enables the prediction of in vivo performance from the in vitro release profile. It was concluded that proper selection of rate-controlling polymers together with release-rate-modifying excipients determines the overall release profile, duration and mechanism of release from directly compressed matrices.
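    For reference, the two release models named above are commonly written as follows, where $M_t/M_\infty$ is the fraction of drug released at time $t$, $k$ and $k_0$ are release rate constants, and $n$ is the diffusional exponent whose value (relative to geometry-dependent thresholds) indicates Fickian, anomalous or case-II transport:

      \[ \frac{M_t}{M_\infty} = k\,t^{\,n} \quad \text{(Korsmeyer-Peppas)}, \qquad \frac{M_t}{M_\infty} = k_0\,t \quad \text{(zero order)} \]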

  19. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, S; Currier, B; Hodgdon, A

    Purpose: The design of a new Portable Faraday Cup (PFC) used to calibrate proton accelerators was evaluated for energies between 50 and 220 MeV. Monte Carlo simulations performed in Geant4–10.0 were used to evaluate experimental results and reduce the relative detector error for this vacuum-less and low mass system, and invalidate current MCNP releases. Methods: The detector construction consisted of a copper conductor coated with an insulator and grounded with silver. Monte Carlo calculations in Geant4 were used to determine the net charge per proton input (gain) as a function of insulator thickness and beam energy. Kapton was chosen as the insulating material and was designed to capture backscattered electrons. Charge displacement from/into Kapton was assumed to follow a linear proportionality to the origin/terminus depth toward the outer ground layer. Kapton thicknesses ranged from 0 to 200 microns, proton energies were set to match empirical studies ranging from 70 to 250 MeV. Each setup was averaged over 1 million events using the FTFP-BERT 2.0 physics list. Results: With increasing proton energy, the gain of Cu+KA gradually converges to the limit of pure copper, with relative error between 1.52% and 0.72%. The Ag layer created a more diverging behavior, accelerating the flux of negative charge into the device and increasing relative error when compared to pure copper from 1.21% to 1.63%. Conclusion: Gain vs. beam energy signatures were acquired for each device. Further analysis reveals proportionality between insulator thickness and measured gain, albeit an inverse proportionality between beam energy and in-flux of electrons. Increased silver grounding layer thickness also decreases gain, though the relative error expands with beam energy, contrary to the Kapton layer.

  1. A transient dopamine signal encodes subjective value and causally influences demand in an economic context

    PubMed Central

    Schelp, Scott A.; Pultorak, Katherine J.; Rakowski, Dylan R.; Gomez, Devan M.; Krzystyniak, Gregory; Das, Raibatak; Oleson, Erik B.

    2017-01-01

    The mesolimbic dopamine system is strongly implicated in motivational processes. Currently accepted theories suggest that transient mesolimbic dopamine release events energize reward seeking and encode reward value. During the pursuit of reward, critical associations are formed between the reward and cues that predict its availability. Conditioned by these experiences, dopamine neurons begin to fire upon the earliest presentation of a cue, and again at the receipt of reward. The resulting dopamine concentration scales proportionally to the value of the reward. In this study, we used a behavioral economics approach to quantify how transient dopamine release events scale with price and causally alter price sensitivity. We presented sucrose to rats across a range of prices and modeled the resulting demand curves to estimate price sensitivity. Using fast-scan cyclic voltammetry, we determined that the concentration of accumbal dopamine time-locked to cue presentation decreased with price. These data confirm and extend the notion that dopamine release events originating in the ventral tegmental area encode subjective value. Using optogenetics to augment dopamine concentration, we found that enhancing dopamine release at cue made demand more sensitive to price and decreased dopamine concentration at reward delivery. From these observations, we infer that value is decreased because of a negative reward prediction error (i.e., the animal receives less than expected). Conversely, enhancing dopamine at reward made demand less sensitive to price. We attribute this finding to a positive reward prediction error, whereby the animal perceives they received a better value than anticipated. PMID:29109253
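    The abstract does not reproduce the demand model itself. A widely used form in behavioral-economic analyses of this kind is the exponential demand equation of Hursh and Silberberg, in which $Q$ is consumption at unit price $C$, $Q_0$ is consumption at zero price, $k$ is a scaling constant, and $\alpha$ indexes price sensitivity; whether this exact parameterization was used in the study is not stated in the abstract:

      \[ \log Q = \log Q_0 + k\left(e^{-\alpha Q_0 C} - 1\right) \]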

  2. Empirical prediction of net splanchnic release of ketogenic nutrients, acetate, butyrate and β-hydroxybutyrate in ruminants: a meta-analysis.

    PubMed

    Loncke, C; Nozière, P; Bahloul, L; Vernet, J; Lapierre, H; Sauvant, D; Ortigues-Marty, I

    2015-03-01

    For energy feeding systems for ruminants to evolve towards a nutrient-based system, dietary energy supply has to be determined in terms of amount and nature of nutrients. The objective of this study was to establish response equations of the net hepatic flux and net splanchnic release of acetate, butyrate and β-hydroxybutyrate to changes in diet and animal profiles. A meta-analysis was applied on published data compiled from the FLuxes of nutrients across Organs and tissues in Ruminant Animals database, which pools the results from international publications on net splanchnic nutrient fluxes measured in multi-catheterized ruminants. Prediction variables were identified from current knowledge on digestion, hepatic and other tissue metabolism. Subsequently, physiological and other, more integrative, predictors were obtained. Models were established for intakes up to 41 g dry matter per kg BW per day and diets containing up to 70 g concentrate per 100 g dry matter. Models predicted the net hepatic fluxes or net splanchnic release of each nutrient from its net portal appearance and the animal profile. Corrections were applied to account for incomplete hepatic recovery of the blood flow marker, para-aminohippuric acid. Changes in net splanchnic release (mmol/kg BW per hour) could then be predicted by combining the previously published net portal appearance models and the present net hepatic fluxes models. The net splanchnic release of acetate and butyrate were thus predicted from the intake of ruminally fermented organic matter (RfOM) and the nature of RfOM (acetate: residual mean square error (RMSE)=0.18; butyrate: RMSE=0.01). The net splanchnic release of β-hydroxybutyrate was predicted from RfOM intake and the energy balance of the animals (RMSE=0.035), or from the net portal appearance of butyrate and the energy balance of the animals (RMSE=0.050). Models obtained were independent of ruminant species, and presented low interfering factors on the residuals, least square means or individual slopes. The model equations highlighted the importance of considering the physiological state of animals when predicting splanchnic metabolism. This work showed that it is possible to use simple predictors to accurately predict the amount and nature of ketogenic nutrients released towards peripheral tissues in both sheep and cattle at different physiological status. These results provide deeper insight into biological processes and will contribute to the development of improved tools for dietary formulation.

  3. Hazardous chemical incidents in schools--United States, 2002-2007.

    PubMed

    2008-11-07

    Chemicals that can cause adverse health effects are used in many elementary and secondary schools (e.g., in chemistry laboratories, art classrooms, automotive repair areas, printing and other vocational shops, and facility maintenance areas). Every year, unintentional and intentional releases of these chemicals, or related fires or explosions, occur in schools, causing injuries, costly cleanups, and lost school days. The federal Agency for Toxic Substances and Disease Registry (ATSDR) conducts national public health surveillance of chemical incidents through its Hazardous Substances Emergency Events Surveillance (HSEES) system. To identify school-related incidents and elucidate their causes and consequences to highlight the need for intervention, ATSDR conducted an analysis of HSEES data for 2002-2007. During that period, 423 chemical incidents in elementary and secondary schools were reported by 15 participating states. Mercury was the most common chemical released. The analysis found that 62% of reported chemical incidents at elementary and secondary schools resulted from human error (i.e., mistakes in the use or handling of a substance), and 30% of incidents resulted in at least one acute injury. Proper chemical use and management (e.g., keeping an inventory and properly storing, labeling, and disposing of chemicals) is essential to protect school building occupants. Additional education directed at raising awareness of the problem and providing resources to reduce the risk is needed to ensure that schools are safe from unnecessary dangers posed by hazardous chemicals.

  4. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities including industrial and agricultural activities are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model comprise the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence reduces the complexity of the optimization model. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that the mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
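    In schematic form (symbols here are generic, not the authors' exact notation), the optimization step searches over the source location, strength and release period while the trained ANN supplies the unknown lag time, minimizing a least-squares misfit between observed and simulated concentrations:

      \[ \min_{x_s,\,y_s,\,q,\,t_{\mathrm{rel}}} \; \sum_{i=1}^{N_{\mathrm{obs}}} \sum_{t} \left[ c^{\mathrm{obs}}_{i}(t) - c^{\mathrm{sim}}_{i}\!\left(t;\, x_s, y_s, q, t_{\mathrm{rel}}, \hat{\tau}_{\mathrm{ANN}}\right) \right]^2 \]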

  5. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
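    The propagation-of-error statement above follows the standard variance formula for a signed sum of measured balance terms; for a net balance $W = \sum_i a_i X_i$ with $a_i = \pm 1$ for intake and loss terms, the covariance sum is the contribution quoted as being under 10% of the total error:

      \[ \sigma_W^2 = \sum_i a_i^2\,\sigma_{X_i}^2 \;+\; 2\sum_{i<j} a_i a_j\,\operatorname{cov}(X_i, X_j) \]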

  6. Preliminary Airworthiness Evaluation AH-1S Helicopter with OGEE Tip Shape Rotor Blades

    DTIC Science & Technology

    1980-05-01

    HENRY ARNAIZ, Project Engineer. May 1980. Approved for public release; distribution unlimited. ...compressibility effects between flights. Airspeed and altitude were obtained from a boom-mounted pitot-static probe. Corrections for position error...

  7. The Use of Demographic Data in Voting Rights Litigation.

    ERIC Educational Resources Information Center

    O'Hare, William

    1991-01-01

    Issues demographic experts face concerning voting rights litigation are considered, using examples from Garza v County of Los Angeles (California) (1990). Errors and the age of census figures when released mean that court decisions about appropriate population bases and thresholds will continue to vary from one location to another. (SLD)

  8. Gridded Data in the Arctic; Benefits and Perils of Publicly Available Grids

    NASA Astrophysics Data System (ADS)

    Coakley, B.; Forsberg, R.; Gabbert, R.; Beale, J.; Kenyon, S. C.

    2015-12-01

    Our understanding of the Arctic Ocean has been hugely advanced by the release of gridded bathymetry and potential field anomaly grids. The Arctic Gravity Project grid achieves excellent, near-isotropic coverage of the earth north of 64˚N by combining land, satellite, airborne, submarine, surface ship and ice set-out measurements of gravity anomalies. Since the release of the V 2.0 grid in 2008, there has been extensive icebreaker activity across the Amerasia Basin due to mapping of the Arctic coastal nations' Extended Continental Shelves (ECS). While grid resolution has been steadily improving over time, addition of higher resolution and better navigated data highlights some distortions in the grid that may influence interpretation. In addition to the new ECS data sets, gravity anomaly data have been collected from other vessels, notably the Korean icebreaker Araon, the Japanese icebreaker Mirai and the German icebreaker Polarstern. Also, the GRAV-D project of the US National Geodetic Survey has flown airborne surveys over much of Alaska. These data will be included in the new AGP grid, which will result in a much improved product when version 3.0 is released in 2015. To make use of these measurements, it is necessary to compile them into a continuous spatial representation. Compilation is complicated by differences in survey parameters, gravimeter sensitivity and reduction methods. Cross-over errors are the classic means to assess repeatability of track measurements. Prior to the introduction of near-universal GPS positioning, positional uncertainty was evaluated by cross-over analysis. GPS positions can be treated as more or less true, enabling evaluation of differences due to contrasting sensitivity, reference and reduction techniques. For the most part, cross-over errors for tracks of gravity anomaly data collected since 2008 are less than 0.5 mGal, supporting the compilation of these data with only slight adjustments. Given the different platforms used for various Arctic Ocean surveys, registration between bathymetric and gravity anomaly grids cannot be assumed. Inverse methods, which assume co-registration of the data, sometimes produce surprising results when well-constrained gravity grid values are inverted against interpolated bathymetry.

  9. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  10. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define System; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  11. Five-Year Wilkinson Microwave Anisotropy Probe Observations: Beam Maps and Window Functions

    NASA Astrophysics Data System (ADS)

    Hill, R. S.; Weiland, J. L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C. L.; Halpern, M.; Page, L.; Dunkley, J.; Gold, B.; Jarosik, N.; Kogut, A.; Limon, M.; Nolta, M. R.; Spergel, D. N.; Tucker, G. S.; Wright, E. L.

    2009-02-01

    Cosmology and other scientific results from the Wilkinson Microwave Anisotropy Probe (WMAP) mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of ~2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of ~1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of ~2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are ~0.5%. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.

  12. Toward the assimilation of biogeochemical data in the CMEMS BIOMER coupled physical-biogeochemical operational system

    NASA Astrophysics Data System (ADS)

    Lamouroux, Julien; Testut, Charles-Emmanuel; Lellouche, Jean-Michel; Perruche, Coralie; Paul, Julien

    2017-04-01

    The operational production of data-assimilated biogeochemical state of the ocean is one of the challenging core projects of the Copernicus Marine Environment Monitoring Service. In that framework - and with the April 2018 CMEMS V4 release as a target - Mercator Ocean is in charge of improving the realism of its global ¼° BIOMER coupled physical-biogeochemical (NEMO/PISCES) simulations, analyses and re-analyses, and to develop an effective capacity to routinely estimate the biogeochemical state of the ocean, through the implementation of biogeochemical data assimilation. Primary objectives are to enhance the time representation of the seasonal cycle in the real time and reanalysis systems, and to provide a better control of the production in the equatorial regions. The assimilation of BGC data will rely on a simplified version of the SEEK filter, where the error statistics do not evolve with the model dynamics. The associated forecast error covariances are based on the statistics of a collection of 3D ocean state anomalies. The anomalies are computed from a multi-year numerical experiment (free run without assimilation) with respect to a running mean in order to estimate the 7-day scale error on the ocean state at a given period of the year. These forecast error covariances rely thus on a fixed-basis seasonally variable ensemble of anomalies. This methodology, which is currently implemented in the "blue" component of the CMEMS operational forecast system, is now under adaptation to be applied to the biogeochemical part of the operational system. Regarding observations - and as a first step - the system shall rely on the CMEMS GlobColour Global Ocean surface chlorophyll concentration products, delivered in NRT. The objective of this poster is to provide a detailed overview of the implementation of the aforementioned data assimilation methodology in the CMEMS BIOMER forecasting system. Focus shall be put on (1) the assessment of the capabilities of this data assimilation methodology to provide satisfying statistics of the model variability errors (through space-time analysis of dedicated representers of satellite surface Chla observations), (2) the dedicated features of the data assimilation configuration that have been implemented so far (e.g. log-transformation of the analysis state, multivariate Chlorophyll-Nutrient control vector, etc.) and (3) the assessment of the performances of this future operational data assimilation configuration.
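    In schematic form, a fixed-basis forecast error covariance of the kind described here is built from a matrix $\mathbf{A}$ whose $N$ columns are the selected ocean state anomalies for the relevant period of the year; this is the generic ensemble formula rather than Mercator Ocean's exact implementation:

      \[ \mathbf{P}^{f} \approx \frac{1}{N-1}\,\mathbf{A}\mathbf{A}^{\mathsf{T}}, \qquad \mathbf{A} = \left[\,\mathbf{x}_1 - \bar{\mathbf{x}},\;\dots,\;\mathbf{x}_N - \bar{\mathbf{x}}\,\right] \]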

  13. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    NASA Astrophysics Data System (ADS)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    A slow learner whose IQ is between 71 and 89 will have difficulties in solving mathematics problems that often lead to errors. These errors can be analyzed to determine where they occur and of what type they are. This research is a descriptive qualitative study which aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class in solving fraction problems. The subject of this research is one slow learner, a seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semistructured interviews. The collected data were analyzed by Newman’s Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, such as concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by the slow learner.

  14. The Global Precipitation Climatology Project (GPCP) Combined Precipitation Dataset

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Arkin, Philip; Chang, Alfred; Ferraro, Ralph; Gruber, Arnold; Janowiak, John; McNab, Alan; Rudolf, Bruno; Schneider, Udo

    1997-01-01

    The Global Precipitation Climatology Project (GPCP) has released the GPCP Version 1 Combined Precipitation Data Set, a global, monthly precipitation dataset covering the period July 1987 through December 1995. The primary product in the dataset is a merged analysis incorporating precipitation estimates from low-orbit-satellite microwave data, geosynchronous-orbit-satellite infrared data, and rain gauge observations. The dataset also contains the individual input fields, a combination of the microwave and infrared satellite estimates, and error estimates for each field. The data are provided on 2.5 deg x 2.5 deg latitude-longitude global grids. Preliminary analyses show general agreement with prior studies of global precipitation and extend prior studies of El Nino-Southern Oscillation precipitation patterns. At the regional scale there are systematic differences with standard climatologies.

  15. New dimension analyses with error analysis for quaking aspen and black spruce

    NASA Technical Reports Server (NTRS)

    Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

    1987-01-01

    Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.

  16. Addressing the unit of analysis in medical care studies: a systematic review.

    PubMed

    Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G

    2008-06-01

    We assessed the frequency that patients are incorrectly used as the unit of analysis among studies of physicians' patient care behavior in articles published in high impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles were found published in 15 journals, 4 journals published the majority (71 of 114 or 62.3%) of studies, 40 were intervention studies, and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.

  17. First-order approximation error analysis of Risley-prism-based beam directing system.

    PubMed

    Zhao, Yanyan; Yuan, Yan

    2014-12-01

    To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Besides, pointing error analysis of the Risley-prism system has provided results for the case when the component errors, prism orientation errors, and assembly errors are certain. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration.
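    Generically, the first-order approximation referred to above is a truncated Taylor expansion of the outgoing beam direction in the small error sources (wedge-angle, mounting and bearing-assembly errors); the authors' exact expressions are not reproduced in the abstract, but the form

      \[ \Delta\boldsymbol{\theta} \;\approx\; \sum_{k} \left.\frac{\partial \boldsymbol{\theta}}{\partial \epsilon_k}\right|_{\boldsymbol{\epsilon}=0} \Delta\epsilon_k \]

    also makes plausible the reported result that the combined error of the two prisms is the sum of the errors caused by each prism separately.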

  18. Influence of Tooth Spacing Error on Gears With and Without Profile Modifications

    NASA Technical Reports Server (NTRS)

    Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.

    2000-01-01

    A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modification considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.

  19. MULTINEST: an efficient and robust Bayesian inference tool for cosmology and particle physics

    NASA Astrophysics Data System (ADS)

    Feroz, F.; Hobson, M. P.; Bridges, M.

    2009-10-01

    We present further development and the first public release of our multimodal nested sampling algorithm, called MULTINEST. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson, which itself significantly outperformed existing Markov chain Monte Carlo techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MULTINEST algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla Λ cold dark matter model to include spatial curvature and a varying equation of state for dark energy. The MULTINEST software, which is fully parallelized using MPI and includes an interface to COSMOMC, is available at http://www.mrao.cam.ac.uk/software/multinest/. It will also be released as part of the SUPERBAYES package, for the analysis of supersymmetric theories of particle physics, at http://www.superbayes.org.
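    For context, nested sampling recasts the Bayesian evidence as a one-dimensional integral over the prior volume $X$ enclosed by an iso-likelihood contour, which is then approximated as a weighted sum over the sequence of discarded live points (standard nested-sampling notation, not specific to MULTINEST):

      \[ Z = \int L(\boldsymbol{\theta})\,\pi(\boldsymbol{\theta})\,d\boldsymbol{\theta} = \int_0^1 L(X)\,dX \;\approx\; \sum_i L_i\,w_i, \qquad w_i \approx \tfrac{1}{2}\left(X_{i-1} - X_{i+1}\right) \]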

  20. Added-value joint source modelling of seismic and geodetic data

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only on a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have been fixed already. These required basic model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined data integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of the seismic data to moment release at larger depths. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited amount of a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on the empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic and joint source models have already been reported - mostly without any model parameter uncertainty estimates. We here show that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g., even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
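    The rigorous data weighting described here corresponds, in generic form, to a misfit weighted by the full data error covariance (a schematic rendering, not the authors' exact formulation), with $\mathbf{C}_d$ block-diagonal over the geodetic and seismic data sets:

      \[ \chi^2(\mathbf{m}) = \left[\mathbf{d} - g(\mathbf{m})\right]^{\mathsf{T}} \mathbf{C}_d^{-1} \left[\mathbf{d} - g(\mathbf{m})\right] \]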

  1. A Methodology for Validating Safety Heuristics Using Clinical Simulations: Identifying and Preventing Possible Technology-Induced Errors Related to Using Health Information Systems

    PubMed Central

    Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher

    2013-01-01

    Internationally, health information systems (HIS) safety has emerged as a significant concern for governments. Recently, research has emerged that has documented the ability of HIS to be implicated in the harm and death of patients. Researchers have attempted to develop methods that can be used to prevent or reduce technology-induced errors. Some researchers are developing methods that can be employed prior to systems release. These methods include the development of safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of a HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validate these heuristics using clinical simulations. PMID:23606902

  2. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
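    As an illustration of the estimator behind such an analysis, the following Python sketch computes first-order Sobol indices with a Saltelli-style Monte Carlo scheme. It is only a toy: the two-input linear model and uniform input distributions stand in for the Utah Energy Balance model and its forcing error scenarios, which are not reproduced here.

      import numpy as np

      def first_order_sobol(model, n, dim, seed=0):
          """Monte Carlo estimate of first-order Sobol indices using the estimator
          S_i = mean(f(B) * (f(A_B^i) - f(A))) / Var(Y)."""
          rng = np.random.default_rng(seed)
          A = rng.uniform(-1.0, 1.0, size=(n, dim))      # two independent input samples
          B = rng.uniform(-1.0, 1.0, size=(n, dim))
          fA, fB = model(A), model(B)
          var_y = np.var(np.concatenate([fA, fB]), ddof=1)
          S = np.empty(dim)
          for i in range(dim):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                        # swap column i in from the second sample
              S[i] = np.mean(fB * (model(ABi) - fA)) / var_y
          return S

      # Toy stand-in for a snow model: one dominant "bias-like" input, one weak "noise-like" input.
      toy = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1]
      print(first_order_sobol(toy, n=20000, dim=2))      # roughly [0.97, 0.03]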

  3. Optimization of Melatonin Dissolution from Extended Release Matrices Using Artificial Neural Networking.

    PubMed

    Martarelli, D; Casettari, L; Shalaby, K S; Soliman, M E; Cespi, M; Bonacucina, G; Fagioli, L; Perinelli, D R; Lam, J K W; Palmieri, G F

    2016-01-01

    The efficacy of melatonin in treating sleep disorders has been demonstrated in numerous studies. Because of its short half-life, melatonin needs to be formulated in extended-release tablets to prevent the fast drop of its plasma concentration. However, an attempt to mimic melatonin's natural plasma levels during night time is challenging. In this work, Artificial Neural Networks (ANNs) were used to optimize melatonin release from hydrophilic polymer matrices. Twenty-seven different tablet formulations with different amounts of hydroxypropyl methylcellulose, xanthan gum and Carbopol®974P NF were prepared and subjected to drug release studies. Using dissolution test data as inputs for an ANN designed in the Visual Basic programming language, the ideal number of neurons in the hidden layer was determined by trial and error to guarantee the best performance of the constructed ANN. The ANN with nine neurons in the hidden layer gave the best results. The ANN was examined to check its predictive ability and then used to determine the formulation that best mimics the release of melatonin from a marketed brand, using the similarity fit factor. This work shows the possibility of using ANNs to optimize the composition of prolonged-release melatonin tablets having the desired dissolution profile.
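    A commonly used similarity metric in dissolution testing is the f2 factor; whether the authors used exactly this definition is not stated in the abstract. A short sketch of its calculation follows, with invented cumulative-release profiles standing in for the marketed and optimized formulations.

      import numpy as np

      def f2_similarity(reference, test):
          """Dissolution similarity factor f2 for two profiles (% released at matched
          time points); f2 >= 50 is the usual criterion for declaring similarity."""
          r = np.asarray(reference, dtype=float)
          t = np.asarray(test, dtype=float)
          mse = np.mean((r - t) ** 2)                   # mean squared difference over time points
          return 50.0 * np.log10(100.0 / np.sqrt(1.0 + mse))

      # Hypothetical cumulative release (%) at the same sampling times for two products.
      brand = [12, 25, 41, 58, 73, 85, 92]
      candidate = [10, 23, 39, 60, 75, 84, 93]
      print(round(f2_similarity(brand, candidate), 1))  # about 84.6, i.e. similar profiles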

  4. Grammar. Nazis. Does the Grammatical "Release the Conceptual"?

    ERIC Educational Resources Information Center

    Carroll, James Edward

    2016-01-01

    Jim Carroll noticed basic literacy errors in his Year 13s' writing, but on closer examination decided that these were not best addressed purely as literacy issues. Through an intervention based on clauses, Carroll managed to enable his students to write better, but he did this by teasing out principles of historical discourse that underpin…

  5. Author Correction to: Pooled Analyses of Phase III Studies of ADS-5102 (Amantadine) Extended-Release Capsules for Dyskinesia in Parkinson's Disease.

    PubMed

    Elmer, Lawrence W; Juncos, Jorge L; Singer, Carlos; Truong, Daniel D; Criswell, Susan R; Parashos, Sotirios; Felt, Larissa; Johnson, Reed; Patni, Rajiv

    2018-04-01

    An Online First version of this article was made available online at http://link.springer.com/journal/40263/onlineFirst/page/1 on 12 March 2018. An error was subsequently identified in the article, and the following correction should be noted.

  6. RELEASE NOTES FOR MODELS-3 VERSION 4.1 PATCH: SMOKE TOOL AND FILE CONVERTER

    EPA Science Inventory

    This software patch to the Models-3 system corrects minor errors in the Models-3 framework, provides substantial improvements in the ASCII to I/O API format conversion of the File Converter utility, and new functionalities for the SMOKE Tool. Version 4.1 of the Models-3 system...

  7. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    PubMed

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanation contributions of the most important factors in factor analysis and depreciate the significance of discriminant function and discrimination abilities of individual variables in discrimination analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is very critical to research progress in theory development and cumulative knowledge in the ergonomics field.
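    The attenuation effect on correlations mentioned above has a standard closed form: if $r_{xx}$ and $r_{yy}$ denote the reliabilities of the two measures, the observed correlation relates to the true correlation as

      \[ r_{\mathrm{obs}} = r_{\mathrm{true}}\sqrt{r_{xx}\,r_{yy}}\,, \]

    so, for example, two scales each with reliability 0.7 shrink a true correlation of 0.50 to an observed value of about 0.35.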

  8. M4AST - A Tool for Asteroid Modelling

    NASA Astrophysics Data System (ADS)

    Birlan, Mirel; Popescu, Marcel; Irimiea, Lucian; Binzel, Richard

    2016-10-01

    M4AST (Modelling for Asteroids) is an online tool devoted to the analysis and interpretation of reflection spectra of asteroids in the visible and near-infrared spectral intervals. It consists of a spectral database of individual objects and a set of analysis routines that address scientific aspects such as taxonomy, curve matching with laboratory spectra, space-weathering models, and mineralogical diagnosis. Spectral data were obtained using ground-based facilities; part of these data were compiled from the literature [1]. The database is composed of permanent and temporary files. Each permanent file contains a header and two or three columns (wavelength, spectral reflectance, and the error on spectral reflectance). Temporary files can be uploaded anonymously and are purged to protect the proprietary rights over the submitted data. The computing routines are organized around several scientific objectives: visualizing spectra, computing the asteroid taxonomic class, comparing an asteroid spectrum with similar spectra of meteorites, and computing mineralogical parameters. A facility for using Virtual Observatory protocols was also developed. A new version of the service was released in June 2016. This new release of M4AST contains a database and facilities to model more than 6,000 asteroid spectra. A new web interface was designed, which allows new functionalities in a user-friendly environment. A bridge system for accessing and exploiting the SMASS-MIT database (http://smass.mit.edu) allows these data to be treated and analyzed within the M4AST environment. Reference: [1] M. Popescu, M. Birlan, and D.A. Nedelcu, "Modeling of asteroids: M4AST," Astronomy & Astrophysics 544, A130, 2012.

  9. Orbital Injection of the SEDSAT Satellite: Tethered Systems Dynamics and Flight Data Analysis

    NASA Technical Reports Server (NTRS)

    Lorenzini, Enrico C.; Gullahorn, Gordon E.; Cosmo, Mario L.; Ruiz, Manuel; Pelaez, Jesus

    1996-01-01

    This report deals with the following topics which are all related to the orbital injection of the SEDSAT satellite: Dynamics and Stability of Tether Oscillations after the First Cut. The dynamics of the tether after the first cut (i.e., without the Shuttle attached to it) is investigated. The tether oscillations with the free end are analyzed in order to assess the stability of the rectilinear configuration in between the two tether cuts; analysis of Unstable Modes. The unstable modes that appear for high libration angles are further investigated in order to determine their occurrences and the possible transition from bound librations to rotations; Orbital Release Strategies for SEDSAT. A parametric analysis of the orbital decay rate of the SEDSAT satellite after the two tether cuts has been carried out as a function of the following free parameters: libration amplitude at the end of deployment, deviation angle from LV at the first cut, and orbital anomaly at the second cut. The values of these parameters that provide a minimum orbital decay rate of the satellite (after the two cuts) have been computed; and Dynamics and Control of SEDSAT. The deployment control law has been modified to cope with the new ejection velocity of the satellite from the Shuttle cargo bay. New reference profiles have been derived as well as new control parameters. Timing errors at the satellite release as a function of the variations of the initial conditions and the tension model parameters have been estimated for the modified control law.

  10. Evaluation of Water Year 2011 Glen Canyon Dam Flow Release Scenarios on Downstream Sand Storage along the Colorado River in Arizona

    USGS Publications Warehouse

    Wright, Scott A.; Grams, Paul E.

    2010-01-01

    This report describes numerical modeling simulations of sand transport and sand budgets for reaches of the Colorado River below Glen Canyon Dam. Two hypothetical Water Year 2011 annual release volumes were each evaluated with six hypothetical operational scenarios. The six operational scenarios include the current operation, scenarios with modifications to the monthly distribution of releases, and scenarios with modifications to daily flow fluctuations. Uncertainties in model predictions were evaluated by conducting simulations with error estimates for tributary inputs and mainstem transport rates. The modeling results illustrate the dependence of sand transport rates and sand budgets on the annual release volumes as well as the within year operating rules. The six operational scenarios were ranked with respect to the predicted annual sand budgets for Marble Canyon and eastern Grand Canyon reaches. While the actual WY 2011 annual release volume and levels of tributary inputs are unknown, the hypothetical conditions simulated and reported herein provide reasonable comparisons between the operational scenarios, in a relative sense, that may be used by decision makers within the Glen Canyon Dam Adaptive Management Program.

  11. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors than the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of their degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series from a filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
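
    A toy Python sketch of Tikhonov regularization with a crude L-curve corner search, illustrating the idea described above; the actual GRACE processing uses Lanczos bidiagonalization on a far larger system, so the problem size, noise level, and corner heuristic here are illustrative assumptions only.

        import numpy as np

        # Toy ill-posed problem standing in for the gravity-field inversion: A has
        # rapidly decaying singular values and the data y are noisy.
        rng = np.random.default_rng(0)
        n = 50
        U, _ = np.linalg.qr(rng.normal(size=(n, n)))
        V, _ = np.linalg.qr(rng.normal(size=(n, n)))
        A = U @ np.diag(np.logspace(0, -8, n)) @ V.T
        x_true = rng.normal(size=n)
        y = A @ x_true + 1e-4 * rng.normal(size=n)

        lams = np.logspace(-10, 0, 60)
        res_norm, sol_norm = [], []
        for lam in lams:
            # Tikhonov solution: minimize ||Ax - y||^2 + lam^2 ||x||^2
            x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ y)
            res_norm.append(np.linalg.norm(A @ x - y))
            sol_norm.append(np.linalg.norm(x))

        # Crude L-curve "corner": the point closest to the lower-left corner of the
        # normalized log-log curve (the paper uses a Lanczos-based method instead).
        r = np.log(res_norm)
        s = np.log(sol_norm)
        r = (r - r.min()) / (r.max() - r.min())
        s = (s - s.min()) / (s.max() - s.min())
        print("lambda near the L-curve corner ~", lams[np.argmin(r**2 + s**2)])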

  12. Multiple response optimization of processing and formulation parameters of Eudragit RL/RS-based matrix tablets for sustained delivery of diclofenac.

    PubMed

    Elzayat, Ehab M; Abdel-Rahman, Ali A; Ahmed, Sayed M; Alanazi, Fars K; Habib, Walid A; Sakr, Adel

    2017-11-01

    Multiple response optimization is an efficient technique to develop sustained release formulation while decreasing the number of experiments based on trial and error approach. Diclofenac matrix tablets were optimized to achieve a release profile conforming to USP monograph, matching Voltaren ® SR and withstand formulation variables. The percent of drug released at predetermined multiple time points were the response variables in the design. Statistical models were obtained with relative contour diagrams being overlaid to predict process and formulation parameters expected to produce the target release profile. Tablets were prepared by wet granulation using mixture of equivalent quantities of Eudragit RL/RS at overall polymer concentration of 10-30%w/w and compressed at 5-15KN. Drug release from the optimized formulation E4 (15%w/w, 15KN) was similar to Voltaren, conformed to USP monograph and found to be stable. Substituting lactose with mannitol, reversing the ratio between lactose and microcrystalline cellulose or increasing drug load showed no significant difference in drug release. Using dextromethorphan hydrobromide as a model soluble drug showed burst release due to higher solubility and formation of micro cavities. A numerical optimization technique was employed to develop a stable consistent promising formulation for sustained delivery of diclofenac.

  13. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  14. Planck 2015 results. VI. LFI mapmaking

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chary, R.-R.; Christensen, P. R.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Novikov, D.; Novikov, I.; Paci, F.; Pagano, L.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Pierpaoli, E.; Pietrobon, D.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vassallo, T.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    This paper describes the mapmaking procedure applied to Planck Low Frequency Instrument (LFI) data. The mapmaking step takes as input the calibrated timelines and pointing information. The main products are sky maps of I, Q, and U Stokes components. For the first time, we present polarization maps at LFI frequencies. The mapmaking algorithm is based on a destriping technique, which is enhanced with a noise prior. The Galactic region is masked to reduce errors arising from bandpass mismatch and high signal gradients. We apply horn-uniform radiometer weights to reduce the effects of beam-shape mismatch. The algorithm is the same as used for the 2013 release, apart from small changes in parameter settings. We validate the procedure through simulations. Special emphasis is put on the control of systematics, which is particularly important for accurate polarization analysis. We also produce low-resolution versions of the maps and corresponding noise covariance matrices. These serve as input in later analysis steps and parameter estimation. The noise covariance matrices are validated through noise Monte Carlo simulations. The residual noise in the map products is characterized through analysis of half-ring maps, noise covariance matrices, and simulations.

  15. CyREST: Turbocharging Cytoscape Access for External Tools via a RESTful API.

    PubMed

    Ono, Keiichiro; Muetze, Tanja; Kolishovski, Georgi; Shannon, Paul; Demchak, Barry

    2015-01-01

    As bioinformatic workflows become increasingly complex and involve multiple specialized tools, so does the difficulty of reliably reproducing those workflows. Cytoscape is a critical workflow component for executing network visualization, analysis, and publishing tasks, but it can be operated only manually via a point-and-click user interface. Consequently, Cytoscape-oriented tasks are laborious and often error prone, especially with multistep protocols involving many networks. In this paper, we present the new cyREST Cytoscape app and accompanying harmonization libraries. Together, they improve workflow reproducibility and researcher productivity by enabling popular languages (e.g., Python and R, JavaScript, and C#) and tools (e.g., IPython/Jupyter Notebook and RStudio) to directly define and query networks, and perform network analysis, layouts and renderings. We describe cyREST's API and overall construction, and present Python- and R-based examples that illustrate how Cytoscape can be integrated into large scale data analysis pipelines. cyREST is available in the Cytoscape app store (http://apps.cytoscape.org) where it has been downloaded over 1900 times since its release in late 2014.
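
    A minimal Python sketch of querying cyREST from a script, in the spirit of the pipelines described above; the base URL and endpoint paths follow common cyREST defaults (localhost, port 1234, /v1) but should be treated as assumptions and checked against the cyREST documentation for the installed version.

        import requests

        # cyREST's usual default base URL; adjust if Cytoscape is configured differently.
        BASE = "http://localhost:1234/v1"

        # Confirm that Cytoscape and cyREST are reachable (returns version information).
        print(requests.get(BASE).json())

        # List identifiers (SUIDs) of the networks loaded in the current Cytoscape session.
        print(requests.get(BASE + "/networks").json())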

  16. Test of the Equivalence Principle in an Einstein Elevator

    NASA Technical Reports Server (NTRS)

    Shapiro, Irwin I.; Glashow, S.; Lorenzini, E. C.; Cosmo, M. L.; Cheimets, P. N.; Finkelstein, N.; Schneps, M.

    2005-01-01

    This Annual Report illustrates the work carried out during the last grant-year activity on the Test of the Equivalence Principle in an Einstein Elevator. The activity focused on the following main topics: (1) analysis and conceptual design of a detector configuration suitable for the flight tests; (2) development of techniques for extracting a small signal from data strings with colored and white noise; (3) design of the mechanism that spins and releases the instrument package inside the cryostat; and (4) experimental activity carried out by our non-US partners (a summary is shown in this report). The analysis and conceptual design of the flight detector (point 1) focused on studying the response of the differential accelerometer during free fall, in the presence of errors and precession dynamics, for various detector configurations. The goal was to devise a detector configuration in which an Equivalence Principle violation (EPV) signal at the sensitivity threshold level can be successfully measured and resolved out of a much stronger dynamics-related noise and gravity gradient. A detailed analysis and comprehensive simulation effort led us to a detector design that can accomplish that goal successfully.

  17. CyREST: Turbocharging Cytoscape Access for External Tools via a RESTful API

    PubMed Central

    Ono, Keiichiro; Muetze, Tanja; Kolishovski, Georgi; Shannon, Paul; Demchak, Barry

    2015-01-01

    As bioinformatic workflows become increasingly complex and involve multiple specialized tools, so does the difficulty of reliably reproducing those workflows. Cytoscape is a critical workflow component for executing network visualization, analysis, and publishing tasks, but it can be operated only manually via a point-and-click user interface. Consequently, Cytoscape-oriented tasks are laborious and often error prone, especially with multistep protocols involving many networks. In this paper, we present the new cyREST Cytoscape app and accompanying harmonization libraries. Together, they improve workflow reproducibility and researcher productivity by enabling popular languages (e.g., Python and R, JavaScript, and C#) and tools (e.g., IPython/Jupyter Notebook and RStudio) to directly define and query networks, and perform network analysis, layouts and renderings. We describe cyREST’s API and overall construction, and present Python- and R-based examples that illustrate how Cytoscape can be integrated into large scale data analysis pipelines. cyREST is available in the Cytoscape app store (http://apps.cytoscape.org) where it has been downloaded over 1900 times since its release in late 2014. PMID:26672762

  18. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A. F.; Jacobs, C. S.

    2011-01-01

    The standard VLBI analysis models measurement noise as purely thermal errors modeled according to uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.

  19. Quantitative Analysis Tools and Digital Phantoms for Deformable Image Registration Quality Assurance.

    PubMed

    Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W

    2015-08-01

    This article proposes quantitative analysis tools and digital phantoms to quantify intrinsic errors of deformable image registration (DIR) systems and to establish quality assurance (QA) procedures for the clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set R and a test image set T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set R', T, and DVFref constitutes a realistic truth set and can therefore be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis, two methods were used to calculate and delineate the differences between DVFs: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice, and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA procedure was evaluated using the head and neck case. © The Author(s) 2014.
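
    A small Python sketch of the two analysis ideas described above, a per-voxel error-magnitude map and a cumulative error distribution inside a structure, using synthetic DVFs; the array shapes, structure mask, and all numbers are illustrative assumptions.

        import numpy as np

        # Synthetic reference and test deformation vector fields on a small grid,
        # shape (z, y, x, 3): one 3-D displacement vector per voxel (mm).
        rng = np.random.default_rng(1)
        dvf_ref = rng.normal(scale=2.0, size=(20, 64, 64, 3))
        dvf_test = dvf_ref + rng.normal(scale=0.5, size=dvf_ref.shape)

        # Local analysis: per-voxel magnitude of the DVF difference, which could be
        # colour-mapped slice by slice as in the local error tool.
        err_mag = np.linalg.norm(dvf_test - dvf_ref, axis=-1)
        print("slice 10 max error (mm):", err_mag[10].max().round(2))

        # Global analysis: cumulative distribution of errors inside a structure mask
        # (here a crude spherical stand-in for an anatomical contour).
        z, y, x = np.indices(err_mag.shape)
        mask = (z - 10) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 15 ** 2
        errors = np.sort(err_mag[mask])
        cum_prob = np.arange(1, errors.size + 1) / errors.size
        print("95th percentile error in structure (mm):",
              np.interp(0.95, cum_prob, errors).round(2))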

  20. Risk assessment and experimental design in the development of a prolonged release drug delivery system with paliperidone.

    PubMed

    Iurian, Sonia; Turdean, Luana; Tomuta, Ioan

    2017-01-01

    This study focuses on the development of a drug product using a risk assessment-based approach within the quality-by-design paradigm. A prolonged-release system was proposed for paliperidone (Pal) delivery, containing Kollidon® SR as an insoluble matrix agent and hydroxypropyl cellulose, hydroxypropyl methylcellulose (HPMC), or sodium carboxymethyl cellulose as a hydrophilic polymer. The experimental part was preceded by the identification of potential sources of variability through Ishikawa diagrams, and failure mode and effects analysis was used to identify the critical process parameters that were further optimized by design of experiments. A D-optimal design was used to investigate the effects of the Kollidon SR ratio (X1), the type of hydrophilic polymer (X2), and the percentage of hydrophilic polymer (X3) on the percentages of dissolved Pal over 24 h (Y1-Y9). Effects expressed as regression coefficients and response surfaces were generated, along with a design space for the preparation of a target formulation in an experimental region with low error risk. The optimal formulation contained 27.62% Kollidon SR and 8.73% HPMC and achieved prolonged release of Pal, with a low burst effect, at ratios very close to those predicted by the model. Thus, the parameters with the highest impact on final product quality were studied, and safe ranges were established for their variation. Finally, a risk mitigation and control strategy was proposed to assure the quality of the system through constant process monitoring.
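
    A minimal Python sketch of the response-surface idea behind the design-of-experiments step: fit a quadratic model for one response by least squares and predict it over the coded factor space; the factor levels and response values are invented for illustration and are not the study's data.

        import numpy as np

        # Coded factor settings (x1 = Kollidon SR ratio, x2 = hydrophilic polymer %)
        # and an illustrative response y = % paliperidone dissolved at 12 h.
        x1 = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], dtype=float)
        x2 = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1], dtype=float)
        y = np.array([78, 70, 63, 72, 65, 58, 66, 60, 52], dtype=float)

        # Quadratic model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
        X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("regression coefficients:", coef.round(2))

        # Predict the response on a grid to sketch the design-space idea.
        g1, g2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
        G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                             (g1 * g2).ravel(), (g1**2).ravel(), (g2**2).ravel()])
        pred = (G @ coef).reshape(g1.shape)
        print("predicted % dissolved over the grid:\n", pred.round(1))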

  1. Evidence for a global seismic-moment release sequence

    USGS Publications Warehouse

    Bufe, C.G.; Perkins, D.M.

    2005-01-01

    Temporal clustering of the larger earthquakes (foreshock-mainshock-aftershock) followed by relative quiescence (stress shadow) is characteristic of seismic cycles along plate boundaries. A global seismic-moment release history, based on a little more than 100 years of instrumental earthquake data in an extended version of the catalog of Pacheco and Sykes (1992), illustrates similar behavior for Earth as a whole. Although the largest earthquakes have occurred in the circum-Pacific region, an analysis of moment release in the hemisphere antipodal to the Pacific plate shows a very similar pattern. Monte Carlo simulations confirm that the global temporal clustering of great shallow earthquakes during 1952-1964 at M ≥ 9.0 is highly significant (4% random probability), as is the clustering of the events of M ≥ 8.6 (0.2% random probability) during 1950-1965. We have extended the Pacheco and Sykes (1992) catalog from 1989 through 2001 using Harvard moment centroid data. Immediately after the 1950-1965 cluster, significant quiescence at and above M 8.4 begins and continues until 2001 (0.5% random probability). In alternative catalogs derived by correcting for possible random errors in magnitude estimates in the extended Pacheco-Sykes catalog, the clustering of M ≥ 9 persists at a significant level. These observations indicate that, for great earthquakes, Earth behaves as a coherent seismotectonic system. A very-large-scale mechanism for global earthquake triggering and/or stress transfer is implied. There are several candidates, but so far only viscoelastic relaxation has been modeled on a global scale.
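
    A toy Python version of the Monte Carlo test described above: scatter a fixed number of great-earthquake times uniformly over the catalogue span and ask how often a cluster at least as tight as the observed one occurs by chance; the counts, window length, and span are illustrative assumptions, not the catalogue values.

        import numpy as np

        rng = np.random.default_rng(2)

        def max_events_in_window(times, window):
            """Largest number of events falling inside any window of the given length."""
            t = np.sort(times)
            return max(np.searchsorted(t, ti + window, side="right") - i
                       for i, ti in enumerate(t))

        span_years = 100.0      # catalogue length (illustrative)
        n_events = 7            # number of great shocks in the catalogue (illustrative)
        observed_cluster = 5    # events observed inside one short interval (illustrative)
        window = 15.0           # length of that interval in years (illustrative)

        trials = 20000
        hits = 0
        for _ in range(trials):
            times = rng.uniform(0.0, span_years, n_events)
            if max_events_in_window(times, window) >= observed_cluster:
                hits += 1
        print("random probability of an equally tight cluster ~", hits / trials)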

  2. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of the human body surface, and the measured data form the basis for analysis and study of the human body, for establishing and modifying garment sizes, and for developing and operating online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analyzed by examining error frequency and by applying the analysis-of-variance method from mathematical statistics. The accuracy of the measured data and the difficulty of measuring particular parts of the human body are determined, the causes of data errors are studied further, and the key points for minimizing errors are summarized. This paper analyses the measured data based on error frequency and, in a way, provides reference elements to promote the development of the garment industry.
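
    A minimal Python sketch of the kind of analysis the abstract describes, an error-frequency table plus a one-way analysis of variance, using scipy.stats.f_oneway on made-up measurement errors from three measurers.

        import numpy as np
        from scipy import stats

        # Illustrative measurement errors (cm) for the same body dimension recorded
        # by three different measurers; the data are made up for demonstration.
        rng = np.random.default_rng(3)
        measurer_a = rng.normal(0.0, 0.4, 30)
        measurer_b = rng.normal(0.1, 0.4, 30)
        measurer_c = rng.normal(0.5, 0.4, 30)

        f_stat, p_value = stats.f_oneway(measurer_a, measurer_b, measurer_c)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

        # A simple error-frequency summary of the kind the paper builds on.
        errors = np.concatenate([measurer_a, measurer_b, measurer_c])
        bins = [-np.inf, -0.5, -0.2, 0.2, 0.5, np.inf]
        counts, _ = np.histogram(errors, bins=bins)
        print("error-frequency table:", dict(zip(["<<", "<", "ok", ">", ">>"], counts)))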

  3. Phasic dopamine signals: from subjective reward value to formal economic utility

    PubMed Central

    Schultz, Wolfram; Carelli, Regina M; Wightman, R Mark

    2015-01-01

    Although rewards are physical stimuli and objects, their value for survival and reproduction is subjective. The phasic, neurophysiological and voltammetric dopamine reward prediction error response signals subjective reward value. The signal incorporates crucial reward aspects such as amount, probability, type, risk, delay and effort. Differences in dopamine release dynamics with temporal delay and effort in rodents may derive from methodological issues and require further study. Recent designs using concepts and behavioral tools from experimental economics allow the subjective value signal to be formally characterized as economic utility and thus a neuronal value function to be established. With these properties, the dopamine response constitutes a utility prediction error signal. PMID:26719853

  4. Error Analysis in Mathematics. Technical Report #1012

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  5. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that derives the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we return to the propagation of the primitive input errors through the stereo system and trace the whole analysis chain from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
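
    A hedged Python sketch of the two ingredients described above: midpoint triangulation of a 3D point from two rays and first-order propagation of input errors to the point location through a numerical Jacobian; the camera geometry, parameterization, and error variances are toy assumptions, not the paper's model.

        import numpy as np

        def midpoint_triangulate(c1, d1, c2, d2):
            """Midpoint of the common perpendicular of two rays (centre c, direction d)."""
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            # Ray parameters minimizing the distance between the two rays.
            A = np.array([[d1 @ d1, -d1 @ d2], [d1 @ d2, -d2 @ d2]])
            b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
            s, t = np.linalg.solve(A, b)
            return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

        def point_from_params(p):
            """Map a small perturbation vector (pixel/pose errors) to the 3D point."""
            du1, dv1, du2, dv2, dbase = p
            c1 = np.array([0.0, 0.0, 0.0])
            c2 = np.array([0.5 + dbase, 0.0, 0.0])        # 0.5 m baseline (toy value)
            d1 = np.array([0.1 + du1, 0.05 + dv1, 1.0])   # toy viewing directions
            d2 = np.array([-0.1 + du2, 0.05 + dv2, 1.0])
            return midpoint_triangulate(c1, d1, c2, d2)

        p0 = np.zeros(5)
        X0 = point_from_params(p0)

        # Numerical Jacobian and first-order covariance propagation: cov_X = J S J^T
        eps = 1e-6
        J = np.column_stack([(point_from_params(p0 + eps * e) - X0) / eps
                             for e in np.eye(5)])
        S = np.diag([1e-6, 1e-6, 1e-6, 1e-6, 1e-4])        # assumed input variances
        cov_X = J @ S @ J.T
        print("point:", X0.round(3), "\nlocation covariance:\n", cov_X.round(6))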

  6. Functional Multijoint Position Reproduction Acuity in Overhead-Throwing Athletes

    PubMed Central

    Tripp, Brady L; Uhl, Timothy L; Mattacola, Carl G; Srinivasan, Cidambi; Shapiro, Robert

    2006-01-01

    Context: Baseball players rely on the sensorimotor system to uphold the balance between upper extremity stability and mobility while maintaining athletic performance. However, few researchers have studied functional multijoint measures of sensorimotor acuity in overhead-throwing athletes. Objective: To compare sensorimotor acuity between 2 high-demand functional positions and among planes of motion within individual joints and to describe a novel method of measuring sensorimotor function. Design: Single-session, repeated-measures design. Setting: University musculoskeletal research laboratory. Patients or Other Participants: Twenty-one National Collegiate Athletic Association Division I baseball players (age = 20.8 ± 1.5 years, height = 181.3 ± 5.1 cm, mass = 87.8 ± 9.1 kg) with no history of upper extremity injury or central nervous system disorder. Main Outcome Measure(s): We measured active multijoint position reproduction acuity in multiple planes using an electromagnetic tracking device. Subjects reproduced 2 positions: arm cock and ball release. We calculated absolute and variable error for individual motions at the scapulothoracic, glenohumeral, elbow, and wrist joints and calculated overall joint acuity with 3-dimensional variable error. Results: Acuity was significantly better in the arm-cock position compared with ball release at the scapulothoracic and glenohumeral joints. We observed significant differences among planes of motion within the scapulothoracic and glenohumeral joints at ball release. Scapulothoracic internal rotation and glenohumeral horizontal abduction and rotation displayed less acuity than other motions. Conclusions: We established the reliability of a functional measure of upper extremity sensorimotor system acuity in baseball players. Using this technique, we observed differences in acuity between 2 test positions and among planes of motion within the glenohumeral and scapulothoracic joints. Clinicians may consider these differences when designing and implementing sensorimotor system training. Our error scores are similar in magnitude to those reported using single-joint and single-plane measures. However, 3-dimensional, multijoint measures allow practical, unconstrained test positions and offer additional insight into the upper extremity as a functional unit. PMID:16791298

  7. Analysis of LNG peakshaving-facility release-prevention systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelto, P.J.; Baker, E.G.; Powers, T.B.

    1982-05-01

    The purpose of this study is to provide an analysis of release prevention systems for a reference LNG peakshaving facility. An overview assessment of the reference peakshaving facility, which preceded this effort, identified 14 release scenarios that are typical of the potential hazards involved in the operation of LNG peakshaving facilities. These scenarios formed the basis for this more detailed study. Failure modes and effects analysis and fault tree analysis were used to estimate the expected frequency of each release scenario for the reference peakshaving facility. In addition, the effectiveness of release prevention, release detection, and release control systems was evaluated.
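
    A toy Python illustration of how basic-event frequencies combine through fault-tree OR and AND gates into an expected release-scenario frequency; the event names and numbers are invented and do not come from the reference-facility study.

        # Toy fault-tree arithmetic for a release scenario (all numbers invented).
        # Initiating events are annual frequencies; mitigations are per-demand probabilities.
        pump_seal_leak = 1e-2          # per year
        valve_rupture = 5e-4           # per year
        operator_error = 3e-3          # per year

        relief_fails = 1e-2            # probability per demand
        detection_fails = 5e-2         # probability per demand

        # OR gate over initiating events: rare-event approximation (sum of frequencies).
        initiating_frequency = pump_seal_leak + valve_rupture + operator_error

        # AND gate with the (assumed independent) failed mitigation layers.
        unmitigated_release = initiating_frequency * relief_fails * detection_fails

        print(f"initiating events: {initiating_frequency:.2e} /yr")
        print(f"unmitigated release: {unmitigated_release:.2e} /yr")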

  8. Influence of drug property and product design on in vitro-in vivo correlation of complex modified-release dosage forms.

    PubMed

    Qiu, Yihong; Li, Xia; Duan, John Z

    2014-02-01

    The present study examines how a drug's inherent properties and product design influence the evaluation and applications of in vitro-in vivo correlation (IVIVC) for modified-release (MR) dosage forms consisting of extended-release (ER) and immediate-release (IR) components with bimodal drug release. Three analgesic drugs were used as model compounds, and simulations of in vivo pharmacokinetic profiles were conducted using different release rates of the ER component and various IR percentages. Plasma concentration-time profiles exhibiting a wide range of tmax and maximum observed plasma concentration (Cmax) were obtained from superposition of the simulated IR and ER profiles based on a linear IVIVC. It was found that, depending on the drug and dosage form design, direct use of the superposed IR and ER data for IVIVC modeling and prediction may (1) be acceptable within errors, (2) become unreliable and less meaningful because of the confounding effect of a non-negligible IR contribution to Cmax, or (3) be meaningless because of the insensitivity of Cmax to release rate changes of the ER component. Therefore, understanding the drug, the design, and the drug release characteristics of the product is essential for assessing the validity, accuracy, and reliability of IVIVC of complex MR products obtained via direct modeling of in vivo data. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
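
    A hedged Python sketch of the superposition idea: simulate IR and ER components as one-compartment (Bateman) profiles, add them, and check how Cmax responds to changes in the ER release rate; the dose, volume, and rate constants are illustrative, not parameters of the study drugs.

        import numpy as np

        def one_compartment(t, dose, ka, ke, V):
            """Plasma concentration with first-order absorption and elimination (Bateman equation)."""
            return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        t = np.linspace(0, 24, 481)                 # hours
        dose, V, ke = 100.0, 50.0, 0.173            # mg, L, 1/h (illustrative values)

        ir_fraction = 0.25                          # 25% of the dose released immediately
        c_ir = one_compartment(t, ir_fraction * dose, ka=1.5, ke=ke, V=V)
        c_er = one_compartment(t, (1 - ir_fraction) * dose, ka=0.15, ke=ke, V=V)  # release-limited

        c_total = c_ir + c_er                       # superposition of the two components
        i_max = np.argmax(c_total)
        print(f"Cmax = {c_total[i_max]:.2f} mg/L at tmax = {t[i_max]:.1f} h")

        # Sensitivity check in the spirit of the paper: how much does Cmax move when
        # the effective ER release rate changes by +/-30%?
        for ka_er in (0.105, 0.15, 0.195):
            c = c_ir + one_compartment(t, (1 - ir_fraction) * dose, ka=ka_er, ke=ke, V=V)
            print(f"ka_er = {ka_er:.3f}: Cmax = {c.max():.2f}")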

  9. Rectification of General Relativity, Experimental Verifications, and Errors of the Wheeler School

    NASA Astrophysics Data System (ADS)

    Lo, C. Y.

    2013-09-01

    General relativity is not yet consistent. Pauli misinterpreted Einstein's 1916 equivalence principle, which can derive a valid field equation. The Wheeler School has distorted Einstein's 1916 principle into his 1911 assumption of equivalence and created new errors. Moreover, errors on dynamic solutions have allowed the implicit assumption of a unique coupling sign that violates the principle of causality. This leads to the space-time singularity theorems of Hawking and Penrose, who "refute" applications to microscopic phenomena and obstruct efforts to obtain a valid equation for the dynamic case. These errors also explain the mistakes in the press release of the 1993 Nobel Committee, which was unaware of the non-existence of dynamic solutions. To illustrate the damage to education, the MIT Open Course Phys. 8.033 is chosen. Rectification of these errors confirms that E = mc2 is only conditionally valid, and leads to the discovery of the charge-mass interaction, which is experimentally confirmed, and subsequently to the unification of gravitation and electromagnetism. The charge-mass interaction together with the unification predicts the weight reduction (instead of increment) of charged capacitors and heated metals, and helps to explain NASA's Pioneer anomaly and potentially other anomalies as well.

  10. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, the point source parameters (location and intensity) are estimated using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is used to modify the adjoint functions. Source estimation is then repeated with these modified adjoint functions to analyse the effect of the modification. The process is tested for two well-known inversion techniques, renormalization and least squares. The proposed methodology and inversion techniques are evaluated for a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement in source retrieval is observed after minimizing the representativity errors.
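
    A minimal Python sketch of the least-squares flavour of the inversion: given adjoint-derived sensitivity coefficients for candidate source locations, estimate the intensity at each candidate and keep the location with the smallest residual; the sensitivity matrix and measurements are synthetic.

        import numpy as np

        # Sensitivity coefficients a[j, i]: predicted concentration at receptor i for a
        # unit-strength source at candidate location j, plus measured concentrations c.
        # All numbers are invented for illustration.
        rng = np.random.default_rng(4)
        a = rng.uniform(0.1, 1.0, size=(5, 12))          # 5 candidate locations, 12 receptors
        true_q, true_loc = 3.0, 2
        c = true_q * a[true_loc] + rng.normal(0.0, 0.05, 12)   # noisy measurements

        best = None
        for j in range(a.shape[0]):
            # Least-squares intensity for this candidate location: q = (a.c)/(a.a)
            q = (a[j] @ c) / (a[j] @ a[j])
            residual = np.linalg.norm(c - q * a[j])
            if best is None or residual < best[0]:
                best = (residual, j, q)

        print(f"estimated location index = {best[1]}, intensity = {best[2]:.2f}")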

  11. Public health consequences on vulnerable populations from acute chemical releases.

    PubMed

    Ruckart, Perri Zeitz; Orr, Maureen F

    2008-07-09

    Data from a large, multi-state surveillance system on acute chemical releases were analyzed to describe the type of events that are potentially affecting vulnerable populations (children, elderly and hospitalized patients) in order to better prevent and plan for these types of incidents in the future. During 2003-2005, there were 231 events where vulnerable populations were within ¼ mile of the event and the area of impact was greater than 200 feet from the facility/point of release. Most events occurred on a weekday during times when day care centers or schools were likely to be in session. Equipment failure and human error caused a majority of the releases. Agencies involved in preparing for and responding to chemical emergencies should work with hospitals, nursing homes, day care centers, and schools to develop policies and procedures for initiating appropriate protective measures and managing the medical needs of patients. Chemical emergency response drills should involve the entire community to protect those that may be more susceptible to harm.

  12. Public Health Consequences on Vulnerable Populations from Acute Chemical Releases

    PubMed Central

    Ruckart, Perri Zeitz; Orr, Maureen F.

    2008-01-01

    Data from a large, multi-state surveillance system on acute chemical releases were analyzed to describe the type of events that are potentially affecting vulnerable populations (children, elderly and hospitalized patients) in order to better prevent and plan for these types of incidents in the future. During 2003–2005, there were 231 events where vulnerable populations were within ¼ mile of the event and the area of impact was greater than 200 feet from the facility/point of release. Most events occurred on a weekday during times when day care centers or schools were likely to be in session. Equipment failure and human error caused a majority of the releases. Agencies involved in preparing for and responding to chemical emergencies should work with hospitals, nursing homes, day care centers, and schools to develop policies and procedures for initiating appropriate protective measures and managing the medical needs of patients. Chemical emergency response drills should involve the entire community to protect those that may be more susceptible to harm. PMID:21572842

  13. Mps1 and Ipl1/Aurora B act sequentially to correctly orient chromosomes on the meiotic spindle of budding yeast.

    PubMed

    Meyer, Régis E; Kim, Seoyoung; Obeso, David; Straight, Paul D; Winey, Mark; Dawson, Dean S

    2013-03-01

    The conserved kinases Mps1 and Ipl1/Aurora B are critical for enabling chromosomes to attach to microtubules so that partner chromosomes will be segregated correctly from each other, but the precise roles of these kinases have been unclear. We imaged live yeast cells to elucidate the stages of chromosome-microtubule interactions and their regulation by Ipl1 and Mps1 through meiosis I. Ipl1 was found to release kinetochore-microtubule (kMT) associations after meiotic entry, liberating chromosomes to begin homologous pairing. Surprisingly, most chromosome pairs began their spindle interactions with incorrect kMT attachments. Ipl1 released these improper connections, whereas Mps1 triggered the formation of new force-generating microtubule attachments. This microtubule release and reattachment cycle could prevent catastrophic chromosome segregation errors in meiosis.

  14. Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Munoz, Cesar A.

    2007-01-01

    This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.

  15. Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas

    ERIC Educational Resources Information Center

    Herzberg, Tina

    2010-01-01

    In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…

  16. Integrated analysis of error detection and recovery

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1985-01-01

    An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.

  17. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct-sequence code division multiple access (DS-CDMA), considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  18. Error Analysis: Past, Present, and Future

    ERIC Educational Resources Information Center

    McCloskey, George

    2017-01-01

    This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…

  19. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach determines the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted, and excellent calibration results were achieved based on the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  20. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.

  1. A simple, objective analysis scheme for scatterometer data. [Seasat A satellite observation of wind over ocean

    NASA Technical Reports Server (NTRS)

    Levy, G.; Brown, R. A.

    1986-01-01

    A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.
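
    A small Python sketch of the bootstrap error estimate mentioned above: resample the observations feeding one analysis grid point and take the spread of the re-analyzed value as its error; the winds and weights are made up for illustration.

        import numpy as np

        rng = np.random.default_rng(5)

        # Illustrative scatterometer winds (m/s) influencing one analysis grid point,
        # with inverse-distance-style weights; both are invented for demonstration.
        obs = rng.normal(8.0, 1.5, 25)
        weights = rng.uniform(0.2, 1.0, 25)

        def analyzed_value(values, w):
            return np.sum(w * values) / np.sum(w)

        estimate = analyzed_value(obs, weights)

        # Bootstrap: resample observation/weight pairs with replacement.
        boot = []
        for _ in range(2000):
            idx = rng.integers(0, obs.size, obs.size)
            boot.append(analyzed_value(obs[idx], weights[idx]))
        print(f"analysis = {estimate:.2f} m/s, bootstrap std error = {np.std(boot):.2f} m/s")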

  2. An Integrative Perspective on the Role of Dopamine in Schizophrenia

    PubMed Central

    Maia, Tiago V.; Frank, Michael J.

    2017-01-01

    We propose that schizophrenia involves a combination of decreased phasic dopamine responses for relevant stimuli and increased spontaneous phasic dopamine release. Using insights from computational reinforcement-learning models and basic-science studies of the dopamine system, we show that each of these two disturbances contributes to a specific symptom domain and explains a large set of experimental findings associated with that domain. Reduced phasic responses for relevant stimuli help to explain negative symptoms and provide a unified explanation for the following experimental findings in schizophrenia, most of which have been shown to correlate with negative symptoms: reduced learning from rewards; blunted activation of the ventral striatum, midbrain, and other limbic regions for rewards and positive prediction errors; blunted activation of the ventral striatum during reward anticipation; blunted autonomic responding for relevant stimuli; blunted neural activation for aversive outcomes and aversive prediction errors; reduced willingness to expend effort for rewards; and psychomotor slowing. Increased spontaneous phasic dopamine release helps to explain positive symptoms and provides a unified explanation for the following experimental findings in schizophrenia, most of which have been shown to correlate with positive symptoms: aberrant learning for neutral cues (assessed with behavioral and autonomic responses), and aberrant, increased activation of the ventral striatum, midbrain, and other limbic regions for neutral cues, neutral outcomes, and neutral prediction errors. Taken together, then, these two disturbances explain many findings in schizophrenia. We review evidence supporting their co-occurrence and consider their differential implications for the treatment of positive and negative symptoms. PMID:27452791

  3. Micro Computer Feedback Report for the Strategic Leader Development Inventory

    DTIC Science & Technology

    1993-05-01

  4. [Cost-effective analysis of rotation from sustained-release morphine tablet to transdermal fentanyl of matrix type or sustained-release oxycodone tablet].

    PubMed

    Ise, Yuya; Wako, Tetsuya; Miura, Yoshihiko; Katayama, Shirou; Shimizu, Hisanori

    2009-12-01

    The present study was undertaken to determine the pharmacoeconomics of switching from a sustained-release morphine tablet to matrix-type (MT) transdermal fentanyl or a sustained-release oxycodone tablet. Cost-effectiveness analysis was performed using a simulation model along with decision analysis. The analysis was done from the payer's perspective. The cost-effectiveness ratio per patient of transdermal MT fentanyl (22,539 yen) was lower than that of the sustained-release oxycodone tablet (23,630 yen), although a sensitivity analysis could not establish that this result was reliable. These results suggest that transdermal MT fentanyl may be considerably less expensive than a sustained-release oxycodone tablet.

  5. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis. Revision 1.12

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1997-01-01

    We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a testbed for the use of the distortion representation of forecast errors, (2) act as one means of validating the GEOS data assimilation system and (3) help to describe the impact of the ERS 1 scatterometer data.

  6. An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II. A Posteriori Error Estimates and Adaptivity.

    DTIC Science & Technology

    1983-03-01

    An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity, by W. G. Szymczak and I. Babuška.

  7. 77 FR 70517 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-68262; File No. SR-CBOE-2012-108] Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Proposed Rule Change To Address Authority To Cancel Orders When a Technical or Systems Issue Occurs and To Describe the Operation of Routing Service Error Accounts November 19...

  8. 77 FR 70511 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-68265; File No. SR-CBOE-2012-109] Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Proposed Rule Change Related to CBSX To Address Authority To Cancel Orders When a Technical or Systems Issue Occurs and To Describe the Operation of Routing Service Error...

  9. Adult Training and Education: Results from the National Household Education Surveys Program of 2016. First Look. NCES 2017-103rev

    ERIC Educational Resources Information Center

    Cronen, Stephanie; McQuiggan, Meghan; Isenberg, Emily

    2018-01-01

    This First Look report provides selected key findings on adults' attainment of nondegree credentials (licenses, certifications, and postsecondary certificates), and their completion of work experience programs such as apprenticeships and internships. This version of the report corrects an error in three tables in the originally released version…

  10. Sediment transport primer: estimating bed-material transport in gravel-bed rivers

    Treesearch

    Peter Wilcock; John Pitlick; Yantao Cui

    2009-01-01

    This primer accompanies the release of BAGS, software developed to calculate sediment transport rate in gravel-bed rivers. BAGS and other programs facilitate calculation and can reduce some errors, but cannot ensure that calculations are accurate or relevant. This primer was written to help the software user define relevant and tractable problems, select appropriate...

  11. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).

  12. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
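    As a rough illustration of the voxel-by-voxel likelihood idea, the following Python sketch flags voxels whose likelihood ratio for a fixed alternative effect size exceeds an evidence benchmark. The normal model, effect size, scan count, and benchmark k are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import stats

# Minimal sketch of a likelihood-ratio (not p-value) voxel analysis.  All numbers below
# (effect size delta, scan count, benchmark k) are assumptions for illustration only.
rng = np.random.default_rng(3)
n_vox, n_scans, delta, k = 5000, 100, 0.5, 32.0

signal = np.zeros(n_vox)
signal[:250] = delta                                   # 5% of voxels carry a true effect
data = rng.normal(signal, 1.0, size=(n_scans, n_vox))

xbar = data.mean(axis=0)
se = data.std(axis=0, ddof=1) / np.sqrt(n_scans)
# log likelihood ratio for H1 (mean = delta) versus H0 (mean = 0) at each voxel
loglr = stats.norm.logpdf(xbar, delta, se) - stats.norm.logpdf(xbar, 0.0, se)
flagged = loglr > np.log(k)
print("flagged:", int(flagged.sum()),
      "| true positives:", int(flagged[:250].sum()),
      "| false positives:", int(flagged[250:].sum()))
```

    Because the per-voxel Type I error rate shrinks as the evidence benchmark and sample size grow, the number of false positives stays small even over thousands of voxels, which is the behavior the abstract describes.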

  13. CBP TOOLBOX VERSION 2.0: CODE INTEGRATION ENHANCEMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, F.; Flach, G.; BROWN, K.

    2013-06-01

    This report describes enhancements made to code integration aspects of the Cementitious Barriers Project (CBP) Toolbox as a result of development work performed at the Savannah River National Laboratory (SRNL) in collaboration with Vanderbilt University (VU) in the first half of fiscal year 2013. Code integration refers to the interfacing of standalone CBP partner codes, used to analyze the performance of cementitious materials, with the CBP Software Toolbox. The most significant enhancements are: 1) Improved graphical display of model results. 2) Improved error analysis and reporting. 3) Increase in the default maximum model mesh size from 301 to 501 nodes. 4) The ability to set the LeachXS/Orchestra simulation times through the GoldSim interface. These code interface enhancements have been included in a new release (Version 2.0) of the CBP Toolbox.

  14. Methods of automatic nucleotide-sequence analysis. Multicomponent spectrophotometric analysis of mixtures of nucleic acid components by a least-squares procedure

    PubMed Central

    Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.

    1965-01-01

    1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the 'library' of spectra used to fit the experimental curves, have been computed for a number of 'libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
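    A minimal numerical sketch of the least-squares step and the resulting error coefficients is given below; the spectral 'library' matrix, mixture composition, and noise level are synthetic stand-ins, since the paper's extinction data are not reproduced here.

```python
import numpy as np

# Hypothetical library: extinction coefficients of 4 nucleosides sampled at m wavelengths.
# Columns of A are the reference spectra; y is the measured extinction curve of a mixture.
rng = np.random.default_rng(0)
m, n = 40, 4
A = np.abs(rng.normal(1.0, 0.3, size=(m, n)))      # stand-in for a real spectral library
c_true = np.array([0.25, 0.10, 0.40, 0.25])         # assumed mixture composition
y = A @ c_true + rng.normal(0.0, 0.005, size=m)     # measured curve with extinction noise

# Least-squares estimate of the composition.
c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# Error coefficients: with uncorrelated extinction errors of std sigma_E, the std of each
# estimated concentration is sigma_E * sqrt(diag((A^T A)^-1)).  These depend only on the
# library, so they can be computed in advance to choose conditions that minimize the error.
err_coeff = np.sqrt(np.diag(np.linalg.inv(A.T @ A)))
print("estimated composition:", np.round(c_hat, 3))
print("error coefficients   :", np.round(err_coeff, 3))
```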

  15. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    NASA Technical Reports Server (NTRS)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

    This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.

  16. Measuring the Acoustic Release of a Chemotherapeutic Agent from Folate-Targeted Polymeric Micelles.

    PubMed

    Abusara, Ayah; Abdel-Hafez, Mamoun; Husseini, Ghaleb

    2018-08-01

    In this paper, we compare the use of Bayesian filters for the estimation of release and re-encapsulation rates of a chemotherapeutic agent (namely Doxorubicin) from nanocarriers in an acoustically activated drug release system. The study is implemented using an advanced kinetic model that takes into account cavitation events causing the antineoplastic agent's release from polymeric micelles upon exposure to ultrasound. This model is an improvement over the previous representations of acoustic release that used simple zero-, first- and second-order release and re-encapsulation kinetics to study acoustically triggered drug release from polymeric micelles. The new model incorporates drug release and micellar reassembly events caused by cavitation, allowing for the controlled release of chemotherapeutics spatially and temporally. Different Bayesian estimators are tested for this purpose including Kalman filters (KF), Extended Kalman filters (EKF), Particle filters (PF), and multi-model KF and EKF. Simulated and experimental results are used to verify the performance of the above-mentioned estimators. The proposed methods demonstrate the utility and high accuracy of using estimation methods in modeling this drug delivery technique. The results show that, in both cases (linear and non-linear dynamics), the modeling errors are expensive but can be minimized using a multi-model approach. In addition, particle filters are more flexible filters that perform reasonably well compared to the other two filters. The study improved the accuracy of the kinetic models used to capture acoustically activated drug release from polymeric micelles, which may in turn help in designing hardware and software capable of precisely controlling the delivered amount of chemotherapeutics to cancerous tissue.
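    To make the estimation idea concrete, here is a minimal scalar Kalman filter tracking the released drug fraction for a linear first-order release/re-encapsulation model; the rate constants, noise variances, and time step are hypothetical, and the model is a simplification of the cavitation kinetics described above.

```python
import numpy as np

# Hedged sketch: a linear first-order release/re-encapsulation model (not the paper's full
# cavitation model).  x is the released drug fraction; k_rel and k_enc are assumed rates.
dt, k_rel, k_enc = 0.1, 0.8, 0.3
F = 1.0 - dt * (k_rel + k_enc)        # state transition coefficient
u = dt * k_rel                        # constant input term
Q, R = 1e-5, 4e-4                     # process and measurement noise variances

rng = np.random.default_rng(1)
x_true, x_est, P = 0.0, 0.0, 1.0
for _ in range(200):
    # simulate the "true" system and a noisy fluorescence-style measurement
    x_true = F * x_true + u + rng.normal(0, np.sqrt(Q))
    z = x_true + rng.normal(0, np.sqrt(R))
    # Kalman predict
    x_pred = F * x_est + u
    P_pred = F * P * F + Q
    # Kalman update
    K = P_pred / (P_pred + R)
    x_est = x_pred + K * (z - x_pred)
    P = (1 - K) * P_pred
print("final true/estimated released fraction: %.3f / %.3f" % (x_true, x_est))
```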

  17. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  18. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  19. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  20. Error Analysis and Validation for Insar Height Measurement Induced by Slant Range

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Li, T.; Fan, W.; Geng, X.

    2018-04-01

    The InSAR technique is an important method for large-area DEM extraction. Several factors have a significant influence on the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, the error propagation model was derived assuming no coupling among different factors, which directly characterises the relationship between slant range error and height measurement error. Then the theory-based analysis, in combination with TanDEM-X parameters, was implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, the simulation validation of the InSAR error model induced by slant range was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
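    The first-order propagation step can be sketched as follows for a simplified flat-Earth height equation h = H - r·cos(θ); the altitude, slant range, look angle, and error budgets are illustrative values, not the TanDEM-X parameters used in the study.

```python
import numpy as np

# Hedged sketch of first-order error propagation for a simplified InSAR height equation
# h = H - r*cos(theta).  All parameter values and 1-sigma errors below are assumptions.
def height(H, r, theta):
    return H - r * np.cos(theta)

H, r, theta = 514e3, 600e3, np.deg2rad(35.0)   # platform altitude, slant range, look angle
sigma = {"H": 0.5, "r": 1.0, "theta": 1e-6}    # assumed 1-sigma errors (m, m, rad)

# Numerical partial derivatives, assuming no coupling among the factors.
params = {"H": H, "r": r, "theta": theta}
var_h = 0.0
for name, value in params.items():
    step = 1e-6 * max(abs(value), 1.0)
    p_hi, p_lo = dict(params), dict(params)
    p_hi[name] += step
    p_lo[name] -= step
    dh_dx = (height(**p_hi) - height(**p_lo)) / (2 * step)
    var_h += (dh_dx * sigma[name]) ** 2

print("slant-range term alone        : %.2f m" % (abs(np.cos(theta)) * sigma["r"]))
print("combined 1-sigma height error : %.2f m" % np.sqrt(var_h))
```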

  1. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to determine a 3-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A 3-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.
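    The decomposition itself reduces to a linear least-squares fit of Zernike terms to the sampled wavefront, as in the short sketch below; the five-term basis and synthetic wavefront are illustrative and do not reproduce the Ramsey-Korsch output.

```python
import numpy as np

# Hedged sketch: least-squares fit of a few Zernike terms to a wavefront sampled over the
# unit pupil.  The term list and test wavefront are illustrative assumptions.
def zernike_basis(rho, theta):
    return np.column_stack([
        np.ones_like(rho),              # piston
        rho * np.cos(theta),            # x tilt
        rho * np.sin(theta),            # y tilt
        2 * rho**2 - 1,                 # defocus
        6 * rho**4 - 6 * rho**2 + 1,    # third-order spherical
    ])

# sample points inside the pupil
rng = np.random.default_rng(2)
rho = np.sqrt(rng.uniform(0, 1, 2000))
theta = rng.uniform(0, 2 * np.pi, 2000)

# synthetic wavefront error: mostly defocus plus a little spherical, in waves
true_coeffs = np.array([0.0, 0.05, -0.02, 0.25, 0.08])
wfe = zernike_basis(rho, theta) @ true_coeffs + rng.normal(0, 0.01, rho.size)

coeffs, *_ = np.linalg.lstsq(zernike_basis(rho, theta), wfe, rcond=None)
print("fitted Zernike coefficients:", np.round(coeffs, 3))
```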

  2. Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.

    ERIC Educational Resources Information Center

    Monagle, E. Brette

    The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…

  3. A Study of Reading Errors Using Goodman's Miscue Analysis and Cloze Procedure.

    ERIC Educational Resources Information Center

    Farren, Sean N.

    A study of 11 boys, aged 12 to 14 with low reading ability, was conducted to discover what kinds of errors they made and whether or not differences might exist between error patterns in silent and oral reading. Miscue analysis was used to test oral reading while cloze procedures were used to test silent reading. Errors were categorized according…

  4. Some Deep Structure Manifestations in Second Language Errors of English Voiced and Voiceless "th."

    ERIC Educational Resources Information Center

    Moustafa, Margaret Heiss

    Native speakers of Egyptian Arabic make errors in their pronunciation of English that cannot always be accounted for by a contrastive analysis of Egyptian Arabic and English. This study focuses on three types of errors in the pronunciation of voiced and voiceless "th" made by fluent speakers of English. These errors were noted…

  5. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.

  6. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.

  7. Data Analysis & Statistical Methods for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis, and maximum likelihood estimation to see how much of the variability in the error rates can be explained by these variables. We have also used goodness of fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

  8. Leak localization and quantification with a small unmanned aerial system

    NASA Astrophysics Data System (ADS)

    Golston, L.; Zondlo, M. A.; Frish, M. B.; Aubut, N. F.; Yang, S.; Talbot, R. W.

    2017-12-01

    Methane emissions from oil and gas facilities are a recognized source of greenhouse gas emissions, requiring cost-effective and reliable monitoring systems to support leak detection and repair programs. We describe a set of methods for locating and quantifying natural gas leaks using a small unmanned aerial system (sUAS) equipped with a path-integrated methane sensor along with ground-based wind measurements. The algorithms are developed as part of a system for continuous well pad scale (100 m2 area) monitoring, supported by a series of over 200 methane release trials covering multiple release locations and flow rates. Test measurements include data obtained on a rotating boom platform as well as flight tests on a sUAS. The system is found throughout the trials to reliably distinguish between cases with and without a methane release down to 6 scfh (0.032 g/s). Among several methods evaluated for horizontal localization, the location corresponding to the maximum integrated methane reading has performed best, with a median error of ± 1 m if two or more flights are averaged, or ± 1.2 m for individual flights. Additionally, a method of rotating the data around the estimated leak location is developed, with the leak magnitude calculated as the average crosswind integrated flux in the region near the source location. Validation of these methods will be presented, including blind test results. Sources of error, including GPS uncertainty, meteorological variables, and flight pattern coverage, will be discussed.

  9. Development of a Work Control System for Propulsion Testing at NASA Stennis

    NASA Technical Reports Server (NTRS)

    Messer, Elizabeth A.

    2005-01-01

    This paper will explain the requirements and steps taken to develop the current Propulsion Test Directorate electronic work control system for Test Operations. The PTD Work Control System includes work authorization and technical instruction documents, such as test preparation sheets, discrepancy reports, test requests, pre-test briefing reports, and other test operations supporting tools. The environment that existed in the E-Complex test areas in the late 1990's was one of enormous growth, which brought people of diverse backgrounds together for the sole purpose of testing propulsion hardware. The problem that faced us was that these newly formed teams did not have a consistent and clearly understood method for writing, performing or verifying work. A paper system was developed that would allow the teams to use the same forms, but this still presented problems, with a large number of errors occurring, such as lost paperwork and inconsistent implementation. In a sampling of errors in August 1999, the paper work control system encountered 250 errors out of 230 documents released and completed, for an error rate of 111%.

  10. ILRS Activities in Monitoring Systematic Errors in SLR Data

    NASA Astrophysics Data System (ADS)

    Pavlis, E. C.; Luceri, V.; Kuzmicz-Cieslak, M.; Bianco, G.

    2017-12-01

    The International Laser Ranging Service (ILRS) contributes unique information to ITRF development that only Satellite Laser Ranging (SLR) is sensitive to: the definition of the origin and, in equal parts with VLBI, the scale of the model. For the development of ITRF2014, the ILRS analysts adopted a revision of the internal standards and procedures in generating our contribution from the eight ILRS Analysis Centers. The improved results for the ILRS components were reflected in the resulting new time series of the ITRF origin and scale, showing insignificant trends and tighter scatter. This effort was further extended after the release of ITRF2014, with the execution of a Pilot Project (PP) in the 2016-2017 timeframe that demonstrated the robust estimation of persistent systematic errors at the millimeter level. The ILRS ASC is now turning this into an operational tool to monitor station performance and to generate a history of systematics at each station, to be used with each re-analysis for future ITRF model developments. This is part of a broader ILRS effort to improve the quality control of the data collection process as well as that of our products. To this end, the ILRS has established a "Quality Control Board" (QCB) that comprises members from the analysis and engineering groups, the Central Bureau, and even user groups with special interests. The QCB meets by telecon monthly, oversees the various ongoing projects, and develops ideas for new tools and future products. This presentation will focus on the main topic with an update on the results so far, the schedule for the near future and its operational implementation, along with a brief description of upcoming new ILRS products.

  11. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of the Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. This model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of different errors are analyzed through this model. An accuracy study of the model shows that the prediction deviation of pointing error for different errors is less than 4.1×10⁻⁵° when the error amplitude is 0.1°. Detailed analyses of errors indicate that different error sources affect the pointing accuracy to varying degrees, and the major error source is the incident beam deviation. The prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilting in the same direction. The cumulative effect of rotational error is relatively large when the difference of the two prism rotational angles equals 0 or π, while it is relatively small when the difference equals π/2. These results suggest that our analysis can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.

  12. Spelling Errors of Dyslexic Children in Bosnian Language with Transparent Orthography

    ERIC Educational Resources Information Center

    Duranovic, Mirela

    2017-01-01

    The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in Bosnian language with transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% of phonological errors, 10% of orthographic errors, and 4%…

  13. Using integrated models to minimize environmentally induced wavefront error in optomechanical design and analysis

    NASA Astrophysics Data System (ADS)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate design goal of an optical system subjected to dynamic loads is to minimize system level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  14. [Character of refractive errors in population study performed by the Area Military Medical Commission in Lodz].

    PubMed

    Nowak, Michał S; Goś, Roman; Smigielski, Janusz

    2008-01-01

    To determine the prevalence of refractive errors in the population. A retrospective review of medical examinations for entry to the military service from The Area Military Medical Commission in Lodz. Ophthalmic examinations were performed. We used statistical analysis to review the results. Statistical analysis revealed that refractive errors occurred in 21.68% of the population. The most common refractive error was myopia. 1) The most common ocular diseases are refractive errors, especially myopia (21.68% in total). 2) Refractive surgery and contact lenses should be allowed as the possible correction of refractive errors for military service.

  15. Implementation of an experimental program to investigate the performance characteristics of OMEGA navigation

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.

    1974-01-01

    A theoretical formulation of differential and composite OMEGA error is presented to establish hypotheses about the functional relationships between various parameters and OMEGA navigational errors. Computer software developed to provide for extensive statistical analysis of the phase data is described. Results from the regression analysis used to conduct parameter sensitivity studies on differential OMEGA error tend to validate the theoretically based hypothesis concerning the relationship between uncorrected differential OMEGA error and receiver separation range and azimuth. Limited results of measurement of receiver repeatability error and line of position measurement error are also presented.

  16. Reliable LC-MS quantitative glycomics using iGlycoMab stable isotope labeled glycans as internal standards.

    PubMed

    Zhou, Shiyue; Tello, Nadia; Harvey, Alex; Boyes, Barry; Orlando, Ron; Mechref, Yehia

    2016-06-01

    Glycans have numerous functions in various biological processes and participate in the progress of diseases. Reliable quantitative glycomic profiling techniques could contribute to the understanding of the biological functions of glycans, and lead to the discovery of potential glycan biomarkers for diseases. Although LC-MS is a powerful analytical tool for quantitative glycomics, the variation of ionization efficiency and MS intensity bias influence quantitation reliability. Internal standards can be utilized for glycomic quantitation by MS-based methods to reduce variability. In this study, we used a stable isotope labeled IgG2b monoclonal antibody, iGlycoMab, as an internal standard to reduce the potential for errors and to reduce variability due to sample digestion, derivatization, and fluctuation of nanoESI efficiency in the LC-MS analysis of permethylated N-glycans released from model glycoproteins, human blood serum, and a breast cancer cell line. We observed an unanticipated degradation of isotope labeled glycans, tracked a source of such degradation, and optimized a sample preparation protocol to minimize degradation of the internal standard glycans. All results indicated the effectiveness of using iGlycoMab to minimize errors originating from sample handling and instruments. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations.

    PubMed

    Tornøe, Christoffer W; Overgaard, Rune V; Agersø, Henrik; Nielsen, Henrik A; Madsen, Henrik; Jonsson, E Niclas

    2005-08-01

    The objective of the present analysis was to explore the use of stochastic differential equations (SDEs) in population pharmacokinetic/pharmacodynamic (PK/PD) modeling. The intra-individual variability in nonlinear mixed-effects models based on SDEs is decomposed into two types of noise: a measurement and a system noise term. The measurement noise represents uncorrelated error due to, for example, assay error while the system noise accounts for structural misspecifications, approximations of the dynamical model, and true random physiological fluctuations. Since the system noise accounts for model misspecifications, the SDEs provide a diagnostic tool for model appropriateness. The focus of the article is on the implementation of the Extended Kalman Filter (EKF) in NONMEM for parameter estimation in SDE models. Various applications of SDEs in population PK/PD modeling are illustrated through a systematic model development example using clinical PK data of the gonadotropin releasing hormone (GnRH) antagonist degarelix. The dynamic noise estimates were used to track variations in model parameters and systematically build an absorption model for subcutaneously administered degarelix. The EKF-based algorithm was successfully implemented in NONMEM for parameter estimation in population PK/PD models described by systems of SDEs. The example indicated that it was possible to pinpoint structural model deficiencies, and that valuable information may be obtained by tracking unexplained variations in parameters.

  18. Preparation and evaluation of novel metronidazole sustained release and floating matrix tablets.

    PubMed

    Asnaashari, Solmaz; Khoei, Nazaninossadat Seyed; Zarrintan, Mohammad Hosein; Adibkia, Khosro; Javadzadeh, Yousef

    2011-08-01

    In the present study, metronidazole was used for preparing floating dosage forms that are designed to be retained in the stomach for a long time and have been developed as a drug delivery system for better eradication of Helicobacter pylori in peptic ulcer disease. To this end, various formulations were designed using a multi-factorial design. HPMC, psyllium and carbopol in different concentrations were used as floating agents, and sodium bicarbonate was added as a gas-forming agent. Hardness, friability, drug loading, floating ability and release profiles as well as kinetics of release were assessed. Formulations containing HPMC as filler showed prolonged lag times for buoyancy. Adding psyllium to these formulations reduced the relative lag times. Overall, selected formulations were able to float immediately and showed buoyancy for at least 8 h. Meanwhile, sustained profiles of drug release were also obtained. Kinetically, among the 10 assessed models, the release pattern of metronidazole from the tablets fitted best the Power law, Weibull and Higuchi models, with overall mean percentage error values of 3.8, 4.73 and 5.77, respectively, for calcium carbonate-based tablets and 2.95, 6.39 and 3.9, respectively, for calcium silicate-based tablets. In general, these systems can float in the gastric condition and control the drug release from the tablets.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirhonen, P.

    Life-cycle assessment is usually based on regular discharges that occur at a more or less constant rate. Nevertheless, the more factors that are taken into account in the LCA the better picture it gives on the environmental aspects of a product. In this study an approach to incorporate accidental releases into a product's life-cycle assessment was developed. In this approach accidental releases are divided into two categories. The first category consists of those unplanned releases which occur with a predicted level and frequency. Due to the high frequency and small release size at a time, these accidental releases can be compared to continuous emissions. Their global impacts are studied in this approach. Accidental releases of the second category are sudden, unplanned releases caused by exceptional situations, e.g. technical failure, action error or disturbances in process conditions. These releases have a singular character and local impacts are typical of them. As far as the accidental releases of the second category are concerned, the approach introduced in this study results in a risk value for every stage of a life-cycle, the sum of which is a risk value for the whole life-cycle. Risk value is based on occurrence frequencies of incidents and potential environmental damage caused by releases. Risk value illustrates the level of potential damage caused by accidental releases related to the system under study and is meant to be used for comparison of these levels of two different products. It can also be used to compare the risk levels of different stages of the life-cycle. The approach was illustrated using petrol as an example product. The whole life-cycle of petrol from crude oil production to the consumption of petrol was studied.

  20. Evaluation of errors in quantitative determination of asbestos in rock

    NASA Astrophysics Data System (ADS)

    Baietto, Oliviero; Marini, Paola; Vitaliti, Martina

    2016-04-01

    The quantitative determination of the content of asbestos in rock matrices is a complex operation which is susceptible to important errors. The principal methodologies for the analysis are Scanning Electron Microscopy (SEM) and Phase Contrast Optical Microscopy (PCOM). Although the PCOM resolution is inferior to that of SEM, PCOM analysis has several advantages, including more representativity of the analyzed sample, more effective recognition of chrysotile and a lower cost. The DIATI LAA internal methodology for the analysis in PCOM is based on a mild grinding of a rock sample, its subdivision into 5-6 grain size classes smaller than 2 mm and a subsequent microscopic analysis of a portion of each class. The PCOM is based on the optical properties of asbestos and of the liquids with known refractive index in which the particles under analysis are immersed. The error evaluation in the analysis of rock samples, contrary to the analysis of airborne filters, cannot be based on a statistical distribution. In fact, for airborne filters a binomial (Poisson) distribution can be applied, which theoretically defines the variation in the count of fibers resulting from the observation of analysis fields chosen randomly on the filter. The analysis in rock matrices instead cannot lean on any statistical distribution because the most important object of the analysis is the size of the asbestiform fibers and bundles of fibers observed and the resulting ratio between the weight of the fibrous component and that of the granular one. The error evaluation generally provided by public and private institutions varies between 50 and 150 percent, but there are, however, no specific studies that discuss the origin of the error or that link it to the asbestos content. Our work aims to provide a reliable estimation of the error in relation to the applied methodologies and to the total content of asbestos, especially for the values close to the legal limits. The error assessments must be made through the repetition of the same analysis on the same sample to try to estimate the error on the representativeness of the sample and the error related to the sensitivity of the operator, in order to provide a sufficiently reliable uncertainty of the method. We used about 30 natural rock samples with different asbestos content, performing 3 analyses on each sample to obtain a trend sufficiently representative of the percentage. Furthermore, we performed 10 repetitions of the analysis on one chosen sample to try to define more specifically the error of the methodology.

  1. Exploring the Phenotype of Phonological Reading Disability as a Function of the Phonological Deficit Severity: Evidence from the Error Analysis Paradigm in Arabic

    ERIC Educational Resources Information Center

    Taha, Haitham; Ibrahim, Raphiq; Khateb, Asaid

    2014-01-01

    The dominant error types were investigated as a function of phonological processing (PP) deficit severity in four groups of impaired readers. For this aim, an error analysis paradigm distinguishing between four error types was used. The findings revealed that the different types of impaired readers were characterized by differing predominant error…

  2. Errors Analysis of Solving Linear Inequalities among the Preparatory Year Students at King Saud University

    ERIC Educational Resources Information Center

    El-khateeb, Mahmoud M. A.

    2016-01-01

    This study aims to investigate the classes of errors made by the Preparatory Year students at King Saud University, through analysis of student responses to the items of the study test, and to identify the varieties of common errors and the ratios of common errors that occurred in solving inequalities. In the collection of the data,…

  3. 14 CFR 417.227 - Toxic release hazard analysis.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Toxic release hazard analysis. 417.227..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.227 Toxic release hazard analysis. A flight safety analysis must establish flight commit criteria that protect the public from any...

  4. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  5. An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems.

    DTIC Science & Technology

    1981-03-01

    Technical Note BN-962: An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems, by I. Babuška and W. G. Szymczak, March 1981. Institute for Physical Science and Technology, University of Maryland, College Park.

  6. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  7. A fingerprint key binding algorithm based on vector quantization and error correction

    NASA Astrophysics Data System (ADS)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key so that the key is protected and can be accessed only through fingerprint verification. In order to cope with the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template and then bind it with the key, after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
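    The general binding idea can be sketched with a fuzzy-commitment-style construction: the key is encoded with an error-correcting code, XORed with the quantized template, and only a hash of the key is stored. The sketch below uses a simple repetition code and random bit strings in place of the paper's vector-quantized fingerprint features, so it illustrates the principle rather than the proposed algorithm.

```python
import hashlib
import numpy as np

# Hedged sketch of key binding with error correction (repetition code stands in for the
# paper's coding scheme; random bits stand in for quantized fingerprint features).
REP = 9  # each key bit is repeated 9 times; majority vote corrects up to 4 flips per group

def bind(key_bits, template_bits):
    codeword = np.repeat(key_bits, REP)                 # error-correction encode
    helper = codeword ^ template_bits                   # stored "helper data"
    digest = hashlib.sha256(np.packbits(key_bits).tobytes()).hexdigest()
    return helper, digest                               # the key itself is never stored

def release(helper, digest, query_bits):
    noisy_codeword = helper ^ query_bits
    groups = noisy_codeword.reshape(-1, REP)
    key_bits = (groups.sum(axis=1) > REP // 2).astype(np.uint8)   # majority-vote decode
    ok = hashlib.sha256(np.packbits(key_bits).tobytes()).hexdigest() == digest
    return key_bits if ok else None

rng = np.random.default_rng(4)
key = rng.integers(0, 2, 128, dtype=np.uint8)
template = rng.integers(0, 2, 128 * REP, dtype=np.uint8)   # enrolled (quantized) template
helper, digest = bind(key, template)

query = template.copy()
flip = rng.choice(query.size, 40, replace=False)           # fuzziness of a fresh fingerprint
query[flip] ^= 1
print("key released:", release(helper, digest, query) is not None)
```

    A fresh fingerprint that differs from the enrolled template in a limited number of bits still decodes to the same key, and the stored hash confirms the release; a non-matching finger fails the hash check and no key is released.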

  8. Defining near misses: towards a sharpened definition based on empirical data about error handling processes.

    PubMed

    Kessels-Habraken, Marieke; Van der Schaaf, Tjerk; De Jonge, Jan; Rutte, Christel

    2010-05-01

    Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and correction. Reporting and analysis of so-called near misses - usually defined as incidents without adverse consequences for patients - are necessary to gather information about successful error recovery mechanisms. This study establishes the need for a clearer and more consistent definition of near misses to enable large-scale reporting and analysis in order to obtain such information. Qualitative incident reports and interviews were collected on four units of two Dutch general hospitals. Analysis of the 143 accompanying error handling processes demonstrated that different incident types each provide unique information about error handling. Specifically, error handling processes underlying incidents that did not reach the patient differed significantly from those of incidents that reached the patient, irrespective of harm, because of successful countermeasures that had been taken after error detection. We put forward two possible definitions of near misses and argue that, from a practical point of view, the optimal definition may be contingent on organisational context. Both proposed definitions could yield large-scale reporting of near misses. Subsequent analysis could enable health care organisations to improve the safety and quality of care proactively by (1) eliminating failure factors before real accidents occur, (2) enhancing their ability to intercept errors in time, and (3) improving their safety culture. Copyright 2010 Elsevier Ltd. All rights reserved.

  9. International challenge to predict the impact of radioxenon releases from medical isotope production on a comprehensive nuclear test ban treaty sampling station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Bowyer, Ted W.; Achim, Pascal

    The International Monitoring System (IMS) is part of the verification regime for the Comprehensive Nuclear-Test-Ban-Treaty Organization (CTBTO). At entry-into-force, half of the 80 radionuclide stations will be able to measure concentrations of several radioactive xenon isotopes produced in nuclear explosions, and then the full network may be populated with xenon monitoring afterward (Bowyer et al., 2013). Fission-based production of 99Mo for medical purposes also releases radioxenon isotopes to the atmosphere (Saey, 2009). One of the ways to mitigate the effect of emissions from medical isotope production is the use of stack monitoring data, if it were available, so that the effect of radioactive xenon emissions could be subtracted from the effect from a presumed nuclear explosion, when detected at an IMS station location. To date, no studies have addressed the impacts the time resolution or data accuracy of stack monitoring data have on predicted concentrations at an IMS station location. Recently, participants from seven nations used atmospheric transport modeling to predict the time-history of 133Xe concentration measurements at an IMS station in Germany using stack monitoring data from a medical isotope production facility in Belgium. Participants received only stack monitoring data and used the atmospheric transport model and meteorological data of their choice. Some of the models predicted the highest measured concentrations quite well (a high composite statistical model comparison rank or a small mean square error with the measured values). The results suggest release data on a 15 min time spacing is best. The model comparison rank and ensemble analysis suggests that combining multiple models may provide more accurate predicted concentrations than any single model. Further research is needed to identify optimal methods for selecting ensemble members and those methods may depend on the specific transport problem. None of the submissions based only on the stack monitoring data predicted the small measured concentrations very well. The one submission that best predicted small concentrations also included releases from nuclear power plants. Modeling of sources by other nuclear facilities with smaller releases than medical isotope production facilities may be important in discriminating those releases from releases from a nuclear explosion.

  10. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) error-detection capability evaluation by deliberately introduced machine error. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. For the error-detection comparison of Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion with the magnitude of error exceeding 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
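    For reference, the gamma passing rate used in such comparisons can be computed as in this one-dimensional sketch; the 3%/3 mm criteria, dose profiles, and low-dose threshold are assumptions for illustration, whereas clinical tools evaluate the same index on 3-D dose grids.

```python
import numpy as np

# Hedged sketch of a 1-D global gamma analysis (assumed 3%/3 mm criteria, 10% low-dose cut).
def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.03, dta=3.0, threshold=0.1):
    d_max = dose_ref.max()
    passed, total = 0, 0
    for xi, di in zip(x, dose_eval):
        if di < threshold * d_max:                  # ignore low-dose points
            continue
        # gamma^2 against every reference point; keep the minimum
        gamma_sq = ((xi - x) / dta) ** 2 + ((di - dose_ref) / (dd * d_max)) ** 2
        passed += np.sqrt(gamma_sq.min()) <= 1.0
        total += 1
    return 100.0 * passed / total

x = np.linspace(-50, 50, 201)                        # positions in mm
dose_ref = np.exp(-x**2 / (2 * 15.0**2))             # TPS-calculated profile (arbitrary units)
dose_eval = np.exp(-(x - 1.0)**2 / (2 * 15.0**2))    # measured/reconstructed profile, 1 mm shift
print("gamma passing rate: %.1f%%" % gamma_pass_rate(x, dose_ref, dose_eval))
```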

  11. Reevaluating Recovery: Perceived Violations and Preemptive Interventions on Emergency Psychiatry Rounds

    PubMed Central

    Cohen, Trevor; Blatter, Brett; Almeida, Carlos; Patel, Vimla L.

    2007-01-01

    Objective Contemporary error research suggests that the quest to eradicate error is misguided. Error commission, detection, and recovery are an integral part of cognitive work, even at the expert level. In collaborative workspaces, the perception of potential error is directly observable: workers discuss and respond to perceived violations of accepted practice norms. As perceived violations are captured and corrected preemptively, they do not fit Reason’s widely accepted definition of error as “failure to achieve an intended outcome.” However, perceived violations suggest the aversion of potential error, and consequently have implications for error prevention. This research aims to identify and describe perceived violations of the boundaries of accepted procedure in a psychiatric emergency department (PED), and how they are resolved in practice. Design Clinical discourse from fourteen PED patient rounds was audio-recorded. Excerpts from recordings suggesting perceived violations or incidents of miscommunication were extracted and analyzed using qualitative coding methods. The results are interpreted in relation to prior research on vulnerabilities to error in the PED. Results Thirty incidents of perceived violations or miscommunication are identified and analyzed. Of these, only one medication error was formally reported. Other incidents would not have been detected by a retrospective analysis. Conclusions The analysis of perceived violations expands the data available for error analysis beyond occasional reported adverse events. These data are prospective: responses are captured in real time. This analysis supports a set of recommendations to improve the quality of care in the PED and other critical care contexts. PMID:17329728

  12. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1993-01-01

    The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme of a Reed Solomon outer code and a convolutional inner code versus a Reed Solomon only code scheme has been investigated, as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
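    The burst-dispersing behavior of a periodic convolutional interleaver can be illustrated with the short sketch below; the branch count, branch delay, and burst length are arbitrary choices, not the parameters of the CLEAN software.

```python
from collections import deque

# Hedged sketch of a periodic convolutional interleaver/deinterleaver pair: N branches,
# branch i delaying i*B symbols on the transmit side and (N-1-i)*B on the receive side.
N, B = 4, 1

def run(symbols, delays, fill=0):
    # each branch is a FIFO delay line; branch with zero delay passes symbols straight through
    branches = [deque([fill] * d, maxlen=d) if d else None for d in delays]
    out = []
    for k, s in enumerate(symbols):
        i = k % len(delays)
        if branches[i] is None:
            out.append(s)
        else:
            out.append(branches[i][0])   # oldest symbol leaves the delay line
            branches[i].append(s)        # new symbol enters (oldest is evicted)
    return out

data = list(range(40))
interleaved = run(data, [i * B for i in range(N)])

# simulate a burst of channel errors on 4 consecutive interleaved symbols
corrupted = interleaved[:]
for k in range(16, 20):
    corrupted[k] = -1

deinterleaved = run(corrupted, [(N - 1 - i) * B for i in range(N)])
print("error positions after deinterleaving:",
      [k for k, s in enumerate(deinterleaved) if s == -1])
```

    The four adjacent corrupted symbols come out separated after deinterleaving, so an outer Reed Solomon decoder sees isolated symbol errors rather than a single burst.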

  13. An error analysis perspective for patient alignment systems.

    PubMed

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.

  14. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in the PTW system. Some suggestions to reduce the likelihood of errors, especially by modifying the performance shaping factors and dependencies among tasks, are provided. PMID:27014485
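
    The SPAR-H quantification step multiplies a nominal error probability by performance-shaping-factor (PSF) multipliers and, when several PSFs are unfavourable, applies an adjustment so the estimate stays below 1. The sketch below is a generic illustration of that calculation, not the study's worksheet; the nominal values are the commonly cited SPAR-H defaults and the PSF multipliers are invented.

    ```python
    def spar_h_hep(nominal_hep, psf_multipliers, adjust=True):
        """Rough SPAR-H-style human error probability estimate.

        nominal_hep     : base HEP for the task type (commonly 0.001 for action
                          tasks and 0.01 for diagnosis tasks in SPAR-H).
        psf_multipliers : performance-shaping-factor multipliers.
        adjust          : apply the SPAR-H adjustment so the estimate stays
                          below 1 when several PSFs are unfavourable.
        """
        composite = 1.0
        for m in psf_multipliers:
            composite *= m
        if adjust:
            return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
        return nominal_hep * composite

    # Illustrative action task with degraded procedures and high stress.
    print(spar_h_hep(0.001, [5, 2, 10]))   # ~0.091, same order as the study's mean of 0.11
    ```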

  15. Interpreting the Weibull fitting parameters for diffusion-controlled release data

    NASA Astrophysics Data System (ADS)

    Ignacio, Maxime; Chubynsky, Mykyta V.; Slater, Gary W.

    2017-11-01

    We examine the diffusion-controlled release of molecules from passive delivery systems using both analytical solutions of the diffusion equation and numerically exact Lattice Monte Carlo data. For very short times, the release process follows a √t power law, typical of diffusion processes, while the long-time asymptotic behavior is exponential. The crossover time between these two regimes is determined by the boundary conditions and initial loading of the system. We show that while the widely used Weibull function provides a reasonable fit (in terms of statistical error), it has two major drawbacks: (i) it does not capture the correct limits and (ii) there is no direct connection between the fitting parameters and the properties of the system. Using a physically motivated interpolating fitting function that correctly includes both time regimes, we are able to predict the values of the Weibull parameters, which allows us to propose a physical interpretation.
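
    For readers unfamiliar with the empirical fit being critiqued, the hedged sketch below shows how release data are commonly fitted to the two-parameter Weibull form 1 − exp(−(t/τ)^β); as the abstract argues, the fitted τ and β do not map directly onto physical properties of the device. The data here are synthetic and purely illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_release(t, tau, beta):
        """Cumulative fraction released, Weibull form: 1 - exp(-(t/tau)**beta)."""
        return 1.0 - np.exp(-(t / tau) ** beta)

    # Synthetic release curve with noise (illustrative only).
    rng = np.random.default_rng(0)
    t = np.linspace(0.01, 10.0, 50)
    data = weibull_release(t, tau=2.0, beta=0.7) + rng.normal(0.0, 0.01, t.size)

    (tau_fit, beta_fit), _ = curve_fit(weibull_release, t, data, p0=(1.0, 1.0))
    print(f"tau = {tau_fit:.2f}, beta = {beta_fit:.2f}")
    ```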

  16. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and Analysis of Cosmic Ray Effects in Electronics). The Single Event Figure of Merit method was also revised to use the solar minimum galactic cosmic ray spectrum and extended to circular orbits down to 200 km at any inclination. More recently a series of commercial codes was developed by TRAD (Test & Radiations) which includes the OMERE code which calculates single event effects. There are other error rate prediction methods which use Monte Carlo techniques. In this chapter the analytic methods for estimating the environment within spacecraft will be discussed.

  17. SHARE: system design and case studies for statistical health information release

    PubMed Central

    Gardner, James; Xiong, Li; Xiao, Yonghui; Gao, Jingjing; Post, Andrew R; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2013-01-01

    Objectives We present SHARE, a new system for statistical health information release with differential privacy. We present two case studies that evaluate the software on real medical datasets and demonstrate the feasibility and utility of applying the differential privacy framework on biomedical data. Materials and Methods SHARE releases statistical information in electronic health records with differential privacy, a strong privacy framework for statistical data release. It includes a number of state-of-the-art methods for releasing multidimensional histograms and longitudinal patterns. We performed a variety of experiments on two real datasets, the surveillance, epidemiology and end results (SEER) breast cancer dataset and the Emory electronic medical record (EeMR) dataset, to demonstrate the feasibility and utility of SHARE. Results Experimental results indicate that SHARE can deal with heterogeneous data present in medical data, and that the released statistics are useful. The Kullback–Leibler divergence between the released multidimensional histograms and the original data distribution is below 0.5 and 0.01 for seven-dimensional and three-dimensional data cubes generated from the SEER dataset, respectively. The relative error for longitudinal pattern queries on the EeMR dataset varies between 0 and 0.3. While the results are promising, they also suggest that challenges remain in applying statistical data release using the differential privacy framework for higher dimensional data. Conclusions SHARE is one of the first systems to provide a mechanism for custodians to release differentially private aggregate statistics for a variety of use cases in the medical domain. This proof-of-concept system is intended to be applied to large-scale medical data warehouses. PMID:23059729
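
    SHARE's histogram and longitudinal-pattern release methods are more elaborate than this, but the basic building block of differentially private count release is the Laplace mechanism sketched below (the counts and epsilon value are illustrative; this is not SHARE's code).

    ```python
    import numpy as np

    def laplace_histogram(counts, epsilon, sensitivity=1.0):
        """Release histogram counts under epsilon-differential privacy.

        Adding or removing one record changes each disjoint count by at most
        `sensitivity`, so Laplace noise with scale sensitivity/epsilon satisfies
        epsilon-differential privacy for the whole histogram.
        """
        counts = np.asarray(counts, dtype=float)
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=counts.shape)
        return counts + noise

    true_counts = np.array([120, 43, 8, 301])      # illustrative cell counts
    print(laplace_histogram(true_counts, epsilon=0.5))
    ```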

  18. 78 FR 16349 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-14

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-69071; File No. SR-BX-2013-020] Self-Regulatory... Amend Chapter V, Regulation of Trading on BX Options, Section 6, Obvious Errors March 7, 2013. Pursuant.... 78s(b)(1). \\2\\ 17 CFR 240.19b-4. I. Self-Regulatory Organization's Statement of the Terms of Substance...

  19. 77 FR 70496 - Self-Regulatory Organizations; C2 Options Exchange, Incorporated; Notice of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-68260; File No. SR-C2-2012-038] Self-Regulatory Organizations; C2 Options Exchange, Incorporated; Notice of Proposed Rule Change To Address Authority To Cancel Orders When a Technical or Systems Issue Occurs and To Describe the Operation of Routing Service Error Accounts November 19, 2012....

  20. 75 FR 80791 - Pure Magnesium From the People's Republic of China: Final Results of the 2008-2009 Antidumping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-23

    ... hourly wage-rate data for El Salvador and released corrected data to the parties.\\6\\ \\4\\ See Memorandum... the File, ``Wage Rate Calculation--Error in Currency Conversion of the Hourly Wage Rate for El Salvador,'' dated July 15, 2010. We received case briefs from Petitioner and TMI on July 29, 2010, and...

  1. HRR Upgrade to mass loss calorimeter and modified Schlyter test for FR Wood

    Treesearch

    Mark A. Dietenberger; Charles R. Boardman

    2013-01-01

    Enhanced Heat Release Rate (HRR) methodology has been extended to the Mass Loss Calorimeter (MLC) and the Modified Schlyter flame spread test to evaluate fire retardant effectiveness used on wood based materials. Modifications to MLC include installation of thermopile on the chimney walls to correct systematic errors to the sensible HRR calculations to account for...

  2. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra, as measured using FT-IR spectrometry and analyzed with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases but not the lowest bias and RSS. The modification of SWLS reduced the bias, which showed a lower RSS than CLS, especially for small components.
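
    One way to sketch the selection idea (this is a hedged reading, not the authors' implementation) is to give wavenumbers below the absorbance threshold uniform weights, where CLS behaviour is preferred, and wavenumbers above it inverse-noise-variance weights, where WLS behaviour is preferred, and then solve a single weighted least-squares problem for the concentrations:

    ```python
    import numpy as np

    def swls_concentrations(A_meas, K, noise_var, threshold):
        """Estimate component concentrations from a measured spectrum.

        A_meas    : measured absorbance spectrum, shape (n_wavenumbers,)
        K         : pure-component absorptivity matrix, shape (n_wavenumbers, n_components)
        noise_var : per-channel noise variance, shape (n_wavenumbers,)
        threshold : absorbance threshold separating CLS-like and WLS-like channels
        """
        w = np.where(A_meas < threshold,
                     1.0,                              # CLS-like: uniform weight
                     noise_var.mean() / noise_var)     # WLS-like: inverse-variance weight
                                                       # (scaled so the two groups are
                                                       #  comparable; a sketch-level choice)
        W = np.diag(w)
        return np.linalg.solve(K.T @ W @ K, K.T @ W @ A_meas)
    ```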

  3. The Incorporation and Initialization of Cloud Water/ice in AN Operational Forecast Model

    NASA Astrophysics Data System (ADS)

    Zhao, Qingyun

    Quantitative precipitation forecasts have been one of the weakest aspects of numerical weather prediction models. Theoretical studies show that the errors in precipitation calculation can arise from three sources: errors in the large-scale forecasts of primary variables, errors in the crude treatment of condensation/evaporation and precipitation processes, and errors in the model initial conditions. A new precipitation parameterization scheme has been developed to investigate the forecast value of improved precipitation physics via the introduction of cloud water and cloud ice into a numerical prediction model. The main feature of this scheme is the explicit calculation of cloud water and cloud ice in both the convective and stratiform precipitation parameterization. This scheme has been applied to the eta model at the National Meteorological Center. Four extensive tests have been performed. The statistical results showed a significant improvement in the model precipitation forecasts. Diagnostic studies suggest that the inclusion of cloud ice is important in transferring water vapor to precipitation and in the enhancement of latent heat release; the latter subsequently affects the vertical motion field significantly. Since three-dimensional cloud data is absent from the analysis/assimilation system for most numerical models, a method has been proposed to incorporate observed precipitation and nephanalysis data into the data assimilation system to obtain the initial cloud field for the eta model. In this scheme, the initial moisture and vertical motion fields are also improved at the same time as cloud initialization. The physical initialization is performed in a dynamical initialization framework that uses the Newtonian dynamical relaxation method to nudge the model's wind and mass fields toward analyses during a 12-hour data assimilation period. Results from a case study showed that a realistic cloud field was produced by this method at the end of the data assimilation period. Precipitation forecasts have been significantly improved as a result of the improved initial cloud, moisture and vertical motion fields.

  4. Evaluation and error apportionment of an ensemble of ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the model biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact
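
    The bias/variance/covariance split referred to above is commonly computed with the standard decomposition of the mean squared error; a minimal sketch (not the AQMEII tooling) is given below. Each term can then be attributed to long-term, synoptic, diurnal, and intra-day components by applying the same decomposition to spectrally filtered versions of the two series, which is roughly the spirit of the apportionment technique.

    ```python
    import numpy as np

    def mse_decomposition(model, obs):
        """Split MSE into bias, variance and covariance parts.

        MSE = (mean_m - mean_o)**2 + (sd_m - sd_o)**2 + 2*sd_m*sd_o*(1 - r)
        """
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        bias2 = (model.mean() - obs.mean()) ** 2
        var_term = (model.std() - obs.std()) ** 2
        r = np.corrcoef(model, obs)[0, 1]
        cov_term = 2.0 * model.std() * obs.std() * (1.0 - r)
        return {"bias^2": bias2, "variance": var_term, "covariance": cov_term,
                "mse": bias2 + var_term + cov_term}
    ```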

  5. Kinematic patterns underlying disguised movements: Spatial and temporal dissimilarity compared to genuine movement patterns.

    PubMed

    Helm, Fabian; Munzert, Jörn; Troje, Nikolaus F

    2017-08-01

    This study examined the kinematic characteristics of disguised movements by applying linear discriminant (LDA) and dissimilarity analyses to the motion data from 788 disguised and 792 non-disguised 7-m penalty throws performed by novice and expert handball field players. Results of the LDA showed that discrimination between type of throws (disguised vs. non-disguised) was more error-prone when throws were performed by experts (spatial: 4.6%; temporal: 29.6%) compared to novices (spatial: 1.0%; temporal: 20.2%). The dissimilarity analysis revealed significantly smaller spatial dissimilarities and variations between type of throws in experts compared to novices (p<0.001), but also showed that these spatial dissimilarities and variations increased significantly in both groups the closer the throws came to the moment of (predicted) ball release. In contrast, temporal dissimilarities did not differ significantly between groups. Thus, our data clearly demonstrate that expertise in disguising one's own action intentions results in an ability to perform disguised penalty throws that are highly similar to genuine throws. We suggest that this expertise depends mainly on keeping spatial dissimilarities small. However, the attempt to disguise becomes a challenge the closer one gets to the action outcome (i.e., ball release) becoming visible. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Integrated field and laboratory tests to evaluate effects of metals-impacted wetlands on amphibians: A case study from Montana

    USGS Publications Warehouse

    Linder, G.

    2003-01-01

    Mining activities frequently impact wildlife habitats, and a wide range of habitats may require evaluations of the linkages between wildlife and environmental stressors common to mining activities (e.g., physical alteration of habitat, releases of chemicals such as metals and other inorganic constituents as part of the mining operation). Wetlands, for example, are frequently impacted by mining activities. Within an ecological assessment for a wetland, toxicity evaluations for representative species may be advantageous to the site evaluation, since these species could be exposed to complex chemical mixtures potentially released from the site. Amphibian species common to these transition zones between terrestrial and aquatic habitats are one key biological indicator of exposure, and integrated approaches which involve both field and laboratory methods focused on amphibians are critical to the assessment process. The laboratory and field evaluations of a wetland in western Montana illustrates the integrated approach to risk assessment and causal analysis. Here, amphibians were used to evaluate the potential toxicity associated with heavy metal-laden sediments deposited in a reservoir. Field and laboratory methods were applied to a toxicity assessment for metals characteristic of mine tailings to reduce potential "lab to field" extrapolation errors and provide adaptive management programs with critical site-specific information targeted on remediation.

  7. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  8. Hartman Testing of X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Biskasch, Michael; Zhang, William W.

    2013-01-01

    Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data is measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieves the alignment errors and low order circumferential errors accurately.

  9. Alignment of 1000 Genomes Project reads to reference assembly GRCh38.

    PubMed

    Zheng-Bradley, Xiangqun; Streeter, Ian; Fairley, Susan; Richardson, David; Clarke, Laura; Flicek, Paul

    2017-07-01

    The 1000 Genomes Project produced more than 100 trillion basepairs of short read sequence from more than 2600 samples in 26 populations over a period of five years. In its final phase, the project released over 85 million genotyped and phased variants on human reference genome assembly GRCh37. An updated reference assembly, GRCh38, was released in late 2013, but there was insufficient time for the final phase of the project analysis to change to the new assembly. Although it is possible to lift the coordinates of the 1000 Genomes Project variants to the new assembly, this is a potentially error-prone process as coordinate remapping is most appropriate only for non-repetitive regions of the genome and those that did not see significant change between the two assemblies. It will also miss variants in any region that was newly added to GRCh38. Thus, to produce the highest quality variants and genotypes on GRCh38, the best strategy is to realign the reads and recall the variants based on the new alignment. As the first step of variant calling for the 1000 Genomes Project data, we have finished remapping all of the 1000 Genomes sequence reads to GRCh38 with alternative scaffold-aware BWA-MEM. The resulting alignments are available as CRAM, a reference-based sequence compression format. The data have been released on our FTP site and are also available from European Nucleotide Archive to facilitate researchers discovering variants on the primary sequences and alternative contigs of GRCh38. © The Authors 2017. Published by Oxford University Press.

  10. Instructions to "push as hard as you can" improve average chest compression depth in dispatcher-assisted cardiopulmonary resuscitation.

    PubMed

    Mirza, Muzna; Brown, Todd B; Saini, Devashish; Pepper, Tracy L; Nandigam, Hari Krishna; Kaza, Niroop; Cofield, Stacey S

    2008-10-01

    Cardiopulmonary resuscitation (CPR) with adequate chest compression depth appears to improve first shock success in cardiac arrest. We evaluate the effect of simplification of chest compression instructions on compression depth in dispatcher-assisted CPR protocol. Data from two randomized, double-blinded, controlled trials with identical methodology were combined to obtain 332 records for this analysis. Subjects were randomized to either modified Medical Priority Dispatch System (MPDS) v11.2 protocol or a new simplified protocol. The main difference between the protocols was the instruction to "push as hard as you can" in the simplified protocol, compared to "push down firmly 2in. (5cm)" in MPDS. Data were recorded via a Laerdal ResusciAnne SkillReporter manikin. Primary outcome measures included: chest compression depth, proportion of compressions without error, with adequate depth and with total release. Instructions to "push as hard as you can", compared to "push down firmly 2in. (5cm)", resulted in improved chest compression depth (36.4 mm vs. 29.7 mm, p<0.0001), and improved median proportion of chest compressions done to the correct depth (32% vs. <1%, p<0.0001). No significant difference in median proportion of compressions with total release (100% for both) and average compression rate (99.7 min(-1) vs. 97.5 min(-1), p<0.56) was found. Modifying dispatcher-assisted CPR instructions by changing "push down firmly 2in. (5cm)" to "push as hard as you can" achieved improvement in chest compression depth at no cost to total release or average chest compression rate.

  11. Risk assessment and experimental design in the development of a prolonged release drug delivery system with paliperidone

    PubMed Central

    Iurian, Sonia; Turdean, Luana; Tomuta, Ioan

    2017-01-01

    This study focuses on the development of a drug product based on a risk assessment-based approach, within the quality by design paradigm. A prolonged release system was proposed for paliperidone (Pal) delivery, containing Kollidon® SR as an insoluble matrix agent and hydroxypropyl cellulose, hydroxypropyl methylcellulose (HPMC), or sodium carboxymethyl cellulose as a hydrophilic polymer. The experimental part was preceded by the identification of potential sources of variability through Ishikawa diagrams, and failure mode and effects analysis was used to deliver the critical process parameters that were further optimized by design of experiments. A D-optimal design was used to investigate the effects of Kollidon SR ratio (X1), the type of hydrophilic polymer (X2), and the percentage of hydrophilic polymer (X3) on the percentages of dissolved Pal over 24 h (Y1–Y9). Effects expressed as regression coefficients and response surfaces were generated, along with a design space for the preparation of a target formulation in an experimental area with low error risk. The optimal formulation contained 27.62% Kollidon SR and 8.73% HPMC and achieved the prolonged release of Pal, with low burst effect, at ratios that were very close to the ones predicted by the model. Thus, the parameters with the highest impact on the final product quality were studied, and safe ranges were established for their variations. Finally, a risk mitigation and control strategy was proposed to assure the quality of the system, by constant process monitoring. PMID:28331293

  12. Nonlinear truncation error analysis of finite difference schemes for the Euler equations

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.; Mcrae, D. S.

    1983-01-01

    It is pointed out that, in general, dissipative finite difference integration schemes have been found to be quite robust when applied to the Euler equations of gas dynamics. The present investigation considers a modified equation analysis of both implicit and explicit finite difference techniques as applied to the Euler equations. The analysis is used to identify those error terms which contribute most to the observed solution errors. A technique for analytically removing the dominant error terms is demonstrated, resulting in a greatly improved solution for the explicit Lax-Wendroff schemes. It is shown that the nonlinear truncation errors are quite large and distributed quite differently for each of the three conservation equations as applied to a one-dimensional shock tube problem.
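
    The paper's analysis targets the full Euler equations; as a hedged illustration of the kind of scheme being analysed, the sketch below applies the explicit Lax-Wendroff update to the simpler linear advection equation, whose leading (dispersive) truncation error is exactly the sort of term a modified-equation analysis exposes. The grid and pulse are illustrative choices, not the paper's test case.

    ```python
    import numpy as np

    def lax_wendroff_advection(u0, c, dx, dt, steps):
        """Lax-Wendroff update for u_t + c u_x = 0 with periodic boundaries.

        Modified-equation analysis shows the leading truncation error is a
        dispersive u_xxx term of size O(dx^2), which produces the oscillations
        seen near sharp gradients.
        """
        u = np.asarray(u0, dtype=float).copy()
        nu = c * dt / dx                       # Courant number, must satisfy |nu| <= 1
        for _ in range(steps):
            up = np.roll(u, -1)                # u_{j+1}
            um = np.roll(u, +1)                # u_{j-1}
            u = u - 0.5 * nu * (up - um) + 0.5 * nu ** 2 * (up - 2.0 * u + um)
        return u

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse
    u = lax_wendroff_advection(u0, c=1.0, dx=x[1] - x[0], dt=0.004, steps=100)
    ```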

  13. Assessing the In Vitro Drug Release from Lipid-Core Nanocapsules: a New Strategy Combining Dialysis Sac and a Continuous-Flow System.

    PubMed

    de Andrade, Diego Fontana; Zuglianello, Carine; Pohlmann, Adriana Raffin; Guterres, Silvia Stanisçuaski; Beck, Ruy Carlos Ruver

    2015-12-01

    The in vitro assessment of drug release from polymeric nanocapsule suspensions is one of the most studied parameters in the development of drug-loaded nanoparticles. Nevertheless, official methods for the evaluation of drug release from submicrometric carriers are not available. In this work, a new approach to assess the in vitro drug release profile from drug-loaded lipid-core nanocapsules (LNC) was proposed. A continuous-flow system (open system) was designed to evaluate the in vitro drug release profiles from different LNC formulations containing prednisolone (LNC-PD) or clobetasol propionate (LNC-CP) as model drugs, using a homemade apparatus. The release medium was constantly renewed throughout the experiment. A dialysis bag containing 5 mL of formulation (0.5 mg mL(-1)) was maintained inside the apparatus, under magnetic stirring and controlled temperature (37°C). In parallel, studies based on the conventional dialysis sac technique (closed system) were performed. It was possible to discriminate the in vitro drug release profile of different formulations using the open system. The proposed strategy improved the sink condition, by constantly renewing the release medium, thus maintaining the drug concentration farther from the saturated concentration in the release medium. Moreover, problems due to sampling errors can be easily overcome using this semi-automated system, since the collection is done automatically without interference from the analyst. The system proposed in this paper brings important methodological and analytical advantages, becoming a promising prototype semi-automated apparatus for performing in vitro drug release studies from drug-loaded lipid-core nanocapsules and other related nanoparticle drug delivery systems.

  14. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  15. Mark-Up-Based Writing Error Analysis Model in an On-Line Classroom.

    ERIC Educational Resources Information Center

    Feng, Cheng; Yano, Yoneo; Ogata, Hiroaki

    2000-01-01

    Describes a new component called "Writing Error Analysis Model" (WEAM) in the CoCoA system for teaching writing composition in Japanese as a foreign language. The Weam can be used for analyzing learners' morphological errors and selecting appropriate compositions for learners' revising exercises. (Author/VWL)

  16. Exploratory Factor Analysis of Reading, Spelling, and Math Errors

    ERIC Educational Resources Information Center

    O'Brien, Rebecca; Pan, Xingyu; Courville, Troy; Bray, Melissa A.; Breaux, Kristina; Avitia, Maria; Choi, Dowon

    2017-01-01

    Norm-referenced error analysis is useful for understanding individual differences in students' academic skill development and for identifying areas of skill strength and weakness. The purpose of the present study was to identify underlying connections between error categories across five language and math subtests of the Kaufman Test of…

  17. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    NASA Astrophysics Data System (ADS)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential to reduce the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted for verifying the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and the coupling error increases with the correlation value between them. All the simulation results coincide with the theoretical analysis.

  18. Analysis technique for controlling system wavefront error with active/adaptive optics

    NASA Astrophysics Data System (ADS)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
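
    Conceptually, the control step reduces to a linear least-squares solve: given an influence matrix from the linear optics model, find the actuator commands that minimise the residual system WFE. The sketch below is a generic numpy illustration under that assumption, not SigFit's implementation; the matrix names and the rcond value are placeholders.

    ```python
    import numpy as np

    def actuator_commands(influence, wfe, rcond=1e-3):
        """Least-squares actuator commands that minimise residual wavefront error.

        influence : (n_nodes, n_actuators) actuator influence functions
                    (wavefront response per unit command) from the linear optics model.
        wfe       : (n_nodes,) measured system wavefront error to be corrected.

        Returns the commands x minimising ||influence @ x + wfe|| and the
        predicted residual RMS WFE.
        """
        x, *_ = np.linalg.lstsq(influence, -wfe, rcond=rcond)
        residual = influence @ x + wfe
        return x, np.sqrt(np.mean(residual ** 2))
    ```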

  19. MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootton, L; Nyflot, M; Ford, E

    2016-06-15

    Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information which thresholding mostly discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitudes of 17 intensity-histogram and size-zone radiomic features were derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features, using 180 gamma images, to classify images as with or without errors. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.

  20. Cost-Effectiveness Analysis of an Automated Medication System Implemented in a Danish Hospital Setting.

    PubMed

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    To evaluate the cost-effectiveness of an automated medication system (AMS) implemented in a Danish hospital setting. An economic evaluation was performed alongside a controlled before-and-after effectiveness study with one control ward and one intervention ward. The primary outcome measure was the number of errors in the medication administration process observed prospectively before and after implementation. To determine the difference in proportion of errors after implementation of the AMS, logistic regression was applied with the presence of error(s) as the dependent variable. Time, group, and interaction between time and group were the independent variables. The cost analysis used the hospital perspective with a short-term incremental costing approach. The total 6-month costs with and without the AMS were calculated as well as the incremental costs. The number of avoided administration errors was related to the incremental costs to obtain the cost-effectiveness ratio expressed as the cost per avoided administration error. The AMS resulted in a statistically significant reduction in the proportion of errors in the intervention ward compared with the control ward. The cost analysis showed that the AMS increased the ward's 6-month cost by €16,843. The cost-effectiveness ratio was estimated at €2.01 per avoided administration error, €2.91 per avoided procedural error, and €19.38 per avoided clinical error. The AMS was effective in reducing errors in the medication administration process at a higher overall cost. The cost-effectiveness analysis showed that the AMS was associated with affordable cost-effectiveness rates. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
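
    As a back-of-the-envelope check (assuming each reported ratio is simply the incremental cost divided by the corresponding number of avoided errors, which the abstract implies but does not state explicitly), the figures above suggest roughly how many administration errors were avoided over the 6-month period:

    ```python
    # Implied avoided administration errors over 6 months, from the abstract's
    # own figures (illustrative arithmetic only, not a result from the paper).
    incremental_cost_eur = 16843.0
    cost_per_avoided_admin_error_eur = 2.01
    print(round(incremental_cost_eur / cost_per_avoided_admin_error_eur))  # ~8380
    ```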

  1. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  2. SU-F-T-330: Characterization of the Clinically Released ScandiDos Discover Diode Array for In-Vivo Dose Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenz, D; Gutierrez, A

    Purpose: The ScandiDos Discover has obtained FDA clearance and is now clinically released. We studied the essential attenuation and beam hardening components as well as tested the diode array's ability to detect changes in absolute dose and MLC leaf positions. Methods: The ScandiDos Discover was mounted on the heads of an Elekta VersaHD and a Varian 23EX. Beam attenuation measurements were made at 10 cm depth for 6 MV and 18 MV beam energies. The PDD(10) was measured as a metric for the effect on beam quality. Next, a plan consisting of two orthogonal 10 × 10 cm² fields was used to adjust the dose per fraction by scaling monitor units to test the absolute dose detection sensitivity of the Discover. A second plan (conformal arc) was then delivered several times independently on the Elekta VersaHD. Artificially introduced MLC position errors in the four central leaves were then added. The errors were incrementally increased from 1 mm to 4 mm and back across seven control points. Results: The absolute dose measured at 10 cm depth decreased by 1.2% and 0.7% for the 6 MV and 18 MV beams with the Discover, respectively. Attenuation depended slightly on the field size but changed by only 0.1% between 5 × 5 cm² and 20 × 20 cm² fields. The change in PDD(10) for a 10 × 10 cm² field was +0.1% and +0.6% for 6 MV and 18 MV, respectively. Changes in monitor units from −5.0% to 5.0% were faithfully detected. Detected leaf errors were within 1.0 mm of intended errors. Conclusion: A novel in-vivo dosimeter monitoring the radiation beam during treatment was examined through its attenuation and beam hardening characteristics. The device tracked with changes in absolute dose as well as introduced leaf position deviations.

  3. System review: a method for investigating medical errors in healthcare settings.

    PubMed

    Alexander, G L; Stone, T T

    2000-01-01

    System analysis is a process of evaluating objectives, resources, structure, and design of businesses. System analysis can be used by leaders to collaboratively identify breakthrough opportunities to improve system processes. In healthcare systems, system analysis can be used to review medical errors (system occurrences) that may place patients at risk for injury, disability, and/or death. This study utilizes a case management approach to identify medical errors. Utilizing an interdisciplinary approach, a System Review Team was developed to identify trends in system occurrences, facilitate communication, and enhance the quality of patient care by reducing medical errors.

  4. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase of the phase-shifting approach can be unwrapped using Gray code, but both wrapped-phase errors and Gray code decoding errors can result in period jump errors, which lead to gross measurement errors. Therefore, this paper presents a reliable absolute analog code retrieval approach. The combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain the high-frequency absolute analog code, and at low frequencies, the same unequal-period combination patterns are used to obtain the low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, and a reliable unwrapped result is obtained. Error analysis was used to determine the applicable conditions, and this approach was verified through theoretical analysis. The proposed approach was further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
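
    For orientation, a minimal sketch of the two generic steps involved (standard four-step phase shifting followed by code-based unwrapping) is given below; it is not the unequal-period scheme proposed in the paper, and the function names are illustrative.

    ```python
    import numpy as np

    def wrapped_phase(I1, I2, I3, I4):
        """Wrapped phase from four phase-shifted intensity images (0, 90, 180, 270 deg)."""
        return np.arctan2(I4 - I2, I1 - I3)          # in (-pi, pi]

    def unwrap_with_code(phi_wrapped, period_index):
        """Absolute phase from a per-pixel fringe-period index.

        period_index is the integer fringe order decoded from the (Gray-)coded
        patterns; a +/-1 decoding error here is exactly the 'period jump' error
        the paper is designed to suppress.
        """
        return phi_wrapped + 2.0 * np.pi * period_index
    ```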

  5. An Integrative Perspective on the Role of Dopamine in Schizophrenia.

    PubMed

    Maia, Tiago V; Frank, Michael J

    2017-01-01

    We propose that schizophrenia involves a combination of decreased phasic dopamine responses for relevant stimuli and increased spontaneous phasic dopamine release. Using insights from computational reinforcement-learning models and basic-science studies of the dopamine system, we show that each of these two disturbances contributes to a specific symptom domain and explains a large set of experimental findings associated with that domain. Reduced phasic responses for relevant stimuli help to explain negative symptoms and provide a unified explanation for the following experimental findings in schizophrenia, most of which have been shown to correlate with negative symptoms: reduced learning from rewards; blunted activation of the ventral striatum, midbrain, and other limbic regions for rewards and positive prediction errors; blunted activation of the ventral striatum during reward anticipation; blunted autonomic responding for relevant stimuli; blunted neural activation for aversive outcomes and aversive prediction errors; reduced willingness to expend effort for rewards; and psychomotor slowing. Increased spontaneous phasic dopamine release helps to explain positive symptoms and provides a unified explanation for the following experimental findings in schizophrenia, most of which have been shown to correlate with positive symptoms: aberrant learning for neutral cues (assessed with behavioral and autonomic responses), and aberrant, increased activation of the ventral striatum, midbrain, and other limbic regions for neutral cues, neutral outcomes, and neutral prediction errors. Taken together, then, these two disturbances explain many findings in schizophrenia. We review evidence supporting their co-occurrence and consider their differential implications for the treatment of positive and negative symptoms. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  6. Public health consequences of mercury spills: Hazardous Substances Emergency Events Surveillance system, 1993-1998.

    PubMed Central

    Zeitz, Perri; Orr, Maureen F; Kaye, Wendy E

    2002-01-01

    We analyzed data from states that participated in the Hazardous Substances Emergency Events Surveillance (HSEES) system maintained by the Agency for Toxic Substances and Disease Registry to describe the public health consequences of mercury releases. From 1993 through 1998, HSEES captured 406 events in which mercury was the only substance released. Schools and universities, private residences, and health care facilities were the most frequent locations involved in mercury events, and human error was the contributing factor for most of the releases. Fourteen persons experienced adverse health effects as a result of the releases. An additional 31 persons had documented elevated levels of mercury in the blood. No fatalities resulted. Evacuations were ordered in 90 (22%) of the events, and the length of evacuation ranged from 1 hr to 46 days. Mercury spills have a significant public health impact and economic burden. Some actions that could potentially lessen the consequences of mercury spills are to switch to mercury-free alternatives, train people in the safe handling and disposal of mercury, and keep mercury securely stored when it is necessary to have it on hand. PMID:11836139

  7. Baryon acoustic oscillations from the complete SDSS-III Lyα-quasar cross-correlation function at z = 2.4

    DOE PAGES

    du Mas des Bourboux, Helion; Le Goff, Jean-Marc; Blomqvist, Michael; ...

    2017-08-08

    We present a measurement of baryon acoustic oscillations (BAO) in the cross-correlation of quasars with the Lyα-forest flux-transmission at a mean redshift z = 2.40. The measurement uses the complete SDSS-III data sample: 168,889 forests and 234,367 quasars from the SDSS Data Release DR12. In addition to the statistical improvement on our previous study using DR11, we have implemented numerous improvements at the analysis level allowing a more accurate measurement of this cross-correlation. We also developed the first simulations of the cross-correlation allowing us to test different aspects of our data analysis and to search for potential systematic errors in the determination of the BAO peak position. We measure the two ratios D_H(z = 2.40)/r_d = 9.01 ± 0.36 and D_M(z = 2.40)/r_d = 35.7 ± 1.7, where the errors include marginalization over the non-linear velocity of quasars and the metal-quasar cross-correlation contribution, among other effects. These results are within 1.8σ of the prediction of the flat-ΛCDM model describing the observed CMB anisotropies. We combine this study with the Lyα-forest auto-correlation function (Bautista et al. 2017), yielding D_H(z = 2.40)/r_d = 8.94 ± 0.22 and D_M(z = 2.40)/r_d = 36.6 ± 1.2, within 2.3σ of the same flat-ΛCDM model.

  8. Baryon acoustic oscillations from the complete SDSS-III Lyα-quasar cross-correlation function at z = 2.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    du Mas des Bourboux, Helion; Le Goff, Jean-Marc; Blomqvist, Michael

    We present a measurement of baryon acoustic oscillations (BAO) in the cross-correlation of quasars with the Lyα-forest flux-transmission at a mean redshift z = 2.40. The measurement uses the complete SDSS-III data sample: 168,889 forests and 234,367 quasars from the SDSS Data Release DR12. In addition to the statistical improvement on our previous study using DR11, we have implemented numerous improvements at the analysis level allowing a more accurate measurement of this cross-correlation. We also developed the first simulations of the cross-correlation allowing us to test different aspects of our data analysis and to search for potential systematic errors in the determination of the BAO peak position. We measure the two ratios D_H(z = 2.40)/r_d = 9.01 ± 0.36 and D_M(z = 2.40)/r_d = 35.7 ± 1.7, where the errors include marginalization over the non-linear velocity of quasars and the metal-quasar cross-correlation contribution, among other effects. These results are within 1.8σ of the prediction of the flat-ΛCDM model describing the observed CMB anisotropies. We combine this study with the Lyα-forest auto-correlation function (Bautista et al. 2017), yielding D_H(z = 2.40)/r_d = 8.94 ± 0.22 and D_M(z = 2.40)/r_d = 36.6 ± 1.2, within 2.3σ of the same flat-ΛCDM model.

  9. Validation of high throughput screening of human sera for detection of anti-PA IgG by Enzyme-Linked Immunosorbent Assay (ELISA) as an emergency response to an anthrax incident

    PubMed Central

    Semenova, Vera A.; Steward-Clark, Evelene; Maniatis, Panagiotis; Epperson, Monica; Sabnis, Amit; Schiffer, Jarad

    2017-01-01

    To improve surge testing capability for a response to a release of Bacillus anthracis, the CDC anti-Protective Antigen (PA) IgG Enzyme-Linked Immunosorbent Assay (ELISA) was re-designed into a high throughput screening format. The following assay performance parameters were evaluated: goodness of fit (measured as the mean reference standard r2), accuracy (measured as percent error), precision (measured as coefficient of variance (CV)), lower limit of detection (LLOD), lower limit of quantification (LLOQ), dilutional linearity, diagnostic sensitivity (DSN) and diagnostic specificity (DSP). The paired sets of data for each sample were evaluated by Concordance Correlation Coefficient (CCC) analysis. The goodness of fit was 0.999; percent error between the expected and observed concentration for each sample ranged from −4.6% to 14.4%. The coefficient of variance ranged from 9.0% to 21.2%. The assay LLOQ was 2.6 μg/mL. The regression analysis results for dilutional linearity data were r2 = 0.952, slope = 1.02 and intercept = −0.03. CCC between assays was 0.974 for the median concentration of serum samples. The accuracy and precision components of CCC were 0.997 and 0.977, respectively. This high throughput screening assay is precise, accurate, sensitive and specific. Anti-PA IgG concentrations determined using two different assays proved high levels of agreement. The method will improve surge testing capability 18-fold from 4 to 72 sera per assay plate. PMID:27814939
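
    The agreement statistic used above can be made concrete with a short sketch of Lin's concordance correlation coefficient and its accuracy/precision factorization (generic formulae, not the authors' analysis scripts):

    ```python
    import numpy as np

    def concordance_correlation(x, y):
        """Lin's concordance correlation coefficient between paired measurements."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

    def ccc_components(x, y):
        """Split CCC into its precision (Pearson r) and accuracy (C_b) factors."""
        ccc = concordance_correlation(x, y)
        precision = np.corrcoef(x, y)[0, 1]
        return ccc, precision, ccc / precision   # CCC, precision, accuracy
    ```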

  10. Analysis and calibration of Safecast data relative to the 2011 Fukushima Daiichi nuclear accident

    NASA Astrophysics Data System (ADS)

    Cervone, G.; Hultquist, C.

    2017-12-01

    Citizen-led movements producing scientific hazard data during disasters are increasingly common. After the Japanese earthquake-triggered tsunami in 2011, and the resulting radioactive releases at the damaged Fukushima Daiichi nuclear power plants, citizens monitored on-ground levels of radiation with innovative mobile devices built from off-the-shelf components. To date, the citizen-led Safecast project has recorded 50 million radiation measurements worldwide, with the majority of these measurements from Japan. A robust methodology is presented to calibrate contributed Safecast radiation measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using official observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the official and contributed datasets at specific time windows and at corresponding spatial locations. The coefficients found are aggregated and interpolated using cubic and linear methods to generate a time-dependent calibration function. Normal background radiation, decay rates and missing values are taken into account during the analysis. Results show that the official Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different Cesium isotopes and their changing ratio with time. The new time-dependent calibration function takes into account the presence of different Cesium isotopes, and minimizes the error between official and contributed data. This time-dependent Safecast calibration function is necessary until 2030, after which date the error caused by the isotope ratio will become negligible.
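
    A hedged sketch of the interpolation step described above is given below; the window times and calibration factors are invented placeholders, not the study's coefficients, which would come from regressing co-located Safecast and official observations inside each time window.

    ```python
    import numpy as np
    from scipy.interpolate import interp1d

    # Illustrative per-window calibration factors (contributed readings ->
    # official dose-rate units); values are made up for the sketch.
    window_days = np.array([10.0, 60.0, 180.0, 400.0, 900.0, 1800.0])
    window_factor = np.array([0.95, 0.90, 0.86, 0.83, 0.81, 0.80])

    # Time-dependent calibration: cubic inside the sampled range, constant outside it.
    calibration = interp1d(window_days, window_factor, kind="cubic",
                           bounds_error=False,
                           fill_value=(window_factor[0], window_factor[-1]))

    def calibrate(reading, days_since_accident):
        """Apply the time-dependent calibration to a contributed reading."""
        return reading * float(calibration(days_since_accident))
    ```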

  11. Validation of high throughput screening of human sera for detection of anti-PA IgG by Enzyme-Linked Immunosorbent Assay (ELISA) as an emergency response to an anthrax incident.

    PubMed

    Semenova, Vera A; Steward-Clark, Evelene; Maniatis, Panagiotis; Epperson, Monica; Sabnis, Amit; Schiffer, Jarad

    2017-01-01

    To improve surge testing capability for a response to a release of Bacillus anthracis, the CDC anti-Protective Antigen (PA) IgG Enzyme-Linked Immunosorbent Assay (ELISA) was re-designed into a high throughput screening format. The following assay performance parameters were evaluated: goodness of fit (measured as the mean reference standard r²), accuracy (measured as percent error), precision (measured as coefficient of variance (CV)), lower limit of detection (LLOD), lower limit of quantification (LLOQ), dilutional linearity, diagnostic sensitivity (DSN) and diagnostic specificity (DSP). The paired sets of data for each sample were evaluated by Concordance Correlation Coefficient (CCC) analysis. The goodness of fit was 0.999; percent error between the expected and observed concentration for each sample ranged from -4.6% to 14.4%. The coefficient of variance ranged from 9.0% to 21.2%. The assay LLOQ was 2.6 μg/mL. The regression analysis results for dilutional linearity data were r² = 0.952, slope = 1.02 and intercept = -0.03. CCC between assays was 0.974 for the median concentration of serum samples. The accuracy and precision components of CCC were 0.997 and 0.977, respectively. This high throughput screening assay is precise, accurate, sensitive and specific. Anti-PA IgG concentrations determined using two different assays proved high levels of agreement. The method will improve surge testing capability 18-fold from 4 to 72 sera per assay plate. Published by Elsevier Ltd.

  12. Anatomic, clinical, and neuropsychological correlates of spelling errors in primary progressive aphasia.

    PubMed

    Shim, Hyungsub; Hurley, Robert S; Rogalski, Emily; Mesulam, M-Marsel

    2012-07-01

    This study evaluates spelling errors in the three subtypes of primary progressive aphasia (PPA): agrammatic (PPA-G), logopenic (PPA-L), and semantic (PPA-S). Forty-one PPA patients and 36 age-matched healthy controls were administered a test of spelling. The total number of errors and types of errors in spelling to dictation of regular words, exception words and nonwords, were recorded. Error types were classified based on phonetic plausibility. In the first analysis, scores were evaluated by clinical diagnosis. Errors in spelling exception words and phonetically plausible errors were seen in PPA-S. Conversely, PPA-G was associated with errors in nonword spelling and phonetically implausible errors. In the next analysis, spelling scores were correlated to other neuropsychological language test scores. Significant correlations were found between exception word spelling and measures of naming and single word comprehension. Nonword spelling correlated with tests of grammar and repetition. Global language measures did not correlate significantly with spelling scores, however. Cortical thickness analysis based on MRI showed that atrophy in several language regions of interest was correlated with spelling errors. Atrophy in the left supramarginal gyrus and inferior frontal gyrus (IFG) pars orbitalis correlated with errors in nonword spelling, while thinning in the left temporal pole and fusiform gyrus correlated with errors in exception word spelling. Additionally, phonetically implausible errors in regular word spelling correlated with thinning in the left IFG pars triangularis and pars opercularis. Together, these findings suggest two independent systems for spelling to dictation, one phonetic (phoneme to grapheme conversion), and one lexical (whole word retrieval). Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error

    PubMed Central

    Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee

    2017-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146

  14. Analyse des erreurs et grammaire generative: La syntaxe de l'interrogation en francais (Error Analysis and Generative Grammar: The Syntax of Interrogation in French).

    ERIC Educational Resources Information Center

    Py, Bernard

    A progress report is presented of a study which applies a system of generative grammar to error analysis. The objective of the study was to reconstruct the grammar of students' interlanguage, using a systematic analysis of errors. (Interlanguage refers to the linguistic competence of a student who possesses a relatively systematic body of rules,…

  15. Behind Human Error: Cognitive Systems, Computers and Hindsight

    DTIC Science & Technology

    1994-12-01

    Fragmentary excerpts from the report front matter and contents: "...evaluations • Organize and/or conduct workshops and conferences. CSERIAC is a Department of Defense Information Analysis Center sponsored by the Defense..."; listed contents include "Neutral Observer Criteria", "Error Analysis as Causal Judgment", "Error as Information", "A Fundamental Surprise", and "What is Human..."; "...Kahnemann, 1974), and in risk analysis (Dougherty and Fragola, 1990). The discussions have continued in a wide variety of forums, including the..."

  16. Developing and Validating Path-Dependent Uncertainty Estimates for use with the Regional Seismic Travel Time (RSTT) Model

    NASA Astrophysics Data System (ADS)

    Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.

    2016-12-01

    The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models that have the benefit of being very fast to use in standard location algorithms, but they do not account for path-dependent variations in error or for the structural inadequacy of the RSTT model (i.e., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist. A simple 1D error model does not accurately model areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded into a statistical random effects model that captures distance, path and model error effects. An initial method developed is a two-dimensional path-distributed method using residuals. The goals for any RSTT uncertainty method are for it to be readily useful for the standard RSTT user as well as to improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and validation of the error model for Sn, Pg, and Lg phases.

  17. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
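
    The core of the approach is that a plume-model prediction is linear in the emission rate, so the rate can be retrieved by least squares from concentrations measured across the plume. The sketch below illustrates this with a textbook ground-level Gaussian plume; it is a simplified stand-in for the authors' statistical inversion system, and the wind speed, dispersion coefficients, and receptor layout are invented.

```python
import numpy as np

def gaussian_plume(q, y, u, sigma_y, sigma_z):
    """Ground-level concentration downwind of a ground-level point source
    emitting q (g/s), for wind speed u (m/s) and dispersion coefficients
    sigma_y, sigma_z (m) evaluated at the receptor distance."""
    return q / (np.pi * u * sigma_y * sigma_z) * np.exp(-y**2 / (2.0 * sigma_y**2))

# Hypothetical crosswind transect of receptors 100 m downwind of the source
y = np.linspace(-60.0, 60.0, 25)     # crosswind receptor positions (m)
u, sy, sz = 3.0, 12.0, 7.0           # assumed wind speed and dispersion (made up)
true_q = 0.5                         # true emission rate (g/s), unknown in practice
rng = np.random.default_rng(0)
obs = gaussian_plume(true_q, y, u, sy, sz)
obs = obs + rng.normal(0.0, 0.02 * obs.max(), obs.size)   # measurement noise

# The plume model is linear in q, so the rate follows from linear least squares
kernel = gaussian_plume(1.0, y, u, sy, sz)   # modelled response to a unit emission
q_hat = (kernel @ obs) / (kernel @ kernel)
print(f"estimated emission rate: {q_hat:.3f} g/s")
```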

  18. Analysis of ICESat Data Using Kalman Filter and Kriging to Study Height Changes in East Antarctica

    NASA Technical Reports Server (NTRS)

    Herring, Thomas A.

    2005-01-01

    We analyze ICESat derived heights collected between Feb. 03-Nov. 04 using a kriging/Kalman filtering approach to investigate height changes in East Antarctica. The model's parameters are height change to an a priori static digital height model, seasonal signal expressed as an amplitude Beta and phase Theta, and height-change rate dh/dt for each (100 km)² block. From the Kalman filter results, dh/dt has a mean of -0.06 m/yr in the flat interior of East Antarctica. Spatially correlated pointing errors in the current data releases give uncertainties in the range 0.06 m/yr, making height change detection unreliable at this time. Our test shows that when using all available data with pointing knowledge equivalent to that of Laser 2a, height change detection with an accuracy level 0.02 m/yr can be achieved over flat terrains in East Antarctica.
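
    As a toy illustration of the Kalman-filter part of such an analysis, the sketch below estimates a height-change rate dh/dt from noisy repeat height anomalies in a single block; it omits the kriging step and the correlated-pointing-error model, and all numbers are invented.

```python
import numpy as np

# Toy Kalman filter: estimate height offset h and rate dh/dt from noisy
# repeat observations of surface height in one block (illustrative only).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.75, 20)                      # observation epochs (years)
truth_rate = -0.06                                   # m/yr, value chosen to echo the abstract
z = truth_rate * t + rng.normal(0.0, 0.15, t.size)   # noisy height anomalies (m)

x = np.zeros(2)                                      # state: [h, dh/dt]
P = np.diag([1.0, 1.0])                              # initial state covariance
R = 0.15**2                                          # observation variance (m^2)
Q = np.diag([1e-6, 1e-6])                            # small process noise

for k in range(1, t.size):
    dt = t[k] - t[k - 1]
    F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-rate dynamics
    x = F @ x
    P = F @ P @ F.T + Q
    H = np.array([[1.0, 0.0]])                       # we observe height only
    S = H @ P @ H.T + R
    K = P @ H.T / S
    x = x + (K * (z[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"dh/dt = {x[1]:.3f} +/- {np.sqrt(P[1, 1]):.3f} m/yr")
```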

  19. Realization and testing of a deployable space telescope based on tape springs

    NASA Astrophysics Data System (ADS)

    Lei, Wang; Li, Chuang; Zhong, Peifeng; Chong, Yaqin; Jing, Nan

    2017-08-01

    Owing to its compact size and light weight, a space telescope with a deployable support structure for its secondary mirror is well suited as an optical payload for a nanosatellite or a CubeSat. This paper first introduces the realization of a prototype deployable space telescope based on tape springs. The deployable telescope is composed of a primary mirror assembly, a secondary mirror assembly, six foldable tape springs that support the secondary mirror assembly, a deployable baffle, aft optic components, and a set of lock-release devices based on shape memory alloy. The deployment errors of the secondary mirror are then measured with a three-coordinate measuring machine to examine the alignment accuracy between the primary mirror and the deployed secondary mirror. Finally, modal identification is performed on the telescope in its deployed state with impact hammer testing to investigate its dynamic behavior. The results of the experimental modal identification agree well with those from finite element analysis.

  20. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    To improve the inspection precision of the H-drive air-bearing stage for wafer inspection, the geometric error of the stage is analyzed and compensated in this paper. The relationship between the positioning errors and the error sources is initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The factor that most strongly affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model gives better compensation results. In addition, the research result is valuable for improving wafer inspection accuracy and will greatly benefit the semiconductor industry.
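
    A minimal sketch of B-spline-based error compensation is shown below: a calibration error map is fitted with a cubic B-spline and the predicted error is subtracted from the commanded position. The stage travel, error curve, and units are invented and scipy is assumed to be available; this is not the authors' compensation code.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical calibration data: commanded positions (mm) and the measured
# geometric error (um) at those positions (values are invented).
pos_cal = np.linspace(0.0, 300.0, 16)
err_cal = 2.0 * np.sin(pos_cal / 45.0) + 0.01 * pos_cal   # made-up error curve

# Fit a cubic B-spline to the calibration data; evaluating it gives a
# compensation value that is subtracted from the commanded trajectory.
spline = make_interp_spline(pos_cal, err_cal, k=3)

def compensated_command(target_mm):
    """Commanded position corrected by the predicted geometric error."""
    return target_mm - spline(target_mm) * 1e-3   # um -> mm

targets = np.array([25.0, 137.5, 262.0])
print(compensated_command(targets))
```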

  1. Planck 2015 results: VI. LFI mapmaking

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Ashdown, M.; ...

    2016-09-20

    This article describes the mapmaking procedure applied to Planck Low Frequency Instrument (LFI) data. The mapmaking step takes as input the calibrated timelines and pointing information. The main products are sky maps of I, Q, and U Stokes components. For the first time, we present polarization maps at LFI frequencies. The mapmaking algorithm is based on a destriping technique, which is enhanced with a noise prior. The Galactic region is masked to reduce errors arising from bandpass mismatch and high signal gradients. We apply horn-uniform radiometer weights to reduce the effects of beam-shape mismatch. The algorithm is the same as used for the 2013 release, apart from small changes in parameter settings. We validate the procedure through simulations. Special emphasis is put on the control of systematics, which is particularly important for accurate polarization analysis. We also produce low-resolution versions of the maps and corresponding noise covariance matrices. These serve as input in later analysis steps and parameter estimation. The noise covariance matrices are validated through noise Monte Carlo simulations. The residual noise in the map products is characterized through analysis of half-ring maps, noise covariance matrices, and simulations.

  2. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  3. High-frequency video capture and a computer program with frame-by-frame angle determination functionality as tools that support judging in artistic gymnastics.

    PubMed

    Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej

    2015-01-01

    The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of knee and hip joints using the computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for the flexion of the knee joints. In the high-speed video analysis method, the total amounted to 8.6 error points and the mean value amounted to 0.24 error points. For the excessive flexion of hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and high-speed video analysis performed without determining the exact joint angle were found to be insufficient tools for improving the quality of judging.

  4. A study for systematic errors of the GLA forecast model in tropical regions

    NASA Technical Reports Server (NTRS)

    Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin

    1988-01-01

    From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.

  5. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for statics loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.

  6. Implications of Error Analysis Studies for Academic Interventions

    ERIC Educational Resources Information Center

    Mather, Nancy; Wendling, Barbara J.

    2017-01-01

    We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…

  7. A Conjoint Analysis Framework for Evaluating User Preferences in Machine Translation

    PubMed Central

    Kirchhoff, Katrin; Capurro, Daniel; Turner, Anne M.

    2013-01-01

    Despite much research on machine translation (MT) evaluation, there is surprisingly little work that directly measures users’ intuitive or emotional preferences regarding different types of MT errors. However, the elicitation and modeling of user preferences is an important prerequisite for research on user adaptation and customization of MT engines. In this paper we explore the use of conjoint analysis as a formal quantitative framework to assess users’ relative preferences for different types of translation errors. We apply our approach to the analysis of MT output from translating public health documents from English into Spanish. Our results indicate that word order errors are clearly the most dispreferred error type, followed by word sense, morphological, and function word errors. The conjoint analysis-based model is able to predict user preferences more accurately than a baseline model that chooses the translation with the fewest errors overall. Additionally we analyze the effect of using a crowd-sourced respondent population versus a sample of domain experts and observe that main preference effects are remarkably stable across the two samples. PMID:24683295
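
    The sketch below illustrates the basic idea of estimating part-worth utilities by regressing preference scores on error-type attributes; it is a toy ordinary-least-squares stand-in for a full conjoint analysis, and all profiles and scores are fabricated.

```python
import numpy as np

# Illustrative part-worth estimation: each translation profile is described by
# counts of four error types and receives a preference score (all fabricated).
# Columns: word order, word sense, morphological, function word errors
X = np.array([
    [2, 0, 1, 0],
    [0, 2, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 2, 2],
    [3, 1, 1, 0],
    [0, 1, 1, 1],
], dtype=float)
scores = np.array([2.0, 4.5, 3.8, 5.0, 1.5, 4.8])   # higher = preferred

# Ordinary least squares gives one part-worth (utility weight) per error type;
# more negative weights indicate more strongly dispreferred error types.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
labels = ["intercept", "word order", "word sense", "morphological", "function word"]
for name, w in zip(labels, coef):
    print(f"{name:>13s}: {w:+.2f}")
```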

  8. Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors

    DTIC Science & Technology

    2017-05-01

    Fragmentary first-page excerpt: "Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors," Timothy M. Yarnall, David J. Geisler, Curt M. Schieler... (Massachusetts Avenue, Cambridge, MA 02139, USA). Abstract: "Free space coupling to photonic lanterns is more tolerant to tilt errors and F-number mismatch than...these errors." Introduction: "Photonic lanterns provide a means for transitioning from the free space regime to the single-mode fiber (SMF) regime by..."

  9. Quotation accuracy in medical journal articles-a systematic review and meta-analysis.

    PubMed

    Jergas, Hannah; Baethge, Christopher

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose (quotation errors) may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4]. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress.
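
    A common way to pool error proportions across studies is a DerSimonian-Laird random-effects model on the logit scale, sketched below; this is a generic implementation with fabricated study counts, not the authors' analysis.

```python
import numpy as np

def random_effects_pooled_proportion(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale; returns the pooled proportion and its 95% CI."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1.0 - p))                    # logit-transformed rates
    v = 1.0 / events + 1.0 / (totals - events)   # approximate variances
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))

# Fabricated example: quotation errors found / quotations checked per study
errors = [30, 12, 55, 8, 21]
checked = [120, 90, 200, 60, 75]
print(random_effects_pooled_proportion(errors, checked))
```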

  10. ATC operational error analysis.

    DOT National Transportation Integrated Search

    1972-01-01

    The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...

  11. Residual thermal and moisture influences on the strain energy release rate analysis of edge delamination

    NASA Technical Reports Server (NTRS)

    Obrien, T. K.; Raju, I. S.; Garber, D. P.

    1985-01-01

    A laminated plate theory analysis is developed to calculate the strain energy release rate associated with edge delamination growth in a composite laminate. The analysis includes the contribution of residual thermal and moisture stresses to the strain energy released. The strain energy release rate, G, increased when residual thermal effects were combined with applied mechanical strains, but then decreased when increasing moisture content was included. A quasi-three-dimensional finite element analysis indicated identical trends and demonstrated these same trends for the individual strain energy release rate components, G sub I and G sub II, associated with interlaminar tension and shear. An experimental study indicated that for T300/5208 graphite-epoxy composites, the inclusion of residual thermal and moisture stresses did not significantly alter the calculation of interlaminar fracture toughness from strain energy release rate analysis of edge delamination data taken at room temperature, ambient conditions.

  12. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    PubMed

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

    Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.

  13. Comprehensive analysis of a medication dosing error related to CPOE.

    PubMed

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  14. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback

    PubMed Central

    Lee, Jackson C.; Mittelman, Talia; Stepp, Cara E.; Bohland, Jason W.

    2017-01-01

    Purpose Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Method Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. Results New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. Conclusions This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. Supplemental Material https://doi.org/10.23641/asha.5103067 PMID:28655038

  15. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
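
    A widely used correction for differential allelic amplification estimates the amplification ratio k from heterozygous reference individuals and applies it to the pooled peak heights, as sketched below; this generic sketch with fabricated peak heights is not necessarily the exact estimator used in the paper.

```python
import numpy as np

def corrected_pool_frequency(height_a, height_b, het_a, het_b):
    """Estimate the allele-A frequency in a DNA pool from peak heights,
    correcting for differential allelic amplification using heterozygous
    reference individuals (in whom the true allele ratio is 1:1)."""
    k = np.mean(np.asarray(het_a, float) / np.asarray(het_b, float))
    return height_a / (height_a + k * height_b)

# Fabricated peak heights: pooled sample and five heterozygous controls
pool_a, pool_b = 1480.0, 2100.0
het_a = [950, 1020, 890, 1005, 970]
het_b = [1100, 1180, 1010, 1150, 1120]

raw = pool_a / (pool_a + pool_b)
print(f"uncorrected frequency: {raw:.3f}")
print(f"corrected frequency:   {corrected_pool_frequency(pool_a, pool_b, het_a, het_b):.3f}")
```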

  16. Error of semiclassical eigenvalues in the semiclassical limit - an asymptotic analysis of the Sinai billiard

    NASA Astrophysics Data System (ADS)

    Dahlqvist, Per

    1999-10-01

    We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^(-2/3)) to the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.

  17. Study on Network Error Analysis and Locating based on Integrated Information Decision System

    NASA Astrophysics Data System (ADS)

    Yang, F.; Dong, Z. H.

    2017-10-01

    The integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, comprising almost a hundred kinds of software that provide various services such as email, short messages, drawing and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration and operation, which seriously affect usage. Because these errors are varied and may occur in different operation phases and stages, TCP/IP communication protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, which provides strong theoretical and technical support for the running and communication of IIDS.

  18. Validation of Metrics as Error Predictors

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
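
    The kind of logistic regression used for such error prediction can be sketched as follows; the metrics, data, and coefficients are synthetic and scikit-learn is assumed to be available, so this is an illustration of the technique rather than the fitted model from the chapter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: three structural metrics per process model (size, number of
# connectors, nesting depth) and a binary flag for whether the model has an error.
rng = np.random.default_rng(42)
n = 200
size = rng.integers(5, 120, n).astype(float)
connectors = rng.integers(0, 40, n).astype(float)
depth = rng.integers(1, 8, n).astype(float)
X = np.column_stack([size, connectors, depth])

# In this synthetic example the error probability rises with size and connectors
logit = -4.0 + 0.03 * size + 0.05 * connectors
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", model.coef_.ravel())
print("P(error | size=80, connectors=25, depth=4):",
      model.predict_proba([[80.0, 25.0, 4.0]])[0, 1])
```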

  19. 27 CFR 70.151 - Administrative appeal of the erroneous filing of notice of Federal tax lien.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... rights to property of such person for a release of lien alleging an error in the filing of notice of lien... the erroneous filing of notice of Federal tax lien. 70.151 Section 70.151 Alcohol, Tobacco Products... Lien for Taxes § 70.151 Administrative appeal of the erroneous filing of notice of Federal tax lien. (a...

  20. 77 FR 20863 - Self-Regulatory Organizations; EDGA Exchange, Inc.; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-06

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-66714; File No. SR-EDGA-2012-09] Self-Regulatory Organizations; EDGA Exchange, Inc.; Notice of Filing of Proposed Rule Change Relating to Amendments to Rule 2.11 That Establish the Authority To Cancel Orders and Describe the Operation of an Error Account April 2, 2012. Pursuant to Section 19(b)(...

  1. 77 FR 20854 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-06

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-66713; File No. SR-EDGX-2012-08] Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing of Proposed Rule Change Relating to Amendments to Rule 2.11 That Establish the Authority To Cancel Orders and Describe the Operation of an Error Account April 2, 2012. Pursuant to Section 19(b)(...

  2. Global Behavior in Large Scale Systems

    DTIC Science & Technology

    2013-12-05

    Fragmentary excerpts from the report cover and opening pages: distribution statement "...release."; issuing organization "AIR FORCE RESEARCH LABORATORY, AF OFFICE OF SCIENTIFIC RESEARCH (AFOSR)/RSL, ARLINGTON, VIRGINIA 22203, AIR FORCE MATERIEL COMMAND, AFRL-OSR-VA..."; address "...and Research, 875 Randolph Street, Suite 325, Room 3112, Arlington, VA 22203, December 3, 2013". Abstract: "This research attained two main achievements: 1...microscopic random interactions among the agents." Introduction: "In this research we considered two main problems: 1) large deviation error performance in..."

  3. Testicular gonadotropin-releasing hormone II receptor (GnRHR-II) knockdown constitutively impairs diurnal testosterone secretion in the boar

    USDA-ARS?s Scientific Manuscript database

    The second mammalian GnRH isoform (GnRH-II) and its specific receptor (GnRHR-II) are highly expressed in the testis, suggesting an important role in testis biology. Gene coding errors prevent the production of GnRH-II and GnRHR-II in many species, but both genes are functional in swine. We have demo...

  4. Error analysis and correction of discrete solutions from finite element codes

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.

    1984-01-01

    Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.

  5. Optical dosimetry probes to validate Monte Carlo and empirical-method-based NIR dose planning in the brain.

    PubMed

    Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M

    2016-12-01

    Three-dimensional photon dosimetry in tissues is critical in designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of a Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Differences between the MC and the dosimetry probe measurements averaged 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, and 8.91% perpendicular to the beam axis, for the WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured percent errors were 17.71% and 18.04% along and perpendicular to the beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R² = 0.99), with average percent errors of 10.1%, 45.2%, and 22.1% relative to probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for the WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s versus 8 h for the GPU-based Monte Carlo for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.

  6. Optimization of sustained release aceclofenac microspheres using response surface methodology.

    PubMed

    Deshmukh, Rameshwar K; Naik, Jitendra B

    2015-03-01

    Polymeric microspheres containing aceclofenac were prepared by a single emulsion (oil-in-water) solvent evaporation method using response surface methodology (RSM). Microspheres were prepared by changing formulation variables such as the amount of Eudragit® RS100 and the amount of polyvinyl alcohol (PVA) according to a statistical experimental design in order to enhance the encapsulation efficiency (E.E.) of the microspheres. The resultant microspheres were evaluated for their size, morphology, E.E., and in vitro drug release. The amounts of Eudragit® RS100 and PVA were both found to be significant factors in determining the E.E. of the microspheres. A linear mathematical model equation fitted to the data was used to predict the E.E. in the optimal region. An optimized formulation of microspheres was prepared using the optimal process variable settings in order to evaluate the optimization capability of the models generated according to the IV-optimal design. The microspheres showed high E.E. (74.14±0.015% to 85.34±0.011%) and suitably sustained drug release (minimum 40% to maximum 60%) over a period of 12 h. The optimized microsphere formulation showed an E.E. of 84.87±0.005% with a small error value (1.39). The low magnitude of error and the significant value of R² in the present investigation prove the high prognostic ability of the design. The absence of interactions between drug and polymers was confirmed by Fourier transform infrared (FTIR) spectroscopy. Differential scanning calorimetry (DSC) and X-ray powder diffractometry (XRPD) revealed the dispersion of the drug within the microsphere formulation. The microspheres were found to be discrete and spherical with a smooth surface. The results demonstrate that these microspheres could be a promising delivery system to sustain drug release and improve the E.E., thus prolonging drug action and achieving the highest healing effect with minimal gastrointestinal side effects. Copyright © 2014 Elsevier B.V. All rights reserved.
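
    The response-surface idea can be sketched with an ordinary-least-squares fit of a quadratic model followed by a grid search for the optimum, as below; the design points, responses, and the resulting optimum are fabricated and do not reproduce the paper's IV-optimal design.

```python
import numpy as np

# Fabricated design points: Eudragit RS100 amount (mg), PVA concentration (% w/v),
# and measured encapsulation efficiency E.E. (%). All values are illustrative.
eud = np.array([100, 100, 200, 200, 150, 150, 150, 100, 200], dtype=float)
pva = np.array([0.5, 1.5, 0.5, 1.5, 1.0, 0.5, 1.5, 1.0, 1.0])
ee  = np.array([74.1, 76.5, 80.2, 85.3, 83.0, 78.9, 82.4, 75.8, 84.0])

# Quadratic response-surface model fitted by ordinary least squares
X = np.column_stack([np.ones_like(eud), eud, pva, eud * pva, eud**2, pva**2])
b, *_ = np.linalg.lstsq(X, ee, rcond=None)

# Grid search over the design space for the settings that maximise predicted E.E.
ge, gp = np.meshgrid(np.linspace(100, 200, 101), np.linspace(0.5, 1.5, 101))
pred = b[0] + b[1]*ge + b[2]*gp + b[3]*ge*gp + b[4]*ge**2 + b[5]*gp**2
i, j = np.unravel_index(np.argmax(pred), pred.shape)
print(f"predicted optimum: Eudragit = {ge[i, j]:.0f} mg, "
      f"PVA = {gp[i, j]:.2f} %, E.E. = {pred[i, j]:.1f} %")
```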

  7. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
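
    To make the irreducible-error concept concrete, the sketch below sets up a synthetic one-parameter optimal-estimator analysis in which the true irreducible error is known, and shows how a histogram (bin-mean) estimate of E[q|phi] depends on the bin count; the data and bin counts are invented and the example does not reproduce the paper's multi-parameter soot case.

```python
import numpy as np

# Synthetic single-parameter optimal-estimator analysis: the target q depends
# on the input parameter phi plus noise, so the true irreducible error of any
# model built on phi alone equals the noise variance (0.1**2 here).
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 1.0, 50_000)
q = np.sin(2.0 * np.pi * phi) + rng.normal(0.0, 0.1, phi.size)

def irreducible_error_histogram(phi, q, nbins):
    """Estimate E[(q - E[q|phi])^2] with a histogram (bin-mean) estimator."""
    bins = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.clip(np.digitize(phi, bins) - 1, 0, nbins - 1)
    counts = np.maximum(np.bincount(idx, minlength=nbins), 1)
    bin_means = np.bincount(idx, weights=q, minlength=nbins) / counts
    return np.mean((q - bin_means[idx]) ** 2)

# Too few bins add a spurious contribution from the curvature of E[q|phi]
# within each bin; too many bins leave too few samples per bin.
for nbins in (16, 256, 4096):
    print(nbins, irreducible_error_histogram(phi, q, nbins))
print("noise variance (true irreducible error):", 0.1 ** 2)
```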

  8. The Fermi Large Area Telescope on Orbit: Event Classification, Instrument Response Functions, and Calibration

    NASA Technical Reports Server (NTRS)

    Ackermann, M.; Ajello, M.; Albert, A.; Allafort, A.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; hide

    2012-01-01

    The Fermi Large Area Telescope (Fermi-LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view, high-energy gamma-ray telescope, covering the energy range from 20 MeV to more than 300 GeV. During the first years of the mission the LAT team has gained considerable insight into the in-flight performance of the instrument. Accordingly, we have updated the analysis used to reduce LAT data for public release as well as the Instrument Response Functions (IRFs), the description of the instrument performance provided for data analysis. In this paper we describe the effects that motivated these updates. Furthermore, we discuss how we originally derived IRFs from Monte Carlo simulations and later corrected those IRFs for discrepancies observed between flight and simulated data. We also give details of the validations performed using flight data and quantify the residual uncertainties in the IRFs. Finally, we describe techniques the LAT team has developed to propagate those uncertainties into estimates of the systematic errors on common measurements such as fluxes and spectra of astrophysical sources.

  9. Spelling Errors of Dyslexic Children in Bosnian Language With Transparent Orthography.

    PubMed

    Duranović, Mirela

    The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in the Bosnian language, which has a transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% phonological errors, 10% orthographic errors, and 4% grammatical errors. Furthermore, the majority of errors were omissions and substitutions, followed by insertions, omission of the rules of assimilation by voicing, and errors in the use of suffixes. We can conclude that phonological errors were dominant in children with dyslexia at all grade levels.

  10. Automated documentation error detection and notification improves anesthesia billing performance.

    PubMed

    Spring, Stephen F; Sandberg, Warren S; Anupama, Shaji; Walsh, John L; Driscoll, William D; Raines, Douglas E

    2007-01-01

    Documentation of key times and events is required to obtain reimbursement for anesthesia services. The authors installed an information management system to improve record keeping and billing performance but found that a significant number of their records still could not be billed in a timely manner, and some records were never billed at all because they contained documentation errors. Computer software was developed that automatically examines electronic anesthetic records and alerts clinicians to documentation errors by alphanumeric page and e-mail. The software's efficacy was determined retrospectively by comparing billing performance before and after its implementation. Staff satisfaction with the software was assessed by survey. After implementation of this software, the percentage of anesthetic records that could never be billed declined from 1.31% to 0.04%, and the median time to correct documentation errors decreased from 33 days to 3 days. The average time to release an anesthetic record to the billing service decreased from 3.0+/-0.1 days to 1.1+/-0.2 days. More than 90% of staff found the system to be helpful and easier to use than the previous manual process for error detection and notification. This system allowed the authors to reduce the median time to correct documentation errors and the number of anesthetic records that were never billed by at least an order of magnitude. The authors estimate that these improvements increased their department's revenue by approximately $400,000 per year.

  11. [Validation of a method for notifying and monitoring medication errors in pediatrics].

    PubMed

    Guerrero-Aznar, M D; Jiménez-Mesa, E; Cotrina-Luque, J; Villalba-Moreno, A; Cumplido-Corbacho, R; Fernández-Fernández, L

    2014-12-01

    To analyze the impact of a multidisciplinary and decentralized safety committee in the pediatric management unit, and the joint implementation of a computing network application for reporting medication errors, monitoring the follow-up of the errors, and analyzing the improvements introduced. An observational, descriptive, cross-sectional, pre-post intervention study was performed. An analysis was made of medication errors reported to the central safety committee in the twelve months prior to introduction, and those reported to the decentralized safety committee in the management unit in the nine months after implementation, using the computer application, and the strategies generated by the analysis of reported errors. Outcome measures were the number of reported errors per 10,000 days of stay, the number of reported errors with harm per 10,000 days of stay, types of error, categories based on severity, stage of the process, and groups involved in the notification of medication errors. Reported medication errors increased 4.6-fold, from 7.6 notifications of medication errors per 10,000 days of stay in the pre-intervention period to 36 in the post-intervention period (rate ratio 0.21, 95% CI 0.11-0.39, P<.001). The number of medication errors with harm or requiring monitoring reported per 10,000 days of stay was virtually unchanged from one period to the other (rate ratio 0.77, 95% CI 0.31-1.91, P>.05). The notification of potential errors or errors without harm per 10,000 days of stay increased 17.4-fold (rate ratio 0.005, 95% CI 0.001-0.026, P<.001). The increase in medication errors notified in the post-intervention period is a reflection of an increase in the motivation of health professionals to report errors through this new method. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.

  12. [Improving blood safety: errors management in transfusion medicine].

    PubMed

    Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana

    2014-01-01

    The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with the transfusion of blood to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to the type, frequency and part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. The error reporting system has an important role in error management and in reducing the transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. A large percentage of errors in transfusion medicine can be avoided, and prevention is cost-effective, systematic and applicable.

  13. Towards a Usability and Error "Safety Net": A Multi-Phased Multi-Method Approach to Ensuring System Usability and Safety.

    PubMed

    Kushniruk, Andre; Senathirajah, Yalini; Borycki, Elizabeth

    2017-01-01

    The usability and safety of health information systems have become major issues in the design and implementation of useful healthcare IT. In this paper we describe a multi-phased multi-method approach to integrating usability engineering methods into system testing to ensure both usability and safety of healthcare IT upon widespread deployment. The approach involves usability testing followed by clinical simulation (conducted in-situ) and "near-live" recording of user interactions with systems. At key stages in this process, usability problems are identified and rectified forming a usability and technology-induced error "safety net" that catches different types of usability and safety problems prior to releasing systems widely in healthcare settings.

  14. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and is intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  15. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Bayard, David S.

    2013-01-01

    G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to run on any engineer's desktop computer.
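
    The core operation of any such covariance analysis is linear propagation of the error covariance through the system dynamics, as in the generic two-state sketch below; the dynamics, noise levels, and durations are invented and this is not the G-CAT formulation itself.

```python
import numpy as np

# Generic linear covariance propagation of the kind a covariance-analysis tool
# performs: push the state error covariance P through the linearised dynamics F,
# add process noise Q at each step, and read off the 1-sigma errors without
# running a single Monte Carlo trajectory.
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])              # position/velocity error dynamics
Q = np.diag([0.0, 1e-4])                # accelerometer-driven process noise
P = np.diag([100.0**2, 0.5**2])         # initial position (m) and velocity (m/s) errors

for _ in range(300):                    # 300 s of propagation
    P = F @ P @ F.T + Q

print(f"1-sigma position error after 300 s: {np.sqrt(P[0, 0]):.1f} m")
print(f"1-sigma velocity error after 300 s: {np.sqrt(P[1, 1]):.2f} m/s")
```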

  16. State and force observers based on multibody models and the indirect Kalman filter

    NASA Astrophysics Data System (ADS)

    Sanjurjo, Emilio; Dopico, Daniel; Luaces, Alberto; Naya, Miguel Ángel

    2018-06-01

    The aim of this work is to present two new methods to provide state observers by combining multibody simulations with indirect extended Kalman filters. One of the methods presented also provides input force estimation. The observers have been applied to two mechanisms with four different sensor configurations, and compared to other multibody-based observers found in the literature to evaluate their behavior, namely, the unscented Kalman filter (UKF) and the indirect extended Kalman filter with simplified Jacobians (errorEKF). The new methods have a somewhat higher computational cost than the errorEKF, but still much less than the UKF. Regarding their accuracy, both are better than the errorEKF. The method with input force estimation also outperforms the UKF, while the method without force estimation achieves results almost identical to those of the UKF. All the methods have been implemented as a reusable MATLAB® toolkit which has been released as Open Source at https://github.com/MBDS/mbde-matlab.

  17. Parameterization of bulk condensation in numerical cloud models

    NASA Technical Reports Server (NTRS)

    Kogan, Yefim L.; Martin, William J.

    1994-01-01

    The accuracy of the moist saturation adjustment scheme has been evaluated using a three-dimensional explicit microphysical cloud model. It was found that the error in saturation adjustment depends strongly on the Cloud Condensation Nuclei (CCN) concentration in the ambient atmosphere. The scheme provides rather accurate results in the case where a sufficiently large number of CCN (on the order of several hundred per cubic centimeter) is available. However, under conditions typical of marine stratocumulus cloud layers with low CCN concentration, the error in the amounts of condensed water vapor and released latent heat may be as large as 40%-50%. A revision of the saturation adjustment scheme is devised that employs the CCN concentration, dynamical supersaturation, and cloud water content as additional variables in the calculation of the condensation rate. The revised condensation model reduced the error in maximum updraft and cloud water content in the climatically significant case of marine stratocumulus cloud layers by an order of magnitude.
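
    For reference, a classic bulk saturation adjustment can be sketched as below: vapour in excess of saturation is condensed and the parcel is warmed by the released latent heat, iterating a linearised update because the saturation mixing ratio depends on temperature. The Magnus coefficients, the parcel state, and the iteration count are illustrative choices, not the revised scheme proposed in the abstract.

```python
import numpy as np

CP, LV, RV, EPS = 1004.0, 2.5e6, 461.5, 0.622   # SI units

def q_sat(temp_k, press_pa):
    """Saturation mixing ratio from the Magnus approximation."""
    es = 610.94 * np.exp(17.625 * (temp_k - 273.15) / (temp_k - 30.11))
    return EPS * es / (press_pa - es)

def saturation_adjustment(temp_k, q_v, press_pa, iterations=3):
    """Classic bulk saturation adjustment: condense the vapour in excess of
    saturation and warm the parcel by the released latent heat; a linearised
    (Newton-type) update is iterated because q_sat depends on temperature."""
    q_c = 0.0
    for _ in range(iterations):
        qs = q_sat(temp_k, press_pa)
        dqs_dt = LV * qs / (RV * temp_k ** 2)          # Clausius-Clapeyron slope
        dq = (q_v - qs) / (1.0 + (LV / CP) * dqs_dt)   # condensation increment
        dq = max(dq, -q_c)                             # cannot evaporate more cloud than exists
        q_v -= dq
        q_c += dq
        temp_k += (LV / CP) * dq
    return temp_k, q_v, q_c

# Slightly supersaturated parcel at 900 hPa (numbers are illustrative)
print(saturation_adjustment(283.0, 0.010, 90_000.0))
```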

  18. [Analysis of causes of incorrect use of dose aerosols].

    PubMed

    Petro, W; Gebert, P; Lauber, B

    1994-03-01

    Preparations administered by inhalation make relatively high demands on the skill and knowledge of the patient in handling this form of application, because the effectiveness of the therapy is inseparably linked to its faultless application. The present article aims at analysing possible mistakes in handling and at finding the most effective way of avoiding them. Several groups of patients with different levels of prior knowledge were analysed with respect to handling skill and the influence of training on improving it; the patients' self-assessment was examined by questioning them. Most mistakes are committed by patients whose only information consists of the contents of the package leaflet. Written instructions alone cannot convey sufficient information, especially on how to synchronise actuation of the device with inhalation. Major mistakes are insufficient expiration before application in 85.6% of the patients and lack of synchronisation in 55.9%, while the lowest rate of handling errors was seen in patients who had undergone training and instruction. Training in application combined with demonstration and subsequent exercise reduces the error rate to a tolerable level. Propellant-free powder inhalers and preparations applied by means of a spacer are clearly superior to others, showing comparatively low error rates. 99.3% of all patients believe they are correctly following the instructions, but on going into the question more deeply it becomes apparent that 37.1% of them make incorrect statements. Hence, practical training in application should get top priority in the treatment of obstructive diseases of the airways. The individual steps of inhalation technique must be explained in detail and demonstrated by means of a placebo dosage aerosol. (ABSTRACT TRUNCATED AT 250 WORDS)

  19. Ventral striatal prediction error signaling is associated with dopamine synthesis capacity and fluid intelligence

    PubMed Central

    Schlagenhauf, Florian; Rapp, Michael A.; Huys, Quentin J. M.; Beck, Anne; Wüstenberg, Torsten; Deserno, Lorenz; Buchholz, Hans-Georg; Kalbitzer, Jan; Buchert, Ralph; Kienast, Thorsten; Cumming, Paul; Plotkin, Michail; Kumakura, Yoshitaka; Grace, Anthony A.; Dolan, Raymond J.; Heinz, Andreas

    2013-01-01

    Fluid intelligence represents the capacity for flexible problem solving and rapid behavioral adaptation. Rewards drive flexible behavioral adaptation, in part via a teaching signal expressed as reward prediction errors in the ventral striatum, which has been associated with phasic dopamine release in animal studies. We examined a sample of 28 healthy male adults using multimodal imaging and biological parametric mapping with 1) functional magnetic resonance imaging during a reversal learning task and 2) in a subsample of 17 subjects also with positron emission tomography using 6-[18F]fluoro-L-DOPA to assess dopamine synthesis capacity. Fluid intelligence was measured using a battery of nine standard neuropsychological tests. Ventral striatal BOLD correlates of reward prediction errors were positively correlated with fluid intelligence and, in the right ventral striatum, also inversely correlated with dopamine synthesis capacity (FDOPA Kinapp). When exploring aspects of fluid intelligence, we observed that prediction error signaling correlates with complex attention and reasoning. These findings indicate that individual differences in the capacity for flexible problem solving may be driven by ventral striatal activation during reward-related learning, which in turn proved to be inversely associated with ventral striatal dopamine synthesis capacity. PMID:22344813

  20. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts: approximately 40 percent of all failures occurred in bursts and involved multiple machines, indicating that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.

  1. Nanoscale Coloristic Pigments: Upper Limits on Releases from Pigmented Plastic during Environmental Aging, In Food Contact, and by Leaching.

    PubMed

    Neubauer, Nicole; Scifo, Lorette; Navratilova, Jana; Gondikas, Andreas; Mackevica, Aiga; Borschneck, Daniel; Chaurand, Perrine; Vidal, Vladimir; Rose, Jerome; von der Kammer, Frank; Wohlleben, Wendel

    2017-10-17

    The life cycle of nanoscale pigments in plastics may cause environmental or human exposure by various release scenarios. We investigated spontaneous and induced release with mechanical stress during/after simulated sunlight and rain degradation of polyethylene (PE) with organic and inorganic pigments. Additionally, primary leaching in food contact and secondary leaching from nanocomposite fragments with an increased surface into environmental media was examined. Standardized protocols/methods for release sampling, detection, and characterization of release rate and form were applied: Transformation of the bulk material was analyzed by Scanning Electron Microscopy (SEM), X-ray-tomography and Fourier-Transform Infrared spectroscopy (FTIR); releases were quantified by Inductively Coupled Plasma Mass Spectrometry (ICP-MS), single-particle-ICP-MS (sp-ICP-MS), Transmission Electron Microscopy (TEM), Analytical Ultracentrifugation (AUC), and UV/Vis spectroscopy. In all scenarios, the detectable particulate releases were attributed primarily to contaminations from handling and machining of the plastics, and were not identified with the pigments, although the contamination of 4 mg/kg (Fe) was dwarfed by the intentional content of 5800 mg/kg (Fe as Fe2O3 pigment). We observed modulations (which were at least partially preventable by UV stabilizers) when comparing as-produced and aged nanocomposites, but no significant increase of releases. Release of pigments was negligible within the experimental error for all investigated scenarios, with upper limits of 10 mg/m² or 1600 particles/mL. This is the first holistic confirmation that pigment nanomaterials remain strongly contained in a plastic that has low diffusion and high persistence such as the polyolefin High Density Polyethylene (HDPE).

  2. Acute hazardous substance releases resulting in adverse health consequences in children: Hazardous Substances Emergency Events Surveillance system, 1996-2003.

    PubMed

    Wattigney, Wendy A; Kaye, Wendy E; Orr, Maureen F

    2007-11-01

    Because of their small size and ongoing organ development, children may be more susceptible than adults to the harmful effects of toxic chemicals. The objective of the study reported here was to identify frequent locations, released substances, and factors contributing to short-term chemical exposures associated with adverse health consequences experienced by children. The study examined the Hazardous Substances Emergency Events Surveillance (HSEES) system data from 1996-2003. Eligible events involved the acute release of a hazardous substance associated with at least one child being injured. The study found that injured children were predominantly at school, home, or a recreational center when events took place. School-related events were associated with the accidental release of acids and the release of pepper spray by pranksters. Carbon monoxide poisonings occurring in the home, retail stores, entertainment facilities, and hotels were responsible for about 10 percent of events involving child victims. Chlorine was one of the top chemicals harmful to children, particularly at public swimming pools. Although human error contributed to the majority of releases involving child victims, equipment failure was responsible for most chlorine and ammonia releases. The authors conclude that chemical releases resulting in injury to children occur mostly in schools, homes, and recreational areas. Surveillance of acute hazardous chemical releases helped identify contributing causes and can guide the development of prevention outreach activities. Chemical accidents cannot be entirely prevented, but efforts can be taken to provide safer environments in which children can live, learn, and play. Wide dissemination of safety recommendations and education programs is required to protect children from needless environmental dangers.

  3. A Case of Error Disclosure: A Communication Privacy Management Analysis

    PubMed Central

    Petronio, Sandra; Helft, Paul R.; Child, Jeffrey T.

    2013-01-01

    To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way choices are made by clinicians in telling patients about the mistake have the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, an analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices in revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information, unwittingly following conversational scripts that convey misleading messages, and the difficulty in regulating privacy boundaries in the stressful circumstances that occur with error disclosures. As a consequence, the potential contribution to public health is the ability to more clearly see the significance of the disclosure process that has implications for many public health issues. PMID:25170501

  4. Credibility of Uncertainty Analyses for 131-I Pathway Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, F O.; Anspaugh, L. R.; Apostoaei, A. I.

    2004-05-01

    We would like to make your readers aware of numerous concerns we have with respect to the paper by A. A. Simpkins and D. M. Hamby on Uncertainty in transport factors used to calculate historic dose from 131I releases at the Savannah River Site. The paper by Simpkins and Hamby concludes by saying their uncertainty analysis would add credibility to current dose reconstruction efforts of public exposures to historic releases of 131I from the operations at the Savannah River Site, yet we have found their paper to be afflicted with numerous errors in assumptions and methodology, which in turn lead to grossly misleading conclusions. Perhaps the most egregious errors are their conclusions, which state that: a. the vegetable pathway, not the ingestion of fresh milk, was the main contributor to thyroid dose for exposure to 131I (even though dietary intake of vegetables was less in the past than at present), and b. the probability distribution assigned to the fraction of iodine released in the elemental form (Uniform 0, 0.6) is responsible for 64.6% of the total uncertainty in thyroid dose, given a unit release of 131I to the atmosphere. The assumptions used in the paper by Simpkins and Hamby lead to a large overestimate of the contamination of vegetables by airborne 131I. The interception by leafy and non-leafy vegetables of freshly deposited 131I is known to be highly dependent on the growth form of the crop and the standing crop biomass of leafy material. Unrealistic assumptions are made for losses of 131I from food processing, preparation, and storage prior to human consumption. These assumptions tend to bias their conclusions toward an overestimate of the amount of 131I retained by vegetation prior to consumption. For example, the generic assumption of a 6-d hold-up time is used for the loss from radioactive decay for the time period from harvest to human consumption of fruits, vegetables, and grains. We anticipate hold-up times of many weeks, if not months, between harvest and consumption for most grains and non-leafy forms of vegetation. The combined assumptions made by Simpkins and Hamby about the fraction of fresh deposition intercepted by vegetation, and the rather short hold-up time for most vegetables consumed, probably caused the authors to conclude that the consumption of 131I-contaminated vegetables was more important to dose than was the consumption of fresh sources of milk. This conclusion is surprising, given that the consumption rate assumed for whole milk was rather large and that the value of the milk transfer coefficient was also higher and more uncertain than most distributions reported in the literature. In our experience, the parameters contributing most to the uncertainty in dose for the 131I air-deposition-vegetation-milk-human-thyroid pathway are the deposition velocity for elemental iodine, the mass interception factor for pasture vegetation, the milk transfer coefficient, and the thyroid dose conversion factor. In none of our previous investigations has the consumption of fruits, vegetables, and grains been the dominant contributor to the thyroid dose (or the uncertainty in dose) when the individual also was engaged in the consumption of even moderate quantities of fresh milk. The results of the relative contribution of uncertain input parameters to the overall uncertainty in exposure are counterintuitive. We suspect that calculational errors may have occurred in their application of the software that was used to estimate the relative sensitivity for each uncertain input variable. Their claim that the milk transfer coefficient contributed only 4% to the total uncertainty in the aggregated transfer from release to dose, and that the uncertainty in the vegetation interception fraction contributed only 3.3%, despite relatively large uncertainties assigned to both of these variables, violates our sense of face validity.

  5. Technical Report Series on Global Modeling and Data Assimilation. Volume 40; Soil Moisture Active Passive (SMAP) Project Assessment Report for the Beta-Release L4_SM Data Product

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Reichle, Rolf H.; De Lannoy, Gabrielle J. M.; Liu, Qing; Colliander, Andreas; Conaty, Austin; Jackson, Thomas; Kimball, John

    2015-01-01

    During the post-launch SMAP calibration and validation (Cal/Val) phase there are two objectives for each science data product team: 1) calibrate, verify, and improve the performance of the science algorithm, and 2) validate the accuracy of the science data product as specified in the science requirements and according to the Cal/Val schedule. This report provides an assessment of the SMAP Level 4 Surface and Root Zone Soil Moisture Passive (L4_SM) product specifically for the product's public beta release scheduled for 30 October 2015. The primary objective of the beta release is to allow users to familiarize themselves with the data product before the validated product becomes available. The beta release also allows users to conduct their own assessment of the data and to provide feedback to the L4_SM science data product team. The assessment of the L4_SM data product includes comparisons of SMAP L4_SM soil moisture estimates with in situ soil moisture observations from core validation sites and sparse networks. The assessment further includes a global evaluation of the internal diagnostics from the ensemble-based data assimilation system that is used to generate the L4_SM product. This evaluation focuses on the statistics of the observation-minus-forecast (O-F) residuals and the analysis increments. Together, the core validation site comparisons and the statistics of the assimilation diagnostics are considered primary validation methodologies for the L4_SM product. Comparisons against in situ measurements from regional-scale sparse networks are considered a secondary validation methodology because such in situ measurements are subject to upscaling errors from the point-scale to the grid cell scale of the data product. Based on the limited set of core validation sites, the assessment presented here meets the criteria established by the Committee on Earth Observing Satellites for Stage 1 validation and supports the beta release of the data. The validation against sparse network measurements and the evaluation of the assimilation diagnostics address Stage 2 validation criteria by expanding the assessment to regional and global scales.

  6. Sensitivity of mesoscale-model forecast skill to some initial-data characteristics, data density, data position, analysis procedure and measurement error

    NASA Technical Reports Server (NTRS)

    Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.

    1989-01-01

    The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.

  7. A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students

    ERIC Educational Resources Information Center

    Tizazu, Yoseph

    2014-01-01

    This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected for two years from students ranging from freshmen to graduating level. The sampled compositions were then coded, described, and explained using error analysis method. Both…

  8. Diffraction analysis of sidelobe characteristics of optical elements with ripple error

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Luo, Yupeng; Bai, Jian; Zhou, Xiangdong; Du, Juan; Liu, Qun; Luo, Yujie

    2018-03-01

    The ripple errors of a lens lead to optical damage in high-energy laser systems. The analysis of the sidelobe on the focal plane caused by ripple error provides a reference for evaluating the error and the imaging quality. In this paper, we analyze the diffraction characteristics of the sidelobe of optical elements with ripple errors. First, we analyze the characteristics of ripple error and build the relationship between ripple error and sidelobe: the sidelobe results from the diffraction of ripple errors, and the ripple error tends to be periodic due to the fabrication method used on the optical surface. Simulated experiments are carried out based on the angular spectrum method, characterizing the ripple error as rotationally symmetric periodic structures. The influence of the two major ripple parameters, spatial frequency and peak-to-valley value, on the sidelobe is discussed. The results indicate that spatial frequency and peak-to-valley value both affect the sidelobe at the image plane. The peak-to-valley value is the major factor affecting the energy proportion of the sidelobe, while the spatial frequency is the major factor affecting the distribution of the sidelobe at the image plane.
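
    The link between ripple parameters and focal-plane sidelobes can be illustrated with the short sketch below, which uses a simple 1-D Fraunhofer (focal-plane FFT) picture rather than the paper's full angular-spectrum simulation; the wavelength, aperture, and ripple values are illustrative assumptions.

```python
import numpy as np

# 1-D sketch of how a periodic ripple phase error on a pupil produces
# sidelobes at the focal plane.  All parameters are illustrative.

wavelength, focal_length = 1.053e-6, 2.0       # m
N, L = 8192, 0.04                              # samples, pupil grid width (m)
x = (np.arange(N) - N // 2) * (L / N)

pupil = (np.abs(x) < 0.01).astype(float)       # 20 mm clear aperture
pv_waves = 0.05                                # ripple peak-to-valley (waves)
f_ripple = 2e3                                 # ripple spatial frequency (1/m)
phase = np.pi * pv_waves * np.sin(2 * np.pi * f_ripple * x)
field = pupil * np.exp(1j * phase)

# Focal-plane intensity via FFT; focal-plane coordinate u = wavelength*f*fx
spectrum = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(field)))
I = np.abs(spectrum) ** 2
u = np.fft.fftshift(np.fft.fftfreq(N, d=L / N)) * wavelength * focal_length

# First-order sidelobes appear at +/- wavelength*f*f_ripple, each carrying
# roughly (pi*PV/2)^2 of the pupil energy (PV in waves) for small ripple.
sidelobe_pos = wavelength * focal_length * f_ripple
main = I[np.abs(u) < sidelobe_pos / 2].sum()
side = I[np.abs(u) >= sidelobe_pos / 2].sum()
print(f"sidelobe offset ~ {sidelobe_pos*1e3:.2f} mm, "
      f"energy fraction ~ {side / (main + side):.3%}")
```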

  9. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used in the field of high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. Because such an error model fails to reflect the error behavior of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on the model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have a greater impact on the accuracy of the moving platform are identified, and sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also located. By taking into account error factors that are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.
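
    The statistical sensitivity-analysis step can be illustrated with the sketch below, in which a planar two-link arm stands in for the parallel mechanism: the pose error is linearized with a numerical Jacobian with respect to a few geometric error sources, and a sensitivity index is averaged over sampled workspace poses. The mechanism, error set, and index definition are illustrative, not the paper's model.

```python
import numpy as np

# Sketch of geometric-error sensitivity analysis with a planar 2R arm as a
# stand-in mechanism.  Link lengths, error sources, and the sensitivity
# index are illustrative.

rng = np.random.default_rng(1)
L1, L2 = 0.35, 0.30                        # nominal link lengths (m)

def tool_pose(q, geom_err):
    """Tool position for joint angles q and geometric errors
    geom_err = [dL1, dL2, joint-1 offset, joint-2 offset]."""
    l1, l2 = L1 + geom_err[0], L2 + geom_err[1]
    t1, t2 = q[0] + geom_err[2], q[1] + geom_err[3]
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def error_jacobian(q, eps=1e-7):
    """Numerical Jacobian of the pose with respect to the geometric errors."""
    J = np.zeros((2, 4))
    for i in range(4):
        d = np.zeros(4)
        d[i] = eps
        J[:, i] = (tool_pose(q, d) - tool_pose(q, -d)) / (2 * eps)
    return J

# Sensitivity index: RMS pose error per unit geometric error, averaged
# over randomly sampled poses in the workspace.
sens = np.zeros(4)
samples = 500
for _ in range(samples):
    q = rng.uniform([-np.pi / 2, 0.2], [np.pi / 2, 2.5])
    J = error_jacobian(q)
    sens += np.sqrt((J ** 2).sum(axis=0))
sens /= samples

for name, s in zip(["dL1", "dL2", "dq1 offset", "dq2 offset"], sens):
    print(f"{name:>10s}: mean sensitivity {s:.3f}")
```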

  10. SUS Source Level Error Analysis

    DTIC Science & Technology

    1978-01-20

    The report provides an analysis of major terms which contribute to signal analysis error in a proposed experiment to calibrate source levels of SUS (Signal Underwater Sound). Keywords: Fast Fourier Transform (FFT); SUS signal model.

  11. Exploration of Incarcerated Men’s and Women’s Attitudes of Smoking in the Presence of Children and Pregnant Women: Is There a Disparity Between Smoking Attitudes and Smoking Behavior?

    PubMed Central

    Roberts, Mary B.; van den Berg, Jacob J.; Bock, Beth; Stein, Lyn A. R.; Martin, Rosemarie A.; Clarke, Jennifer G.

    2016-01-01

    Introduction: A major health challenge facing persons who are incarcerated is tobacco smoking. Upon reentry to the community, concerns regarding smoking cessation may be less likely to receive needed attention. Many individuals have partners who are pregnant and/or reside in households where children and pregnant women live. We explored incarcerated adults' attitudes of smoking in the presence of children and pregnant women and how post-release smoking behaviors are influenced by their attitudes. Methods: Two hundred forty-seven incarcerated adults participated in a smoking cessation randomized clinical trial in a tobacco-free prison. An instrument was developed to examine smoking attitudes and behaviors around children and pregnant women. Moderating effects of smoking factors on post-release abstinence were examined by evaluating interactions between smoking factors and treatment group. Results: Four factors were defined using factor analysis: smoking around children; impact of smoking on child's health; awareness of environmental tobacco smoke (ETS) risk for pregnant women; and importance of smoking avoidance during pregnancy. We found moderation effects of smoking factors on smoking outcomes which included: treatment group by smoking behavior around children (β = 0.8085; standard error [SE] = 0.4002; P = .04); treatment group by impact of smoking on child's health (β = 1.2390; SE = 0.5632; P = .03); and, for those smoking 50% fewer cigarettes post-release, treatment group by smoking impact on child's health (β = 1.2356; SE = 0.4436; P < .01). Conclusions: Concern for smoking around children and pregnant women and awareness of ETS risk for pregnant women was not found to be significantly associated with smoking outcomes and requires additional investigation. Among individuals who continue to smoke post-release, effective ETS interventions are needed aimed at protecting children and pregnant women with whom they live. PMID:26014453

  12. Combining task analysis and fault tree analysis for accident and incident analysis: a case study from Bulgaria.

    PubMed

    Doytchev, Doytchin E; Szwillus, Gerd

    2009-11-01

    Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.

  13. Analyzing communication errors in an air medical transport service.

    PubMed

    Dalto, Joseph D; Weir, Charlene; Thomas, Frank

    2013-01-01

    Poor communication can result in adverse events. Presently, no standards exist for classifying and analyzing air medical communication errors. This study sought to determine the frequency and types of communication errors reported within an air medical quality and safety assurance reporting system. Of 825 quality assurance reports submitted in 2009, 278 were randomly selected and analyzed for communication errors. Each communication error was classified and mapped to Clark's communication level hierarchy (ie, levels 1-4). Descriptive statistics were performed, and comparisons were evaluated using chi-square analysis. Sixty-four communication errors were identified in 58 reports (21% of 278). Of the 64 identified communication errors, only 18 (28%) were classified by the staff to be communication errors. Communication errors occurred most often at level 1 (n = 42/64, 66%) followed by level 4 (21/64, 33%). Level 2 and 3 communication failures were rare (<1%). Communication errors were found in a fifth of quality and safety assurance reports. The reporting staff identified less than a third of these errors. Nearly all communication errors (99%) occurred at either the lowest level of communication (level 1, 66%) or the highest level (level 4, 33%). An air medical communication ontology is necessary to improve the recognition and analysis of communication errors. Copyright © 2013 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  14. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed. To address the tracking error and random drift of the gyroscope sensor, an AR model of the gyro random error is established according to the principles of time-series analysis of random sequences, and the gyro output signals are then repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead compensation and feed-forward links added to reduce the response lag to angle inputs; the feed-forward path allows the output to follow the input closely, while the lead compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) are used to monitor the servo motor state in real time: the video module gathers video signals and transmits them wirelessly to the host computer, where the motor's running state is displayed in the Visual Basic 6.0 window. The main error sources are also analyzed in detail; quantitative analysis of the errors from bandwidth and the gyro sensor makes the contribution of each error to the total error more apparent and consequently helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
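
    A minimal sketch of the AR-model-plus-Kalman-filter treatment of gyro random drift follows: an AR(1) coefficient is fitted to a simulated gyro record and used as the process model of a scalar Kalman filter. The signal, noise levels, and the crude moment-based noise estimates are illustrative assumptions, not the paper's identified model.

```python
import numpy as np

# Fit an AR(1) model to a simulated gyro record and use it as the process
# model in a scalar Kalman filter that smooths the output.  Signal and
# noise levels are illustrative.

rng = np.random.default_rng(2)
n = 5000
phi_true, sigma_w, sigma_v = 0.995, 0.005, 0.01

drift = np.zeros(n)                          # slowly wandering gyro drift
for k in range(1, n):
    drift[k] = phi_true * drift[k - 1] + rng.normal(0.0, sigma_w)
gyro = drift + rng.normal(0.0, sigma_v, n)   # measured gyro error signal

# Crude AR(1) identification from the measured signal
phi_hat = np.dot(gyro[1:], gyro[:-1]) / np.dot(gyro[:-1], gyro[:-1])
R = sigma_v ** 2                             # assumed sensor noise variance
resid_var = np.var(gyro[1:] - phi_hat * gyro[:-1])
Q = max(resid_var - (1.0 + phi_hat ** 2) * R, 1e-12)   # moment-based estimate

# Scalar Kalman filter with the fitted AR(1) process model
x, P = 0.0, 1.0
filtered = np.empty(n)
for k in range(n):
    x, P = phi_hat * x, phi_hat ** 2 * P + Q           # predict
    K = P / (P + R)                                     # gain
    x, P = x + K * (gyro[k] - x), (1.0 - K) * P         # update
    filtered[k] = x

print(f"fitted AR(1) coefficient: {phi_hat:.3f}")
print(f"RMS drift error, raw gyro: {np.std(gyro - drift):.4f}")
print(f"RMS drift error, filtered: {np.std(filtered - drift):.4f}")
```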

  15. Continued investigation of potential application of Omega navigation to civil aviation

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.

    1978-01-01

    Major attention is given to an analysis of receiver repeatability in measuring OMEGA phase data. Repeatability is defined as the ability of two like receivers which are co-located to achieve the same LOP phase readings. Specific data analysis is presented. A propagation model is described which has been used in the analysis of propagation anomalies. Composite OMEGA analysis is presented in terms of carrier phase correlation analysis and the determination of carrier phase weighting coefficients for minimizing composite phase variation. Differential OMEGA error analysis is presented for receiver separations. Three-frequency analysis includes LOP error and position error based on three and four OMEGA transmissions. Results of phase-amplitude correlation studies are presented.

  16. Application Program Interface for the Orion Aerodynamics Database

    NASA Technical Reports Server (NTRS)

    Robinson, Philip E.; Thompson, James

    2013-01-01

    The Application Programming Interface (API) for the Crew Exploration Vehicle (CEV) Aerodynamic Database has been developed to provide software developers an easily implemented, fully self-contained method of accessing the CEV Aerodynamic Database for use in their analysis and simulation tools. The API is programmed in C and provides a series of functions to interact with the database, such as initialization, selecting various options, and calculating the aerodynamic data. No special functions (file read/write, table lookup) are required on the host system other than those included with a standard ANSI C installation. It reads one or more files of aero data tables. Previous releases of aerodynamic databases for space vehicles have included only data tables and a document describing the algorithm and equations to combine them for the total aerodynamic forces and moments. This process required each software tool to have a unique implementation of the database code. Errors or omissions in the documentation, or errors in the implementation, led to a lengthy and burdensome process of having to debug each instance of the code. Additionally, input file formats differ for each space vehicle simulation tool, requiring the aero database tables to be reformatted to meet the tool's input file structure requirements. Finally, the capabilities of built-in table lookup routines vary for each simulation tool, so implementation of a new database may require an update to, and verification of, the table lookup routines; this may be required if the number of dimensions of a data table exceeds the capability of the simulation tool's built-in lookup routines. A single software solution was created to provide an aerodynamics software model that could be integrated into other simulation and analysis tools, allowing the highly complex Orion aerodynamics model to be quickly included in a wide variety of tools. The API code is written in ANSI C for ease of portability to a wide variety of systems. The input data files are in standard formatted ASCII, also for improved portability. The API contains its own implementation of multidimensional table reading and lookup routines. The same aerodynamics input file can be used without modification on all implementations. The turnaround time from aerodynamics model release to a working implementation is significantly reduced.
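
    The "initialize, set options, evaluate" pattern described above can be sketched as follows. The actual API is ANSI C with its own table reader; the Python class, method names, and toy drag table below are hypothetical placeholders used only to illustrate the self-contained multidimensional-lookup design, not the real CEV database interface.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical sketch of a self-contained "load tables / evaluate
# coefficient" aerodynamic database interface.  Names and data are
# placeholders, not the actual CEV aerodynamic database API.

class AeroDatabase:
    def __init__(self):
        self.tables = {}

    def load_table(self, name, breakpoints, values):
        """Register one multidimensional coefficient table."""
        self.tables[name] = RegularGridInterpolator(
            breakpoints, values, bounds_error=False, fill_value=None)

    def coefficient(self, name, mach, alpha):
        """Look up a coefficient at the requested flight condition."""
        return float(self.tables[name]([[mach, alpha]])[0])

# Build a toy 2-D drag-coefficient table (Mach x angle of attack)
mach = np.linspace(0.3, 5.0, 20)
alpha = np.linspace(-10.0, 30.0, 41)
cd = (0.8 + 0.3 * np.exp(-((mach[:, None] - 1.2) / 0.4) ** 2)
      + 1e-4 * alpha[None, :] ** 2)

db = AeroDatabase()
db.load_table("CD", (mach, alpha), cd)
print("CD(M=1.1, alpha=5 deg) =", round(db.coefficient("CD", 1.1, 5.0), 4))
```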

  17. In vitro-in vivo correlation for nevirapine extended release tablets.

    PubMed

    Macha, Sreeraj; Yong, Chan-Loi; Darrington, Todd; Davis, Mark S; MacGregor, Thomas R; Castles, Mark; Krill, Steven L

    2009-12-01

    An in vitro-in vivo correlation (IVIVC) for four nevirapine extended release tablets with varying polymer contents was developed. The pharmacokinetics of extended release formulations were assessed in a parallel group study with healthy volunteers and compared with corresponding in vitro dissolution data obtained using a USP apparatus type 1. In vitro samples were analysed using HPLC with UV detection and in vivo samples were analysed using a HPLC-MS/MS assay; the IVIVC analyses comparing the two results were performed using WinNonlin. A Double Weibull model optimally fits the in vitro data. A unit impulse response (UIR) was assessed using the fastest ER formulation as a reference. The deconvolution of the in vivo concentration time data was performed using the UIR to estimate an in vivo drug release profile. A linear model with a time-scaling factor clarified the relationship between in vitro and in vivo data. The predictability of the final model was consistent based on internal validation. Average percent prediction errors for pharmacokinetic parameters were <10% and individual values for all formulations were <15%. Therefore, a Level A IVIVC was developed and validated for nevirapine extended release formulations providing robust predictions of in vivo profiles based on in vitro dissolution profiles. Copyright 2009 John Wiley & Sons, Ltd.
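
    Fitting a double Weibull model to dissolution data, as done for the in vitro profiles above, can be sketched as below; the data points, starting values, and parameter bounds are synthetic and illustrative, not the study's results or its WinNonlin workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a double-Weibull fit to synthetic in vitro dissolution data.

def double_weibull(t, f, td1, b1, td2, b2):
    """Fraction released at time t: mixture of two Weibull release phases."""
    return (f * (1 - np.exp(-(t / td1) ** b1))
            + (1 - f) * (1 - np.exp(-(t / td2) ** b2)))

t = np.array([0.5, 1, 2, 4, 6, 8, 12, 16, 20, 24], dtype=float)   # hours
released = np.array([0.08, 0.15, 0.28, 0.45, 0.58, 0.67, 0.80, 0.88, 0.93, 0.96])

p0 = [0.5, 2.0, 1.0, 10.0, 1.0]                  # initial guesses
popt, _ = curve_fit(double_weibull, t, released, p0=p0,
                    bounds=([0, 0.1, 0.2, 0.1, 0.2], [1, 50, 5, 50, 5]))
f, td1, b1, td2, b2 = popt
print(f"fast phase: fraction {f:.2f}, td {td1:.1f} h, shape {b1:.2f}")
print(f"slow phase: fraction {1 - f:.2f}, td {td2:.1f} h, shape {b2:.2f}")
```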

  18. Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach

    PubMed Central

    Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei

    2016-01-01

    Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
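
    The fixed-threshold method (DSFT) can be sketched roughly as follows: at each snapshot a Laplace-noised histogram is released only if a (noisy) distance from the last release exceeds a threshold, otherwise the previous release is republished. The budget split, threshold, and toy data are illustrative assumptions, not the authors' exact mechanism or privacy accounting.

```python
import numpy as np

# Rough sketch of distance-threshold-based private histogram release.
# Budget split, threshold, and data are illustrative assumptions.

rng = np.random.default_rng(3)

def dsft_release(snapshots, eps_per_step, threshold):
    releases, last = [], None
    for hist in snapshots:
        if last is None:
            last = hist + rng.laplace(scale=2.0 / eps_per_step, size=hist.shape)
        else:
            # Privatized distance test spends part of the per-step budget
            dist = np.abs(hist - last).sum()
            noisy_dist = dist + rng.laplace(scale=2.0 / eps_per_step)
            if noisy_dist > threshold:
                last = hist + rng.laplace(scale=2.0 / eps_per_step, size=hist.shape)
        releases.append(last.copy())
    return releases

# Toy dynamic dataset: 8 snapshots of a 10-bin histogram that drifts slowly
base = rng.integers(50, 100, size=10).astype(float)
snapshots = [base + k * rng.normal(0.0, 2.0, size=10) for k in range(8)]

out = dsft_release(snapshots, eps_per_step=0.5, threshold=40.0)
errors = [np.abs(o - s).mean() for o, s in zip(out, snapshots)]
print("mean absolute error per snapshot:", np.round(errors, 1))
```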

  19. Development of in vitro-in vivo correlation for extended-release niacin after administration of hypromellose-based matrix formulations to healthy volunteers.

    PubMed

    Kesisoglou, Filippos; Rossenu, Stefaan; Farrell, Colm; Van Den Heuvel, Michiel; Prohn, Marita; Fitzpatrick, Shaun; De Kam, Pieter-Jan; Vargo, Ryan

    2014-11-01

    Development of in vitro-in vivo correlations (IVIVCs) for extended-release (ER) products is commonly pursued during pharmaceutical development to increase product understanding, set release specifications, and support biowaivers. This manuscript details the development of Level C and Level A IVIVCs for ER formulations of niacin, a highly variable and extensively metabolized compound. Three ER formulations were screened in a cross-over study against immediate-release niacin. A Multiple Level C IVIVC was established for both niacin and its primary metabolite nicotinuric acid (NUA) as well as total niacin metabolites urinary excretion. For NUA, but not for niacin, Level A IVIVC models with acceptable prediction errors were achievable via a modified IVIVC rather than a traditional deconvolution/convolution approach. Hence, this is in contradiction with current regulatory guidelines that suggest that when a Multiple Level C IVIVC is established, Level A models should also be readily achievable. We demonstrate that for a highly variable, highly metabolized compound such as niacin, development of a Level A IVIVC model fully validated according to agency guidelines may be challenging. However, Multiple Level C models are achievable and could be used to guide release specifications and formulation/manufacturing changes. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  20. 40 CFR 68.28 - Alternative release scenario analysis.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Alternative release scenario analysis... scenario analysis. (a) The number of scenarios. The owner or operator shall identify and analyze at least... release scenario under § 68.25; and (ii) That will reach an endpoint offsite, unless no such scenario...

  1. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry for a satellite tandem pair. Because orbital dynamics cause the baseline length and baseline orientation to evolve spatially and temporally, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations of height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, as well as slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.

  2. Baseline Error Analysis and Experimental Validation for Height Measurement of Formation Insar Satellite

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, T.; Zhang, X.; Geng, X.

    2018-04-01

    In this paper, we propose a stochastic model of InSAR height measurement that accounts for the interferometric geometry. The model directly describes the relationship between baseline error and height measurement error. A simulation analysis using TanDEM-X parameters was then carried out to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full simulation-based validation of the InSAR stochastic model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation behavior of InSAR height measurement were fully evaluated.
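
    A first-order sketch of how baseline knowledge errors map into height errors for a single-pass pair is given below, using the height of ambiguity and a simple scaling relation; the geometry values are illustrative X-band tandem numbers, not the exact TanDEM-X configuration or the paper's stochastic model.

```python
import numpy as np

# First-order mapping from baseline knowledge error to InSAR height error
# for a bistatic single-pass pair.  Geometry values are illustrative.

wavelength = 0.031            # m (X-band)
altitude = 514e3              # m
look_angle = np.radians(35.0)
slant_range = altitude / np.cos(look_angle)
B_perp = 300.0                # perpendicular baseline (m)

# Height of ambiguity (bistatic single-pass): h_a = lambda * R * sin(theta) / B_perp
h_amb = wavelength * slant_range * np.sin(look_angle) / B_perp

sigma_phase = np.radians(10.0)                 # interferometric phase noise (rad)
sigma_h_phase = h_amb / (2 * np.pi) * sigma_phase

# A perpendicular-baseline error rescales relative heights: dh ~ h * dB / B
terrain_height = 1000.0                        # m above the reference surface
dB = 0.002                                     # 2 mm baseline knowledge error
sigma_h_baseline = terrain_height * dB / B_perp

print(f"height of ambiguity        : {h_amb:8.1f} m")
print(f"height error from phase    : {sigma_h_phase:8.2f} m")
print(f"height error from baseline : {sigma_h_baseline:8.3f} m")
```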

  3. Measuring the Lense-Thirring precession using a second Lageos satellite

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Ciufolini, I.

    1989-01-01

    A complete numerical simulation and error analysis was performed for the proposed experiment with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources with the objective of providing error bounds on the experiment. The analysis of realistic simulated data is used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplemental inclinations, collected for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.

  4. Linguistic pattern analysis of misspellings of typically developing writers in grades 1-9.

    PubMed

    Bahr, Ruth Huntley; Sillian, Elaine R; Berninger, Virginia W; Dow, Michael

    2012-12-01

    A mixed-methods approach, evaluating triple word-form theory, was used to describe linguistic patterns of misspellings. Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in Grades 1-9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade-level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between Grades 4 and 5. Similar error types were noted across age groups, but the nature of linguistic feature error changed with age. Triple word-form theory was supported. By Grade 1, orthographic errors predominated, and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects nonlinear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling.

  5. Wavefront error budget and optical manufacturing tolerance analysis for 1.8m telescope system

    NASA Astrophysics Data System (ADS)

    Wei, Kai; Zhang, Xuejun; Xian, Hao; Rao, Changhui; Zhang, Yudong

    2010-05-01

    We present the wavefront error budget and optical manufacturing tolerance analysis for the 1.8 m telescope. The error budget accounts for aberrations induced by optical design residuals, manufacturing errors, mounting effects, and misalignments. The initial error budget has been generated from the top down. There will also be an ongoing effort to track the errors from the bottom up, which will aid in identifying critical areas of concern. The resolution of conflicts will involve a continual process of review and comparison of the top-down and bottom-up approaches, modifying both as needed to meet the top-level requirements in the end. The adaptive optics system will correct for some of the telescope system imperfections, but it cannot be assumed that all errors will be corrected. Therefore, two kinds of error budgets are presented: a non-AO top-down error budget and a with-AO system error budget. The main advantage of the method is that it describes the final performance of the telescope while giving the optical manufacturer the maximum freedom to define and possibly modify its own manufacturing error budget.
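
    A top-down budget of this kind is typically rolled up by root-sum-square (RSS); the sketch below shows such a roll-up with and without a crude per-term AO correction factor. The terms, allocations, and correction factors are illustrative assumptions, not the 1.8 m telescope's actual budget.

```python
import numpy as np

# Illustrative top-down wavefront-error budget rolled up by RSS, with and
# without a crude per-term AO correction factor.

# (term, allocated RMS wavefront error in nm, fraction remaining after AO)
budget = [
    ("design residual",         20.0, 1.00),
    ("primary mirror figure",   45.0, 0.30),   # low-order figure largely corrected
    ("secondary mirror figure", 30.0, 0.40),
    ("mount / gravity print",   25.0, 0.50),
    ("alignment / despace",     35.0, 0.60),
    ("thermal drift",           20.0, 0.70),
]

def rss(values):
    return float(np.sqrt(np.sum(np.square(values))))

no_ao = rss([alloc for _, alloc, _ in budget])
with_ao = rss([alloc * frac for _, alloc, frac in budget])

print(f"top-down RSS, no AO   : {no_ao:6.1f} nm RMS")
print(f"top-down RSS, with AO : {with_ao:6.1f} nm RMS")
```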

  6. Assessing the Responses of Streamflow to Pollution Release in South Carolina

    NASA Astrophysics Data System (ADS)

    Maze, G.; Chovancak, N. A.; Samadi, S. Z.

    2017-12-01

    The purpose of this investigation was to examine the effects of various stream flows on the transport of a pollutant downstream and to evaluate the uncertainty associated with using a single stream flow value in the model when the true flow is unknown. The area used for this study was Horse Creek in South Carolina, where a chlorine pollutant spill has occurred in the past as a result of a train derailment in Graniteville, SC. In that event, chlorine gas was released into the environment, where it killed plants, contaminated groundwater, and caused evacuation of the city. Tracking the movement and concentrations at various points downstream in the river system is crucial to understanding how a single accidental pollutant release can affect the surrounding areas. Because of the lack of real-time data, this emergency response model uses historical monthly averages; however, these monthly averages do not reflect how widely the flow can vary within a month. Therefore, the assumption of using the historical monthly average flow data may not be accurate, and this investigation aims at quantifying the uncertainty associated with using a single stream flow value when the true stream flow may vary greatly. For the purpose of this investigation, the event in Graniteville was used as a case study to evaluate the emergency response model. The investigation was conducted by adjusting the STREAM II V7 program developed by Savannah River National Laboratory (SRNL) to model a confluence of the Horse Creek and Savannah River systems. The adjusted program was used to track the progress of the chlorine pollutant release and examine how it was transported downstream. From this, the concentrations and times taken to reach various points downstream of the release were obtained and can be used not only to analyze this particular pollutant release in Graniteville but also, with further adjustment, as a technical tool for emergency responders in future accidents. Further, the program was run with monthly maximum, minimum, and average advective flows, and an uncertainty analysis was conducted to examine the error associated with the input data. These results underscore the profound influence that streamflow magnitudes (maximum, minimum, and average) have on shaping downstream water quality.

  7. A root cause analysis project in a medication safety course.

    PubMed

    Schafer, Jason J

    2012-08-10

    To develop, implement, and evaluate team-based root cause analysis projects as part of a required medication safety course for second-year pharmacy students. Lectures, in-class activities, and out-of-class reading assignments were used to develop students' medication safety skills and introduce them to the culture of medication safety. Students applied these skills within teams by evaluating cases of medication errors using root cause analyses. Teams also developed error prevention strategies and formally presented their findings. Student performance was assessed using a medication errors evaluation rubric. Of the 211 students who completed the course, the majority performed well on root cause analysis assignments and rated them favorably on course evaluations. Medication error evaluation and prevention was successfully introduced in a medication safety course using team-based root cause analysis projects.

  8. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. To evaluate the system-on-chip's reliability and soft error susceptibility, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as the failure rate, unavailability, and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated through qualitative and quantitative analysis.
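
    The fault-tree roll-up described above can be sketched as below: basic events with constant failure rates are combined through AND/OR gates to obtain a top-event probability, and a series-system MTTF follows from the summed rates. The rates and tree structure are illustrative, not the measured Zynq-7010 values or the Isograph model.

```python
import numpy as np

# Illustrative fault-tree roll-up: basic-event unavailability from constant
# failure rates, combined through AND/OR gates; rates and structure are
# assumptions, not measured Zynq-7010 values.

HOURS = 1000.0                        # mission time for the evaluation

def unavailability(rate, t=HOURS):    # non-repairable basic event
    return 1.0 - np.exp(-rate * t)

def gate_or(probs):                   # fails if at least one input fails
    return 1.0 - np.prod([1.0 - p for p in probs])

def gate_and(probs):                  # fails only if all inputs fail
    return float(np.prod(probs))

rates = {"PS_core": 2e-6, "OCM": 1e-6, "BRAM": 3e-6, "config_logic": 4e-6}
q = {name: unavailability(rate) for name, rate in rates.items()}

# Example tree: system fails if the processing system fails, OR both memory
# blocks fail, OR the configuration logic fails.
top = gate_or([q["PS_core"], gate_and([q["OCM"], q["BRAM"]]), q["config_logic"]])

mttf_series = 1.0 / sum(rates.values())     # if every block were single-point
print(f"top-event probability at {HOURS:.0f} h: {top:.4f}")
print(f"series-system MTTF: {mttf_series:,.0f} h")
```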

  9. Analysis of case-only studies accounting for genotyping error.

    PubMed

    Cheng, K F

    2007-03-01

    The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.

  10. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice (44%) the proportion of rotary wing compared to fixed wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  11. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    PubMed

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.

  12. A String of Mistakes: The Importance of Cascade Analysis in Describing, Counting, and Preventing Medical Errors

    PubMed Central

    Woolf, Steven H.; Kuzel, Anton J.; Dovey, Susan M.; Phillips, Robert L.

    2004-01-01

    BACKGROUND Notions about the most common errors in medicine currently rest on conjecture and weak epidemiologic evidence. We sought to determine whether cascade analysis is of value in clarifying the epidemiology and causes of errors and whether physician reports are sensitive to the impact of errors on patients. METHODS Eighteen US family physicians participating in a 6-country international study filed 75 anonymous error reports. The narratives were examined to identify the chain of events and the predominant proximal errors. We tabulated the consequences to patients, both reported by physicians and inferred by investigators. RESULTS A chain of errors was documented in 77% of incidents. Although 83% of the errors that ultimately occurred were mistakes in treatment or diagnosis, 2 of 3 were set in motion by errors in communication. Fully 80% of the errors that initiated cascades involved informational or personal miscommunication. Examples of informational miscommunication included communication breakdowns among colleagues and with patients (44%), misinformation in the medical record (21%), mishandling of patients’ requests and messages (18%), inaccessible medical records (12%), and inadequate reminder systems (5%). When asked whether the patient was harmed, physicians answered affirmatively in 43% of cases in which their narratives described harms. Psychological and emotional effects accounted for 17% of physician-reported consequences but 69% of investigator-inferred consequences. CONCLUSIONS Cascade analysis of physicians’ error reports is helpful in understanding the precipitant chain of events, but physicians provide incomplete information about how patients are affected. Miscommunication appears to play an important role in propagating diagnostic and treatment mistakes. PMID:15335130

  13. Patient safety in the clinical laboratory: a longitudinal analysis of specimen identification errors.

    PubMed

    Wagar, Elizabeth A; Tamashiro, Lorraine; Yasin, Bushra; Hilborne, Lee; Bruckner, David A

    2006-11-01

    Patient safety is an increasingly visible and important mission for clinical laboratories. Accreditation and regulatory organizations are paying increasing attention to processes related to patient identification and specimen labeling because errors in these areas jeopardize patient safety, are common, and are avoidable through improvement in the total testing process. The aim was to assess patient identification and specimen labeling improvement after multiple implementation projects using longitudinal statistical tools. Specimen errors were categorized by a multidisciplinary health care team. Patient identification errors were grouped into 3 categories: (1) specimen/requisition mismatch, (2) unlabeled specimens, and (3) mislabeled specimens. Specimens with these types of identification errors were compared preimplementation and postimplementation for 3 patient safety projects: (1) reorganization of phlebotomy (4 months); (2) introduction of an electronic event reporting system (10 months); and (3) activation of an automated processing system (14 months) over a 24-month period, using trend analysis and Student t test statistics. Of 16,632 total specimen errors, mislabeled specimens, requisition mismatches, and unlabeled specimens represented 1.0%, 6.3%, and 4.6% of errors, respectively. The Student t test showed a significant decrease in the most serious error, mislabeled specimens (P < .001), compared with before implementation of the 3 patient safety projects. Trend analysis demonstrated decreases in all 3 error types over 26 months. Applying performance-improvement strategies that focus longitudinally on specimen labeling errors can significantly reduce errors, thereby improving patient safety. This is an important area in which laboratory professionals, working in interdisciplinary teams, can improve safety and outcomes of care.
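
    The pre/post comparison described above reduces, in its simplest form, to a two-sample test on error counts. The sketch below shows that step on synthetic monthly mislabeled-specimen counts; the Poisson rates and random seed are invented for illustration and are not the study's data.

```python
# Minimal pre/post comparison on synthetic monthly error counts (not study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre  = rng.poisson(lam=12, size=12)   # hypothetical monthly error counts before the projects
post = rng.poisson(lam=6,  size=12)   # hypothetical monthly error counts after the projects

t, p = stats.ttest_ind(pre, post, equal_var=False)  # Welch two-sample t test
print(f"t = {t:.2f}, p = {p:.4f}")
```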

  14. Outpatient CPOE orders discontinued due to 'erroneous entry': prospective survey of prescribers' explanations for errors.

    PubMed

    Hickman, Thu-Trang T; Quist, Arbor Jessica Lauren; Salazar, Alejandra; Amato, Mary G; Wright, Adam; Volk, Lynn A; Bates, David W; Schiff, Gordon

    2018-04-01

    Computerised prescriber order entry (CPOE) system users often discontinue medications because the initial order was erroneous. To elucidate error types by querying prescribers about their reasons for discontinuing outpatient medication orders that they had self-identified as erroneous. During a nearly 3 year retrospective data collection period, we identified 57 972 drugs discontinued with the reason 'Error (erroneous entry)'. Because chart reviews revealed limited information about these errors, we prospectively studied consecutive, discontinued erroneous orders by querying prescribers in near-real-time to learn more about the erroneous orders. From January 2014 to April 2014, we prospectively emailed prescribers about outpatient drug orders that they had discontinued due to erroneous initial order entry. Of 250 806 medication orders in these 4 months, 1133 (0.45%) were discontinued due to error. From these 1133, we emailed 542 unique prescribers to ask about their reason(s) for discontinuing these medication orders in error. We received 312 responses (58% response rate). We categorised these responses using a previously published taxonomy. The top reasons for these discontinued erroneous orders included: medication ordered for wrong patient (27.8%, n=60); wrong drug ordered (18.5%, n=40); and duplicate order placed (14.4%, n=31). Other common discontinued erroneous orders related to drug dosage and formulation (eg, extended release versus not). Oxycodone (3%) was the drug most frequently discontinued in error. Drugs are not infrequently discontinued 'in error.' Wrong patient and wrong drug errors constitute the leading types of erroneous prescriptions recognised and discontinued by prescribers. Data regarding erroneous medication entries represent an important source of intelligence about how CPOE systems are functioning and malfunctioning, providing important insights regarding areas for designing CPOE more safely in the future. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  15. AQMEII3 evaluation of regional NA/EU simulations and ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impac
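
    The bias/variance/covariance split referred to above follows the standard decomposition MSE = bias^2 + (sigma_m - sigma_o)^2 + 2*sigma_m*sigma_o*(1 - r). The sketch below verifies the identity on a synthetic model/observation pair; the series are made up and merely stand in for the modelled and observed fields.

```python
# Standard error decomposition on synthetic model (m) and observation (o) series.
import numpy as np

rng = np.random.default_rng(1)
o = 40 + 10 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 3, 500)  # "observed" field
m = 1.1 * o + 5 + rng.normal(0, 4, 500)                                # "modelled" field

bias2 = (m.mean() - o.mean()) ** 2                 # systematic offset
variance = (m.std() - o.std()) ** 2                # amplitude mismatch
r = np.corrcoef(m, o)[0, 1]
covariance = 2 * m.std() * o.std() * (1 - r)       # phase/timing mismatch

mse = np.mean((m - o) ** 2)
print(mse, bias2 + variance + covariance)          # the two values should agree
```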

  16. Analysis of Relationships between the Level of Errors in Leg and Monofin Movement and Stroke Parameters in Monofin Swimming

    PubMed Central

    Rejman, Marek

    2013-01-01

    The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. The random cycles performed by six swimmers were filmed during a progressive test (900m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors, arising at the distal part of the fin, ensure the progression of swimming speed. The individual stroke parameters distribution which consists of optimally increasing stroke frequency to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Propriety foot movement and the avoidance of errors arising at the distal part of fin, provide for the progression of swimming speed. The key elements improving monofin swimming technique, based on the analysis of errors committed, were designated. PMID:24149742

  17. The Effects of Discrete-Trial Training Commission Errors on Learner Outcomes: An Extension

    ERIC Educational Resources Information Center

    Jenkins, Sarah R.; Hirst, Jason M.; DiGennaro Reed, Florence D.

    2015-01-01

    We conducted a parametric analysis of treatment integrity errors during discrete-trial training and investigated the effects of three integrity conditions (0, 50, or 100 % errors of commission) on performance in the presence and absence of programmed errors. The presence of commission errors impaired acquisition for three of four participants.…

  18. Large-scale contamination of microbial isolate genomes by Illumina PhiX control.

    PubMed

    Mukherjee, Supratim; Huntemann, Marcel; Ivanova, Natalia; Kyrpides, Nikos C; Pati, Amrita

    2015-01-01

    With the rapid growth and development of sequencing technologies, genomes have become the new go-to for exploring solutions to some of the world's biggest challenges such as searching for alternative energy sources and exploration of genomic dark matter. However, progress in sequencing has been accompanied by its share of errors that can occur during template or library preparation, sequencing, imaging or data analysis. In this study we screened over 18,000 publicly available microbial isolate genome sequences in the Integrated Microbial Genomes database and identified more than 1000 genomes that are contaminated with PhiX, a control frequently used during Illumina sequencing runs. Approximately 10% of these genomes have been published in the literature and 129 contaminated genomes were sequenced under the Human Microbiome Project. Raw sequence reads are prone to contamination from various sources and are usually eliminated during downstream quality control steps. Detection of PhiX contaminated genomes indicates a lapse in either the application or effectiveness of proper quality control measures. The presence of PhiX contamination in several publicly available isolate genomes can result in additional errors when such data are used in comparative genomics analyses. Such contamination of public databases has far-reaching consequences in the form of erroneous data interpretation and analyses, and necessitates better measures to proofread raw sequences before releasing them to the broader scientific community.

  19. An Introduction to Error Analysis for Quantitative Chemistry

    ERIC Educational Resources Information Center

    Neman, R. L.

    1972-01-01

    Describes two formulas for calculating errors due to instrument limitations which are usually found in gravimetric volumetric analysis and indicates their possible applications to other fields of science. (CC)

  20. Runway safety

    DOT National Transportation Integrated Search

    2010-02-12

    Information provided through analysis of runway incursions is useful in many ways. Analysis of the errors made by pilots, controllers, and vehicle drivers is the first step toward developing error mitigation strategies. Furthermore, successful design...

  1. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

    ERIC Educational Resources Information Center

    Zhu, Honglin

    2010-01-01

    This article is based on a survey of college students' attitudes towards error correction by their teachers in the process of teaching and learning, and it is intended to improve language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

  2. Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.

    2004-01-01

    The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.

  3. Software reliability: Application of a reliability model to requirements error analysis

    NASA Technical Reports Server (NTRS)

    Logan, J.

    1980-01-01

    The application of a software reliability model having a well defined correspondence of computer program properties to requirements error analysis is described. Requirements error categories which can be related to program structural elements are identified and their effect on program execution considered. The model is applied to a hypothetical B-5 requirement specification for a program module.

  4. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy at the design and manufacturing stage, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the optimization objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  5. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a materials thermoelectric power factor, in addition to the measured nominal value, an uncertainty interval is required. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters; in addition to including uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.

  6. Ephemeris data and error analysis in support of a Comet Encke intercept mission

    NASA Technical Reports Server (NTRS)

    Yeomans, D. K.

    1974-01-01

    Utilizing an orbit determination based upon 65 observations over the 1961 - 1973 interval, ephemeris data were generated for the 1976-77, 1980-81 and 1983-84 apparitions of short period comet Encke. For the 1980-81 apparition, results from a statistical error analysis are outlined. All ephemeris and error analysis computations include the effects of planetary perturbations as well as the nongravitational accelerations introduced by the outgassing cometary nucleus. In 1980, excellent observing conditions and a close approach of comet Encke to the earth permit relatively small uncertainties in the cometary position errors and provide an excellent opportunity for a close flyby of a physically interesting comet.

  7. Panel positioning error and support mechanism for a 30-m THz radio telescope

    NASA Astrophysics Data System (ADS)

    Yang, De-Hua; Okoh, Daniel; Zhou, Guo-Hua; Li, Ai-Hua; Li, Guo-Ping; Cheng, Jing-Quan

    2011-06-01

    A 30-m TeraHertz (THz) radio telescope is proposed to operate at 200 μm with an active primary surface. This paper presents a sensitivity analysis of active surface panel positioning errors on optical performance in terms of the Strehl ratio. Based on Ruze's surface error theory and using a Monte Carlo simulation, the effects of six rigid panel positioning errors, namely piston, tip, tilt, radial, azimuthal and twist displacements, were directly derived. The optical performance of the telescope was then evaluated using the standard Strehl ratio. We graphically illustrated the various panel error effects by presenting simulations of complete ensembles of full reflector surface errors for the six different rigid panel positioning errors. The panel error sensitivity analysis revealed that the piston error and tilt/tip errors are dominant while the other rigid errors are much less important. Furthermore, as indicated by the results, we conceived of an alternative Master-Slave Concept-based (MSC-based) active surface by implementing a special Series-Parallel Concept-based (SPC-based) hexapod as the active panel support mechanism. A new 30-m active reflector based on the two concepts was demonstrated to achieve correction for all six rigid panel positioning errors in an economically feasible way.
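
    A rough sketch in the spirit of the study above: draw random panel piston errors, convert them to an RMS surface error, and map that to a Strehl ratio with a Ruze-type relation exp(-(4*pi*eps/lambda)^2). The panel count and error level are assumptions made for illustration, not values from the paper.

```python
# Monte Carlo estimate of the Strehl ratio under random panel piston errors.
import numpy as np

rng = np.random.default_rng(2)
wavelength_um = 200.0          # operating wavelength (200 micron, as above)
n_panels = 1000                # assumed number of panels
piston_rms_um = 3.0            # assumed RMS piston error per panel, micron

n_trials = 5000
strehl = np.empty(n_trials)
for k in range(n_trials):
    piston = rng.normal(0.0, piston_rms_um, n_panels)   # one surface realization
    eps = piston.std()                                   # RMS surface error
    strehl[k] = np.exp(-(4 * np.pi * eps / wavelength_um) ** 2)

print(f"mean Strehl ratio ~ {strehl.mean():.3f}")
```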

  8. Design and analysis of a sub-aperture scanning machine for the transmittance measurements of large-aperture optical system

    NASA Astrophysics Data System (ADS)

    He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo

    2010-11-01

    For measuring large-aperture optical system transmittance, a novel sub-aperture scanning machine with double-rotating arms (SSMDA) was designed to obtain a sub-aperture beam spot. Optical system full-aperture transmittance measurements can be achieved by applying sub-aperture beam spot scanning technology. The mathematical model of the SSMDA based on a homogeneous coordinate transformation matrix is established to develop a detailed methodology for analyzing the beam spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) the length systematic errors and (2) the rotational systematic errors. As the systematic errors of the parameters are given beforehand, the computed scanning errors are between -0.007 mm and 0.028 mm while the scanning radius is not larger than 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.
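
    The homogeneous-transform modelling mentioned above can be sketched as a chain of rotation and translation matrices; perturbing one arm length then shows how a length systematic error shifts the beam spot. The arm lengths and angles below are illustrative, not the SSMDA's actual parameters.

```python
# Double-rotating-arm scanner modelled with 4x4 homogeneous transforms.
import numpy as np

def rot_z(theta):
    """Rotation about the z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans_x(a):
    """Translation by a along the x axis."""
    T = np.eye(4)
    T[0, 3] = a
    return T

def spot(theta1, theta2, L1, L2):
    """Beam spot position for arm angles theta1/theta2 and lengths L1/L2."""
    T = rot_z(theta1) @ trans_x(L1) @ rot_z(theta2) @ trans_x(L2)
    return (T @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]

nominal = spot(np.radians(30), np.radians(-50), 250.0, 150.0)
perturbed = spot(np.radians(30), np.radians(-50), 250.0 + 0.01, 150.0)  # 10 um length error
print(perturbed - nominal)   # resulting spot displacement, same units as the lengths
```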

  9. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
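
    The inflation and bias described above is the classical errors-in-variables attenuation: with noisy inputs the fitted slope shrinks by roughly var(x)/(var(x) + var(noise)). The sketch below reproduces that effect on synthetic rainfall-runoff numbers; the data are invented and the simple linear model is only a stand-in for the precipitation-runoff models discussed.

```python
# Attenuation of a regression slope when the input is measured with error.
import numpy as np

rng = np.random.default_rng(3)
true_rain = rng.gamma(shape=2.0, scale=10.0, size=2000)          # true input
runoff = 0.6 * true_rain + rng.normal(0, 2.0, true_rain.size)    # true slope is 0.6
noisy_rain = true_rain + rng.normal(0, 8.0, true_rain.size)      # measured input

slope_true = np.polyfit(true_rain, runoff, 1)[0]
slope_noisy = np.polyfit(noisy_rain, runoff, 1)[0]
atten = true_rain.var() / (true_rain.var() + 8.0**2)   # expected attenuation factor
print(slope_true, slope_noisy, 0.6 * atten)            # noisy slope sits near 0.6 * atten
```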

  10. AQMEII3: the EU and NA regional scale program of the ...

    EPA Pesticide Factsheets

    The presentation builds on the work presented last year at the 14th CMAS meeting and is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition and time series analysis of the models' biases. Through the comparison of several regional-scale chemistry transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims at i) apportioning the error to the responsible processes through time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while the apportioning of the error into its constituent parts (bias, variance and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the previous phases of AQMEII. The National Exposure Research Laboratory (NERL) Computational Exposur

  11. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    In order to reach the accuracy of the GRACE baseline predicted earlier from design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models and modeling errors. GRACE range-rate residuals contain these errors; thus, their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets, respectively, and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. A correlation between range frequency noise and range-rate residuals is also seen.

  12. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate provide a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.

  13. Development of a Portable Binary Chlorine Dioxide Generator for Decontamination

    DTIC Science & Technology

    2010-03-01

    chlorine dioxide forms slowly from chlorite solutions through either acid release or a radical chain reaction that we observed at neutral pH. Task 7... Chlorine dioxide and water in methanol - no agent control F. 5.25% Bleach G. Methanol only 3.0 PROCEDURES 3.1 METHOD VALIDATION The reaction...error range in gas chromatography measurements. For the chlorine dioxide containing samples, mass spectra were analyzed to determine potential

  14. The CTBTO/WMO Atmospheric Backtracking Response System and the Data Fusion Exercise 2007

    DTIC Science & Technology

    2008-09-01

    sensitivity of the measurement (sample) towards releases at all points on the globe . For a more comprehensive description, see the presentation from last...localization information, including the error ellipse, is comparatively small. The red spots on the right image mark seismic events that occurred on...hours indicated in the calendar of the PTS post-processing software WEB- GRAPE . 2008 Monitoring Research Review: Ground-Based Nuclear

  15. OBSERVABLE INDICATORS OF THE SENSITIVITY OF PM 2.5 NITRATE TO EMISSION REDUCTIONS, PART II: SENSITIVITY TO ERRORS IN TOTAL AMMONIA AND TOTAL NITRATE OF THE CMAQ-PREDICTED NONLINEAR EFFECT OF SO 2 EMISSION REDUCTIONS

    EPA Science Inventory

    The inorganic aerosol system of sulfate, nitrate, and ammonium can respond nonlinearly to changes in precursor sulfur dioxide (SO2) emissions. The potential increase in nitrate, when sulfate is reduced and the associated ammonia is released, can negate the sulfate mass...

  16. Measuring Compartment Size and Gas Solubility in Marine Mammals

    DTIC Science & Technology

    2014-09-30

    analyzed by gas chromatography . Injection of the sample into the gas chromatograph is done using a sample loop to minimize volume injection error. We...1 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Measuring Compartment Size and Gas Solubility in Marine...study is to develop methods to estimate marine mammal tissue compartment sizes, and tissue gas solubility. We aim to improve the data available for

  17. Characterizing and Implementing Efficient Primitives for Privacy-Preserving Computation

    DTIC Science & Technology

    2015-07-01

    the mobile device. From this, the mobile will detect any tampering from the malicious party by a discrepancy in these returned values, eliminating...the need for an output MAC. If no tampering is detected , the mobile device then decrypts the output of computation. APPROVED FOR PUBLIC RELEASE...useful error messages when the compiler detects a problem with an application, making debugging the application significantly easier than with other

  18. Computation and Validation of the Dynamic Response Index (DRI)

    DTIC Science & Technology

    2013-08-06

    matplotlib plotting library. • Executed from command line. • Allows several optional arguments. • Runs on Windows, Linux, UNIX, and Mac OS X. 10... vs . Time: Triangular pulse input data with given time duration and peak acceleration: Time (s) EARTH Code: Motivation • Error Assessment of...public release • ARC provided electrothermal battery model example: • Test vs . simulation data for terminal voltage. • EARTH input parameters

  19. Metal Ion Sensor with Catalytic DNA in a Nanofluidic Intelligent Processor

    DTIC Science & Technology

    2011-12-01

    attributed to decreased diffusion and less active DNAzyme complex because of pore constraints. Uncleavable Alexa546 intensity is shown in gray ...is shown in gray , cleavable fluorescein in green, and the ratio of Fl/Alexa in red. Error bars represent one standard deviation of four independent...higher concentrations inhibiting cleaved fragment release. Uncleavable Alexa 546 intensity is shown in gray , cleavable fluorescein in green, and the

  20. Error analysis for relay type satellite-aided search and rescue systems

    NASA Technical Reports Server (NTRS)

    Marini, J. W.

    1977-01-01

    An analysis was made of the errors in the determination of the position of an emergency transmitter in a satellite aided search and rescue system. The satellite was assumed to be at a height of 820 km in a near circular near polar orbit. Short data spans of four minutes or less were used. The error sources considered were measurement noise, transmitter frequency drift, ionospheric effects and error in the assumed height of the transmitter. The errors were calculated for several different transmitter positions, data rates and data spans. The only transmitter frequency used was 406 MHz, but the results can be scaled to different frequencies. In a typical case, in which four Doppler measurements were taken over a span of two minutes, the position error was about 1.2 km.

  1. Grinding Method and Error Analysis of Eccentric Shaft Parts

    NASA Astrophysics Data System (ADS)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and various mechanical transmissions, and precision grinding technology for such parts is now in demand. In this paper, the model of the X-C linkage relation for eccentric shaft grinding is studied. By an inversion method, the contour curve of the wheel envelope is deduced, and the distance from the center of the eccentric circle is shown to be constant. Simulation software for eccentric shaft grinding is developed and the correctness of the model is proved; the influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analyzed, and the corresponding error calculation model is proposed. The simulation analysis is carried out to provide the basis for contour error compensation.

  2. Encoder fault analysis system based on Moire fringe error signal

    NASA Astrophysics Data System (ADS)

    Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu

    2018-02-01

    To address faults and wrong codes arising in the practical application of photoelectric shaft encoders, a fast and accurate encoder fault analysis system is developed from the standpoint of Moire fringe photoelectric signal processing. A DSP28335 is selected as the core processor, a high-speed serial A/D converter acquisition card is used, and a temperature measuring circuit based on the AD7420 is designed. Discrete data of the Moire fringe error signal are collected at different temperatures and sent to the host computer through wireless transmission. The error signal quality index and fault type are displayed on the host computer based on the error signal identification method. The error signal quality can be used to diagnose the state of error codes through the human-machine interface.

  3. Assessment of Subyearling Chinook Salmon Survival through the Federal Hydropower Projects in the Main-Stem Columbia River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skalski, J. R.; Eppard, M. B.; Ploskey, Gene R.

    2014-07-11

    High survival through hydropower projects is an essential element in the recovery of salmonid populations in the Columbia River. It is also a regulatory requirement under the 2008 Federal Columbia River Power System (FCRPS) Biological Opinion (BiOp) established under the Endangered Species Act. It requires dam passage survival to be ≥0.96 and ≥0.93 for spring and summer outmigrating juvenile salmonids, respectively, and estimated with a standard error ≤ 0.015. An innovative virtual/paired-release design was used to estimate dam passage survival, defined as survival from the face of a dam to the tailrace mixing zone. A coordinated four-dam study was conducted during the 2012 summer outmigration using 14,026 run-of-river subyearling Chinook salmon surgically implanted with acoustic micro-transmitter (AMT) tags released at 9 different locations, and monitored on 14 different detection arrays. Each of the four estimates of dam passage survival exceeded BiOp requirements with values ranging from 0.9414 to 0.9747 and standard errors, 0.0031 to 0.0114. Two consecutive years of survival estimates must meet BiOp standards in order for a hydropower project to be in compliance with recovery requirements for a fish stock.

  4. RMP Guidance for Warehouses - Chapter 4: Offsite Consequence Analysis

    EPA Pesticide Factsheets

    Offsite consequence analysis (OCA) informs government and the public about potential consequences of an accidental toxic or flammable chemical release at your facility, and consists of a worst-case release scenario and alternative release scenarios.

  5. Medical students' experiences with medical errors: an analysis of medical student essays.

    PubMed

    Martinez, William; Lo, Bernard

    2008-07-01

    This study aimed to examine medical students' experiences with medical errors. In 2001 and 2002, 172 fourth-year medical students wrote an anonymous description of a significant medical error they had witnessed or committed during their clinical clerkships. The assignment represented part of a required medical ethics course. We analysed 147 of these essays using thematic content analysis. Many medical students made or observed significant errors. In either situation, some students experienced distress that seemingly went unaddressed. Furthermore, this distress was sometimes severe and persisted after the initial event. Some students also experienced considerable uncertainty as to whether an error had occurred and how to prevent future errors. Many errors may not have been disclosed to patients, and some students who desired to discuss or disclose errors were apparently discouraged from doing so by senior doctors. Some students criticised senior doctors who attempted to hide errors or avoid responsibility. By contrast, students who witnessed senior doctors take responsibility for errors and candidly disclose errors to patients appeared to recognise the importance of honesty and integrity and said they aspired to these standards. There are many missed opportunities to teach students how to respond to and learn from errors. Some faculty members and housestaff may at times respond to errors in ways that appear to contradict professional standards. Medical educators should increase exposure to exemplary responses to errors and help students to learn from and cope with errors.

  6. Technology and medication errors: impact in nursing homes.

    PubMed

    Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis

    2014-01-01

    The purpose of this paper is to study a medication distribution technology's (MDT) impact on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed using: total errors; medication error type; severity; and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly, resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technology such as electronic prescriber or bar code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.

  7. Analysis of quantum error correction with symmetric hypergraph states

    NASA Astrophysics Data System (ADS)

    Wagner, T.; Kampermann, H.; Bruß, D.

    2018-03-01

    Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.

  8. The distribution and public health consequences of releases of chemicals intended for pool use in 17 states, 2001-2009.

    PubMed

    Anderson, Ayana R; Welles, Wanda Lizak; Drew, James; Orr, Maureen F

    2014-05-01

    To keep swimming pool water clean and clear, consumers purchase, transport, store, use, and dispose of large amounts of potentially hazardous chemicals. Data about incidents due to the use of these chemicals and the resultant public health impacts are limited. The authors analyzed pool chemical release data from 17 states that participated in the Agency for Toxic Substances and Disease Registry's chemical event surveillance system during 2001-2009. In 400 pool chemical incidents, 60% resulted in injuries. Of the 732 injured persons, 67% were members of the public and 50% were under 18 years old. Incidents occurred most frequently in private residences (39%), but incidents with the most injured persons (34%) occurred at recreational facilities. Human error (71.9%) was the most frequent primary contributing factor, followed by equipment failure (22.8%). Interventions designed to mitigate the public health impact associated with pool chemical releases should target both private pool owners and public pool operators.

  9. VizieR Online Data Catalog: LAMOST DR2 catalogs (Luo+, 2016)

    NASA Astrophysics Data System (ADS)

    Luo, A.-L.; Zhao, Y.-H.; Zhao, G.; Deng, L.-C.; Liu, X.-W.; Jing, Y.-P.; Wang, G.; Zhang, H.-T.; Shi, J.-R.; Cui, X.-Q.; Chu, Y.-Q.; Li, G.-P.; Bai, Z.-R.; Wu, Y.; Cai, Y.; Cao, S.-Y.; Cao, Z.-H.; Carlin, J. L.; Chen, H.-Y.; Chen, J.-J.; Chen, K.-X.; Chen, L.; Chen, X.-L.; Chen, X.-Y.; Chen, Y.; Christlieb, N.; Chu, J.-R.; Cui, C.-Z.; Dong, Y.-Q.; Du, B.; Fan, D.-W.; Feng, L.; Fu, J.-N.; Gao, P.; Gong, X.-F.; Gu, B.-Z.; Guo, Y.-X.; Han, Z.-W.; He, B.-L.; Hou, J.-L.; Hou, Y.-H.; Hou, W.; Hu, H.-Z.; Hu, N.-S.; Hu, Z.-W.; Huo, Z.-Y.; Jia, L.; Jiang, F.-H.; Jiang, X.; Jiang, Z.-B.; Jin, G.; Kong, X.; Kong, X.; Lei, Y.-J.; Li, A.-H.; Li, C.-H.; Li, G.-W.; Li, H.-N.; Li, J.; Li, Q.; Li, S.; Li, S.-S.; Li, X.-N.; Li, Y.; Li, Y.-B.; Li, Y.-P.; Liang, Y.; Lin, C.-C.; Liu, C.; Liu, G.-R.; Liu, G.-Q.; Liu, Z.-G.; Lu, W.-Z.; Luo, Y.; Mao, Y.-D.; Newberg, H.; Ni, J.-J.; Qi, Z.-X.; Qi, Y.-J.; Shen, S.-Y.; Shi, H.-M.; Song, J.; Song, Y.-H.; Su, D.-Q.; Su, H.-J.; Tang, Z.-H.; Tao, Q.-S.; Tian, Y.; Wang, D.; Wang, D.-Q.; Wang, F.-F.; Wang, G.-M.; Wang, H.; Wang, H.-C.; Wang, J.; Wang, J.-N.; Wang, J.-L.; Wang, J.-P.; Wang, J.-X.; Wang, L.; Wang, M.-X.; Wang, S.-G.; Wang, S.-Q.; Wang, X.; Wang, Y.-N.; Wang, Y.; Wang, Y.-F.; Wang, Y.-F.; Wei, P.; Wei, M.-Z.; Wu, H.; Wu, K.-F.; Wu, X.-B.; Wu, Y.-Z.; Xing, X.-Z.; Xu, L.-Z.; Xu, X.-Q.; Xu, Y.; Yan, T.-S.; Yang, D.-H.; Yang, H.-F.; Yang, H.-Q.; Yang, M.; Yao, Z.-Q.; Yu, Y.; Yuan, H.; Yuan, H.-B.; Yuan, H.-L.; Yuan, W.-M.; Zhai, C.; Zhang, E.-P.; Zhang, H.-W.; Zhang, J.-N.; Zhang, L.-P.; Zhang, W.; Zhang, Y.; Zhang, Y.-X.; Zhang, Z.-C.; Zhao, M.; Zhou, F.; Zhou, X.; Zhu, J.; Zhu, Y.-T.; Zou, S.-C.; Zuo, F.

    2016-11-01

    A couple of corrections have been made in this release: all the errors of Teff, Logg, Fe/H and rv in the AFGK catalog were recalculated. Refer to the DR2 paper (in preparation) for details. Compared to the previous internal release, some extra spectra have been added in this version: STAR from 3,779,597 to 3,843,597, increased 63,923; GALAXY from 37,665 to 47,036, increased 9,371; QSO from 8,633 to 13,262, increased 4,629. The major contribution to this increase is a new method applied to reduce data that were previously abandoned due to a lack of standard stars with high enough S/N. Refer to the paper 'LAMOST Spectrograph Response Curves: Stability and Application to flux calibration' (in preparation) for details. A small part of the increase is also due to the correction of fiber flags and to the eye-check work, etc. (5 data files).

  10. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

    Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice, experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
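
    For reference, the equation underlying the reconstruction discussed above is usually written in its paraxial form as follows, with k the wavenumber, I the intensity and phi the phase; the notation here is generic rather than the paper's own.

```latex
% Transport-of-intensity equation in its usual paraxial form:
% k is the wavenumber, I(x,y,z) the intensity, \phi(x,y) the phase,
% and \nabla_{\perp} the transverse gradient.
\nabla_{\perp} \cdot \bigl( I(x,y)\, \nabla_{\perp}\phi(x,y) \bigr)
  = -k\,\frac{\partial I(x,y,z)}{\partial z}
```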

  11. Error analysis of multi-needle Langmuir probe measurement technique.

    PubMed

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  12. Error analysis of multi-needle Langmuir probe measurement technique

    NASA Astrophysics Data System (ADS)

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  13. Quotation accuracy in medical journal articles—a systematic review and meta-analysis

    PubMed Central

    Jergas, Hannah

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened, we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9% (95% CI [8.4, 16.6]), 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4]. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420

  14. Orbit-determination performance of Doppler data for interplanetary cruise trajectories. Part 1: Error analysis methodology

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.; Thurman, S. W.

    1992-01-01

    An error covariance analysis methodology is used to investigate different weighting schemes for two-way (coherent) Doppler data in the presence of transmission-media and observing-platform calibration errors. The analysis focuses on orbit-determination performance in the interplanetary cruise phase of deep-space missions. Analytical models for the Doppler observable and for transmission-media and observing-platform calibration errors are presented, drawn primarily from previous work. Previously published analytical models were improved upon by the following: (1) considering the effects of errors in the calibration of radio signal propagation through the troposphere and ionosphere as well as station-location errors; (2) modelling the spacecraft state transition matrix using a more accurate piecewise-linear approximation to represent the evolution of the spacecraft trajectory; and (3) incorporating Doppler data weighting functions that are functions of elevation angle, which reduce the sensitivity of the estimated spacecraft trajectory to troposphere and ionosphere calibration errors. The analysis is motivated by the need to develop suitable weighting functions for two-way Doppler data acquired at 8.4 GHz (X-band) and 32 GHz (Ka-band). This weighting is likely to be different from that in the weighting functions currently in use; the current functions were constructed originally for use with 2.3 GHz (S-band) Doppler data, which are affected much more strongly by the ionosphere than are the higher frequency data.
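
    A toy version of the covariance-analysis step described above: for a weighted least-squares estimate the formal parameter covariance is (H^T W H)^-1, so an elevation-dependent weighting of the Doppler points changes the predicted parameter uncertainties. The partials matrix and the weighting function below are made-up stand-ins, not the models used in the study.

```python
# Formal covariance of a weighted least-squares fit with elevation-dependent weights.
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_par = 200, 4
H = rng.normal(size=(n_obs, n_par))        # stand-in partials (observations x parameters)
elev = np.linspace(5, 85, n_obs)           # elevation angle of each pass point, degrees

sigma = 0.1 / np.sin(np.radians(elev))     # assumed noise growing at low elevation
W = np.diag(1.0 / sigma**2)                # elevation-dependent data weights

P = np.linalg.inv(H.T @ W @ H)             # formal parameter covariance
print(np.sqrt(np.diag(P)))                 # 1-sigma parameter uncertainties
```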

  15. Instructions to “push as hard as you can” improve average chest compression depth in dispatcher-assisted Cardiopulmonary Resuscitation

    PubMed Central

    Mirza, Muzna; Brown, Todd B.; Saini, Devashish; Pepper, Tracy L; Nandigam, Hari Krishna; Kaza, Niroop; Cofield, Stacey S.

    2008-01-01

    Background and Objective Cardiopulmonary Resuscitation (CPR) with adequate chest compression depth appears to improve first shock success in cardiac arrest. We evaluate the effect of simplification of chest compression instructions on compression depth in dispatcher-assisted CPR protocol. Methods Data from two randomized, double-blinded, controlled trials with identical methodology were combined to obtain 332 records for this analysis. Subjects were randomized to either modified Medical Priority Dispatch System (MPDS) v11.2 protocol or a new simplified protocol. The main difference between the protocols was the instruction to “push as hard as you can” in the simplified protocol, compared to “push down firmly 2 inches (5cm)” in MPDS. Data were recorded via a Laerdal® ResusciAnne® SkillReporter™ manikin. Primary outcome measures included: chest compression depth, proportion of compressions without error, with adequate depth and with total release. Results Instructions to “push as hard as you can”, compared to “push down firmly 2 inches (5cm)”, resulted in improved chest compression depth (36.4 vs 29.7 mm, p<0.0001), and improved median proportion of chest compressions done to the correct depth (32% vs <1%, p<0.0001). No significant difference in median proportion of compressions with total release (100% for both) and average compression rate (99.7 vs 97.5 per min, p<0.56) was found. Conclusions Modifying dispatcher-assisted CPR instructions by changing “push down firmly 2 inches (5cm)” to “push as hard as you can” achieved improvement in chest compression depth at no cost to total release or average chest compression rate. PMID:18635306

  16. A comparative analysis of errors in long-term econometric forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepel, R.

    1986-04-01

    The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.
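
    The error measure quoted above, written out as a one-line helper; the values passed in are placeholders, not figures from the DRI, Wharton, or Chase series.

```python
# Percent forecast error: positive means the forecast undershot the actual value.
def percent_error(actual: float, forecast: float) -> float:
    return 100.0 * (actual - forecast) / actual

print(percent_error(actual=3405.0, forecast=3297.5))  # roughly a 3.2% under-forecast
```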

  17. Error analysis of mechanical system and wavelength calibration of monochromator

    NASA Astrophysics Data System (ADS)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
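
    The relation exploited by a sine-bar drive, assuming a constant-deviation grating mounting, can be written as below: the wavelength becomes proportional to the screw travel divided by the sine-bar length, which is why an error in the bar length scales the wavelength readout. The symbols are generic, not the paper's notation.

```latex
% Grating equation and the sine-drive relation it implies (constant-deviation
% mounting assumed): m is the diffraction order, d the groove spacing,
% \varphi the fixed deviation angle, s the screw travel, L the sine-bar length.
m\lambda = d\,(\sin\alpha + \sin\beta), \qquad
\lambda = \frac{2d}{m}\cos\frac{\varphi}{2}\,\sin\theta
        = \frac{2d}{m}\cos\frac{\varphi}{2}\,\frac{s}{L}
```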

  18. Single Event Upset Analysis: On-orbit performance of the Alpha Magnetic Spectrometer Digital Signal Processor Memory aboard the International Space Station

    NASA Astrophysics Data System (ADS)

    Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi

    2018-03-01

    Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSP), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented and the effects from the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is assessed by combining data analysis and Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in the DSP program memory, but the effect depends on the on-orbit particle density.

  19. Analysis of Medication Error Reports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, Paul D.; Young, Jonathan; Santell, John

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  20. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  1. On-Error Training (Book Excerpt).

    ERIC Educational Resources Information Center

    Fukuda, Ryuji

    1985-01-01

    This excerpt from "Managerial Engineering: Techniques for Improving Quality and Productivity in the Workplace" describes the development, objectives, and use of On-Error Training (OET), a method which trains workers to learn from their errors. Also described is New Joharry's Window, a performance-error data analysis technique used in…

  2. C-band radar pulse Doppler error: Its discovery, modeling, and elimination

    NASA Technical Reports Server (NTRS)

    Krabill, W. B.; Dempsey, D. J.

    1978-01-01

    The discovery of a C-band radar pulse Doppler error is discussed, and the use of the GEOS 3 satellite's coherent transponder to isolate the error source is described. An analysis of the pulse Doppler tracking loop is presented, and a mathematical model for the error is developed. Error correction techniques were developed and are described, including implementation details.

  3. Skills, rules and knowledge in aircraft maintenance: errors in context

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2002-01-01

    Automatic or skill-based behaviour is generally considered to be less prone to error than behaviour directed by conscious control. However, researchers who have applied Rasmussen's skill-rule-knowledge human error framework to accidents and incidents have sometimes found that skill-based errors appear in significant numbers. It is proposed that this is largely a reflection of the opportunities for error which workplaces present and does not indicate that skill-based behaviour is intrinsically unreliable. In the current study, 99 errors reported by 72 aircraft mechanics were examined in the light of a task analysis based on observations of the work of 25 aircraft mechanics. The task analysis identified the opportunities for error presented at various stages of maintenance work packages and by the job as a whole. Once the frequency of each error type was normalized in terms of the opportunities for error, it became apparent that skill-based performance is more reliable than rule-based performance, which is in turn more reliable than knowledge-based performance. The results reinforce the belief that industrial safety interventions designed to reduce errors would best be directed at those aspects of jobs that involve rule- and knowledge-based performance.

  4. Scout trajectory error propagation computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1982-01-01

    Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
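
    As an illustration of the covariance step described above, the following Python sketch forms a sample covariance matrix from a set of simulated burnout-parameter errors and propagates it with a linearized state-transition matrix. The error magnitudes and the matrix Phi are illustrative assumptions, not values from the STEP program.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for ~50 observed burnout-error sets
    # (columns: altitude [m], velocity [m/s], flight-path angle [rad]).
    errors = rng.normal(0.0, [120.0, 2.5, 0.001], size=(50, 3))

    # Sample covariance of the burnout errors (rows are observations).
    P0 = np.cov(errors, rowvar=False)

    # Linearized state-transition matrix Phi (illustrative values only);
    # the covariance propagates as P(t) = Phi P0 Phi^T.
    Phi = np.array([[1.0, 10.0, 0.0],
                    [0.0, 1.0,  0.0],
                    [0.0, 0.05, 1.0]])
    P_t = Phi @ P0 @ Phi.T

    print("1-sigma burnout errors:   ", np.sqrt(np.diag(P0)))
    print("1-sigma propagated errors:", np.sqrt(np.diag(P_t)))
    ```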

  5. Analysis of dam-passage survival of yearling and subyearling Chinook salmon and juvenile steelhead at The Dalles Dam, Oregon, 2010

    USGS Publications Warehouse

    Beeman, John W.; Kock, Tobias J.; Perry, Russell W.; Smith, Steven G.

    2011-01-01

    We performed a series of analyses of mark-recapture data from a study at The Dalles Dam during 2010 to determine if model assumptions for estimation of juvenile salmonid dam-passage survival were met and if results were similar to those using the University of Washington's newly developed ATLAS software. The study was conducted by the Pacific Northwest National Laboratory and used acoustic telemetry of yearling Chinook salmon, juvenile steelhead, and subyearling Chinook salmon released at three sites according to the new virtual/paired-release statistical model. This was the first field application of the new model, and the results were used to measure compliance with minimum survival standards set forth in a recent Biological Opinion. Our analyses indicated that most model assumptions were met. The fish groups mixed in time and space, and no euthanized tagged fish were detected. Estimates of reach-specific survival were similar in fish tagged by each of the six taggers during the spring, but not in the summer. Tagger effort was unevenly allocated temporally during tagging of subyearling Chinook salmon in the summer; the difference in survival estimates among taggers was more likely a result of a temporal trend in actual survival than of tagger effects. The reach-specific survival of fish released at the three sites was not equal in the reaches they had in common for juvenile steelhead or subyearling Chinook salmon, violating one model assumption. This violation did not affect the estimate of dam-passage survival, because data from the common reaches were not used in its calculation. Contrary to expectation, precision of survival estimates was not improved by using the most parsimonious model of recapture probabilities instead of the fully parameterized model. Adjusting survival estimates for differences in fish travel times and tag lives increased the dam-passage survival estimate for yearling Chinook salmon by 0.0001 and for juvenile steelhead by 0.0004. The estimate was unchanged for subyearling Chinook salmon. The tag-life-adjusted dam-passage survival estimates from our analyses were 0.9641 (standard error [SE] 0.0096) for yearling Chinook salmon, 0.9534 (SE 0.0097) for juvenile steelhead, and 0.9404 (SE 0.0091) for subyearling Chinook salmon. These were within 0.0001 of estimates made by the University of Washington using the ATLAS software. Contrary to the intent of the virtual/paired-release model to adjust estimates of the paired-release model downward in order to account for differential handling mortality rates between release groups, random variation in survival estimates may result in an upward adjustment of survival relative to estimates from the paired-release model. Further investigation of this property of the virtual/paired-release model likely would prove beneficial. In addition, we suggest that differential selective pressures near release sites of the two control groups could bias estimates of dam-passage survival from the virtual/paired-release model.

  6. System reliability and recovery.

    DOT National Transportation Integrated Search

    1971-06-01

    The paper exhibits a variety of reliability techniques applicable to future ATC data processing systems. Presently envisioned schemes for error detection, error interrupt and error analysis are considered, along with methods of retry, reconfiguration...

  7. Performance analysis of a concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.; Kasami, T.

    1983-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after inner-code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
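
    A minimal simulation of this kind of scheme is sketched below: a toy (3,1) repetition inner code corrects single bit flips per symbol, a simple interleaved-parity outer code only detects residual errors and triggers a retransmission, and Monte Carlo trials estimate the retransmission and undetected-error rates. The channel error probability, frame length, and both component codes are illustrative stand-ins, not the codes analyzed in the paper.

    ```python
    import random

    random.seed(1)
    p = 0.05           # bit-error probability of the binary symmetric channel
    n_frames = 20000
    k = 32             # information bits per frame

    def outer_parity(bits):
        # Toy outer detection code: 8 interleaved parity bits (illustrative only).
        return [sum(bits[i::8]) % 2 for i in range(8)]

    undetected = retransmit = 0
    for _ in range(n_frames):
        data = [random.getrandbits(1) for _ in range(k)]
        coded = data + outer_parity(data)
        # Inner (3,1) repetition code: majority vote corrects single bit flips.
        received = []
        for b in coded:
            copies = [b ^ (random.random() < p) for _ in range(3)]
            received.append(1 if sum(copies) >= 2 else 0)
        if outer_parity(received[:k]) != received[k:]:
            retransmit += 1            # outer code detects residual errors
        elif received[:k] != data:
            undetected += 1            # residual errors slip past the outer code

    print("retransmission rate:  ", retransmit / n_frames)
    print("undetected-error rate:", undetected / n_frames)
    ```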

  8. Reliability of drivers in urban intersections.

    PubMed

    Gstalter, Herbert; Fastenmeier, Wolfgang

    2010-01-01

    The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE-task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers' or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary between both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly equals the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
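
    The core calculation, error probability as the ratio of errors committed to opportunities for error, can be sketched as follows; the task names and counts are hypothetical, not the study's data.

    ```python
    # Hypothetical observation counts (errors committed vs. opportunities for error)
    # for three intersection task types; the figures are illustrative only.
    observations = {
        "signalised intersection":     {"errors": 12, "opportunities": 560},
        "non-signalised intersection": {"errors": 41, "opportunities": 480},
        "roundabout":                  {"errors": 35, "opportunities": 390},
    }

    for task, obs in observations.items():
        hep = obs["errors"] / obs["opportunities"]   # human error probability
        reliability = 1.0 - hep                      # probability of error-free completion
        print(f"{task:28s}  HEP = {hep:.3f}  reliability = {reliability:.3f}")
    ```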

  9. Toxic release consequence analysis tool (TORCAT) for inherently safer design plant.

    PubMed

    Shariff, Azmi Mohd; Zaini, Dzulkarnain

    2010-10-15

    Major toxic release accidents in the past have caused many fatalities, such as the tragedy of the MIC release in Bhopal, India (1984). One approach is the inherently safer design technique, which applies the inherent safety principle to eliminate or minimize accidents rather than control the hazard. This technique is best implemented in the preliminary design stage, where the consequence of a toxic release can be evaluated and the necessary design improvements can be made to reduce accidents to as low as reasonably practicable (ALARP) without resorting to a costly protective system. However, no commercial tool with such capability is currently available. This paper reports preliminary findings on the development of a prototype tool for consequence analysis and design improvement via the inherent safety principle, which couples an integrated process design simulator with a toxic release consequence analysis model. Consequence analyses based on worst-case scenarios during the process flowsheeting stage were conducted as case studies. The preliminary findings show that the toxic release consequence analysis tool (TORCAT) is capable of eliminating or minimizing potential toxic release accidents by adopting the inherent safety principle early in the preliminary design stage. 2010 Elsevier B.V. All rights reserved.

  10. Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students

    ERIC Educational Resources Information Center

    Muzangwa, Jonatan; Chifamba, Peter

    2012-01-01

    This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 BEd. Mathematics students at Great Zimbabwe University. Data were gathered through two exercises on Calculus 1 & 2. The analysis of the results from the tests showed that a majority of the errors were due…

  11. The impact of manufacturing variables on in vitro release of clobetasol 17-propionate from pilot scale cream formulations.

    PubMed

    Fauzee, Ayeshah Fateemah Beebee; Khamanga, Sandile Maswazi; Walker, Roderick Bryan

    2014-12-01

    The purpose of the study was to evaluate the effect of different homogenization speeds and times, anchor speeds and cooling times on the viscosity and cumulative % clobetasol 17-propionate released per unit area at 72 h from pilot-scale cream formulations. A 2^4 full factorial central composite design for four independent variables was investigated. Thirty pilot-scale batches of cream formulations were manufactured using a Wintech® cream/ointment plant. The viscosity and in vitro release of CP were monitored and compared to an innovator product that is commercially available on the South African market, namely, Dermovate® cream. Contour and three-dimensional response surface plots were produced, and the viscosity and cumulative % CP released per unit area at 72 h were found to be primarily dependent on the homogenization and anchor speeds. An increase in the homogenization and anchor speeds appeared to exhibit a synergistic effect on the resultant viscosity of the cream, whereas an antagonistic effect was observed for the in vitro release of CP from the experimental cream formulations. The in vitro release profiles were best fitted to a Higuchi model, and diffusion proved to be the dominant mechanism of drug release, which was confirmed by use of the Korsmeyer-Peppas model. The research was further validated and confirmed by the high prognostic ability of response surface methodology (RSM), with a resultant mean percentage error (±SD) of 0.17 ± 0.093, suggesting that RSM may be an efficient tool for the development and optimization of topical formulations.
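
    A minimal sketch of the model-fitting step is shown below: cumulative release data are fitted to the Higuchi model (Q = k·t^0.5) and the Korsmeyer-Peppas model (Q = k·t^n) and compared by their residual sums of squares. The release values are invented for illustration and do not come from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical cumulative-release data (amount per unit area vs. time in hours);
    # values are illustrative, not the study's measurements.
    t = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)
    q = np.array([11.0, 15.5, 22.4, 27.1, 38.9, 54.2, 66.0])

    higuchi = lambda t, k: k * np.sqrt(t)          # Q = k * t^0.5
    peppas = lambda t, k, n: k * t**n              # Q = k * t^n

    (k_h,), _ = curve_fit(higuchi, t, q)
    (k_p, n), _ = curve_fit(peppas, t, q, p0=[10.0, 0.5])

    sse = lambda pred: float(np.sum((q - pred) ** 2))
    print(f"Higuchi:          k = {k_h:.2f}, SSE = {sse(higuchi(t, k_h)):.1f}")
    print(f"Korsmeyer-Peppas: k = {k_p:.2f}, n = {n:.2f}, SSE = {sse(peppas(t, k_p, n)):.1f}")
    print("n close to 0.5 indicates Fickian, diffusion-controlled release")
    ```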

  12. Diabetes Insipidus—Difficulties in Diagnosis and Treatment; Use of Synthetic Lysine-8 Vasopressin in Patients Intolerant of Other Therapy

    PubMed Central

    Eisenberg, Eugene

    1965-01-01

    Frequent errors in the diagnosis of diabetes insipidus arise from (1) failure to produce an adequate stimulus for release of antidiuretic hormone, and (2) failure to appreciate acute or chronic changes in renal function that may obscure test results. Properly timed determination of body weight, urine volume and serum and urine osmolarity during the course of water deprivation, and comparison of these values with those obtained after administration of exogenous vasopressin, eliminates most diagnostic errors. In four patients who had experienced local and systemic reactions to other exogenous forms of vasopressin, diabetes insipidus was satisfactorily controlled by administration of synthetic lysine-8 vasopressin in nasal spray. A fifth patient was also treated satisfactorily with this preparation. PMID:14290932

  13. Astrometry for New Reductions: The ANR method

    NASA Astrophysics Data System (ADS)

    Robert, Vincent; Le Poncin-Lafitte, Christophe

    2018-04-01

    Accurate positional measurements of planets and satellites are used to improve our knowledge of their orbits and dynamics, and to infer the accuracy of the planet and satellite ephemerides. With the arrival of the Gaia-DR1 reference star catalog and its complete release afterward, the traditional methods for ground-based astrometry have become outdated: their formal accuracy now lags behind that of the reference catalog used. Systematic and zonal errors of the reference stars are eliminated, and the astrometric reduction process now dominates the error budget. We present a set of algorithms for computing the apparent directions of planets, satellites and stars on any date to micro-arcsecond precision. The expressions are consistent with the ICRS reference system, and define the transformation between theoretical reference data and ground-based astrometric observables.

  14. Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution and Eruption

    NASA Astrophysics Data System (ADS)

    Leake, J. E.; Linton, M.; Schuck, P. W.

    2017-12-01

    Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the recent development of coronal models which are "data-driven" at the photosphere. Using magnetohydrodynamic simulations of active region formation and our recently created validation framework, we investigate the source of errors in data-driven models that use surface measurements of the magnetic field, and derived MHD quantities, to model the coronal magnetic field. The primary sources of errors in these studies are the temporal and spatial resolution of the surface measurements. We will discuss the implications of these studies for accurately modeling the build-up and release of coronal magnetic energy based on photospheric magnetic field observations.

  15. Earth Gravitational Model 2020

    NASA Astrophysics Data System (ADS)

    Barnes, Daniel; Holmes, Simon; Factor, John; Ingalls, Sarah; Presicci, Manny; Beale, James

    2017-04-01

    The National Geospatial-Intelligence Agency [NGA], in conjunction with its U.S. and international partners, has begun preliminary work on its next Earth Gravitational Model, to replace EGM2008. The new 'Earth Gravitational Model 2020' [EGM2020] has an expected public release date of 2020, and will likely retain the same harmonic basis and resolution as EGM2008. As such, EGM2020 will be essentially an ellipsoidal harmonic model up to degree (n) and order (m) 2159, but will be released as a spherical harmonic model to degree 2190 and order 2159. EGM2020 will benefit from new data sources and procedures. Updated satellite gravity information from the GOCE and GRACE missions will better support the lower harmonics globally. Multiple new acquisitions (terrestrial, airborne and shipborne) of gravimetric data over specific geographical areas will provide improved global coverage and resolution over the land, as well as for coastal and some ocean areas. Ongoing accumulation of satellite altimetry data, as well as improvements in the treatment of these data, will better define the marine gravity field, most notably in polar and near-coastal regions. NGA and partners are evaluating different approaches for optimally combining the new GOCE/GRACE satellite gravity models with the terrestrial data. These include the latest methods employing a full covariance adjustment. NGA is also working to assess systematically the quality of its entire gravimetry database, towards correcting biases and other egregious errors where possible, and generating improved error models that will inform the final combination with the latest satellite gravity models. Outdated data gridding procedures have been replaced with improved approaches. For EGM2020, NGA intends to extract maximum value from the proprietary data that overlaps geographically with unrestricted data, whilst also making sure to respect and honor its proprietary agreements with its data-sharing partners. Approved for Public Release, 15-564

  16. Preparation of delayed release tablet dosage forms by compression coating: effect of coating material on theophylline release.

    PubMed

    El-Malah, Yasser; Nazzal, Sami

    2010-06-01

    In this study, compression-coated tablets were prepared and examined by real-time swelling/erosion analysis and dissolution studies. Of the coating materials, PVP showed no swelling behavior and had no impact on theophylline release. Polyox(®) exhibited the swelling behavior of an entangled polymer, which was reflected in its >14-hour delayed-release profile. Hydroxypropyl methylcellulose (HPMC), which revealed the characteristics of a disentangled polymer, caused a 2-h delay in theophylline release. Based on preliminary texture analysis data, Polyox(®)/PVP blends were used as coating materials to manipulate the onset of drug release from the compression-coated tablets. A 1:1 blend, for example, resulted in a burst release after 10 h, which demonstrated the feasibility of preparing delayed-release dosage forms by compression coating. Furthermore, it was feasible to predict the dissolution behavior of polymers from their swelling/erosion data, which were generated from texture analysis.

  17. How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.

    PubMed

    Lecca, Paola

    2018-01-01

    We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit almost the entire release profile quite accurately when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them mathematically, may allow avoiding time-consuming, trial-and-error-based regression procedures. Three bullet points highlight the customization of the procedure. •An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It encodes the model of the physical process in a simple but accurate way in the formula of the Monte Carlo Micro Step (MCS) time interval.•Given the experimentally observed curve of drug release, we point out how Monte Carlo heuristics can be integrated in an evolutionary algorithmic approach to infer the MCS model best fitting the observed data, and thus the observed release kinetics.•The software implementing the method is written in R, the free language most widely used in the bioinformatics community.
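
    A minimal Monte Carlo heuristic in this spirit is sketched below (in Python rather than R): drug particles perform an unbiased random walk inside a one-dimensional matrix and are counted as released when they leave it, with an assumed MCS time interval mapping sweeps to physical time. The lattice size, particle count, and tau are illustrative assumptions, not the paper's parameterization.

    ```python
    import random

    random.seed(42)

    # Minimal Monte Carlo sketch of release from a 1-D matrix of L lattice sites.
    # Each drug particle performs an unbiased random walk and counts as "released"
    # once it leaves the matrix. tau is the assumed Monte Carlo Micro Step (MCS)
    # time interval mapping one sweep to physical time (illustrative value).
    L, n_particles, tau = 50, 500, 0.02          # sites, particles, hours per MCS
    positions = [random.randint(1, L) for _ in range(n_particles)]

    released, curve = 0, []
    for step in range(1, 4001):
        still_inside = []
        for x in positions:
            x += random.choice((-1, 1))
            if 1 <= x <= L:
                still_inside.append(x)
            else:
                released += 1                    # particle escaped the dosage form
        positions = still_inside
        if step % 400 == 0:
            curve.append((step * tau, released / n_particles))

    for t, frac in curve:
        print(f"t = {t:5.1f} h   fraction released = {frac:.2f}")
    ```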

  18. Linguistic Pattern Analysis of Misspellings of Typically Developing Writers in Grades 1 to 9

    PubMed Central

    Bahr, Ruth Huntley; Silliman, Elaine R.; Berninger, Virginia W.; Dow, Michael

    2012-01-01

    Purpose A mixed methods approach, evaluating triple word form theory, was used to describe linguistic patterns of misspellings. Method Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in grades 1–9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Results Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between grades 4–5. Similar error types were noted across age groups but the nature of linguistic feature error changed with age. Conclusions Triple word-form theory was supported. By grade 1, orthographic errors predominated and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects non-linear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling. PMID:22473834

  19. Mathematical Writing Errors in Expository Writings of College Mathematics Students

    ERIC Educational Resources Information Center

    Guce, Ivee K.

    2017-01-01

    Despite the efforts to confirm the effectiveness of writing in learning mathematics, analysis on common errors in mathematical writings has not received sufficient attention. This study aimed to provide an account of the students' procedural explanations in terms of their commonly committed errors in mathematical writing. Nine errors in…

  20. Xenotransplantation of neonatal porcine liver cells.

    PubMed

    Garkavenko, O; Emerich, D F; Muzina, M; Muzina, Z; Vasconcellos, A V; Ferguson, A B; Cooper, I J; Elliott, R B

    2005-01-01

    Xenotransplantation of porcine liver cell types may provide a means of overcoming the shortage of suitable donor tissues to treat hepatic diseases characterized by inherited inborn errors of metabolism or protein production. Here we report the successful isolation, culture, and xenotransplantation of liver cells harvested from 7- to 10-day-old piglets. Liver cells were isolated and cultured immediately after harvesting. Cell viability was excellent (>90%) over the duration of the in vitro studies (3 weeks), and the cultured cells continued to proliferate significantly. These cells also retained their normal secretory and metabolic capabilities, as determined by the continued release of albumin and factor 8, and by indocyanine green (ICG) uptake. After 3 weeks in culture, porcine liver cells were loaded into immunoisolatory macro devices (Theracyte devices) and placed into the intraperitoneal cavity of immunocompetent CD1 mice. Eight weeks later, the devices were retrieved and the cells analyzed for posttransplant determinations of survival and function. Postmortem analysis confirmed that the cell-loaded devices were biocompatible and were well tolerated without inducing any notable inflammatory reaction in the tissues immediately surrounding the encapsulated cells. Finally, the encapsulated liver cells remained viable and functional as determined by histologic analyses and ICG uptake/release. The successful harvesting, culturing, and xenotransplantation of functional neonatal pig liver cells support the continued development of this approach for treating a range of currently undertreated or intractable hepatic diseases.

  1. Comparison Between Path Lengths Traveled by Solar Electrons and Ions in Ground-Level Enhancement Events

    NASA Technical Reports Server (NTRS)

    Tan, Lun C.; Malandraki, Olga E.; Reames, Donald; NG, Chee K.; Wang, Linghua; Patsou, Ioanna; Papaioannou, Athanasios

    2013-01-01

    We have examined the Wind/3DP/SST electron and Wind/EPACT/LEMT ion data to investigate the path length difference between solar electrons and ions in the ground-level enhancement (GLE) events in solar cycle 23. Assuming that the onset time of metric type II or decameter-hectometric (DH) type III radio bursts is the solar release time of non-relativistic electrons, we have found that within an error range of plus or minus 10% the deduced path length of low-energy (approximately 27 keV) electrons from their release site near the Sun to the 1 AU observer is consistent with the ion path length deduced by Reames from the onset time analysis. In addition, the solar longitude distribution and IMF topology of the GLE events examined are in favor of the coronal mass ejection-driven shock acceleration origin of observed non-relativistic electrons. We have also found an increase of electron path lengths with increasing electron energies. The increasing rate of path lengths is correlated with the pitch angle distribution (PAD) of peak electron intensities locally measured, with a higher rate corresponding to a broader PAD. The correlation indicates that the path length enhancement is due to the interplanetary scattering experienced by first arriving electrons. The observed path length consistency implies that the maximum stable time of magnetic flux tubes, along which particles transport, could reach 4.8 hr.
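
    The path-length inference rests on simple kinematics: the electron speed follows from its kinetic energy, and the path length is that speed multiplied by the delay between the assumed solar release time (the radio burst onset) and the 1 AU onset. The sketch below works through the arithmetic with an assumed 35-minute delay, which is illustrative rather than a value from the paper.

    ```python
    import math

    C = 299_792_458.0            # speed of light, m/s
    AU = 1.495978707e11          # astronomical unit, m
    ME_C2_KEV = 511.0            # electron rest energy, keV

    def electron_beta(e_kev):
        """Relativistic speed (v/c) of an electron with kinetic energy e_kev."""
        gamma = 1.0 + e_kev / ME_C2_KEV
        return math.sqrt(1.0 - 1.0 / gamma**2)

    # Hypothetical timing: the type III burst onset is taken as the solar release
    # time, and ~27 keV electrons are first seen at 1 AU 35 minutes later
    # (the delay is illustrative, not a value from the paper).
    delay_s = 35.0 * 60.0
    beta = electron_beta(27.0)
    path_au = beta * C * delay_s / AU

    print(f"v/c for 27 keV electrons = {beta:.3f}")
    print(f"inferred path length = {path_au:.2f} AU "
          f"(nominal Parker spiral length is ~1.2 AU)")
    ```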

  2. Freeze-dried, mucoadhesive system for vaginal delivery of the HIV microbicide, dapivirine: optimisation by an artificial neural network.

    PubMed

    Woolfson, A David; Umrethia, Manish L; Kett, Victoria L; Malcolm, R Karl

    2010-03-30

    Dapivirine mucoadhesive gels and freeze-dried tablets were prepared using a 3x3x2 factorial design. An artificial neural network (ANN) with a multilayer perceptron was used to investigate the effect of hydroxypropyl-methylcellulose (HPMC): polyvinylpyrrolidone (PVP) ratio (X1), mucoadhesive concentration (X2) and delivery system (gel or freeze-dried mucoadhesive tablet, X3) on the response variables: cumulative release of dapivirine at 24 h (Q(24)), mucoadhesive force (F(max)) and zero-rate viscosity. Optimisation was performed by minimising the error between the experimental and predicted values of responses by ANN. The method was validated using check point analysis by preparing six formulations of gels and their corresponding freeze-dried tablets randomly selected from within the design space of contour plots. Experimental and predicted values of response variables were not significantly different (p>0.05, two-sided paired t-test). For gels, Q(24) values were higher than those of their corresponding freeze-dried tablets. F(max) values for freeze-dried tablets were significantly different (2-4 times greater, p>0.05, two-sided paired t-test) compared to equivalent gels. Freeze-dried tablets having lower values for X1 and higher values for X2 components offered the best compromise between effective dapivirine release, mucoadhesion and viscosity such that increased vaginal residence time was likely to be achieved. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  3. Human error analysis of commercial aviation accidents using the human factors analysis and classification system (HFACS)

    DOT National Transportation Integrated Search

    2001-02-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework : originally developed and tested within the U.S. military as a tool for investigating and analyzing the human : causes of aviation accidents. Based upon ...

  4. Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Dȩbski, Wojciech

    2008-07-01

    Many aspects of earthquake source dynamics like dynamic stress drop, rupture velocity and directivity, etc. are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude M_L ≈ 3.1 that occurred at Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
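
    A stripped-down version of the Bayesian sampling step is sketched below: a Metropolis random walk samples the a posteriori density of two parameters of a synthetic source time function (a positive-amplitude Gaussian pulse), and the posterior standard deviations provide the error estimates. The pulse parameterization, noise level, and proposal widths are illustrative assumptions, far simpler than the pseudo-spectral parameterization used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic "observed" source time function: a Gaussian pulse plus noise.
    t = np.linspace(0.0, 1.0, 100)
    true_amp, true_width, sigma = 2.0, 0.08, 0.3
    pulse = lambda amp, width: amp * np.exp(-0.5 * ((t - 0.5) / width) ** 2)
    data = pulse(true_amp, true_width) + rng.normal(0.0, sigma, t.size)

    def log_posterior(amp, width):
        if amp <= 0.0 or width <= 0.0:           # physical constraints: positivity
            return -np.inf
        residual = data - pulse(amp, width)
        return -0.5 * np.sum(residual**2) / sigma**2

    # Metropolis sampling of the a posteriori density.
    samples, current = [], np.array([1.0, 0.2])
    log_p = log_posterior(*current)
    for _ in range(20000):
        proposal = current + rng.normal(0.0, [0.05, 0.01])
        log_p_new = log_posterior(*proposal)
        if np.log(rng.random()) < log_p_new - log_p:
            current, log_p = proposal, log_p_new
        samples.append(current.copy())

    samples = np.array(samples[5000:])           # discard burn-in
    mean, std = samples.mean(axis=0), samples.std(axis=0)
    print(f"amplitude = {mean[0]:.3f} +/- {std[0]:.3f} (true {true_amp})")
    print(f"width     = {mean[1]:.3f} +/- {std[1]:.3f} (true {true_width})")
    ```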

  5. An electrophysiological signal that precisely tracks the emergence of error awareness

    PubMed Central

    Murphy, Peter R.; Robertson, Ian H.; Allen, Darren; Hester, Robert; O'Connell, Redmond G.

    2012-01-01

    Recent electrophysiological research has sought to elucidate the neural mechanisms necessary for the conscious awareness of action errors. Much of this work has focused on the error positivity (Pe), a neural signal that is specifically elicited by errors that have been consciously perceived. While awareness appears to be an essential prerequisite for eliciting the Pe, the precise functional role of this component has not been identified. Twenty-nine participants performed a novel variant of the Go/No-go Error Awareness Task (EAT) in which awareness of commission errors was indicated via a separate speeded manual response. Independent component analysis (ICA) was used to isolate the Pe from other stimulus- and response-evoked signals. Single-trial analysis revealed that Pe peak latency was highly correlated with the latency at which awareness was indicated. Furthermore, the Pe was more closely related to the timing of awareness than it was to the initial erroneous response. This finding was confirmed in a separate study which derived IC weights from a control condition in which no indication of awareness was required, thus ruling out motor confounds. A receiver-operating-characteristic (ROC) curve analysis showed that the Pe could reliably predict whether an error would be consciously perceived up to 400 ms before the average awareness response. Finally, Pe latency and amplitude were found to be significantly correlated with overall error awareness levels between subjects. Our data show for the first time that the temporal dynamics of the Pe trace the emergence of error awareness. These findings have important implications for interpreting the results of clinical EEG studies of error processing. PMID:22470332

  6. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    PubMed

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
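
    To make the SIMEX idea concrete, the sketch below applies it to the simplest setting of a linear regression with classical error in the covariate: extra error is added at increasing levels lambda, the naive estimate is recomputed at each level, and a quadratic extrapolation back to lambda = -1 gives an approximately bias-corrected slope. The data, error variance, and lambda grid are invented for illustration; the paper's extension targets hazard ratios from the Cox model, which is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated data: true covariate x, outcome y, and an error-prone measurement w.
    n, beta_true, sigma_u = 2000, 1.5, 0.8
    x = rng.normal(0.0, 1.0, n)
    y = beta_true * x + rng.normal(0.0, 0.5, n)
    w = x + rng.normal(0.0, sigma_u, n)          # classical measurement error

    slope = lambda a, b: np.polyfit(a, b, 1)[0]

    # SIMEX: add extra error at levels lambda, average the refits, extrapolate.
    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    estimates = []
    for lam in lambdas:
        fits = [slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
                for _ in range(50)]
        estimates.append(np.mean(fits))

    # Quadratic extrapolation of the estimates back to lambda = -1.
    coeffs = np.polyfit(lambdas, estimates, 2)
    beta_simex = np.polyval(coeffs, -1.0)

    print(f"naive slope (lambda=0): {estimates[0]:.3f}")
    print(f"SIMEX-corrected slope : {beta_simex:.3f}  (true {beta_true})")
    ```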

  7. Identification and assessment of common errors in the admission process of patients in Isfahan Fertility and Infertility Center based on "failure modes and effects analysis".

    PubMed

    Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila

    2016-01-01

    Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study aimed to identify and assess the common errors in the admission process by applying the approach of "failure modes and effects analysis" (FMEA). In this descriptive cross-sectional study, the admission process of the Isfahan Fertility and Infertility Center was selected for evaluation of its errors based on the team members' decision. First, the admission process was charted through observations, interviews with employees, multiple panels, and the FMEA worksheet, which has been used in many studies worldwide, including in Iran. Validity was evaluated through content and face validity, and reliability through review and confirmation of the obtained information by the FMEA team; eventually, possible errors, their causes, and three indicators (severity of effect, probability of occurrence, and probability of detection) were determined and corrective actions were proposed. Data analysis was based on the risk priority number (RPN), which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Twenty-five errors with RPN ≥ 125 were detected in the admission process, of which six had high priority in terms of severity and occurrence probability and were identified as high-risk errors. The team-oriented FMEA method could be useful for assessing errors and for reducing their probability of occurrence.
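
    The RPN arithmetic itself is simple, as the sketch below shows for a few hypothetical admission-process failure modes rated 1-10 on severity, occurrence, and detection; the descriptions and ratings are invented, and the RPN ≥ 125 threshold follows the study's cut-off.

    ```python
    # Hypothetical failure modes from an admission process (illustrative values).
    failure_modes = [
        # (description, severity, occurrence, detection), each rated 1-10
        ("patient file registered under wrong ID",    8, 5, 6),
        ("insurance information entered incorrectly", 5, 6, 4),
        ("referral documents missing at admission",   6, 4, 7),
        ("appointment scheduled for wrong physician", 7, 3, 5),
    ]

    # RPN = severity x occurrence x detection; rank failure modes by it.
    scored = [(desc, s * o * d) for desc, s, o, d in failure_modes]
    for desc, rpn in sorted(scored, key=lambda r: r[1], reverse=True):
        flag = "  <- high-risk (RPN >= 125)" if rpn >= 125 else ""
        print(f"RPN = {rpn:4d}  {desc}{flag}")
    ```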

  8. ESIprot: a universal tool for charge state determination and molecular weight calculation of proteins from electrospray ionization mass spectrometry data.

    PubMed

    Winkler, Robert

    2010-02-01

    Electrospray ionization (ESI) ion trap mass spectrometers with relatively low resolution are frequently used for the analysis of natural products and peptides. Although ESI spectra of multiply charged protein molecules also can be measured on this type of device, only average spectra are produced for the majority of naturally occurring proteins. Evaluating such ESI protein spectra would provide valuable information about the native state of investigated proteins. However, no suitable and freely available software could be found which allows the charge state determination and molecular weight calculation of single proteins from average ESI-MS data. Therefore, an algorithm based on standard deviation optimization (scatter minimization) was implemented for the analysis of protein ESI-MS data. The resulting software ESIprot was tested with ESI-MS data of six intact reference proteins between 12.4 and 66.7 kDa. In all cases, the correct charge states could be determined. The obtained absolute mass errors were in a range between -0.2 and 1.2 Da, and the relative errors were below 30 ppm. The possible mass accuracy allows for valid conclusions about the actual condition of proteins. Moreover, the ESIprot algorithm demonstrates an extraordinary robustness and allows spectral interpretation from as few as two peaks, given sufficient quality of the provided m/z data, without the necessity for peak intensity data. ESIprot is independent of the raw data format and the computer platform, making it a versatile tool for mass spectrometrists. The program code was released under the open-source GPLv3 license to support future developments of mass spectrometry software. Copyright 2010 John Wiley & Sons, Ltd.
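
    A minimal sketch of the scatter-minimization idea follows: for each candidate charge of the highest-m/z peak, consecutive charges are assigned to the peak series, each peak is deconvoluted to a neutral mass, and the assignment with the smallest standard deviation of those masses is kept. The peak list is synthetic (consistent with a ~13.9 kDa protein), and the code is not the ESIprot implementation.

    ```python
    import statistics

    PROTON = 1.007276   # mass of a proton, Da

    # Hypothetical averaged ESI peaks (m/z) of one protein at consecutive charge
    # states, highest m/z first; values are illustrative only.
    peaks = [1542.1, 1388.0, 1261.9, 1156.8, 1067.9]

    best = None
    for z0 in range(1, 60):
        charges = [z0 + i for i in range(len(peaks))]
        masses = [z * (mz - PROTON) for z, mz in zip(charges, peaks)]
        spread = statistics.stdev(masses)      # scatter of the deconvoluted masses
        if best is None or spread < best[0]:
            best = (spread, charges, statistics.mean(masses))

    spread, charges, mass = best
    print("assigned charge states:", charges)
    print(f"molecular weight = {mass:.1f} Da (scatter = {spread:.2f} Da)")
    ```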

  9. The SDSS-DR12 large-scale cross-correlation of damped Lyman alpha systems with the Lyman alpha forest

    NASA Astrophysics Data System (ADS)

    Pérez-Ràfols, Ignasi; Font-Ribera, Andreu; Miralda-Escudé, Jordi; Blomqvist, Michael; Bird, Simeon; Busca, Nicolás; du Mas des Bourboux, Hélion; Mas-Ribas, Lluís; Noterdaeme, Pasquier; Petitjean, Patrick; Rich, James; Schneider, Donald P.

    2018-01-01

    We present a measurement of the damped Lyα absorber (DLA) mean bias from the cross-correlation of DLAs and the Lyα forest, updating earlier results of Font-Ribera et al. (2012) with the final Baryon Oscillation Spectroscopic Survey data release and an improved method to address continuum-fitting corrections. Our cross-correlation is well fitted by linear theory with the standard ΛCDM model, with a DLA bias of b_DLA = 1.99 ± 0.11; a more conservative analysis, which removes DLAs in the Lyβ forest and uses only the cross-correlation at r > 10 h^-1 Mpc, yields b_DLA = 2.00 ± 0.19. This assumes the cosmological model from Planck Collaboration (2016) and the Lyα forest bias factors of Bautista et al. (2017), and includes only statistical errors obtained from bootstrap analysis. The main systematic errors arise from possible impurities and selection effects in the DLA catalogue and from uncertainties in the determination of the Lyα forest bias factors and a correction for effects of high column density absorbers. We find no dependence of the DLA bias on column density or redshift. The measured bias value corresponds to a host halo mass of ∼4 × 10^11 h^-1 M⊙ if all DLAs were hosted in haloes of a similar mass. In a realistic model where host haloes over a broad mass range have a DLA cross-section Σ(M_h) ∝ M_h^α down to M_h > M_min = 10^8.5 h^-1 M⊙, we find that α > 1 is required to have b_DLA > 1.7, implying a steeper relation or a higher value of M_min than is generally predicted in numerical simulations of galaxy formation.

  10. Compliance Monitoring of Juvenile Yearling Chinook Salmon and Steelhead Survival and Passage at The Dalles Dam, Spring 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Thomas J.; Skalski, John R.

    2010-10-01

    The purpose of this compliance study was to estimate dam passage survival of yearling Chinook salmon and steelhead smolts at The Dalles Dam during spring 2010. Under the 2008 Federal Columbia River Power System (FCRPS) Biological Opinion (BiOp), dam passage survival should be greater than or equal to 0.96 and estimated with a standard error (SE) less than or equal to 0.015. The study also estimated smolt passage survival from the forebay boat-restricted zone (BRZ) to the tailrace BRZ at The Dalles Dam, as well as the forebay residence time, tailrace egress, and spill passage efficiency (SPE), as required in the Columbia Basin Fish Accords. A virtual/paired-release design was used to estimate dam passage survival at The Dalles Dam. The approach included releases of acoustic-tagged smolts above John Day Dam that contributed to the formation of a virtual release at the face of The Dalles Dam. A survival estimate from this release was adjusted by a paired release below The Dalles Dam. A total of 4,298 yearling Chinook salmon and 4,309 steelhead smolts were tagged and released in the investigation. The Juvenile Salmon Acoustic Telemetry System (JSATS) tag model number ATS-156dB, weighing 0.438 g in air, was used in this investigation. The dam passage survival results are summarized as follows: Yearling Chinook Salmon 0.9641 (SE = 0.0096) and Steelhead 0.9535 (SE = 0.0097).

  11. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.

  12. The presence of English and Spanish dyslexia in the Web

    NASA Astrophysics Data System (ADS)

    Rello, Luz; Baeza-Yates, Ricardo

    2012-09-01

    In this study we present a lower bound of the prevalence of dyslexia in the Web for English and Spanish. On the basis of analysis of corpora written by dyslexic people, we propose a classification of the different kinds of dyslexic errors. A representative data set of dyslexic words is used to calculate this lower bound in web pages containing English and Spanish dyslexic errors. We also present an analysis of dyslexic errors in major Internet domains, social media sites, and throughout English- and Spanish-speaking countries. To show the independence of our estimations from the presence of other kinds of errors, we compare them with the overall lexical quality of the Web and with the error rate of noncorrected corpora. The presence of dyslexic errors in the Web motivates work in web accessibility for dyslexic users.

  13. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

  14. Analysis of the load selection on the error of source characteristics identification for an engine exhaust system

    NASA Astrophysics Data System (ADS)

    Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin

    2015-05-01

    The linear time-invariant assumption for the determination of acoustic source characteristics (the source strength and the source impedance) in the frequency domain has been proved reasonable in the design of an exhaust system. Different methods have been proposed for its identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator of the accuracy of the results. It was found that, for a certain load length, the load resistance at frequencies corresponding to odd multiples of the quarter wavelength produces peaks and the maximum error in source impedance identification. Therefore, load impedance in the frequency range around these odd quarter-wavelength multiples should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.

  15. Learning mechanisms to limit medication administration errors.

    PubMed

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  16. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is a developed one with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time profile of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods will be applied to the European Tracer Experiment (ETEX) data and the results will be compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
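
    A small sketch of the sparse, nonnegativity-constrained formulation is given below, using an off-the-shelf L1-penalized solver (scikit-learn's Lasso with positive=True) on a synthetic source-receptor system; the matrix, noise level, and regularization weight are illustrative assumptions, and the paper's own successive convex approximation methods are not reproduced.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(5)

    # Toy source-term problem: y = A x + noise, where x is the (nonnegative) release
    # time profile and A is a source-receptor sensitivity matrix. Only a few time
    # windows contain an actual release, so x is sparse. All values are synthetic.
    n_obs, n_windows = 60, 40
    A = np.abs(rng.normal(0.0, 1.0, (n_obs, n_windows)))
    x_true = np.zeros(n_windows)
    x_true[[8, 9, 10]] = [3.0, 5.0, 2.0]          # short release episode
    y = A @ x_true + rng.normal(0.0, 0.5, n_obs)

    # L1-regularised least squares with a nonnegativity constraint on the releases.
    model = Lasso(alpha=0.05, positive=True, fit_intercept=False, max_iter=50000)
    model.fit(A, y)
    x_hat = model.coef_

    nonzero = np.flatnonzero(x_hat > 1e-3)
    print("windows with estimated release:", nonzero)
    print("estimated amounts:", np.round(x_hat[nonzero], 2))
    print("estimated total release:", round(float(x_hat.sum()), 2), " (true 10.0)")
    ```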

  17. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and spherical harmonic function correction methods. The accuracy of the neural network method depends mainly on the structure of the network. Analysis and simulation show that both the BP and the RBF neural network system error correction methods achieve high correction accuracy; for small training samples, the RBF network method is preferable to the BP network method when training rate and network scale are taken into account.

  18. Comparison of MERRA-2 and ECCO-v4 ocean surface heat fluxes: Consequences of different forcing feedbacks on ocean circulation and implications for climate data assimilation.

    NASA Astrophysics Data System (ADS)

    Strobach, E.; Molod, A.; Menemenlis, D.; Forget, G.; Hill, C. N.; Campin, J. M.; Heimbach, P.

    2017-12-01

    Forcing ocean models with reanalysis data is a common practice in ocean modeling. As part of this practice, prescribed atmospheric state variables and interactive ocean SST are used to calculate fluxes between the ocean and the atmosphere. When forcing an ocean model with reanalysis fields, errors in the reanalysis data, errors in the ocean model and errors in the forcing formulation will generate a different solution compared to other ocean reanalysis solutions (which also have their own errors). As a first step towards a consistent coupled ocean-atmosphere reanalysis, we compare surface heat fluxes from a state-of-the-art atmospheric reanalysis, the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), to heat fluxes from a state-of-the-art oceanic reanalysis, the Estimating the Circulation and Climate of the Ocean Version 4, Release 2 (ECCO-v4). Then, we investigate the errors associated with the MITgcm ocean model in its ECCO-v4 ocean reanalysis configuration (1992-2011) when it is forced with MERRA-2 atmospheric reanalysis fields instead of with the ECCO-v4 adjoint-optimized ERA-interim state variables. This is done by forcing the ECCO-v4 ocean model with and without feedbacks from MERRA-2 related to turbulent fluxes of heat and moisture and the outgoing longwave radiation. In addition, we introduce an intermediate forcing method that includes only the feedback from the interactive outgoing longwave radiation. The resulting ocean circulation is compared with the ECCO-v4 reanalysis and in-situ observations. We show that, without feedbacks, imbalances in the energy and the hydrological cycles of MERRA-2 (which are directly related to the fact that it was created without an interactive ocean) result in considerable SST drifts and a large reduction in sea level. The bulk formulae and interactive outgoing longwave radiation, although providing air-sea feedbacks and reducing model-data misfit, strongly relax the ocean to observed SST and may result in unwanted features such as large changes in the water budget. These features have implications for the forcing recipe to be used. The results strongly and unambiguously argue for next-generation data assimilation climate studies to involve fully coupled systems.

  19. Determining relative error bounds for the CVBEM

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effect of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.

  20. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) to correct the GFS, following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over the oceans, which is attributed to improvements in the specification of the SSTs. These results encourage the application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the short-term error growth is still linear, the estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid-scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements, which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
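    The sketch below (a schematic illustration, not the operational GFS code; the field shapes, the toy tendency and the archived increments are assumptions) shows the basic recipe described above: average the archived analysis increments, convert them to a tendency by dividing by the 6-hr window, and add that forcing to the model tendency at every step.

        import numpy as np

        def estimate_bias_forcing(analysis_increments, window_hours=6.0):
            """Time-mean analysis increment converted to a tendency correction (units per second)."""
            return analysis_increments.mean(axis=0) / (window_hours * 3600.0)

        def step_with_online_correction(state, model_tendency, bias_forcing, dt):
            """One explicit time step with the empirical bias correction added to the tendency."""
            return state + dt * (model_tendency(state) + bias_forcing)

        # toy usage: a temperature field on a small grid with a systematic +0.5 K increment per cycle
        rng = np.random.default_rng(1)
        increments = 0.5 + 0.1 * rng.standard_normal((120, 10, 10))   # 120 archived 6-hr cycles
        forcing = estimate_bias_forcing(increments)
        state = 280.0 * np.ones((10, 10))
        state = step_with_online_correction(state, lambda s: np.zeros_like(s), forcing, dt=600.0)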

  1. Ontological analysis of SNOMED CT.

    PubMed

    Héja, Gergely; Surján, György; Varga, Péter

    2008-10-27

    SNOMED CT is the most comprehensive medical terminology. However, its use for intelligent services based on formal reasoning is questionable. The analysis of the structure of SNOMED CT is based on the formal top-level ontology DOLCE. The analysis revealed several ontological and knowledge-engineering errors, the most important of which are errors in the hierarchy (mostly from an ontological point of view, but also regarding medical aspects) and the mixing of subsumption relations with other relation types (mostly 'part of'). The errors found impede formal reasoning. The paper presents a possible way to correct these problems.

  2. Using Microcomputers for Assessment and Error Analysis. Monograph #23.

    ERIC Educational Resources Information Center

    Hasselbring, Ted S.; And Others

    This monograph provides an overview of computer-based assessment and error analysis in the instruction of elementary students with complex medical, learning, and/or behavioral problems. Information on generating and scoring tests using the microcomputer is offered, as are ideas for using computers in the analysis of mathematical strategies and…

  3. Airborne photography of chemical releases and analysis of twilight sky brightness data, phases 1 and 2

    NASA Technical Reports Server (NTRS)

    Bedinger, J. F.; Constantinides, E.

    1976-01-01

    The photography from aboard an aircraft of chemical releases is reported. The equipment installation on the aircraft is described, and photographs of the releases are included. An extensive analysis of twilight sky photographs is presented.

  4. Corrections of clinical chemistry test results in a laboratory information system.

    PubMed

    Wang, Sihe; Ho, Virginia

    2004-08-01

    The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. The objectives were to characterize the errors that led to corrections of pediatric clinical chemistry results in the laboratory information system, Misys, and to provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing whenever a correction was made in the laboratory information system, Misys. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to them. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory. This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. Directly interfacing the instruments to the laboratory information system had a favorable effect on reducing laboratory errors.

  5. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    NASA Technical Reports Server (NTRS)

    Alston, D. W.

    1981-01-01

    The objective of this research was to design a statistical model capable of performing an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. A least-squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
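    A minimal sketch of this kind of order selection (synthetic data and a generic partial F-test, not the original statistical model) raises the polynomial order until the added term no longer gives a significant reduction in the residual sum of squares:

        import numpy as np
        from scipy import stats

        def partial_f_test(x, y, order):
            """F-test comparing least-squares polynomial fits of degree order-1 and order."""
            rss = []
            for k in (order - 1, order):
                resid = y - np.polyval(np.polyfit(x, y, k), x)
                rss.append(np.sum(resid ** 2))
            df2 = len(x) - (order + 1)                       # residual degrees of freedom, full model
            f = (rss[0] - rss[1]) / (rss[1] / df2)
            return f, stats.f.sf(f, 1, df2)                  # F statistic and its p-value

        rng = np.random.default_rng(2)
        alpha = np.linspace(-10.0, 10.0, 40)                 # e.g. an angle-of-attack sweep
        cl = 0.1 * alpha + 0.002 * alpha ** 2 + 0.05 * rng.standard_normal(alpha.size)
        for order in (2, 3):
            print(order, partial_f_test(alpha, cl, order))   # quadratic term significant, cubic not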

  6. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.
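    As a rough sketch of the underlying idea (not the authors' code; the synthetic profiles, friction velocity and Reynolds number below are placeholders), the statistical error of a channel-flow DNS can be monitored through the residual of the mean streamwise momentum balance, whose total shear stress must become linear in the wall distance once the statistics converge:

        import numpy as np

        def momentum_residual(y, u_mean, uv, nu, u_tau, h=1.0):
            """Residual of the mean momentum balance, normalised by u_tau^2."""
            total_stress = nu * np.gradient(u_mean, y) - uv
            return (total_stress - u_tau ** 2 * (1.0 - y / h)) / u_tau ** 2

        # synthetic, consistent profiles so that the residual is small, as for converged statistics
        y = np.linspace(0.0, 1.0, 201)
        nu, u_tau = 1.0 / 180.0, 1.0                              # nominal Re_tau = 180 channel
        uv = -u_tau ** 2 * (1.0 - y) * np.tanh(y / 0.05)          # model Reynolds shear stress <u'v'>
        dudy = (u_tau ** 2 * (1.0 - y) + uv) / nu                 # mean-velocity gradient from the balance
        u_mean = np.concatenate(([0.0], np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))))
        rms_error = np.sqrt(np.mean(momentum_residual(y, u_mean, uv, nu, u_tau) ** 2))
        print(rms_error)                                          # single scalar tracked against averaging time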

  7. Lexico-Semantic Errors of the Learners of English: A Survey of Standard Seven Keiyo-Speaking Primary School Pupils in Keiyo District, Kenya

    ERIC Educational Resources Information Center

    Jeptarus, Kipsamo E.; Ngene, Patrick K.

    2016-01-01

    The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…

  8. Demonstration of spectral calibration for stellar interferometry

    NASA Technical Reports Server (NTRS)

    Demers, Richard T.; An, Xin; Tang, Hong; Rud, Mayer; Wayne, Leonard; Kissil, Andrew; Kwack, Eug-Yun

    2006-01-01

    A breadboard is under development to demonstrate the calibration of spectral errors in microarcsecond stellar interferometers. Analysis shows that thermally and mechanically stable hardware in addition to careful optical design can reduce the wavelength dependent error to tens of nanometers. Calibration of the hardware can further reduce the error to the level of picometers. The results of thermal, mechanical and optical analysis supporting the breadboard design will be shown.

  9. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    NASA Astrophysics Data System (ADS)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on the truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For the PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis theory of geographic information science.
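    For reference, a minimal sketch of the two quadrature rules named above on a regular grid DEM (the Gauss synthetic surface, grid size and cell size are illustrative assumptions, not the paper's setup):

        import numpy as np

        def volume_trapezoid(z, h):
            """Composite trapezoidal double rule for heights z on a regular grid with cell size h."""
            wx = np.ones(z.shape[0]); wx[[0, -1]] = 0.5
            wy = np.ones(z.shape[1]); wy[[0, -1]] = 0.5
            return h * h * np.sum(np.outer(wx, wy) * z)

        def volume_simpson(z, h):
            """Composite Simpson's double rule; requires an odd number of nodes along each axis."""
            def w(n):
                wt = np.ones(n); wt[1:-1:2] = 4.0; wt[2:-1:2] = 2.0
                return wt / 3.0
            return h * h * np.sum(np.outer(w(z.shape[0]), w(z.shape[1])) * z)

        # Gauss synthetic surface, so the reference volume is known (close to 4*pi here)
        x = y = np.linspace(-5.0, 5.0, 101)
        X, Y = np.meshgrid(x, y)
        Z = np.exp(-(X ** 2 + Y ** 2) / 4.0)
        h = x[1] - x[0]
        print(volume_trapezoid(Z, h), volume_simpson(Z, h), 4.0 * np.pi)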

  10. Theory, Image Simulation, and Data Analysis of Chemical Release Experiments

    NASA Technical Reports Server (NTRS)

    Wescott, Eugene M.

    1994-01-01

    The final phase of Grant NAG6-1 involved analysis of physics of chemical releases in the upper atmosphere and analysis of data obtained on previous NASA sponsored chemical release rocket experiments. Several lines of investigation of past chemical release experiments and computer simulations have been proceeding in parallel. This report summarizes the work performed and the resulting publications. The following topics are addressed: analysis of the 1987 Greenland rocket experiments; calculation of emission rates for barium, strontium, and calcium; the CRIT 1 and 2 experiments (Collisional Ionization Cross Section experiments); image calibration using background stars; rapid ray motions in ionospheric plasma clouds; and the NOONCUSP rocket experiments.

  11. Gravity Bias in Young and Adult Chimpanzees ("Pan Troglodytes"): Tests with a Modified Opaque-Tubes Task

    ERIC Educational Resources Information Center

    Tomonaga, Masaki; Imura, Tomoko; Mizuno, Yuu; Tanaka, Masayuki

    2007-01-01

    Young human children at around 2 years of age fail to predict the correct location of an object when it is dropped from the top of an S-shaped opaque tube. They search in the location just below the release point (Hood, 1995). This type of error, called a "gravity bias", has recently been reported in dogs and monkeys. In the present study, we…

  12. The Role of Data and Feedback Error in Inference and Prediction

    DTIC Science & Technology

    1998-06-01

  13. Measurement of radon concentration in water using the portable radon survey meter.

    PubMed

    Yokoyama, S; Mori, N; Shimo, M; Fukushi, M; Ohnuma, S

    2011-07-01

    A method for measuring radon in water using the portable radon survey meter (RnSM) was developed. A container with a propeller was used to stir the water samples and release radon from the water into the air in the sample box of the RnSM. With this method, the measurement error is <20% when the radon concentration in the mineral water is >20 Bq l⁻¹.

  14. Shades of Gray: Releasing the Cognitive Binds that Blind Us

    DTIC Science & Technology

    2016-09-01

    The availability heuristic is the cognitive process of problem solving based on learning and experience. This intuitive thinking process requires… describe a person's systematic but flawed patterns of response to both judgment and decision problems. Research on the effects of cognitive bias on the… errors made. The ICArUS sensemaking model currently being developed could provide the IC with software that has the ability to mirror human cognitive…

  15. A Method for Eliminating Beam Steering Error for the Modulated Absorption-Emission Thermometry Technique

    DTIC Science & Technology

    2015-01-01

    …emissivity and the radiative intensity of the gas over a spectral band. The temperature is then calculated from the Planck function. The technique does not…

  16. Using nurses and office staff to report prescribing errors in primary care.

    PubMed

    Kennedy, Amanda G; Littenberg, Benjamin; Senders, John W

    2008-08-01

    The objective was to implement a prescribing-error reporting system in primary care offices and analyze the reports, through a descriptive analysis of a voluntary prescribing-error reporting system in seven primary care offices in Vermont, USA, involving one hundred and three prescribers, managers, nurses and office staff. Nurses and office staff were asked to report all communications with community pharmacists regarding prescription problems. All reports were classified by severity category, setting, error mode, prescription domain and error-producing conditions. All practices submitted reports, although reporting decreased by 3.6 reports per month (95% CI, -2.7 to -4.4, P<0.001, by linear regression analysis). Two hundred and sixteen reports were submitted. Nearly 90% (142/165) of errors were severity Category B (errors that did not reach the patient) according to the National Coordinating Council for Medication Error Reporting and Prevention Index for Categorizing Medication Errors. Nineteen errors reached the patient without causing harm (Category C), and 4 errors caused temporary harm requiring intervention (Category E). Errors involving strength were found in 30% of reports, including 23 prescriptions written for strengths not commercially available. Antidepressants, narcotics and antihypertensives were the most frequently reported drug classes. Participants completed an exit survey with a response rate of 84.5% (87/103). Nearly 90% (77/87) of respondents were willing to continue reporting after the study ended; however, none of the participants currently submits reports. Nurses and office staff are a valuable resource for reporting prescribing errors. However, without ongoing reminders, the reporting system is not sustainable.

  17. Evaluating the prevalence and impact of examiner errors on the Wechsler scales of intelligence: A meta-analysis.

    PubMed

    Styck, Kara M; Walsh, Shana M

    2016-01-01

    The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. (© 2016 APA, all rights reserved.)

  18. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of the ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
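    A small simulation sketch (an ordinary linear regression rather than the paper's linear mixed model; the true slope, error variances and sample size are made up) contrasts the two error types: classical error in the exposure attenuates the slope, while pure Berkson error leaves it essentially unbiased.

        import numpy as np

        rng = np.random.default_rng(3)
        n, beta, sd_err = 20000, 2.0, 0.7

        def ols_slope(x, y):
            """Ordinary least-squares slope of y on x."""
            return np.cov(x, y, bias=True)[0, 1] / np.var(x)

        # classical error: we observe the true exposure plus independent noise
        x_true_c = rng.normal(0.0, 1.0, n)
        y_c = beta * x_true_c + rng.normal(0.0, 1.0, n)
        x_obs_c = x_true_c + rng.normal(0.0, sd_err, n)

        # Berkson error: the true exposure scatters around the assigned (observed) value
        x_obs_b = rng.normal(0.0, 1.0, n)
        x_true_b = x_obs_b + rng.normal(0.0, sd_err, n)
        y_b = beta * x_true_b + rng.normal(0.0, 1.0, n)

        print("classical:", ols_slope(x_obs_c, y_c))   # attenuated toward beta/(1 + sd_err**2), about 1.34
        print("Berkson:  ", ols_slope(x_obs_b, y_b))   # close to the true slope of 2.0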

  19. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using real measurements from a continuous point release conducted in the Fusion Field Trials, Dugway Proving Ground, Utah.
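    In the spirit of the weighted formulation discussed above (this is a generic weighted minimum-norm sketch, not the authors' renormalization algorithm; the source-receptor matrix, the weights and the toy release are assumptions), a source field can be retrieved as follows:

        import numpy as np

        def weighted_minimum_norm(A, y, w):
            """x = W A^T (A W A^T)^-1 y: minimum-norm solution in the W-weighted inner product."""
            W = np.diag(w)
            gram = A @ W @ A.T                       # weighted Gram matrix of the observations
            return W @ A.T @ np.linalg.solve(gram + 1e-10 * np.eye(len(y)), y)

        rng = np.random.default_rng(4)
        A = rng.random((15, 60))                     # 15 receptors, 60 candidate source nodes
        x_true = np.zeros(60)
        x_true[23] = 5.0                             # a single continuous point release
        y = A @ x_true
        w = np.sum(A ** 2, axis=0)                   # a simple sensitivity-based choice of weights
        x_hat = weighted_minimum_norm(A, y, w)
        print(np.argmax(x_hat))                      # the peak ideally points to node 23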

  20. The Deference Due the Oracle: Computerized Text Analysis in a Basic Writing Class.

    ERIC Educational Resources Information Center

    Otte, George

    1989-01-01

    Describes how a computerized text analysis program can help students discover error patterns in their writing, and notes how students' responses to analyses can reduce errors and improve their writing. (MM)
