Tsai, Christopher C; Tsai, Sarai H; Zeng-Treitler, Qing; Liang, Bryan A
2007-10-11
The quality of user-generated health information on consumer health social networking websites has not been studied. We collected a set of postings related to Diabetes Mellitus Type I from three such sites and classified them based on accuracy, error type, and clinical significance of error. We found 48% of postings contained medical content, and 54% of these were either incomplete or contained errors. About 85% of the incomplete and erroneous messages were potentially clinically significant.
Curated eutherian third party data gene data sets.
Premzl, Marko
2016-03-01
The freely available eutherian genomic sequence data sets have advanced the scientific field of genomics. Of note, future revisions of gene data sets were expected, owing to the incompleteness of public eutherian genomic sequence assemblies and to potential genomic sequence errors. The eutherian comparative genomic analysis protocol was proposed as guidance for protection against potential genomic sequence errors in public eutherian genomic sequences. The protocol was applicable in updates of 7 major eutherian gene data sets, including 812 complete coding sequences deposited in the European Nucleotide Archive as curated third party data gene data sets.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
NASA Astrophysics Data System (ADS)
Marreiros, Filipe M. M.; Wang, Chunliang; Rossitti, Sandro; Smedby, Örjan
2016-03-01
In this study we present a non-rigid point set registration method for 3D curves (composed of sets of 3D points). The method was evaluated on the task of registering 3D superficial vessels of the brain, where it was used to match vessel centerline points. It combines Coherent Point Drift (CPD) with the Thin-Plate Spline (TPS) semilandmark method: CPD performs the initial matching of the 3D centerline points, while the semilandmark method iteratively relaxes/slides the points. For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation in which a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space, by transforming the original points with both T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data are incomplete, parts of the reference vessels were cut and then deformed. Furthermore, anisotropic normally distributed noise was added. The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
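To make the error measure above concrete, the following is a minimal sketch (not the authors' code) of evaluating a recovered deformation against the simulated one; the TPS transforms are stood in for by placeholder callables, and all names are illustrative.

import numpy as np

def registration_errors(points, T1, T2):
    # points: (N, 3) array of original centerline points.
    # T1: simulated ground-truth deformation; T2: deformation defined by the recovered correspondences.
    # Both are callables mapping an (N, 3) array to a deformed (N, 3) array.
    d = np.linalg.norm(T1(points) - T2(points), axis=1)   # per-point distances in the deformed space
    return np.sqrt(np.mean(d ** 2)), np.mean(d)           # (root mean square error, mean error)

# Illustrative use with simple translations standing in for the TPS deformations:
pts = np.random.rand(100, 3) * 50.0                        # synthetic centerline points (mm)
T1 = lambda p: p + np.array([1.0, 0.5, 0.0])               # placeholder for the simulated bulging/sinking
T2 = lambda p: p + np.array([0.9, 0.6, 0.1])               # placeholder for the recovered TPS transform
rmse, mean_err = registration_errors(pts, T1, T2)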
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements: the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
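As a numerical illustration of the claim that the common metrics follow from the parameters of the simple linear error model (measured = a + b*truth + e), the sketch below fits that model by ordinary least squares and reconstructs bias, mean square error, and the correlation coefficient from (a, b, error variance); it is an illustration under these assumptions, not the authors' implementation.

import numpy as np

def linear_error_model_metrics(truth, measured):
    # Fit measured = a + b * truth + e and derive the classical metrics from (a, b, var_e).
    b, a = np.polyfit(truth, measured, 1)                        # slope b and intercept a
    resid = measured - (a + b * truth)
    var_e, var_t = resid.var(), truth.var()                      # error and signal variances
    bias = a + (b - 1.0) * truth.mean()                          # equals mean(measured - truth)
    mse = bias ** 2 + (b - 1.0) ** 2 * var_t + var_e             # decomposition of the mean square error
    corr = b * np.sqrt(var_t) / np.sqrt(b ** 2 * var_t + var_e)  # linear correlation coefficient
    return {"a": a, "b": b, "var_e": var_e, "bias": bias, "mse": mse, "corr": corr}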
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kersten, J. A. F., E-mail: jennifer.kersten@cantab.net; Alavi, Ali, E-mail: a.alavi@fkf.mpg.de; Max Planck Institute for Solid State Research, Heisenbergstraße 1, 70569 Stuttgart
2016-08-07
The Full Configuration Interaction Quantum Monte Carlo (FCIQMC) method has proved able to provide near-exact solutions to the electronic Schrödinger equation within a finite orbital basis set, without relying on an expansion about a reference state. However, a drawback to the approach is that being based on an expansion of Slater determinants, the FCIQMC method suffers from a basis set incompleteness error that decays very slowly with the size of the employed single particle basis. The FCIQMC results obtained in a small basis set can be improved significantly with explicitly correlated techniques. Here, we present a study that assesses and compares two contrasting “universal” explicitly correlated approaches that fit into the FCIQMC framework: the [2]_R12 method of Kong and Valeev [J. Chem. Phys. 135, 214105 (2011)] and the explicitly correlated canonical transcorrelation approach of Yanai and Shiozaki [J. Chem. Phys. 136, 084107 (2012)]. The former is an a posteriori internally contracted perturbative approach, while the latter transforms the Hamiltonian prior to the FCIQMC simulation. These comparisons are made across the 55 molecules of the G1 standard set. We found that both methods consistently reduce the basis set incompleteness, for accurate atomization energies in small basis sets, reducing the error from 28 mE_h to 3-4 mE_h. While many of the conclusions hold in general for any combination of multireference approaches with these methodologies, we also consider FCIQMC-specific advantages of each approach.
CCSDT calculations of molecular equilibrium geometries
NASA Astrophysics Data System (ADS)
Halkier, Asger; Jørgensen, Poul; Gauss, Jürgen; Helgaker, Trygve
1997-08-01
CCSDT equilibrium geometries of CO, CH2, F2, HF, H2O and N2 have been calculated using the correlation-consistent cc-pVXZ basis sets. Similar calculations have been performed for SCF, CCSD and CCSD(T). In general, bond lengths decrease when improving the basis set and increase when improving the N-electron treatment. CCSD(T) provides an excellent approximation to CCSDT for bond lengths as the largest difference between CCSDT and CCSD(T) is 0.06 pm. At the CCSDT/cc-pVQZ level, basis set deficiencies, neglect of higher-order excitations, and incomplete treatment of core-correlation all give rise to errors of a few tenths of a pm, but to a large extent, these errors cancel. The CCSDT/cc-pVQZ bond lengths deviate on average only by 0.11 pm from experiment.
An alternative data filling approach for prediction of missing data in soft sets (ADFIS).
Sadiq Khan, Muhammad; Al-Garadi, Mohammed Ali; Wahab, Ainuddin Wahid Abdul; Herawan, Tutut
2016-01-01
Soft set theory is a mathematical approach that provides a solution for dealing with uncertain data. A standard soft set can be represented as a Boolean-valued information system, and hence it has been used in hundreds of useful applications. However, these applications become worthless if the Boolean information system contains missing data due to error, security, or mishandling. Few studies have focused on handling partially incomplete soft sets, and none of them achieves a high accuracy rate in predicting missing data. The data filling approach for incomplete soft sets (DFIS) has been shown to perform best among previous approaches; however, accuracy remains its main problem. In this paper, we propose an alternative data filling approach for prediction of missing data in soft sets, namely ADFIS. The novelty of ADFIS is that, unlike the previous approach based on probability, we focus on the reliability of associations among parameters in the soft set. Experimental results on small data sets, four UCI benchmark data sets, and the causality workbench lung cancer (LUCAP2) data show that ADFIS achieves better accuracy than DFIS.
Selection of noisy measurement locations for error reduction in static parameter identification
NASA Astrophysics Data System (ADS)
Sanayei, Masoud; Onipede, Oladipo; Babu, Suresh R.
1992-09-01
An incomplete set of noisy static force and displacement measurements is used for parameter identification of structures at the element level. Measurement location and the level of accuracy in the measured data can drastically affect the accuracy of the identified parameters. A heuristic method is presented to select a limited number of degrees of freedom (DOF) to perform a successful parameter identification and to reduce the impact of measurement errors on the identified parameters. This pretest simulation uses an error sensitivity analysis to determine the effect of measurement errors on the parameter estimates. The selected DOF can be used for nondestructive testing and health monitoring of structures. Two numerical examples, one for a truss and one for a frame, are presented to demonstrate that using the measurements at the selected subset of DOF can limit the error in the parameter estimates.
Guidance for Avoiding Incomplete Premanufacture Notices or Bona Fides in the New Chemicals Program
This page contains documents to help you avoid submitting an incomplete Premanufacture Notice or Bona Fide. The documents go over the chemical identity requirements and the common errors that result in incomplete submissions.
Effects of errors and gaps in spatial data sets on assessment of conservation progress.
Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C
2013-10-01
Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.
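As background to the first error source, protected areas recorded in the WDPA only as points are commonly represented by buffering the centroid with a circle whose area equals the reported extent; the snippet below sketches that convention (an assumption about common practice, not code from the study), with coordinates taken to be in a projected, metric CRS.

import math
from shapely.geometry import Point

def buffered_centroid(x_m, y_m, reported_area_km2):
    # Circle around the reported centroid whose area matches the reported protected-area extent.
    radius_m = math.sqrt(reported_area_km2 * 1e6 / math.pi)
    return Point(x_m, y_m).buffer(radius_m)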
Death certificate completion skills of hospital physicians in a developing country.
Haque, Ahmed Suleman; Shamim, Kanza; Siddiqui, Najm Hasan; Irfan, Muhammad; Khan, Javaid Ahmed
2013-06-06
Death certificates (DC) can provide valuable health status data regarding disease incidence, prevalence and mortality in a community. They can guide local health policy and help in setting priorities. Incomplete and inaccurate DC data, on the other hand, can significantly impair the precision of a national health information database. In this study we evaluated the accuracy of death certificates at a tertiary care teaching hospital in Karachi, Pakistan. This retrospective study was conducted at Aga Khan University Hospital, Karachi, Pakistan, over a period of six months. Medical records and death certificates of all patients who died under the adult medical service were studied. The demographic characteristics, administrative details, co-morbidities and cause of death from death certificates were collected using an approved standardized form. Accuracy of this information was validated against the medical records. Errors in the death certificates were classified into six categories, from 0 to 5 according to increasing severity; a grade 0 was assigned if no errors were identified, and 5 if an incorrect cause of death was attributed or placed in an improper sequence. 223 deaths occurred during the study period. 9 certificates were not accessible and 12 patients had incomplete medical records. 202 certificates were finally analyzed. The most frequent errors pertained to patients' demographics (92%) and cause(s) of death (87%). 156 (77%) certificates had 3 or more errors and 124 (62%) certificates had a combination of errors that significantly changed the death certificate interpretation. Only 1% of certificates were error free. A very high rate of errors was identified in death certificates completed at our academic institution. There is a pressing need for appropriate intervention/s to resolve this important issue.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1988-01-01
Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F+H2 yields HF+H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.
Asner, Gregory P; Joseph, Shijo
2015-01-01
Conservation and monitoring of tropical forests requires accurate information on their extent and change dynamics. Cloud cover, sensor errors and technical barriers associated with satellite remote sensing data continue to prevent many national and sub-national REDD+ initiatives from developing their reference deforestation and forest degradation emission levels. Here we present a framework for large-scale historical forest cover change analysis using free multispectral satellite imagery in an extremely cloudy tropical forest region. The CLASlite approach provided highly automated mapping of tropical forest cover, deforestation and degradation from Landsat satellite imagery. Critically, the fractional cover of forest photosynthetic vegetation, non-photosynthetic vegetation, and bare substrates calculated by CLASlite provided scene-invariant quantities for forest cover, allowing for systematic mosaicking of incomplete satellite data coverage. A synthesized satellite-based data set of forest cover was thereby created, reducing image incompleteness caused by clouds, shadows or sensor errors. This approach can readily be implemented by single operators with highly constrained budgets. We test this framework on tropical forests of the Colombian Pacific Coast (Chocó) – one of the cloudiest regions on Earth, with successful comparison to the Colombian government’s deforestation map and a global deforestation map. PMID:25678933
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank 40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
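The core iteration of Soft-Impute is compact enough to sketch; the version below uses a dense SVD and omits the warm starts and specialized low-rank SVD that make the published algorithm scale, so it is an illustration of the idea rather than the authors' implementation.

import numpy as np

def soft_impute(X, lam, n_iter=200, tol=1e-4):
    # X: matrix with np.nan marking missing entries; lam: soft-threshold level for the singular values.
    mask = ~np.isnan(X)                               # observed entries
    Z = np.where(mask, X, 0.0)                        # current completed matrix
    for _ in range(n_iter):
        filled = np.where(mask, X, Z)                 # keep observed values, fill missing with current estimate
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z_new = (U * np.maximum(s - lam, 0.0)) @ Vt   # soft-threshold the singular values
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1e-12):
            return Z_new
        Z = Z_new
    return Z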
Spectral Regularization Algorithms for Learning Large Incomplete Matrices
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-01-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank 40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
A Systems Modeling Approach for Risk Management of Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila
2012-01-01
The main cause of commanding errors is often (but not always) procedural: lack of maturity in the processes, incompleteness of requirements, or lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and making hasty changes to standard procedures in response to an unexpected event. In general, it is important to look at the big picture before taking corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained using human reliability data from the nuclear industry. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.
Hill, J Grant
2013-09-30
Auxiliary basis sets (ABS) specifically matched to the cc-pwCVnZ-PP and aug-cc-pwCVnZ-PP orbital basis sets (OBS) have been developed and optimized for the 4d elements Y-Pd at the second-order Møller-Plesset perturbation theory level. Calculation of the core-valence electron correlation energies for small to medium sized transition metal complexes demonstrates that the error due to the use of these new sets in density fitting is three to four orders of magnitude smaller than that due to the OBS incompleteness, and hence is considered negligible. Utilizing the ABSs in the resolution-of-the-identity component of explicitly correlated calculations is also investigated, where it is shown that i-type functions are important to produce well-controlled errors in both integrals and correlation energy. Benchmarking at the explicitly correlated coupled cluster with single, double, and perturbative triple excitations level indicates impressive convergence with respect to basis set size for the spectroscopic constants of 4d monofluorides; explicitly correlated double-ζ calculations produce results close to conventional quadruple-ζ, and triple-ζ is within chemical accuracy of the complete basis set limit. Copyright © 2013 Wiley Periodicals, Inc.
Normalization of relative and incomplete temporal expressions in clinical narratives.
Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem
2015-09-01
To improve the normalization of relative and incomplete temporal expressions (RI-TIMEXes) in clinical narratives. We analyzed the RI-TIMEXes in temporally annotated corpora and propose two hypotheses regarding the normalization of RI-TIMEXes in the clinical narrative domain: the anchor point hypothesis and the anchor relation hypothesis. We annotated the RI-TIMEXes in three corpora to study the characteristics of RI-TIMEXes in different domains. This informed the design of our RI-TIMEX normalization system for the clinical domain, which consists of an anchor point classifier, an anchor relation classifier, and a rule-based RI-TIMEX text span parser. We experimented with different feature sets and performed an error analysis for each system component. The annotation confirmed the hypotheses that we can simplify the RI-TIMEX normalization task using two multi-label classifiers. Our system achieves anchor point classification, anchor relation classification, and rule-based parsing accuracy of 74.68%, 87.71%, and 57.2% (82.09% under relaxed matching criteria), respectively, on the held-out test set of the 2012 i2b2 temporal relation challenge. Experiments with feature sets reveal some interesting findings, such as: the verbal tense feature does not inform the anchor relation classification in clinical narratives as much as the tokens near the RI-TIMEX. Error analysis showed that underrepresented anchor point and anchor relation classes are difficult to detect. We formulate the RI-TIMEX normalization problem as a pair of multi-label classification problems. Considering only RI-TIMEX extraction and normalization, the system achieves statistically significant improvement over the RI-TIMEX results of the best systems in the 2012 i2b2 challenge.
NASA airborne Doppler lidar program: Data characteristics of 1981
NASA Technical Reports Server (NTRS)
Lee, R. W.
1982-01-01
The first flights of the NASA/Marshall airborne CO2 Doppler lidar wind measuring system were made during the summer of 1981. Successful measurements of two-dimensional flow fields were made to ranges of 15 km from the aircraft track. The characteristics of the data obtained are examined. A study of various artifacts introduced into the data set by incomplete compensation for aircraft dynamics is summarized. Most of these artifacts can be corrected by post processing, which reduces velocity errors in the reconstructed flow field to remarkably low levels.
White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E
2018-05-01
Many medical (and ecological) processes involve the change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a 'change' and a 'no change' class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification-error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation-errors.
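A minimal sketch of the broken-stick mean trajectory used in the simulations, with a profile least-squares grid search standing in for the authors' Bayesian MCMC estimation; parameter values and names are illustrative only.

import numpy as np

def broken_stick(t, b0, b1, b2, cp):
    # Two joined linear segments: slope b1 before the change-point cp, slope b1 + b2 after it.
    return b0 + b1 * t + b2 * np.maximum(t - cp, 0.0)

def fit_change_point(t, y, grid):
    # Profile least squares: for each candidate change-point, fit the linear coefficients.
    best = None
    for cp in grid:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - cp, 0.0)])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        rss = np.sum((y - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, cp, beta)
    return best  # (residual sum of squares, change-point estimate, coefficients)

# Simulated decline with observation error, loosely in the spirit of the MMSE example
t = np.tile(np.arange(0.0, 10.0), 50)
y = broken_stick(t, 28.0, -0.1, -1.5, 6.0) + np.random.normal(0.0, 1.0, t.size)
rss, cp_hat, beta_hat = fit_change_point(t, y, grid=np.arange(1.0, 9.0, 0.25))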
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, Jonathon; Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720; Neaton, Jeffrey B.
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen’s pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
Learning with incomplete information in the committee machine.
Bergmann, Urs M; Kühn, Reimer; Stamatescu, Ion-Olimpiu
2009-12-01
We study the problem of learning with incomplete information in a student-teacher setup for the committee machine. The learning algorithm combines unsupervised Hebbian learning of a series of associations with a delayed reinforcement step, in which the set of previously learnt associations is partly and indiscriminately unlearnt, to an extent that depends on the success rate of the student on these previously learnt associations. The relevant learning parameter lambda represents the strength of Hebbian learning. A coarse-grained analysis of the system yields a set of differential equations for overlaps of student and teacher weight vectors, whose solutions provide a complete description of the learning behavior. It reveals complicated dynamics showing that perfect generalization can be obtained if the learning parameter exceeds a threshold lambda_c, and if the initial value of the overlap between student and teacher weights is non-zero. In case of convergence, the generalization error exhibits a power law decay as a function of the number of examples used in training, with an exponent that depends on the parameter lambda. An investigation of the system flow in a subspace with broken permutation symmetry between hidden units reveals a bifurcation point lambda* above which perfect generalization does not depend on initial conditions. Finally, we demonstrate that cases of a complexity mismatch between student and teacher are optimally resolved in the sense that an over-complex student can emulate a less complex teacher rule, while an under-complex student reaches a state which realizes the minimal generalization error compatible with the complexity mismatch.
European option pricing under the Student's t noise with jumps
NASA Astrophysics Data System (ADS)
Wang, Xiao-Tian; Li, Zhe; Zhuang, Le
2017-03-01
In this paper we present a new approach to pricing European options under Student's t noise with jumps. Through the conditional delta hedging strategy and minimal mean-square-error hedging, a closed-form solution for the European option value is obtained in the incomplete information case. In particular, we propose a Value-at-Risk-type procedure to estimate the volatility parameter σ such that the pricing error is in accord with the risk preferences of investors. In addition, our numerical results show that in some cases options cannot be priced in an incomplete information market.
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2016-05-01
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
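For reference, the counterpoise correction mentioned in both versions of this abstract is the Boys-Bernardi scheme, in which each monomer is evaluated in the full dimer basis (with ghost functions on the partner fragment). In the usual notation, with superscripts denoting the basis, subscripts the system, and all energies taken at the dimer geometry,

E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB}.

The basis set superposition error itself is the difference between this counterpoise-corrected interaction energy and the uncorrected one computed from monomers in their own bases.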
Medical technology at home: safety-related items in technical documentation.
Hilbers, Ellen S M; de Vries, Claudette G J C A; Geertsma, Robert E
2013-01-01
This study aimed to investigate the technical documentation of manufacturers on issues of safe use of their devices in a home setting. Three categories of equipment were selected: infusion pumps, ventilators, and dialysis systems. Risk analyses, instructions for use, labels, and post-market surveillance procedures were requested from manufacturers. Additionally, they were asked to fill out a questionnaire on the collection of field experience, on incidents, and on training activities. Specific risks of device operation by lay users in a home setting were incompletely addressed in the risk analyses. A substantial number of user manuals were designed for professionals, rather than for patients or lay carers. Risk analyses and user information often showed incomplete coherence. Post-market surveillance was mainly based on passive collection of field experiences. Manufacturers of infusion pumps, ventilators, and dialysis systems pay insufficient attention to the specific risks of use by lay persons in home settings. This conclusion is expected to apply also to other medical equipment for treatment at home. Manufacturers of medical equipment for home use should pay more attention to use errors, lay use and home-specific risks in design, risk analysis, and user information. Field experiences should be collected more actively. Coherence between risk analysis and user information should be improved. Notified bodies should address these aspects in their assessment. User manuals issued by institutions supervising a specific home therapy should be drawn up in consultation with the manufacturer.
Composite Linear Models | Division of Cancer Prevention
By Stuart G. Baker
The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty
Towards Complete, Geo-Referenced 3d Models from Crowd-Sourced Amateur Images
NASA Astrophysics Data System (ADS)
Hartmann, W.; Havlena, M.; Schindler, K.
2016-06-01
Despite a lot of recent research, photogrammetric reconstruction from crowd-sourced imagery is plagued by a number of recurrent problems. (i) The resulting models are chronically incomplete, because even touristic landmarks are photographed mostly from a few "canonical" viewpoints. (ii) Man-made constructions tend to exhibit repetitive structure and rotational symmetries, which lead to gross errors in the 3D reconstruction and aggravate the problem of incomplete reconstruction. (iii) The models are normally not geo-referenced. In this paper, we investigate the possibility of using sparse GNSS geo-tags from digital cameras to address these issues and push the boundaries of crowd-sourced photogrammetry. A small proportion of the images in Internet collections (≈ 10%) do possess geo-tags. While the individual geo-tags are very inaccurate, they nevertheless can help to address the problems above. By providing approximate geo-reference for partial reconstructions they make it possible to fuse those pieces into more complete models; the capability to fuse partial reconstructions opens up the possibility to be more restrictive in the matching phase and avoid errors due to repetitive structure; and collectively, the redundant set of low-quality geo-tags can provide reasonably accurate absolute geo-reference. We show that even few, noisy geo-tags can help to improve architectural models, compared to pure structure-from-motion based only on image correspondences.
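One standard way to realize the absolute geo-referencing step is a least-squares similarity (Umeyama) alignment of reconstructed camera centres to the noisy geo-tag positions; the sketch below illustrates that idea under this assumption and is not taken from the authors' pipeline.

import numpy as np

def similarity_align(model_pts, geo_pts):
    # Estimate scale s, rotation R, translation t minimizing sum ||s*R*x + t - y||^2 (Umeyama, 1991).
    mu_x, mu_y = model_pts.mean(axis=0), geo_pts.mean(axis=0)
    X, Y = model_pts - mu_x, geo_pts - mu_y
    U, D, Vt = np.linalg.svd(Y.T @ X / len(X))        # cross-covariance of the two centred point sets
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                     # enforce a proper rotation (det = +1)
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / X.var(axis=0).sum()
    t = mu_y - s * R @ mu_x
    return s, R, t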
Woolf, Steven H.; Kuzel, Anton J.; Dovey, Susan M.; Phillips, Robert L.
2004-01-01
BACKGROUND Notions about the most common errors in medicine currently rest on conjecture and weak epidemiologic evidence. We sought to determine whether cascade analysis is of value in clarifying the epidemiology and causes of errors and whether physician reports are sensitive to the impact of errors on patients. METHODS Eighteen US family physicians participating in a 6-country international study filed 75 anonymous error reports. The narratives were examined to identify the chain of events and the predominant proximal errors. We tabulated the consequences to patients, both reported by physicians and inferred by investigators. RESULTS A chain of errors was documented in 77% of incidents. Although 83% of the errors that ultimately occurred were mistakes in treatment or diagnosis, 2 of 3 were set in motion by errors in communication. Fully 80% of the errors that initiated cascades involved informational or personal miscommunication. Examples of informational miscommunication included communication breakdowns among colleagues and with patients (44%), misinformation in the medical record (21%), mishandling of patients’ requests and messages (18%), inaccessible medical records (12%), and inadequate reminder systems (5%). When asked whether the patient was harmed, physicians answered affirmatively in 43% of cases in which their narratives described harms. Psychological and emotional effects accounted for 17% of physician-reported consequences but 69% of investigator-inferred consequences. CONCLUSIONS Cascade analysis of physicians’ error reports is helpful in understanding the precipitant chain of events, but physicians provide incomplete information about how patients are affected. Miscommunication appears to play an important role in propagating diagnostic and treatment mistakes. PMID:15335130
Mohammadi, Younes; Parsaeian, Mahboubeh; Farzadfar, Farshad; Kasaeian, Amir; Mehdipour, Parinaz; Sheidaei, Ali; Mansouri, Anita; Saeedi Moghaddam, Sahar; Djalalinia, Shirin; Mahmoudi, Mahmood; Khosravi, Ardeshir; Yazdani, Kamran
2014-03-01
Calculation of the burden of diseases and risk factors is crucial for setting priorities in health care systems. Nevertheless, the reliable measurement of mortality rates is the main barrier to reaching this goal. Unfortunately, in many developing countries the vital registration system (VRS) is either defective or does not exist at all. Consequently, alternative methods have been developed to measure mortality. This study is a subcomponent of the NASBOD project, which is currently being conducted in Iran. In this study, we aim to calculate the incompleteness of the Death Registration System (DRS) and then to estimate levels and trends of child and adult mortality using reliable methods. In order to estimate mortality rates, we first identify all possible data sources. Then, we calculate the incompleteness of child and adult mortality registration separately. For incompleteness of child mortality, we analyze summary birth history data using maternal age cohort and maternal age period methods. Then, we combine these two methods using LOESS regression. However, these estimates are not plausible for some provinces. We use additional information from covariates such as wealth index and years of schooling to make predictions for these provinces using a spatio-temporal model. We generate yearly estimates of mortality using Gaussian process regression, which covers both sampling and non-sampling errors within uncertainty intervals. By comparing the resulting estimates with mortality rates from the DRS, we calculate child mortality incompleteness. For incompleteness of adult mortality, the Generalized Growth Balance, Synthetic Extinct Generation, and a hybrid of the two methods are used. Afterwards, we combine the incompleteness estimates from the three methods using GPR, and apply them to correct and adjust the number of deaths. In this study, we develop a conceptual framework to overcome the existing challenges to accurate measurement of mortality rates. The resulting estimates can be used to inform policy-makers about past, current and future mortality rates as a major indicator of the health status of a population.
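In this framework the completeness of death registration is, in the simplest reading (notation ours, not the authors'), the ratio of registered to estimated deaths,

C = D_{\mathrm{DRS}} / \hat{D}, \qquad \text{incompleteness} = 1 - C,

and registered death counts are then adjusted upward by dividing by C.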
Optimizing Balanced Incomplete Block Designs for Educational Assessments
ERIC Educational Resources Information Center
van der Linden, Wim J.; Veldkamp, Bernard P.; Carlson, James E.
2004-01-01
A popular design in large-scale educational assessments as well as any other type of survey is the balanced incomplete block design. The design is based on an item pool split into a set of blocks of items that are assigned to sets of "assessment booklets." This article shows how the problem of calculating an optimal balanced incomplete block…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malone, Fionn D., E-mail: f.malone13@imperial.ac.uk; Lee, D. K. K.; Foulkes, W. M. C.
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
A national physician survey of diagnostic error in paediatrics.
Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B
2016-10-01
This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54% (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0%) respondents. Consultants reported significantly fewer diagnostic errors than trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15%. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.
Estimation of health effects of prenatal methylmercury exposure using structural equation models.
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe; Weihe, Pal
2002-10-14
Observational studies in epidemiology always involve concerns regarding validity, especially measurement error, confounding, missing data, and other problems that may affect the study outcomes. Widely used standard statistical techniques, such as multiple regression analysis, may to some extent adjust for these shortcomings. However, structural equations may incorporate most of these considerations, thereby providing overall adjusted estimations of associations. This approach was used in a large epidemiological data set from a prospective study of developmental methylmercury toxicity. Structural equation models were developed for assessment of the association between biomarkers of prenatal mercury exposure and neuropsychological test scores in 7-year-old children. Eleven neurobehavioral outcomes were grouped into motor function and verbally mediated function. Adjustment for local dependence and item bias was necessary for a satisfactory fit of the model, but had little impact on the estimated mercury effects. The mercury effect on the two latent neurobehavioral functions was similar to the strongest effects seen for individual test scores of motor function and verbal skills. Adjustment for contaminant exposure to polychlorinated biphenyls (PCBs) changed the estimates only marginally, but the mercury effect could be reduced to non-significance by assuming a large measurement error for the PCB biomarker. The structural equation analysis allows correction for measurement error in exposure variables, incorporation of multiple outcomes, and inclusion of incomplete cases. This approach therefore deserves to be applied more frequently in the analysis of complex epidemiological data sets.
Shehata, Zahraa Hassan Abdelrahman; Sabri, Nagwa Ali; Elmelegy, Ahmed Abdelsalam
2016-03-01
This study analyzes reports to the Egyptian medication error (ME) reporting system from June to December 2014. Fifty hospital pharmacists received training on ME reporting using the national reporting system. All received reports were reviewed and analyzed. The data items analyzed were patient age, gender, clinical setting, stage, type, medication(s), outcome, cause(s), and recommendation(s). Over the course of 6 months, 12,000 valid reports were gathered and included in this analysis. The majority (66%) came from inpatient settings, while 23% came from intensive care units, and 11% came from outpatient departments. Prescribing errors were the most common type of MEs (54%), followed by monitoring (25%) and administration errors (16%). The most frequent error was incorrect dose (20%), followed by drug interactions, incorrect drug, and incorrect frequency. Most reports were potential (25%), prevented (11%), or harmless (51%) errors; only 13% of reported errors led to patient harm. The top three medication classes involved in reported MEs were antibiotics, drugs acting on the central nervous system, and drugs acting on the cardiovascular system. Causes of MEs were mostly lack of knowledge, environmental factors, lack of drug information sources, and incomplete prescribing. Recommendations for addressing MEs were mainly staff training, local ME reporting, and improving the work environment. There are common problems among different healthcare systems, so sharing experiences at the national level is essential to enable learning from MEs. Internationally, there is a great need for standardized ME terminology to facilitate knowledge transfer. Underreporting, inaccurate reporting, and a lack of reporter diversity are some limitations of this study. Egypt now has a national database of MEs that allows researchers and decision makers to assess the problem, identify its root causes, and develop preventive strategies.
Reliability Estimation for Aggregated Data: Applications for Organizational Research.
ERIC Educational Resources Information Center
Hart, Roland J.; Bradshaw, Stephen C.
This report provides the statistical tools necessary to measure the extent of error that exists in organizational record data and group survey data. It is felt that traditional methods of measuring error are inappropriate or incomplete when applied to organizational groups, especially in studies of organizational change when the same variables are…
Six reasons why thermospheric measurements and models disagree
NASA Technical Reports Server (NTRS)
Moe, Kenneth
1987-01-01
The differences between thermospheric measurements and models are discussed. Sometimes the model is in error and at other times the measurements are, but it is also possible for both to be correct and yet for the comparison to result in an apparent disagreement. These reasons for disagreement are collected, and, whenever possible, methods of reducing or eliminating them are suggested. The six causes of disagreement discussed are: actual errors caused by the limited knowledge of gas-surface interactions and by in-track winds; limitations of the thermospheric general circulation models due to incomplete knowledge of the energy sources and sinks as well as the incompleteness of the parameterizations that must be employed; and limitations imposed on the empirical models by the conceptual framework and by transient waves.
Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A
2016-10-26
Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately, MI-Val, MI on the full data set followed by internal validation, and MI(-y)-Val, MI on the full data set omitting the outcome followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. In Val-MI, accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws rather than by increasing the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.
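A condensed sketch of one reading of the Val-MI ordering (resample first, then impute the training and out-of-bag test parts separately) is given below; the imputer, classifier, and plain averaging of AUCs are illustrative placeholders, and the 0.632+ optimism correction used in the study is omitted.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def val_mi_auc(X, y, n_boot=50, n_imp=5, seed=0):
    # X: array with np.nan for missing values; y: binary outcome.
    rng = np.random.default_rng(seed)
    n, aucs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                     # bootstrap training indices
        oob = np.setdiff1d(np.arange(n), idx)           # out-of-bag test indices
        if len(np.unique(y[oob])) < 2:
            continue                                    # AUC undefined without both classes
        for m in range(n_imp):                          # multiple imputation after the split
            X_tr = IterativeImputer(random_state=m, sample_posterior=True).fit_transform(X[idx])
            X_te = IterativeImputer(random_state=m, sample_posterior=True).fit_transform(X[oob])
            clf = LogisticRegression(max_iter=1000).fit(X_tr, y[idx])
            aucs.append(roc_auc_score(y[oob], clf.predict_proba(X_te)[:, 1]))
    return float(np.mean(aucs))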
Incomplete fuzzy data processing systems using artificial neural network
NASA Technical Reports Server (NTRS)
Patyra, Marek J.
1992-01-01
In this paper, the implementation of a fuzzy data processing system using an artificial neural network (ANN) is discussed. The binary representation of fuzzy data is assumed, where the universe of discourse is discretized into n equal intervals. The value of a membership function is represented by a binary number. It is proposed that incomplete fuzzy data processing be performed in two stages. The first stage performs the 'retrieval' of incomplete fuzzy data, and the second stage performs the desired operation on the retrieved data. The proposed method of incomplete fuzzy data retrieval is based on the linear approximation of missing values of the membership function. The ANN implementation of the proposed system is presented. The system was computationally verified and showed a relatively small total error.
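A small sketch of the retrieval stage as described, i.e. linear approximation of missing membership values over the discretized universe of discourse; it stands in for (and does not reproduce) the ANN realization in the paper.

import numpy as np

def retrieve_membership(mu):
    # mu: membership values over the n equal intervals, with np.nan marking missing values.
    mu = np.asarray(mu, dtype=float)
    x = np.arange(mu.size)
    known = ~np.isnan(mu)
    return np.interp(x, x[known], mu[known])   # linear approximation between known values

# e.g. retrieve_membership([0.0, 0.2, np.nan, np.nan, 1.0, 0.5, np.nan, 0.1])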
NASA Astrophysics Data System (ADS)
Wohland, Thorsten
2015-06-01
Single Molecule Detection and Spectroscopy have grown from their first beginnings into mainstream, mature research areas that are widely applied in the biological sciences. However, despite the advances in technology and the application of many single molecule techniques even in in vivo settings, the data analysis of single molecule experiments is complicated by noise, systematic errors, and complex underlying processes that are only incompletely understood. Colomb and Sarkar provide in this issue an overview of single molecule experiments and the accompanying problems in data analysis, which have to be overcome for a proper interpretation of the experiments [1].
A statistical approach to identify, monitor, and manage incomplete curated data sets.
Howe, Douglas G
2018-04-02
Many biological knowledge bases gather data through expert curation of published literature. High data volume, selective partial curation, delays in access, and publication of data prior to the ability to curate it can result in incomplete curation of published data. Knowing which data sets are incomplete and how incomplete they are remains a challenge. Awareness that a data set may be incomplete is important for proper interpretation and for avoiding flawed hypothesis generation, and can justify further exploration of published literature for additional relevant data. Computational methods to assess data set completeness are needed. One such method is presented here. In this work, a multivariate linear regression model was used to identify genes in the Zebrafish Information Network (ZFIN) Database having incomplete curated gene expression data sets. Starting with 36,655 gene records from ZFIN, data aggregation, cleansing, and filtering reduced the set to 9870 gene records suitable for training and testing the model to predict the number of expression experiments per gene. Feature engineering and selection identified the following predictive variables: the number of journal publications; the number of journal publications already attributed for gene expression annotation; the percent of journal publications already attributed for expression data; the gene symbol; and the number of transgenic constructs associated with each gene. Twenty-five percent of the gene records (2483 genes) were used to train the model. The remaining 7387 genes were used to test the model. Of the 7387 tested genes, 122 and 165 were identified as missing expression annotations based on their residuals falling outside the model's lower or upper 95% confidence interval, respectively. The model had precision of 0.97 and recall of 0.71 at the negative 95% confidence interval and precision of 0.76 and recall of 0.73 at the positive 95% confidence interval. This method can be used to identify data sets that are incompletely curated, as demonstrated using the gene expression data set from ZFIN. This information can help both database resources and data consumers gauge when it may be useful to look further for published data to augment the existing expertly curated information.
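The flagging step lends itself to a compact sketch: fit a linear regression for the expected number of expression experiments per gene and flag genes whose residuals fall outside a 95% band. The features and the simple ±1.96·SD residual band below are illustrative stand-ins for the paper's predictors and confidence intervals.

import numpy as np
from sklearn.linear_model import LinearRegression

def flag_incomplete_genes(X, n_experiments, z=1.96):
    # X: gene-level predictors (e.g. publication counts); n_experiments: curated expression experiments per gene.
    model = LinearRegression().fit(X, n_experiments)
    resid = n_experiments - model.predict(X)
    band = z * resid.std()
    under = resid < -band   # fewer curated experiments than predicted: candidate incomplete data sets
    over = resid > band     # more experiments than predicted
    return under, over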
Impact of increased mutagenesis on adaptation to high temperature in bacteriophage Qβ.
Arribas, María; Cabanillas, Laura; Kubota, Kirina; Lázaro, Ester
2016-10-01
RNA viruses replicate with very high error rates, which makes them more sensitive to additional increases in this parameter. This fact has inspired an antiviral strategy named lethal mutagenesis, which is based on the artificial increase of the error rate above a threshold incompatible with virus infectivity. A relevant issue concerning lethal mutagenesis is whether incomplete treatments might enhance the adaptive possibilities of viruses. We have addressed this question by subjecting an RNA virus, the bacteriophage Qβ, to different transmission regimes in the presence or the absence of sublethal concentrations of the mutagenic nucleoside analogue 5-azacytidine (AZC). Populations obtained were subsequently exposed to a non-optimal temperature and analyzed to determine their consensus sequences. Our results show that previously mutagenized populations rapidly fixed a specific set of mutations upon propagation at the new temperature, suggesting that the expansion of the mutant spectrum caused by AZC has an influence on later evolutionary behavior. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias
2007-11-01
The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Møller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.
Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias
2007-11-14
The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Moller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.
NASA Astrophysics Data System (ADS)
Rees, S. J.; Jones, Bryan F.
1992-11-01
Once feature extraction has occurred in a processed image, the recognition problem becomes one of defining a set of features which maps sufficiently well onto one of the defined shape/object models to permit a claimed recognition. This process is usually handled by aggregating features until a large enough weighting is obtained to claim membership, or an adequate number of located features are matched to the reference set. A requirement has existed for an operator or measure capable of a more direct assessment of membership/occupancy between feature sets, particularly where the feature sets may be defective representations. Such feature set errors may be caused by noise, by overlapping of objects, and by partial obscuration of features. These problems occur at the point of acquisition: repairing the data would then assume a priori knowledge of the solution. The technique described in this paper offers a set theoretical measure for partial occupancy defined in terms of the set of minimum additions to permit full occupancy and the set of locations of occupancy if such additions are made. As is shown, this technique permits recognition of partial feature sets with quantifiable degrees of uncertainty. A solution to the problems of obscuration and overlapping is therefore available.
Detection and correction of prescription errors by an emergency department pharmacy service.
Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald
2014-05-01
Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.
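The reported 3.2% error proportion and its 95% CI can be checked with a simple normal-approximation interval; the study does not state which interval formula was used, so this is only a plausibility check.

import math

errors, prescriptions = 99, 3136
p = errors / prescriptions                    # 0.0316 -> 3.2%
se = math.sqrt(p * (1 - p) / prescriptions)   # normal-approximation standard error
low, high = p - 1.96 * se, p + 1.96 * se
print(f"{100*p:.1f}% (95% CI {100*low:.1f}-{100*high:.1f})")  # 3.2% (95% CI 2.5-3.8)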
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and potential impact of the error detection on the overall process. Analyzer performance stratified according to the level of internal process control. The older analyzers without bubble detection reported 23 erroneous results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do it without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
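The study's operational error definition (a reported result more than three standard deviations below the reference mean, attributed to incomplete aspiration) reduces to a one-line threshold check, sketched below; this mirrors the error definition only, not the instrument's internal bubble detection.

import numpy as np

def flag_aspiration_errors(results, reference):
    """Flag reported results more than 3 SD below the reference mean, per the
    study's operational definition of an aspiration-related error."""
    mean, sd = np.mean(reference), np.std(reference, ddof=1)
    return np.asarray(results) < mean - 3 * sd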
Brandenburg, Jan Gerit; Grimme, Stefan
2014-01-01
We present and evaluate dispersion corrected Hartree-Fock (HF) and Density Functional Theory (DFT) based quantum chemical methods for organic crystal structure prediction. The necessity of correcting for missing long-range electron correlation, also known as van der Waals (vdW) interaction, is pointed out and some methodological issues such as inclusion of three-body dispersion terms are discussed. One of the most efficient and widely used methods is the semi-classical dispersion correction D3. Its applicability for the calculation of sublimation energies is investigated for the benchmark set X23 consisting of 23 small organic crystals. For PBE-D3 the mean absolute deviation (MAD) is below the estimated experimental uncertainty of 1.3 kcal/mol. For two larger π-systems, the equilibrium crystal geometry is investigated and very good agreement with experimental data is found. Since these calculations are carried out with huge plane-wave basis sets, they are rather time consuming and routinely applicable only to systems with less than about 200 atoms in the unit cell. Aiming at crystal structure prediction, which involves screening of many structures, a pre-sorting with faster methods is mandatory. Small, atom-centered basis sets can speed up the computation significantly, but they suffer greatly from basis set errors. We present the recently developed geometrical counterpoise correction gCP. It is a fast semi-empirical method which corrects for most of the inter- and intramolecular basis set superposition error. For HF calculations with nearly minimal basis sets, we additionally correct for short-range basis incompleteness. We combine all three corrections in the scheme denoted HF-3c, which performs very well for the X23 sublimation energies with an MAD of only 1.5 kcal/mol, close to the huge-basis-set DFT-D3 result.
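To make the flavor of such semi-classical corrections concrete, here is a schematic pairwise dispersion energy with Becke-Johnson-style damping; the pair coefficients and cutoff radii are placeholders, and the real D3 scheme additionally uses coordination-number-dependent C6 and C8 terms and an optional three-body correction.

import numpy as np

def pairwise_dispersion(coords, c6, r0, s6=1.0, a1=0.4, a2=4.5):
    """Schematic -C6/R^6 dispersion sum with Becke-Johnson-style damping.
    coords: (n, 3) atomic positions; c6[i, j], r0[i, j]: placeholder pair
    coefficients and cutoff radii (atomic units assumed)."""
    coords = np.asarray(coords, dtype=float)
    energy = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = np.linalg.norm(coords[i] - coords[j])
            damp = (a1 * r0[i, j] + a2) ** 6      # BJ damping radius to the 6th power
            energy -= s6 * c6[i, j] / (r ** 6 + damp)
    return energy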
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Likewise, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
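Each source in the paper's model is a double exponential, I(t) = I0 (exp(-t/tau_fall) - exp(-t/tau_rise)), normalized so the pulse integrates to the deposited charge; the sketch below sums two such sources in parallel with placeholder charges and time constants rather than fitted per-cell parameters.

import numpy as np

def set_current(t, q1, tau_r1, tau_f1, q2, tau_r2, tau_f2):
    """Dual double-exponential SET current (placeholder parameters)."""
    def dexp(q, tau_r, tau_f):
        i0 = q / (tau_f - tau_r)   # normalize so the pulse area equals q
        return i0 * (np.exp(-t / tau_f) - np.exp(-t / tau_r))
    return dexp(q1, tau_r1, tau_f1) + dexp(q2, tau_r2, tau_f2)

# Example over a 1 ns window, with illustrative charges (C) and time constants (s).
t = np.linspace(0.0, 1e-9, 1000)
i = set_current(t, 50e-15, 5e-12, 50e-12, 20e-15, 30e-12, 200e-12)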
ERIC Educational Resources Information Center
Raykov, Tenko; Lichtenberg, Peter A.; Paulson, Daniel
2012-01-01
A multiple testing procedure for examining implications of the missing completely at random (MCAR) mechanism in incomplete data sets is discussed. The approach uses the false discovery rate concept and is concerned with testing group differences on a set of variables. The method can be used for ascertaining violations of MCAR and disproving this…
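Although the abstract is truncated, the described combination (group-difference tests across a set of variables with false-discovery-rate control) can be sketched as below, using Welch t-tests and the Benjamini-Hochberg step-up rule; the choice of test and the column roles are assumptions, not the authors' exact procedure.

import numpy as np
from scipy import stats

def mcar_fdr_tests(data, miss_col, test_cols, q=0.05):
    """Compare cases with and without missing values on `miss_col` across
    `test_cols`, controlling the false discovery rate at level q.
    data: 2-D float array with NaN marking missing entries."""
    miss = np.isnan(data[:, miss_col])
    pvals = []
    for c in test_cols:
        x, y = data[~miss, c], data[miss, c]
        x, y = x[~np.isnan(x)], y[~np.isnan(y)]
        pvals.append(stats.ttest_ind(x, y, equal_var=False).pvalue)
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    m = len(pvals)
    passed = pvals[order] <= q * np.arange(1, m + 1) / m  # BH step-up thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True       # rejections are evidence against MCAR
    return pvals, reject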
Model and experiments to optimize co-adaptation in a simplified myoelectric control system.
Couraud, M; Cattaert, D; Paclet, F; Oudeyer, P Y; de Rugy, A
2018-04-01
To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that are developed in the field of brain-machine interface, and that are beginning to be used in myoelectric controls. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied on muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that cumulates the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. The simplified context used here enabled to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
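The gain trade-off reported above can be illustrated with a scalar toy model in which a perturbation is compensated jointly by leaky human adaptation and by machine co-adaptation, both driven by the directional error; the gains, retention factor and noise level below are illustrative values, not the paper's fitted model of human adaptation.

import numpy as np

def simulate(k_machine, k_human=0.2, retention=0.98, p=30.0, noise=2.0,
             trials=300, seed=0):
    """Scalar co-adaptation toy model: perturbation p (e.g. a rotation, in
    degrees) compensated by leaky human adaptation h and machine co-adaptation m."""
    rng = np.random.default_rng(seed)
    h = m = 0.0
    errors = []
    for _ in range(trials):
        e = p - (h + m) + rng.normal(0.0, noise)  # observed directional error
        h = retention * h + k_human * e           # leaky human adaptation
        m += k_machine * e                        # machine co-adaptation
        errors.append(e)
    return np.asarray(errors)

# k_machine = 0: incomplete adaptation (residual error from the leak);
# small k_machine: slow but eventually complete; large k_machine: fast but noisier.
for k in (0.0, 0.02, 0.5):
    e = simulate(k)
    print(k, round(e[-50:].mean(), 2), round(e[-50:].std(), 2))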
Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.
Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria
2010-08-06
Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product methods, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
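For reference, a bootstrapped percentile interval for an a*b indirect effect, one of the better-performing interval methods in the simulations described above, can be computed as follows; the toy data-generating values are arbitrary.

import numpy as np

def indirect_effect(x, m, y):
    """a*b: a from regressing M on X; b is the coefficient of M when Y is
    regressed on X and M."""
    a = np.polyfit(x, m, 1)[0]
    design = np.c_[np.ones_like(x), x, m]
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def percentile_ci(x, m, y, n_boot=5000, alpha=0.05, seed=0):
    """Bootstrapped percentile confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

# Toy example with a true indirect path X -> M -> Y.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
m = 0.5 * x + rng.normal(size=200)
y = 0.4 * m + rng.normal(size=200)
print(indirect_effect(x, m, y), percentile_ci(x, m, y))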
Errors and omissions in hospital prescriptions: a survey of prescription writing in a hospital
Calligaris, Laura; Panzera, Angela; Arnoldo, Luca; Londero, Carla; Quattrin, Rosanna; Troncon, Maria G; Brusaferro, Silvio
2009-01-01
Background: The frequency of drug prescription errors is high. Excluding errors in decision making, the remaining are mainly due to order ambiguity, non-standard nomenclature and writing illegibility. The aim of this study is to analyse, as a part of a continuous quality improvement program, the quality of prescription writing for antibiotics, in an Italian University Hospital as a risk factor for prescription errors. Methods: The point prevalence survey, carried out on May 26-30, 2008, involved 41 inpatient Units. Every parenteral or oral antibiotic prescription was analysed for legibility (generic or brand drug name, dose, frequency of administration) and completeness (generic or brand name, dose, frequency of administration, route of administration, date of prescription and signature of the prescriber). Eight doctors (residents in Hygiene and Preventive Medicine) and two pharmacists performed the survey by reviewing the clinical records of medical, surgical or intensive care section inpatients. The antibiotics drug category was chosen because its use is widespread in the setting considered. Results: Out of 756 inpatients included in the study, 408 antibiotic prescriptions were found in 298 patients (mean prescriptions per patient 1.4; SD ± 0.6). Overall 92.7% (38/41) of the Units had at least one patient with antibiotic prescription. Legibility was in compliance with 78.9% of generic or brand names, 69.4% of doses, 80.1% of frequency of administration, whereas completeness was fulfilled for 95.6% of generic or brand names, 76.7% of doses, 83.6% of frequency of administration, 87% of routes of administration, 43.9% of dates of prescription and 33.3% of physician's signature. Overall 23.9% of prescriptions were illegible and 29.9% of prescriptions were incomplete. Legibility and completeness are higher in unusual drug prescriptions. Conclusion: The Intensive Care Section performed best as far as quality of prescription writing was concerned when compared with the Medical and Surgical Sections. Nevertheless the overall illegibility and incompleteness (above 20%) are unacceptably high. Values need to be improved by enhancing the safety culture and in particular the awareness of the professionals on the consequences that bad prescription writing can produce. PMID:19439066
Errors and omissions in hospital prescriptions: a survey of prescription writing in a hospital.
Calligaris, Laura; Panzera, Angela; Arnoldo, Luca; Londero, Carla; Quattrin, Rosanna; Troncon, Maria G; Brusaferro, Silvio
2009-05-13
The frequency of drug prescription errors is high. Excluding errors in decision making, the remaining are mainly due to order ambiguity, non-standard nomenclature and writing illegibility. The aim of this study is to analyse, as a part of a continuous quality improvement program, the quality of prescription writing for antibiotics, in an Italian University Hospital as a risk factor for prescription errors. The point prevalence survey, carried out on May 26-30, 2008, involved 41 inpatient Units. Every parenteral or oral antibiotic prescription was analysed for legibility (generic or brand drug name, dose, frequency of administration) and completeness (generic or brand name, dose, frequency of administration, route of administration, date of prescription and signature of the prescriber). Eight doctors (residents in Hygiene and Preventive Medicine) and two pharmacists performed the survey by reviewing the clinical records of medical, surgical or intensive care section inpatients. The antibiotics drug category was chosen because its use is widespread in the setting considered. Out of 756 inpatients included in the study, 408 antibiotic prescriptions were found in 298 patients (mean prescriptions per patient 1.4; SD +/- 0.6). Overall 92.7% (38/41) of the Units had at least one patient with antibiotic prescription. Legibility was in compliance with 78.9% of generic or brand names, 69.4% of doses, 80.1% of frequency of administration, whereas completeness was fulfilled for 95.6% of generic or brand names, 76.7% of doses, 83.6% of frequency of administration, 87% of routes of administration, 43.9% of dates of prescription and 33.3% of physician's signature. Overall 23.9% of prescriptions were illegible and 29.9% of prescriptions were incomplete. Legibility and completeness are higher in unusual drug prescriptions. The Intensive Care Section performed best as far as quality of prescription writing was concerned when compared with the Medical and Surgical Sections. Nevertheless the overall illegibility and incompleteness (above 20%) are unacceptably high. Values need to be improved by enhancing the safety culture and in particular the awareness of the professionals on the consequences that bad prescription writing can produce.
RECONSTRUCTING REDSHIFT DISTRIBUTIONS WITH CROSS-CORRELATIONS: TESTS AND AN OPTIMIZED RECIPE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Daniel J.; Newman, Jeffrey A., E-mail: djm70@pitt.ed, E-mail: janewman@pitt.ed
2010-09-20
Many of the cosmological tests to be performed by planned dark energy experiments will require extremely well-characterized photometric redshift measurements. Current estimates for cosmic shear are that the true mean redshift of the objects in each photo-z bin must be known to better than 0.002(1 + z), and the width of the bin must be known to ~0.003(1 + z) if errors in cosmological measurements are not to be degraded significantly. A conventional approach is to calibrate these photometric redshifts with large sets of spectroscopic redshifts. However, at the depths probed by Stage III surveys (such as DES), let alone Stage IV (LSST, JDEM, and Euclid), existing large redshift samples have all been highly (25%-60%) incomplete, with a strong dependence of success rate on both redshift and galaxy properties. A powerful alternative approach is to exploit the clustering of galaxies to perform photometric redshift calibrations. Measuring the two-point angular cross-correlation between objects in some photometric redshift bin and objects with known spectroscopic redshift, as a function of the spectroscopic z, allows the true redshift distribution of a photometric sample to be reconstructed in detail, even if it includes objects too faint for spectroscopy or if spectroscopic samples are highly incomplete. We test this technique using mock DEEP2 Galaxy Redshift survey light cones constructed from the Millennium Simulation semi-analytic galaxy catalogs. From this realistic test, which incorporates the effects of galaxy bias evolution and cosmic variance, we find that the true redshift distribution of a photometric sample can, in fact, be determined accurately with cross-correlation techniques. We also compare the empirical error in the reconstruction of redshift distributions to previous analytic predictions, finding that additional components must be included in error budgets to match the simulation results. This extra error contribution is small for surveys that sample large areas of sky (>~10°-100°), but dominant for ~1 deg² fields. We conclude by presenting a step-by-step, optimized recipe for reconstructing redshift distributions from cross-correlation information using standard correlation measurements.
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
The role of technology in clinician-to-clinician communication.
McElroy, Lisa M; Ladner, Daniela P; Holl, Jane L
2013-12-01
Incomplete, fragmented and poorly organised communications contribute to more than half the errors that lead to adverse and sentinel events. Meanwhile, communication software and devices with expanding capabilities are rapidly proliferating and being introduced into the healthcare setting. Clinicians face a large communication burden, which has been exacerbated by the additional challenge of selecting a mode of communication. In addition to specific communication devices, some hospitals have implemented advanced technological systems to assist with communication. However, few studies have provided empirical evidence of the specific advantages and disadvantages of the different devices used for communication. Given the increasing quantities of information transmitted to and by clinicians, evaluations of how communication methods and devices can improve the quality, safety and outcomes of healthcare are needed.
Probabilistic failure assessment with application to solid rocket motors
NASA Technical Reports Server (NTRS)
Jan, Darrell L.; Davidson, Barry D.; Moore, Nicholas R.
1990-01-01
A quantitative methodology is being developed for assessment of risk of failure of solid rocket motors. This probabilistic methodology employs best available engineering models and available information in a stochastic framework. The framework accounts for incomplete knowledge of governing parameters, intrinsic variability, and failure model specification error. Earlier case studies have been conducted on several failure modes of the Space Shuttle Main Engine. Work in progress on application of this probabilistic approach to large solid rocket boosters such as the Advanced Solid Rocket Motor for the Space Shuttle is described. Failure due to debonding has been selected as the first case study for large solid rocket motors (SRMs) since it accounts for a significant number of historical SRM failures. Impact of incomplete knowledge of governing parameters and failure model specification errors is expected to be important.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J; Qi, H; Wu, S
Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John’s Equation, and proposed a method to effectively restore incomplete projections based on this consistency condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John’s equation, in which the left side is only the projection derivative with respect to view and the right side is the projection derivative with respect to other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) Forward projecting the reconstructed image and using linear interpolation to estimate the incomplete projections as the initial result; 2) Performing Fourier transform on the projections; 3) Restoring the incomplete frequency data using the consistency condition equation; 4) Performing inverse Fourier transform; 5) Repeating steps 2)-4) until our criterion is met to terminate the iteration. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal noise ratio (SNR) and mean square error (MSE) were employed as our evaluation metrics of the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch’s method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image by our method is increased from 13.49% to 21.48%, with the MSE being decreased by 45.95%, compared with the linear interpolation method. Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition could effectively restore the incomplete projections, especially for their high frequency component.
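The five-step loop can be written as a skeleton in which the scanner geometry and the consistency-condition correction are supplied as callables; the bodies of those callables are placeholders standing in for the authors' derivation, so this sketches only the iteration structure.

import numpy as np

def restore_projections(projections, missing_mask, reconstruct, forward_project,
                        apply_consistency, n_iter=4):
    """Iteratively restore incomplete projections (schematic of steps 1-5).
    reconstruct, forward_project and apply_consistency are user-supplied
    placeholders for the image reconstruction, the scanner geometry and the
    consistency-condition correction, respectively."""
    # Step 1: initial estimate of the missing projections from a reconstruction.
    proj = np.where(missing_mask, forward_project(reconstruct(projections)),
                    projections)
    for _ in range(n_iter):
        f = np.fft.fft2(proj)                      # Step 2: Fourier transform
        f = apply_consistency(f, missing_mask)     # Step 3: restore frequency data
        est = np.fft.ifft2(f).real                 # Step 4: inverse transform
        proj = np.where(missing_mask, est, projections)  # keep measured values
    return proj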
Fairchild, Mallika; Kim, Seung-Jae; Iarkov, Alex; Abbas, James J.; Jung, Ranu
2010-01-01
The long-term objective of this work is to understand the mechanisms by which electrical stimulation based movement therapies may harness neural plasticity to accelerate and enhance sensorimotor recovery after incomplete spinal cord injury (iSCI). An adaptive neuromuscular electrical stimulation (aNMES) paradigm was implemented in adult Long Evans rats with thoracic contusion injury (T8 vertebral level, 155±2 Kdyne). In lengthy sessions with lightly anesthetized animals, hip flexor and extensor muscles were stimulated using an aNMES control system in order to generate desired hip movements. The aNMES control system, which used a pattern generator/pattern shaper structure, adjusted pulse amplitude to modulate muscle force in order to control hip movement. An intermittent stimulation paradigm was used (5-cycles/set; 20-second rest between sets; 100 sets). In each cycle, hip rotation caused the foot plantar surface to contact a stationary brush for appropriately timed cutaneous input. Sessions were repeated over several days while the animals recovered from injury. Results indicated that aNMES automatically and reliably tracked the desired hip trajectory with low error and maintained range of motion with only gradual increase in stimulation during the long sessions. Intermittent aNMES thus accounted for the numerous factors that can influence the response to NMES: electrode stability, excitability of spinal neural circuitry, non-linear muscle recruitment, fatigue, spinal reflexes due to cutaneous input, and the endogenous recovery of the animals. This novel aNMES application in the iSCI rodent model can thus be used in chronic stimulation studies to investigate the mechanisms of neuroplasticity targeted by NMES-based repetitive movement therapy. PMID:20206164
Kamneva, Olga K; Rosenberg, Noah A
2017-01-01
Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378
Effects of incomplete adaption and disturbance in adaptive control
NASA Technical Reports Server (NTRS)
Lindorff, D. P.
1972-01-01
This investigation focused attention on the fact that the synthesis of adaptive control systems has often been discussed in the framework of idealizations which may represent oversimplifications. A condition for boundedness of the tracking error has been derived for the case in which incomplete adaption and disturbance are present. When using Parks' design, it is shown that instability of the adaptive gains can result due to the presence of disturbance. The theory has been applied to a nontrivial example in order to illustrate the concepts involved.
Hemingway, Steve; McCann, Terence; Baxter, Hazel; Smith, George; Burgess-Dawson, Rebecca; Dewhirst, Kate
2015-12-01
The purpose of this study was to investigate perceptions of barriers to safe administration of medicines in mental health settings. A cross-sectional survey was used, and 70 mental health nurses and 41 students were recruited from a mental health trust and a university in Yorkshire, UK. Respondents completed a questionnaire comprising closed- and open-response questions. One item, which contained seven sub-items, addressed barriers to safe administration of medication. Seven themes--five nurse- and prescriber-focused and two service user-focused--were abstracted from the data, depicting a range of barriers to safe administration of medicines. Nurse- and prescriber-focused themes included environmental distractions, insufficient pharmacological knowledge, poorly written and incomplete medication documentation, inability to calculate medication dosage correctly, and work-related pressure. Service user-focused themes comprised poor adherence to medication regimens, and cultural and linguistic communication barriers with service users. Tackling medication administration error is predominantly an organizational rather than individual practitioner responsibility. © 2014 Wiley Publishing Asia Pty Ltd.
Assiri, Ghadah Asaad; Shebl, Nada Atef; Mahmoud, Mansour Adam; Aloudah, Nouf; Grant, Elizabeth; Aljadhey, Hisham; Sheikh, Aziz
2018-05-05
To investigate the epidemiology of medication errors and error-related adverse events in adults in primary care, ambulatory care and patients' homes. Systematic review. Six international databases were searched for publications between 1 January 2006 and 31 December 2015. Two researchers independently extracted data from eligible studies and assessed the quality of these using established instruments. Synthesis of data was informed by an appreciation of the medicines' management process and the conceptual framework from the International Classification for Patient Safety. 60 studies met the inclusion criteria, of which 53 studies focused on medication errors, 3 on error-related adverse events and 4 on risk factors only. The prevalence of prescribing errors was reported in 46 studies: prevalence estimates ranged widely from 2% to 94%. Inappropriate prescribing was the most common type of error reported. Only one study reported the prevalence of monitoring errors, finding that incomplete therapeutic/safety laboratory-test monitoring occurred in 73% of patients. The incidence of preventable adverse drug events (ADEs) was estimated as 15/1000 person-years, the prevalence of drug-drug interaction-related adverse drug reactions as 7% and the prevalence of preventable ADE as 0.4%. A number of patient, healthcare professional and medication-related risk factors were identified, including the number of medications used by the patient, increased patient age, the number of comorbidities, use of anticoagulants, cases where more than one physician was involved in patients' care and care being provided by family physicians/general practitioners. A very wide variation in the medication error and error-related adverse events rates is reported in the studies, this reflecting heterogeneity in the populations studied, study designs employed and outcomes evaluated. This review has identified important limitations and discrepancies in the methodologies used and gaps in the literature on the epidemiology and outcomes of medication errors in community settings. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
ERIC Educational Resources Information Center
Brown, Robert T.; Jackson, Lee A.
1992-01-01
Reviews research on inductive reasoning errors, including seeing patterns or relationships where none exist, neglecting statistical regression, overgeneralizing unrepresentative data, and drawing conclusions based on incomplete decision matrices. Considers "false consensus effect," through which associations with like-minded people lead one to…
Bayesian Inference of Natural Rankings in Incomplete Competition Networks
Park, Juyong; Yook, Soon-Hyung
2014-01-01
Competition between a complex system's constituents and a corresponding reward mechanism based on it have profound influence on the functioning, stability, and evolution of the system. But determining the dominance hierarchy or ranking among the constituent parts from the strongest to the weakest – essential in determining reward and penalty – is frequently an ambiguous task due to the incomplete (partially filled) nature of competition networks. Here we introduce the “Natural Ranking,” an unambiguous ranking method applicable to a round robin tournament, and formulate an analytical model based on the Bayesian formula for inferring the expected mean and error of the natural ranking of nodes from an incomplete network. We investigate its potential and uses in resolving important issues of ranking by applying it to real-world competition networks. PMID:25163528
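As a much-simplified stand-in for the inference described above (not the paper's analytical 'Natural Ranking' model), the snippet below ranks competitors in an incomplete round robin by the posterior mean of their win probability under independent Beta priors, with the posterior standard deviation as an error estimate.

import numpy as np

def bayesian_rank(wins, games, a=1.0, b=1.0):
    """Posterior mean and standard deviation of each competitor's win
    probability from an incomplete round robin, Beta(a, b) prior per node.
    wins[i]: wins of competitor i; games[i]: games actually played (may be
    fewer than n-1 when the competition network is incomplete)."""
    wins, games = np.asarray(wins, float), np.asarray(games, float)
    post_a, post_b = a + wins, b + (games - wins)
    mean = post_a / (post_a + post_b)
    var = post_a * post_b / ((post_a + post_b) ** 2 * (post_a + post_b + 1))
    order = np.argsort(-mean)          # strongest to weakest by posterior mean
    return order, mean, np.sqrt(var)

print(bayesian_rank([5, 3, 1], [6, 4, 6]))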
Bayesian Inference of Natural Rankings in Incomplete Competition Networks
NASA Astrophysics Data System (ADS)
Park, Juyong; Yook, Soon-Hyung
2014-08-01
Competition between a complex system's constituents and a corresponding reward mechanism based on it have profound influence on the functioning, stability, and evolution of the system. But determining the dominance hierarchy or ranking among the constituent parts from the strongest to the weakest - essential in determining reward and penalty - is frequently an ambiguous task due to the incomplete (partially filled) nature of competition networks. Here we introduce the ``Natural Ranking,'' an unambiguous ranking method applicable to a round robin tournament, and formulate an analytical model based on the Bayesian formula for inferring the expected mean and error of the natural ranking of nodes from an incomplete network. We investigate its potential and uses in resolving important issues of ranking by applying it to real-world competition networks.
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
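The propagation step can be mimicked with plain Monte Carlo sampling (the adaptive sparse grids are what make the quadrature affordable for expensive models); the toy rate expression, barrier values and error bounds below are hypothetical, chosen only to show how bounded DFT errors translate into an orders-of-magnitude spread in the TOF.

import numpy as np

rng = np.random.default_rng(1)
kT = 8.617e-5 * 600                     # eV at 600 K
E_ref = np.array([0.9, 1.1, 0.7])       # hypothetical nominal DFT barriers (eV)
bound = 0.2                             # assumed +/- error bound per barrier (eV)

def tof(barriers):
    # Toy rate law: the largest barrier limits the turnover frequency.
    return 1e13 * np.exp(-np.max(barriers, axis=-1) / kT)

samples = E_ref + rng.uniform(-bound, bound, size=(100_000, E_ref.size))
log_tof = np.log10(tof(samples))
print(log_tof.mean(), log_tof.std())    # spread of several orders of magnitude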
Extensive Error in the Number of Genes Inferred from Draft Genome Assemblies
Denton, James F.; Lugo-Martinez, Jose; Tucker, Abraham E.; Schrider, Daniel R.; Warren, Wesley C.; Hahn, Matthew W.
2014-01-01
Current sequencing methods produce large amounts of data, but genome assemblies based on these data are often woefully incomplete. These incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. In this paper we investigate the magnitude of the problem, both in terms of total gene number and the number of copies of genes in specific families. To do this, we compare multiple draft assemblies against higher-quality versions of the same genomes, using several new assemblies of the chicken genome based on both traditional and next-generation sequencing technologies, as well as published draft assemblies of chimpanzee. We find that upwards of 40% of all gene families are inferred to have the wrong number of genes in draft assemblies, and that these incorrect assemblies both add and subtract genes. Using simulated genome assemblies of Drosophila melanogaster, we find that the major cause of increased gene numbers in draft genomes is the fragmentation of genes onto multiple individual contigs. Finally, we demonstrate the usefulness of RNA-Seq in improving the gene annotation of draft assemblies, largely by connecting genes that have been fragmented in the assembly process. PMID:25474019
Extensive error in the number of genes inferred from draft genome assemblies.
Denton, James F; Lugo-Martinez, Jose; Tucker, Abraham E; Schrider, Daniel R; Warren, Wesley C; Hahn, Matthew W
2014-12-01
Current sequencing methods produce large amounts of data, but genome assemblies based on these data are often woefully incomplete. These incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. In this paper we investigate the magnitude of the problem, both in terms of total gene number and the number of copies of genes in specific families. To do this, we compare multiple draft assemblies against higher-quality versions of the same genomes, using several new assemblies of the chicken genome based on both traditional and next-generation sequencing technologies, as well as published draft assemblies of chimpanzee. We find that upwards of 40% of all gene families are inferred to have the wrong number of genes in draft assemblies, and that these incorrect assemblies both add and subtract genes. Using simulated genome assemblies of Drosophila melanogaster, we find that the major cause of increased gene numbers in draft genomes is the fragmentation of genes onto multiple individual contigs. Finally, we demonstrate the usefulness of RNA-Seq in improving the gene annotation of draft assemblies, largely by connecting genes that have been fragmented in the assembly process.
GPS Attitude Determination Using Deployable-Mounted Antennas
NASA Technical Reports Server (NTRS)
Osborne, Michael L.; Tolson, Robert H.
1996-01-01
The primary objective of this investigation is to develop a method to solve for spacecraft attitude in the presence of potentially incomplete antenna deployment. Most research on the use of the Global Positioning System (GPS) in attitude determination has assumed that the antenna baselines are known to less than 5 centimeters, or one quarter of the GPS signal wavelength. However, if the GPS antennas are mounted on a deployable fixture such as a solar panel, the actual antenna positions will not necessarily be within 5 cm of nominal. Incomplete antenna deployment could cause the baselines to be grossly in error, perhaps by as much as a meter. Overcoming this large uncertainty in order to accurately determine attitude is the focus of this study. To this end, a two-step solution method is proposed. The first step uses a least-squares estimate of the baselines to geometrically calculate the deployment angle errors of the solar panels. For the spacecraft under investigation, the first step determines the baselines to 3-4 cm with 4-8 minutes of data. A Kalman filter is then used to complete the attitude determination process, resulting in typical attitude errors of 0.5°.
Increasing patient safety and efficiency in transfusion therapy using formal process definitions.
Henneman, Elizabeth A; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Andrzejewski, Chester; Merrigan, Karen; Cobleigh, Rachel; Frederick, Kimberly; Katz-Bassett, Ethan; Henneman, Philip L
2007-01-01
The administration of blood products is a common, resource-intensive, and potentially problem-prone area that may place patients at elevated risk in the clinical setting. Much of the emphasis in transfusion safety has been targeted toward quality control measures in laboratory settings where blood products are prepared for administration as well as in automation of certain laboratory processes. In contrast, the process of transfusing blood in the clinical setting (ie, at the point of care) has essentially remained unchanged over the past several decades. Many of the currently available methods for improving the quality and safety of blood transfusions in the clinical setting rely on informal process descriptions, such as flow charts and medical algorithms, to describe medical processes. These informal descriptions, although useful in presenting an overview of standard processes, can be ambiguous or incomplete. For example, they often describe only the standard process and leave out how to handle possible failures or exceptions. One alternative to these informal descriptions is to use formal process definitions, which can serve as the basis for a variety of analyses because these formal definitions offer precision in the representation of all possible ways that a process can be carried out in both standard and exceptional situations. Formal process definitions have not previously been used to describe and improve medical processes. The use of such formal definitions to prospectively identify potential error and improve the transfusion process has not previously been reported. The purpose of this article is to introduce the concept of formally defining processes and to describe how formal definitions of blood transfusion processes can be used to detect and correct transfusion process errors in ways not currently possible using existing quality improvement methods.
Haddrath, Oliver; Baker, Allan J
2012-11-22
The origin and timing of the diversification of modern birds remains controversial, primarily because phylogenetic relationships are incompletely resolved and uncertainty persists in molecular estimates of lineage ages. Here, we present a species tree for the major palaeognath lineages using 27 nuclear genes and 27 archaic retroposon insertions. We show that rheas are sister to the kiwis, emu and cassowaries, and confirm ratite paraphyly because tinamous are sister to moas. Divergence dating using 10 genes with broader taxon sampling, including emu, cassowary, ostrich, five kiwis, two rheas, three tinamous, three extinct moas and 15 neognath lineages, suggests that three vicariant events and possibly two dispersals are required to explain their historical biogeography. The age of crown group birds was estimated at 131 Ma (95% highest posterior density 122-138 Ma), similar to previous molecular estimates. Problems associated with gene tree discordance and incomplete lineage sorting in birds will require much larger gene sets to increase species tree accuracy and improve error in divergence times. The relatively rapid branching within neoaves pre-dates the extinction of dinosaurs, suggesting that the genesis of the radiation within this diverse clade of birds was not in response to the Cretaceous-Paleogene extinction event.
Classification and data acquisition with incomplete data
NASA Astrophysics Data System (ADS)
Williams, David P.
In remote-sensing applications, incomplete data can result when only a subset of sensors (e.g., radar, infrared, acoustic) are deployed at certain regions. The limitations of single sensor systems have spurred interest in employing multiple sensor modalities simultaneously. For example, in land mine detection tasks, different sensor modalities are better-suited to capture different aspects of the underlying physics of the mines. Synthetic aperture radar sensors may be better at detecting surface mines, while infrared sensors may be better at detecting buried mines. By employing multiple sensor modalities to address the detection task, the strengths of the disparate sensors can be exploited in a synergistic manner to improve performance beyond that which would be achievable with either single sensor alone. When multi-sensor approaches are employed, however, incomplete data can be manifested. If each sensor is located on a separate platform ( e.g., aircraft), each sensor may interrogate---and hence collect data over---only partially overlapping areas of land. As a result, some data points may be characterized by data (i.e., features) from only a subset of the possible sensors employed in the task. Equivalently, this scenario implies that some data points will be missing features. Increasing focus in the future on using---and fusing data from---multiple sensors will make such incomplete-data problems commonplace. In many applications involving incomplete data, it is possible to acquire the missing data at a cost. In multi-sensor remote-sensing applications, data is acquired by deploying sensors to data points. Acquiring data is usually an expensive, time-consuming task, a fact that necessitates an intelligent data acquisition process. Incomplete data is not limited to remote-sensing applications, but rather, can arise in virtually any data set. In this dissertation, we address the general problem of classification when faced with incomplete data. We also address the closely related problem of active data acquisition, which develops a strategy to acquire missing features and labels that will most benefit the classification task. We first address the general problem of classification with incomplete data, maintaining the view that all data (i.e., information) is valuable. We employ a logistic regression framework within which we formulate a supervised classification algorithm for incomplete data. This principled, yet flexible, framework permits several interesting extensions that allow all available data to be utilized. One extension incorporates labeling error, which permits the usage of potentially imperfectly labeled data in learning a classifier. A second major extension converts the proposed algorithm to a semi-supervised approach by utilizing unlabeled data via graph-based regularization. Finally, the classification algorithm is extended to the case in which (image) data---from which features are extracted---are available from multiple resolutions. Taken together, this family of incomplete-data classification algorithms exploits all available data in a principled manner by avoiding explicit imputation. Instead, missing data is integrated out analytically with the aid of an estimated conditional density function (conditioned on the observed features). This feat is accomplished by invoking only mild assumptions. We also address the problem of active data acquisition by determining which missing data should be acquired to most improve performance. 
Specifically, we examine this data acquisition task when the data to be acquired can be either labels or features. The proposed approach is based on a criterion that accounts for the expected benefit of the acquisition. This approach, which is applicable for any general missing data problem, exploits the incomplete-data classification framework introduced in the first part of this dissertation. This data acquisition approach allows for the acquisition of both labels and features. Moreover, several types of feature acquisition are permitted, including the acquisition of individual or multiple features for individual or multiple data points, which may be either labeled or unlabeled. Furthermore, if different types of data acquisition are feasible for a given application, the algorithm will automatically determine the most beneficial type of data to acquire. Experimental results on both benchmark machine learning data sets and real (i.e., measured) remote-sensing data demonstrate the advantages of the proposed incomplete-data classification and active data acquisition algorithms.
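One way to realize the 'integrate the missing features out under an estimated conditional density' idea sketched above is shown below for a logistic-regression classifier with a Gaussian feature model; the Monte Carlo marginalization is a stand-in for the dissertation's analytic treatment, and all weights and density parameters are assumed inputs.

import numpy as np

def predict_proba_missing(w, b, x, mean, cov, n_samples=500, rng=None):
    """Class probability for a feature vector x with NaN for missing features.
    Missing features are integrated out by sampling from the Gaussian
    conditional p(x_missing | x_observed) defined by (mean, cov)."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    miss = np.isnan(x)
    if not miss.any():
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))
    obs = ~miss
    c_mo = cov[np.ix_(miss, obs)]
    c_oo_inv = np.linalg.inv(cov[np.ix_(obs, obs)])
    cond_mean = mean[miss] + c_mo @ c_oo_inv @ (x[obs] - mean[obs])
    cond_cov = cov[np.ix_(miss, miss)] - c_mo @ c_oo_inv @ c_mo.T
    draws = rng.multivariate_normal(cond_mean, cond_cov, size=n_samples)
    logits = draws @ w[miss] + x[obs] @ w[obs] + b
    return float(np.mean(1.0 / (1.0 + np.exp(-logits))))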
Model and experiments to optimize co-adaptation in a simplified myoelectric control system
NASA Astrophysics Data System (ADS)
Couraud, M.; Cattaert, D.; Paclet, F.; Oudeyer, P. Y.; de Rugy, A.
2018-04-01
Objective. To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that are developed in the field of brain-machine interface, and that are beginning to be used in myoelectric controls. Approach. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. Results. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied on muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that cumulates the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. Significance. The simplified context used here enabled to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
Improving the accuracy of Møller-Plesset perturbation theory with neural networks
NASA Astrophysics Data System (ADS)
McGibbon, Robert T.; Taube, Andrew G.; Donchev, Alexander G.; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L.; Shaw, David E.
2017-10-01
Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.
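A toy sketch of the general idea, not the SNS-MP2 model itself: a small neural network maps cheap quantum-chemical features of a dimer to a correction of its MP2 interaction energy, trained against higher-level reference energies. The feature names, network size, and the synthetic data below are all illustrative assumptions; the published method instead reweights terms of the MP2 energy with a purpose-built network trained on CBS-extrapolated coupled-cluster data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Synthetic "quantum chemical" dimer features (all illustrative):
# same-spin and opposite-spin MP2 correlation terms and an HF-level term.
e_ss = rng.uniform(-4.0, 0.0, n)
e_os = rng.uniform(-8.0, 0.0, n)
e_hf = rng.uniform(-2.0, 4.0, n)
X = np.column_stack([e_ss, e_os, e_hf])

e_mp2 = e_hf + e_ss + e_os                       # plain MP2 interaction energy
# Pretend the "gold standard" reweights the spin components nonlinearly.
e_ref = e_hf + 0.7 * e_ss + 1.1 * e_os + 0.2 * np.tanh(e_os * e_ss)

X_tr, X_te, mp2_tr, mp2_te, ref_tr, ref_te = train_test_split(
    X, e_mp2, e_ref, test_size=0.25, random_state=0)

# Learn the correction (reference minus MP2) from the features.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(X_tr, ref_tr - mp2_tr)

corrected = mp2_te + net.predict(X_te)
print("MAE MP2      :", np.abs(mp2_te - ref_te).mean())
print("MAE corrected:", np.abs(corrected - ref_te).mean())
```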
Improving the accuracy of Møller-Plesset perturbation theory with neural networks.
McGibbon, Robert T; Taube, Andrew G; Donchev, Alexander G; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L; Shaw, David E
2017-10-28
Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.
Identification and correction of abnormal, incomplete and mispredicted proteins in public databases.
Nagy, Alinda; Hegyi, Hédi; Farkas, Krisztina; Tordai, Hedvig; Kozma, Evelin; Bányai, László; Patthy, László
2008-08-27
Despite significant improvements in computational annotation of genomes, sequences of abnormal, incomplete or incorrectly predicted genes and proteins remain abundant in public databases. Since the majority of incomplete, abnormal or mispredicted entries are not annotated as such, these errors seriously affect the reliability of these databases. Here we describe the MisPred approach that may provide an efficient means for the quality control of databases. The current version of the MisPred approach uses five distinct routines for identifying abnormal, incomplete or mispredicted entries based on the principle that a sequence is likely to be incorrect if some of its features conflict with our current knowledge about protein-coding genes and proteins: (i) conflict between the predicted subcellular localization of proteins and the absence of the corresponding sequence signals; (ii) presence of extracellular and cytoplasmic domains and the absence of transmembrane segments; (iii) co-occurrence of extracellular and nuclear domains; (iv) violation of domain integrity; (v) chimeras encoded by two or more genes located on different chromosomes. Analyses of predicted EnsEMBL protein sequences of nine deuterostome (Homo sapiens, Mus musculus, Rattus norvegicus, Monodelphis domestica, Gallus gallus, Xenopus tropicalis, Fugu rubripes, Danio rerio and Ciona intestinalis) and two protostome species (Caenorhabditis elegans and Drosophila melanogaster) have revealed that the absence of expected signal peptides and violation of domain integrity account for the majority of mispredictions. Analyses of sequences predicted by NCBI's GNOMON annotation pipeline show that the rates of mispredictions are comparable to those of EnsEMBL. Interestingly, even the manually curated UniProtKB/Swiss-Prot dataset is contaminated with mispredicted or abnormal proteins, although to a much lesser extent than UniProtKB/TrEMBL or the EnsEMBL or GNOMON-predicted entries. MisPred works efficiently in identifying errors in predictions generated by the most reliable gene prediction tools such as the EnsEMBL and NCBI's GNOMON pipelines and also guides the correction of errors. We suggest that application of the MisPred approach will significantly improve the quality of gene predictions and the associated databases.
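A schematic sketch of how rule-based checks of this kind can be expressed in code; the annotation flags are assumed to come from upstream predictors (signal-peptide, transmembrane, and domain-annotation tools), and the rule set below is a simplified paraphrase of the five routines, not the MisPred implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ProteinRecord:
    """Annotation flags assumed to be produced by upstream prediction tools."""
    name: str
    predicted_secreted: bool = False        # predicted subcellular localization
    has_signal_peptide: bool = False
    extracellular_domains: int = 0
    cytoplasmic_domains: int = 0
    nuclear_domains: int = 0
    transmembrane_segments: int = 0
    broken_domains: int = 0                 # domains violating expected length/integrity
    chromosomes: set = field(default_factory=set)  # chromosomes of contributing exons

def mispredicted(p: ProteinRecord) -> list[str]:
    """Return the list of rules a record violates (empty list = passes)."""
    issues = []
    if p.predicted_secreted and not p.has_signal_peptide:
        issues.append("secreted but no signal peptide")
    if p.extracellular_domains and p.cytoplasmic_domains and not p.transmembrane_segments:
        issues.append("extracellular + cytoplasmic domains without TM segment")
    if p.extracellular_domains and p.nuclear_domains:
        issues.append("co-occurring extracellular and nuclear domains")
    if p.broken_domains:
        issues.append("violation of domain integrity")
    if len(p.chromosomes) > 1:
        issues.append("chimera: exons from different chromosomes")
    return issues

print(mispredicted(ProteinRecord("demo", predicted_secreted=True, extracellular_domains=1)))
```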
Application of Monte Carlo algorithms to the Bayesian analysis of the Cosmic Microwave Background
NASA Technical Reports Server (NTRS)
Jewell, J.; Levin, S.; Anderson, C. H.
2004-01-01
Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; nonhomogeneous, correlated instrumental noise; and foreground emission are problems of central importance for the extraction of cosmological information from the cosmic microwave background (CMB).
Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy
Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.
1998-01-01
We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
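A minimal illustration of the core idea, not the authors' code: for a homogeneous half-space (an assumption; the paper uses layered and smoothed velocity models and frequency-slowness array processing), the slowness vector expected at an array from a candidate source is compared with the observed one under a Gaussian error model, and the likelihood is evaluated on a grid of candidate locations.

```python
import numpy as np

v = 2.0                                  # assumed homogeneous velocity, km/s
array_xy = np.array([0.0, 0.0])          # array reference position (km)
true_src = np.array([0.6, -0.4, 0.25])   # x, y, depth (km), used only to fake data

def expected_slowness(src):
    """Horizontal slowness vector of a direct ray from src to the array (s/km)."""
    dx, dy, dz = array_xy[0] - src[0], array_xy[1] - src[1], src[2]
    r = np.sqrt(dx * dx + dy * dy + dz * dz)
    return np.array([dx, dy]) / (v * r)

sigma = 0.02                             # assumed slowness measurement error (s/km)
rng = np.random.default_rng(0)
s_obs = expected_slowness(true_src) + sigma * rng.standard_normal(2)

# Grid search: Gaussian likelihood of the observed slowness for each candidate source.
xs = np.linspace(-1.0, 1.0, 41)
zs = np.linspace(0.05, 0.5, 19)
best, best_logp = None, -np.inf
for x in xs:
    for y in xs:
        for z in zs:
            resid = s_obs - expected_slowness(np.array([x, y, z]))
            logp = -0.5 * np.sum(resid ** 2) / sigma ** 2
            if logp > best_logp:
                best, best_logp = (x, y, z), logp

print("maximum-likelihood source:", best)
```

With a single two-component slowness vector and three unknowns the maximum is degenerate along the ray direction, which mirrors the radial ambiguity noted above; the actual analysis constrains the solution with frequency-slowness data and the layered Earth model.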
Solar maximum mission fine pointing sun sensor dawn and dusk errors flight data and model analysis
NASA Technical Reports Server (NTRS)
Kulp, D. R.
1988-01-01
SMM flight system control errors occurring at spacecraft dawn and dusk are analyzed. The errors are associated with the fine pointing sun sensor (FPSS), which is a primary component of the SMM attitude control system. It is shown that the source of the FPSS dawn/dusk distortion is the incomplete masking of sunlight reflected off the earth by the optical baffle covering the FPSS sensor heads onboard the SMM during periods of orbit dawn and dusk. For the most part, the modeled behavior of the FPSS under dawn and dusk lighting conditions matches the observed behavior in the SMM flight data.
Ahmed, Rana; Robinson, Ryan; Elsony, Asma; Thomson, Rachael; Squire, S. Bertel; Malmborg, Rasmus; Burney, Peter
2018-01-01
Introduction: Data collection using paper-based questionnaires can be time consuming, and errors affect data accuracy, completeness, and information quality in health surveys. We compared smartphone and paper-based data collection systems in the Burden of Obstructive Lung Disease (BOLD) study in rural Sudan. Methods: This exploratory pilot study was designed to run in parallel with the cross-sectional household survey. The Open Data Kit was used to programme questionnaires in Arabic into smartphones. We included 100 study participants (83% women; median age = 41.5 ± 16.4 years) from the BOLD study from 3 rural villages in East-Gezira and Kamleen localities of Gezira state, Sudan. Questionnaire data were collected using smartphone and paper-based technologies simultaneously. We used Kappa statistics and the inter-rater class coefficient to test agreement between the two methods. Results: Symptoms reported included cough (24%), phlegm (15%), wheezing (17%), and shortness of breath (18%). One in five were or had been cigarette smokers. Agreement between the two data collection methods ranged from perfect to slight across the 204 variables evaluated (Kappa varied between 1.00 and 0.02 and the inter-rater coefficient between 1.00 and -0.12). Errors were far more common in paper questionnaires (83% of errors seen) than in smartphone-administered questionnaires (17% of errors seen), with questions involving complex skip-patterns being a major source of errors in paper questionnaires. Automated checks and validations in smartphone-administered questionnaires avoided skip-pattern related errors. Incomplete and inconsistent records were more likely seen on paper questionnaires. Conclusion: Compared to paper-based data collection, smartphone technology worked well for data collection in the study, which was conducted in a challenging rural environment in Sudan. This approach provided timely, quality data with fewer errors and inconsistencies compared to paper-based data collection. We recommend this method for future BOLD studies and other population-based studies in similar settings. PMID:29518132
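For readers unfamiliar with the agreement statistic used above, a small worked example (with made-up responses, not the study data) of computing Cohen's kappa between paper and smartphone records of the same yes/no question:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired responses for one binary question (1 = symptom reported).
paper      = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0]
smartphone = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0]

kappa = cohen_kappa_score(paper, smartphone)
print(f"Cohen's kappa = {kappa:.2f}")   # 1.0 = perfect agreement, ~0 = chance-level
```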
Code of Federal Regulations, 2010 CFR
2010-07-01
... will not present an unreasonable risk of injury to health or the environment, if EPA has listed the... his or her delegate, may inform the submitter that the running of the review period will resume on the...
MacNeil Vroomen, Janet; Eekhout, Iris; Dijkgraaf, Marcel G; van Hout, Hein; de Rooij, Sophia E; Heymans, Martijn W; Bosmans, Judith E
2016-11-01
Cost and effect data often have missing data because economic evaluations are frequently added onto clinical studies where cost data are rarely the primary outcome. The objective of this article was to investigate which multiple imputation strategy is most appropriate to use for missing cost-effectiveness data in a randomized controlled trial. Three incomplete data sets were generated from a complete reference data set with 17, 35 and 50 % missing data in effects and costs. The strategies evaluated included complete case analysis (CCA), multiple imputation with predictive mean matching (MI-PMM), MI-PMM on log-transformed costs (log MI-PMM), and a two-step MI. Mean cost and effect estimates, standard errors and incremental net benefits were compared with the results of the analyses on the complete reference data set. The CCA, MI-PMM, and the two-step MI strategy diverged from the results for the reference data set when the amount of missing data increased. In contrast, the estimates of the Log MI-PMM strategy remained stable irrespective of the amount of missing data. MI provided better estimates than CCA in all scenarios. With low amounts of missing data the MI strategies appeared equivalent but we recommend using the log MI-PMM with missing data greater than 35 %.
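A compact sketch of the log-transformed predictive-mean-matching idea for a single incomplete cost variable (one imputation shown; a full multiple-imputation analysis would repeat this several times and pool estimates with Rubin's rules). The synthetic data, the single predictor, and the donor-pool size are illustrative assumptions, not the trial data or the exact strategy evaluated above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
effect = rng.normal(0.5, 1.0, n)                          # a predictor (e.g., QALYs)
cost = np.exp(7 + 0.6 * effect + rng.normal(0, 0.8, n))   # right-skewed costs
missing = rng.random(n) < 0.35                            # ~35% missing costs
cost_obs = np.where(missing, np.nan, cost)

def pmm_impute_log(y, x, k=5, rng=rng):
    """Predictive mean matching on log(y): regress, match, copy a donor's observed value."""
    obs = ~np.isnan(y)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X[obs], np.log(y[obs]), rcond=None)
    pred = X @ beta
    y_imp = y.copy()
    for i in np.flatnonzero(~obs):
        donors = np.argsort(np.abs(pred[obs] - pred[i]))[:k]    # k closest predictions
        donor = rng.choice(np.flatnonzero(obs)[donors])
        y_imp[i] = y[donor]                                     # donor's value, back on cost scale
    return y_imp

imputed = pmm_impute_log(cost_obs, effect)
print("true mean cost      :", round(cost.mean()))
print("complete-case mean  :", round(np.nanmean(cost_obs)))
print("log-PMM imputed mean:", round(imputed.mean()))
```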
Esfahani, Mohammad Shahrokh; Dougherty, Edward R
2015-01-01
Phenotype classification via genomic data is hampered by small sample sizes that negatively impact classifier design. Utilization of prior biological knowledge in conjunction with training data can improve both classifier design and error estimation via the construction of the optimal Bayesian classifier. In the genomic setting, gene/protein signaling pathways provide a key source of biological knowledge. Although these pathways are incomplete, are not regulatory models, and have no timing associated with them, they are capable of constraining the set of possible models representing the underlying interactions between molecules. The aim of this paper is to provide a framework and the mathematical tools to transform signaling pathways into prior probabilities governing uncertainty classes of feature-label distributions used in classifier design. Structural motifs extracted from the signaling pathways are mapped to a set of constraints on a prior probability on a Multinomial distribution. Because the Dirichlet distribution is the conjugate prior for the Multinomial distribution, we propose optimization paradigms to estimate its parameters in the Bayesian setting. The performance of the proposed methods is tested on two widely studied pathways: mammalian cell cycle and a p53 pathway model.
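A simplified worked example of the conjugacy exploited above, not the pathway-to-constraint mapping itself: pathway knowledge is assumed to have already been distilled into Dirichlet pseudo-counts over the joint states of a few binary genes, and the posterior follows by adding the observed state counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint states of 3 binary genes -> 8 multinomial bins.
n_bins = 8

# Hypothetical prior: suppose pathway motifs make the all-off (0b000) and all-on (0b111)
# states more plausible; encode that as larger Dirichlet pseudo-counts.
alpha_prior = np.ones(n_bins)
alpha_prior[0] = alpha_prior[7] = 5.0

# Small training sample of observed joint states (illustrative).
true_p = np.array([4, 1, 1, 1, 1, 1, 1, 4]) / 14
counts = np.bincount(rng.choice(n_bins, size=30, p=true_p), minlength=n_bins)

alpha_post = alpha_prior + counts              # Dirichlet-Multinomial conjugacy
posterior_mean = alpha_post / alpha_post.sum()
print("posterior mean bin probabilities:", posterior_mean.round(3))
```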
Randhawa, Amarita S; Babalola, Olakiitan; Henney, Zachary; Miller, Michele; Nelson, Tanya; Oza, Meerat; Patel, Chandni; Randhawa, Anupma S; Riley, Joyce; Snyder, Scott; So, Sherri
2016-05-01
Online drug information compendia (ODIC) are valuable tools that health care professionals (HCPs) and consumers use to educate themselves on pharmaceutical products. Research suggests that these resources, although informative and easily accessible, may contain misinformation, posing risk for product misuse and patient harm. Evaluate drug summaries within ODIC for accuracy and completeness and identify product-specific misinformation. Between August 2014 and January 2015, medical information (MI) specialists from 11 pharmaceutical/biotechnology companies systematically evaluated 270 drug summaries within 5 commonly used ODIC for misinformation. Using a standardized approach, errors were identified; classified as inaccurate, incomplete, or omitted; and categorized per sections of the Full Prescribing Information (FPI). On review of each drug summary, content-correction requests were proposed and supported by the respective product's FPI. Across the 270 drug summaries reviewed within the 5 compendia, the median of the total number of errors identified was 782, with the greatest number of errors occurring in the categories of Dosage and Administration, Patient Education, and Warnings and Precautions. The majority of errors were classified as incomplete, followed by inaccurate and omitted. This analysis demonstrates that ODIC may contain misinformation. HCPs and consumers should be aware of the potential for misinformation and consider more than 1 drug information resource, including the FPI and Medication Guide as well as pharmaceutical/biotechnology companies' MI departments, to obtain unbiased, accurate, and complete product-specific drug information to help support the safe and effective use of prescription drug products. © The Author(s) 2016.
Do missing data influence the accuracy of divergence-time estimation with BEAST?
Zheng, Yuchi; Wiens, John J
2015-04-01
Time-calibrated phylogenies have become essential to evolutionary biology. A recurrent and unresolved question for dating analyses is whether genes with missing data cells should be included or excluded. This issue is particularly unclear for the most widely used dating method, the uncorrelated lognormal approach implemented in BEAST. Here, we test the robustness of this method to missing data. We compare divergence-time estimates from a nearly complete dataset (20 nuclear genes for 32 species of squamate reptiles) to those from subsampled matrices, including those with 5 or 2 complete loci only and those with 5 or 8 incomplete loci added. In general, missing data had little impact on estimated dates (mean error of ∼5Myr per node or less, given an overall age of ∼220Myr in squamates), even when 80% of sampled genes had 75% missing data. Mean errors were somewhat higher when all genes were 75% incomplete (∼17Myr). However, errors increased dramatically when only 2 of 9 fossil calibration points were included (∼40Myr), regardless of missing data. Overall, missing data (and even numbers of genes sampled) may have only minor impacts on the accuracy of divergence dating with BEAST, relative to the dramatic effects of fossil calibrations. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1979-01-01
In many pattern recognition problems, data vectors must be classified even though one or more of their elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.
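A minimal sketch of one standard way to classify an incomplete vector with a linear (Gaussian, equal-covariance) discriminant: drop the missing coordinates and evaluate the discriminant using the corresponding sub-vector of the class means and sub-matrix of the pooled covariance. This illustrates the flavour of such procedures under those assumptions; it is not the report's specific derivation.

```python
import numpy as np

def fit_lda(X, y):
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    cov = sum(np.cov(X[y == c], rowvar=False) * (np.sum(y == c) - 1) for c in classes)
    cov /= (len(y) - len(classes))            # pooled within-class covariance
    priors = np.array([np.mean(y == c) for c in classes])
    return classes, means, cov, priors

def classify_incomplete(x, classes, means, cov, priors):
    """x may contain NaNs; use only the observed coordinates."""
    obs = ~np.isnan(x)
    xo, mo = x[obs], means[:, obs]
    Sinv = np.linalg.inv(cov[np.ix_(obs, obs)])
    scores = [np.log(p) - 0.5 * (xo - m) @ Sinv @ (xo - m) for m, p in zip(mo, priors)]
    return classes[int(np.argmax(scores))]

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 4))
X1 = rng.normal(1.5, 1.0, size=(200, 4))
X, y = np.vstack([X0, X1]), np.array([0] * 200 + [1] * 200)
model = fit_lda(X, y)

x_missing = np.array([1.4, np.nan, 1.2, np.nan])   # e.g., two channels obscured by cloud
print("predicted class:", classify_incomplete(x_missing, *model))
```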
NASA Technical Reports Server (NTRS)
Gibson, James S.; Barnes, Michael J.; Ostermiller, Daniel L.
1993-01-01
A set of programs was written to test the functionality and performance of the Alsys Ada implementation of the Catalogue of Interface Features and Options (CIFO), a set of optional Ada packages for real-time applications. No problems were found with the task id, preemption control, or shared-data packages. Minor problems were found with the dispatching control, dynamic priority, events, non-waiting entry call, semaphore, and scheduling packages. The Alsys implementation is derived mostly from Release 2 of the CIFO standard, but includes some of the features of Release 3 and some modifications unique to Alsys. Performance measurements show that the semaphore and shared-data features are an order-of-magnitude faster than the same mechanisms using an Ada rendezvous. The non-waiting entry call is slightly faster than a standard rendezvous. The existence of errors in the implementation and the incompleteness of the documentation relative to the published standard impair the usefulness of this implementation. Despite those shortcomings, the Alsys CIFO implementation might be of value in the development of real-time applications.
Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I
2017-10-01
Pseudo-set framing, arbitrarily grouping items or tasks together as part of an apparent "set", motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will, and will not, act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Safety assessment for In-service Pressure Bending Pipe Containing Incomplete Penetration Defects
NASA Astrophysics Data System (ADS)
Wang, M.; Tang, P.; Xia, J. F.; Ling, Z. W.; Cai, G. Y.
2017-12-01
Incomplete penetration is a common defect in the welded joints of pressure pipes. However, the safety classification of pressure pipes containing incomplete penetration defects under current periodic inspection regulations is rather conservative. To reduce unnecessary repairs of incomplete penetration defects, a scientific and applicable safety assessment method for pressure pipes is needed. In this paper, a stress analysis model of the pipe system was established for an in-service pressure bending pipe containing incomplete penetration defects. A local finite element model was set up to analyze the stress distribution at the defect location and to perform stress linearization. The applicability of two assessment methods, the simplified assessment and the U-factor assessment, to incomplete penetration defects located in pressure bending pipes was then analyzed. The results can provide technical support for the safety assessment of complex pipelines in the future.
NASA Astrophysics Data System (ADS)
Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.
2018-07-01
We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples by cuts in the r band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via the density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.
NASA Astrophysics Data System (ADS)
Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.
2018-04-01
We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples by cuts in the r-band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via the density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.
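A rough sketch of the colour-space training plus CDF/Monte-Carlo assignment described above (not the ANNz2 or GPz machinery): a nearest-neighbour model is trained on colours from a non-representative "spectroscopic" set, and each test galaxy receives a redshift drawn from the empirical distribution of spectroscopic redshifts of its nearest training neighbours in colour space. All data, band definitions, and neighbourhood size are synthetic, illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

def make_sample(n, z_max):
    z = rng.uniform(0.05, z_max, n)
    base = 20 + 2.5 * np.log10(z / 0.05)          # brightness fades with redshift
    # toy bands whose offsets drift with z so that colours carry redshift information
    mags = np.column_stack([base + c * z + rng.normal(0, 0.05, n)
                            for c in (0.0, 0.8, 1.5, 2.1)])
    return np.diff(mags, axis=1), z               # train in colour space

# Non-representative training set: shallower, hence lower redshifts than the test set.
col_train, z_train = make_sample(4000, z_max=0.6)
col_test,  z_test  = make_sample(2000, z_max=0.9)

# CDF / Monte-Carlo assignment: draw from the spec-z of the k nearest colour neighbours.
k = 25
nn = NearestNeighbors(n_neighbors=k).fit(col_train)
_, idx = nn.kneighbors(col_test)
z_phot = z_train[idx][np.arange(len(col_test)), rng.integers(0, k, len(col_test))]

print("mean |z_phot - z_spec| =", np.abs(z_phot - z_test).mean().round(3))
```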
Meneco, a Topology-Based Gap-Filling Tool Applicable to Degraded Genome-Wide Metabolic Networks
Prigent, Sylvain; Frioux, Clémence; Dittami, Simon M.; Larhlimi, Abdelhalim; Collet, Guillaume; Gutknecht, Fabien; Got, Jeanne; Eveillard, Damien; Bourdon, Jérémie; Plewniak, Frédéric; Tonon, Thierry; Siegel, Anne
2017-01-01
Increasing amounts of sequence data are becoming available for a wide range of non-model organisms. Investigating and modelling the metabolic behaviour of those organisms is highly relevant to understand their biology and ecology. As sequences are often incomplete and poorly annotated, draft networks of their metabolism largely suffer from incompleteness. Appropriate gap-filling methods to identify and add missing reactions are therefore required to address this issue. However, current tools rely on phenotypic or taxonomic information, or are very sensitive to the stoichiometric balance of metabolic reactions, especially concerning the co-factors. This type of information is often not available or at least prone to errors for newly-explored organisms. Here we introduce Meneco, a tool dedicated to the topological gap-filling of genome-scale draft metabolic networks. Meneco reformulates gap-filling as a qualitative combinatorial optimization problem, omitting constraints raised by the stoichiometry of a metabolic network considered in other methods, and solves this problem using Answer Set Programming. Run on several artificial test sets gathering 10,800 degraded Escherichia coli networks Meneco was able to efficiently identify essential reactions missing in networks at high degradation rates, outperforming the stoichiometry-based tools in scalability. To demonstrate the utility of Meneco we applied it to two case studies. Its application to recent metabolic networks reconstructed for the brown algal model Ectocarpus siliculosus and an associated bacterium Candidatus Phaeomarinobacter ectocarpi revealed several candidate metabolic pathways for algal-bacterial interactions. Then Meneco was used to reconstruct, from transcriptomic and metabolomic data, the first metabolic network for the microalga Euglena mutabilis. These two case studies show that Meneco is a versatile tool to complete draft genome-scale metabolic networks produced from heterogeneous data, and to suggest relevant reactions that explain the metabolic capacity of a biological system. PMID:28129330
Meneco, a Topology-Based Gap-Filling Tool Applicable to Degraded Genome-Wide Metabolic Networks.
Prigent, Sylvain; Frioux, Clémence; Dittami, Simon M; Thiele, Sven; Larhlimi, Abdelhalim; Collet, Guillaume; Gutknecht, Fabien; Got, Jeanne; Eveillard, Damien; Bourdon, Jérémie; Plewniak, Frédéric; Tonon, Thierry; Siegel, Anne
2017-01-01
Increasing amounts of sequence data are becoming available for a wide range of non-model organisms. Investigating and modelling the metabolic behaviour of those organisms is highly relevant to understand their biology and ecology. As sequences are often incomplete and poorly annotated, draft networks of their metabolism largely suffer from incompleteness. Appropriate gap-filling methods to identify and add missing reactions are therefore required to address this issue. However, current tools rely on phenotypic or taxonomic information, or are very sensitive to the stoichiometric balance of metabolic reactions, especially concerning the co-factors. This type of information is often not available or at least prone to errors for newly-explored organisms. Here we introduce Meneco, a tool dedicated to the topological gap-filling of genome-scale draft metabolic networks. Meneco reformulates gap-filling as a qualitative combinatorial optimization problem, omitting constraints raised by the stoichiometry of a metabolic network considered in other methods, and solves this problem using Answer Set Programming. Run on several artificial test sets gathering 10,800 degraded Escherichia coli networks Meneco was able to efficiently identify essential reactions missing in networks at high degradation rates, outperforming the stoichiometry-based tools in scalability. To demonstrate the utility of Meneco we applied it to two case studies. Its application to recent metabolic networks reconstructed for the brown algal model Ectocarpus siliculosus and an associated bacterium Candidatus Phaeomarinobacter ectocarpi revealed several candidate metabolic pathways for algal-bacterial interactions. Then Meneco was used to reconstruct, from transcriptomic and metabolomic data, the first metabolic network for the microalga Euglena mutabilis. These two case studies show that Meneco is a versatile tool to complete draft genome-scale metabolic networks produced from heterogeneous data, and to suggest relevant reactions that explain the metabolic capacity of a biological system.
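A toy sketch of topological (producibility-based) gap filling in the spirit described above, ignoring stoichiometry: reactions are substrate-to-product set mappings, producibility is forward propagation from the seed metabolites, and missing reactions are greedily borrowed from a repair database until the targets become producible. Meneco itself encodes this as a combinatorial optimization in Answer Set Programming and returns minimal completions; the greedy loop and the tiny network below are only illustrations.

```python
def producible(reactions, seeds):
    """Forward-propagate producibility: products become reachable once all substrates are."""
    avail = set(seeds)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if subs <= avail and not prods <= avail:
                avail |= prods
                changed = True
    return avail

# Draft network (incomplete), repair database, seeds and targets -- all illustrative.
draft = [({"A"}, {"B"}), ({"C"}, {"D"})]
repair_db = {
    "r1": ({"B"}, {"C"}),
    "r2": ({"B", "D"}, {"T"}),
    "r3": ({"X"}, {"T"}),        # useless here: X is never producible
}
seeds, targets = {"A"}, {"T"}

added = []
while not targets <= producible(draft, seeds):
    scope = producible(draft, seeds)
    # pick a repair reaction whose substrates are already producible
    candidates = [n for n, (subs, _) in repair_db.items() if n not in added and subs <= scope]
    if not candidates:
        raise RuntimeError("targets unreachable even with the repair database")
    choice = candidates[0]
    added.append(choice)
    draft.append(repair_db[choice])

print("reactions added to fill the gap:", added)   # expected: ['r1', 'r2']
```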
NASA Technical Reports Server (NTRS)
Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.
1980-01-01
The problem of selecting the optimal filtration algorithm and the optimal composition of the measurements is examined under the assumption that the precise values of the mathematical expectation and the covariance matrix of the errors are unknown. It is demonstrated that the optimal filtration algorithm may be used to refine some parameters (for example, the parameters of the gravitational field) after a preliminary determination of the orbital elements by a simpler processing method (for example, the method of least squares).
A relativistic coupled-cluster interaction potential and rovibrational constants for the xenon dimer
NASA Astrophysics Data System (ADS)
Jerabek, Paul; Smits, Odile; Pahl, Elke; Schwerdtfeger, Peter
2018-01-01
An accurate potential energy curve has been derived for the xenon dimer using state-of-the-art relativistic coupled-cluster theory up to quadruple excitations accounting for both basis set superposition and incompleteness errors. The data obtained is fitted to a computationally efficient extended Lennard-Jones potential form and to a modified Tang-Toennies potential function treating the short- and long-range part separately. The vibrational spectrum of Xe2 obtained from a numerical solution of the rovibrational Schrödinger equation and subsequently derived spectroscopic constants are in excellent agreement with experimental values. We further present solid-state calculations for xenon using a static many-body expansion up to fourth-order in the xenon interaction potential including dynamic effects within the Einstein approximation. Again we find very good agreement with the experimental (face-centred cubic) lattice constant and cohesive energy.
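A small sketch of fitting an extended Lennard-Jones form, V(r) = Σ_k c_k r^(-k) over a chosen set of even powers, to sampled potential points by linear least squares and reading off the well parameters. The sampled curve below is a plain 12-6 Lennard-Jones stand-in for the coupled-cluster data, and the Xe-like well depth, well position, and power set are illustrative assumptions.

```python
import numpy as np

# Stand-in "ab initio" points: a 12-6 Lennard-Jones curve with Xe2-like parameters.
eps, r_e = 0.0243, 4.38            # well depth (eV) and position (Angstrom) -- illustrative
r = np.linspace(3.6, 12.0, 60)
v = eps * ((r_e / r) ** 12 - 2 * (r_e / r) ** 6)

# Extended Lennard-Jones: V(r) = sum_k c_k * r**(-k) over a chosen set of even powers.
powers = (6, 8, 10, 12, 14, 16)
A = np.column_stack([r ** (-k) for k in powers])
coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)

v_fit = A @ coeffs
print("max fit residual:", np.abs(v_fit - v).max())
i_min = np.argmin(v_fit)
print(f"fitted well: r_min ~ {r[i_min]:.2f} A, depth ~ {-v_fit[i_min]:.4f}")
```

The rovibrational step described above would then feed such a fitted curve into a numerical solver of the radial Schrödinger equation, which is omitted here to keep the sketch short.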
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
NASA Astrophysics Data System (ADS)
Moore, Peter K.
2003-07-01
Solving systems of reaction-diffusion equations in three space dimensions can be prohibitively expensive both in terms of storage and CPU time. Herein, I present a new incomplete assembly procedure that is designed to reduce storage requirements. Incomplete assembly is analogous to incomplete factorization in that only a fixed number of nonzero entries are stored per row and a drop tolerance is used to discard small values. The algorithm is incorporated in a finite element method-of-lines code and tested on a set of reaction-diffusion systems. The effect of incomplete assembly on CPU time and storage and on the performance of the temporal integrator DASPK, algebraic solver GMRES and preconditioner ILUT is studied.
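A standalone sketch of the incomplete-assembly idea as described: while adding element contributions to a global sparse matrix, entries below a drop tolerance (relative to the row's diagonal) are discarded and only the p largest off-diagonal entries are kept per row. The toy overlapping "elements" and element matrix stand in for the finite element integrals; this is not the method-of-lines code referred to above.

```python
import numpy as np

def incomplete_assemble(n_nodes, elements, elem_matrix, p=2, droptol=1e-3):
    """Assemble element matrices row-wise, keeping the diagonal, dropping small
    entries, and retaining at most p off-diagonal entries per row."""
    rows = [dict() for _ in range(n_nodes)]
    for nodes in elements:
        ke = elem_matrix(len(nodes))
        for a, i in enumerate(nodes):
            for b, j in enumerate(nodes):
                rows[i][j] = rows[i].get(j, 0.0) + ke[a, b]
    # apply the drop tolerance and keep only the p largest off-diagonal entries per row
    for i, row in enumerate(rows):
        diag = row.get(i, 1.0)
        offdiag = {j: v for j, v in row.items()
                   if j != i and abs(v) > droptol * abs(diag)}
        keep = sorted(offdiag, key=lambda j: -abs(offdiag[j]))[:p]
        rows[i] = {i: diag, **{j: offdiag[j] for j in keep}}
    return rows

def elem_matrix(k):
    """Toy symmetric element matrix (assumption, stands in for FEM integrals)."""
    m = -np.ones((k, k)) / k
    np.fill_diagonal(m, 1.0)
    return m

elements = [(i, i + 1, i + 2) for i in range(0, 8, 2)]    # overlapping 3-node elements
sparse_rows = incomplete_assemble(10, elements, elem_matrix, p=2, droptol=1e-8)
print(sparse_rows[2])
```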
NASA Astrophysics Data System (ADS)
Shankar, Praveen
The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators which utilize a parametrization structure that is adapted online reduces the effect of this error between the design model and actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high performance flight vehicle such as F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error which may occur due to imperfect modeling, approximate inversion or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations including control surface failures, modeling errors and external disturbances with and without the adaptive network. A performance measure of maximum tracking error is specified for both the controllers a priori. Excellent tracking error minimization to a pre-specified level using the adaptive approximation based controller was achieved while the baseline dynamic inversion controller failed to meet this performance specification. The performance of the SORBFN based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller which is tuned to compensate for control surface failures fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN is able to achieve good tracking convergence under all error conditions.
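A heavily simplified sketch of the augmentation idea on a scalar plant (not the F-15 dynamic-inversion controller or the full self-organizing network): a fixed grid of radial basis functions approximates the unknown inversion error online, with weights adapted from the tracking error; the growing and pruning of centres described above is omitted, and all gains are assumed values.

```python
import numpy as np

dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)
x_ref = np.sin(t)                                     # reference trajectory

centers = np.linspace(-2, 2, 11)                      # fixed RBF centres (no growth/pruning here)
width = 0.5

def phi(x):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def run(adaptive):
    x, w = 0.0, np.zeros_like(centers)
    k, gamma = 5.0, 20.0                              # feedback and adaptation gains (assumed)
    err_hist = []
    for i in range(len(t)):
        e = x_ref[i] - x
        delta = 0.8 * np.tanh(2 * x)                  # unknown model error the inversion misses
        u = k * e + np.cos(t[i]) + (w @ phi(x) if adaptive else 0.0)
        x += dt * (u - delta)                         # "true" plant: xdot = u - delta
        if adaptive:
            w += dt * gamma * e * phi(x)              # gradient-style weight adaptation
        err_hist.append(abs(e))
    return np.mean(err_hist[len(t) // 2:])            # mean tracking error, second half

print("mean |e| without augmentation:", round(run(False), 4))
print("mean |e| with RBF augmentation:", round(run(True), 4))
```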
Uenishi, K; Tokiwa, M; Kato, S; Shiraki, M
2018-05-01
There were two errors in this article. 1. In the section "Ethical considerations", the registration number of the study was incorrectly given as UMIN000024492. The correct number is UMIN000020267. 2. The Acknowledgments paragraph was incomplete.
Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data
ERIC Educational Resources Information Center
Savalei, Victoria
2010-01-01
Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…
USDA-ARS?s Scientific Manuscript database
Although draft genomes are available for most agronomically important plant species, the majority are incomplete, highly fragmented, and often riddled with assembly and scaffolding errors. These assembly issues hinder advances in tool development for functional genomics and systems biology. Here we ...
NASA Astrophysics Data System (ADS)
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations made repeatedly over time; a panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the error variance components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
Zhang, Song; Cao, Jing; Ahn, Chul
2017-02-20
We investigate the estimation of intervention effect and sample size determination for experiments where subjects are supposed to contribute paired binary outcomes with some incomplete observations. We propose a hybrid estimator to appropriately account for the mixed nature of observed data: paired outcomes from those who contribute complete pairs of observations and unpaired outcomes from those who contribute either pre-intervention or post-intervention outcomes. We theoretically prove that if incomplete data are evenly distributed between the pre-intervention and post-intervention periods, the proposed estimator will always be more efficient than the traditional estimator. A numerical study shows that when the distribution of incomplete data is unbalanced, the proposed estimator will be superior when there is moderate-to-strong positive within-subject correlation. We further derive a closed-form sample size formula to help researchers determine how many subjects need to be enrolled in such studies. Simulation results suggest that the calculated sample size maintains the empirical power and type I error under various design configurations. We demonstrate the proposed method using a real application example. Copyright © 2016 John Wiley & Sons, Ltd.
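A small numerical sketch of the hybrid idea (not the paper's exact estimator or variance formulas): the pre/post difference in proportions is estimated separately from the complete pairs and from the unpaired pre-only and post-only subjects, and the two estimates are combined with inverse-variance weights. The true rates, correlation, and missingness pattern below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
p_pre, p_post, rho = 0.30, 0.45, 0.4      # true rates and within-subject correlation (assumed)

# Correlated paired binary outcomes via a shared latent normal.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
pre = (z[:, 0] < np.quantile(z[:, 0], p_pre)).astype(int)
post = (z[:, 1] < np.quantile(z[:, 1], p_post)).astype(int)

# Missingness: some subjects contribute only pre, some only post, the rest both.
group = rng.choice(["both", "pre_only", "post_only"], size=n, p=[0.6, 0.2, 0.2])

both = group == "both"
d_paired = post[both].mean() - pre[both].mean()
v_paired = np.var(post[both] - pre[both], ddof=1) / both.sum()

pre_u, post_u = pre[group == "pre_only"], post[group == "post_only"]
d_unpaired = post_u.mean() - pre_u.mean()
v_unpaired = post_u.var(ddof=1) / len(post_u) + pre_u.var(ddof=1) / len(pre_u)

w1, w2 = 1 / v_paired, 1 / v_unpaired      # inverse-variance weights
d_hybrid = (w1 * d_paired + w2 * d_unpaired) / (w1 + w2)
print(f"paired only: {d_paired:.3f}   unpaired only: {d_unpaired:.3f}   hybrid: {d_hybrid:.3f}")
```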
Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples
NASA Technical Reports Server (NTRS)
Ratnatunga, Kavan U.; Casertano, Stefano
1991-01-01
A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
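A stripped-down numerical sketch of the likelihood machinery (not the paper's algorithm, which also handles magnitude and colour selection biases and censored parallaxes): each star's observed trigonometric parallax, including negative values, is modelled as Gaussian around the parallax implied by its apparent magnitude and a true absolute magnitude drawn from N(M0, sigma_M), and (M0, sigma_M) are found by maximizing the likelihood, marginalizing over M on a grid. All sample sizes, distances, and error levels are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 300
M0_true, sigM_true, sig_plx = 5.0, 0.4, 2.0        # mag, mag, mas (illustrative)

M = rng.normal(M0_true, sigM_true, n)
d = rng.uniform(20, 200, n)                        # distances in pc (crude sampling)
m_app = M + 5 * np.log10(d) - 5                    # apparent magnitudes
plx_obs = 1000.0 / d + rng.normal(0, sig_plx, n)   # observed parallaxes in mas (can be < 0)

M_grid = np.linspace(2, 8, 121)                    # marginalization grid over the true M

def neg_loglike(theta):
    M0, sigM = theta
    if sigM <= 0:
        return np.inf
    # parallax (mas) implied by apparent magnitude m and candidate absolute magnitude M
    plx_model = 1000.0 * 10 ** (-0.2 * (m_app[:, None] - M_grid[None, :]) - 1)
    like_plx = np.exp(-0.5 * ((plx_obs[:, None] - plx_model) / sig_plx) ** 2)
    prior_M = np.exp(-0.5 * ((M_grid - M0) / sigM) ** 2) / sigM
    per_star = (like_plx * prior_M).sum(axis=1)    # grid approximation of the integral over M
    return -np.sum(np.log(per_star + 1e-300))

fit = minimize(neg_loglike, x0=[4.0, 1.0], method="Nelder-Mead")
print("fitted M0, sigma_M:", np.round(fit.x, 2), " (true:", M0_true, sigM_true, ")")
```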
Hernández-Romieu, Alfonso C.; Siegler, Aaron; Sullivan, Patrick S.; Crosby, Richard; Rosenberg, Eli S.
2015-01-01
Objectives Compare the occurrence of risk-inducing condom events (condom failures and incomplete use) and the frequency of their antecedents (condom errors, fit/feel problems, and erection problems) between Black and White MSM, and determine the associations between risk-inducing condom events and their antecedents. Methods We studied cross-sectional data of 475 MSM who indicated using a condom as an insertive partner in the previous 6 months enrolled in a cohort study in Atlanta, GA. Results Nearly 40% of Black MSM reported breakage or incomplete use, and they were more likely to report breakage, early removal, and delayed application of a condom than White MSM. Only 31% and 54% of MSM reported correct condom use and suboptimal fit/feel of a condom respectively. The use of oil-based lubricants and suboptimal fit/feel were associated with higher odds of reporting breakage (P = 0.009). Suboptimal fit/feel was also associated with higher odds of incomplete use of condoms (P <0.0001). Conclusions Incomplete use of condoms and condom failures were especially common among Black MSM. Our findings indicate that condoms likely offered them less protection against HIV/STI when compared to White MSM. More interventions are needed, particularly addressing the use of oil-based lubricants and suboptimal fit/feel of condoms. PMID:25080511
Emergency department discharge prescription errors in an academic medical center
Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.
2017-01-01
This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061
Smooth extrapolation of unknown anatomy via statistical shape models
NASA Astrophysics Data System (ADS)
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
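A sketch of the Thin Plate Spline correction step described above, using scipy's RBF interpolator with a thin-plate-spline kernel on synthetic 3-D points: the displacements between the shape-model estimate and the known patient vertices are learned on the overlap region and then applied to the estimated (unknown) region, giving a smooth transition. The point sets and the smooth "model error" are random stand-ins for actual surface meshes, not the study's CT data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Vertices where the true patient surface is known, and the shape-model estimate there.
known_true = rng.uniform(0, 100, size=(400, 3))           # mm
ssm_known = known_true + 2.0 * np.sin(known_true / 15.0)  # model error: smooth, a few mm

# Vertices of the model estimate in the region where the patient surface is unknown.
ssm_unknown = rng.uniform(0, 100, size=(150, 3))

# Train a TPS on the displacement field (estimate -> true) over the known region...
displacement = known_true - ssm_known
tps = RBFInterpolator(ssm_known, displacement, kernel="thin_plate_spline", smoothing=0.0)

# ...and apply it to warp the estimated unknown region so it meets the known surface smoothly.
warped_unknown = ssm_unknown + tps(ssm_unknown)

residual_on_known = np.linalg.norm(tps(ssm_known) - displacement, axis=1)
print("max residual at known vertices (should be ~0 mm):", residual_on_known.max().round(6))
```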
A Comparison of Item-Level and Scale-Level Multiple Imputation for Questionnaire Batteries
ERIC Educational Resources Information Center
Gottschall, Amanda C.; West, Stephen G.; Enders, Craig K.
2012-01-01
Behavioral science researchers routinely use scale scores that sum or average a set of questionnaire items to address their substantive questions. A researcher applying multiple imputation to incomplete questionnaire data can either impute the incomplete items prior to computing scale scores or impute the scale scores directly from other scale…
Non-input analysis for incomplete trapping irreversible tracer with PET.
Ohya, Tomoyuki; Kikuchi, Tatsuya; Fukumura, Toshimitsu; Zhang, Ming-Rong; Irie, Toshiaki
2013-07-01
When using metabolic trapping type tracers, the tracers are not always trapped in the target tissue; i.e., some are completely trapped in the target, but others can be eliminated from the target tissue at a measurable rate. The tracers that can be eliminated are termed 'incomplete trapping irreversible tracers'. These incomplete trapping irreversible tracers may be clinically useful when the tracer β-value, the ratio of the tracer (metabolite) elimination rate to the tracer efflux rate, is under approximately 0.1. In this study, we propose a non-input analysis for incomplete trapping irreversible tracers based on the shape analysis (Shape), a non-input analysis used for irreversible tracers. A Monte Carlo simulation study based on experimental monkey data with two actual PET tracers (a complete trapping irreversible tracer [(11)C]MP4A and an incomplete trapping irreversible tracer [(18)F]FEP-4MA) was performed to examine the effects of the environmental error and the tracer elimination rate on the estimation of the k3-parameter (corresponds to metabolic rate) using Shape (original) and modified Shape (M-Shape) analysis. The simulation results were also compared with the experimental results obtained with the two PET tracers. When the tracer β-value was over 0.03, the M-Shape method was superior to the Shape method for the estimation of the k3-parameter. The simulation results were also in reasonable agreement with the experimental ones. M-Shape can be used as the non-input analysis of incomplete trapping irreversible tracers for PET study. Copyright © 2013 Elsevier Inc. All rights reserved.
Pediatric Nurses' Perceptions of Medication Safety and Medication Error: A Mixed Methods Study.
Alomari, Albara; Wilson, Val; Solman, Annette; Bajorek, Beata; Tinsley, Patricia
2018-06-01
This study aims to outline the current workplace culture of medication practice in a pediatric medical ward. The objective is to explore the perceptions of nurses in a pediatric clinical setting as to why medication administration errors occur. As nurses have a central role in the medication process, it is essential to explore nurses' perceptions of the factors influencing the medication process. Without this understanding, it is difficult to develop effective prevention strategies aimed at reducing medication administration errors. Previous studies were limited to exploring a single and specific aspect of medication safety. The methods used in these studies were limited to survey designs, which may lead to incomplete or inadequate information being provided. This study is phase 1 of an action research project. Data collection included direct observation of nurses during medication preparation and administration, an audit based on the medication policy and guidelines, and focus groups with nursing staff. A thematic analysis was undertaken by each author independently to analyze the observation notes and focus group transcripts. Simple descriptive statistics were used to analyze the audit data. The study was conducted in a specialized pediatric medical ward. Four key themes were identified from the combined quantitative and qualitative data: (1) understanding medication errors, (2) the busy-ness of nurses, (3) the physical environment, and (4) compliance with medication policy and practice guidelines. Workload, frequent interruptions to process, poor physical environment design, lack of preparation space, and impractical medication policies are identified as barriers to safe medication practice. Overcoming these barriers requires organizations to review medication process policies and engage nurses more in medication safety research and in designing clinical guidelines for their own practice.
On the Nature of Small Planets around the Coolest Kepler Stars
NASA Astrophysics Data System (ADS)
Gaidos, Eric; Fischer, Debra A.; Mann, Andrew W.; Lépine, Sébastien
2012-02-01
We constrain the densities of Earth- to Neptune-size planets around very cool (Te = 3660-4660 K) Kepler stars by comparing 1202 Keck/HIRES radial velocity measurements of 150 nearby stars to a model based on Kepler candidate planet radii and a power-law mass-radius relation. Our analysis is based on the presumption that the planet populations around the two sets of stars are the same. The model can reproduce the observed distribution of radial velocity variation over a range of parameter values, but, for the expected level of Doppler systematic error, the highest Kolmogorov-Smirnov probabilities occur for a power-law index α ≈ 4, indicating that rocky-metal planets dominate the planet population in this size range. A single population of gas-rich, low-density planets with α = 2 is ruled out unless our Doppler errors are >=5 m s-1, i.e., much larger than expected based on observations and stellar chromospheric emission. If small planets are a mix of γ rocky planets (α = 3.85) and 1 - γ gas-rich planets (α = 2), then γ > 0.5 unless Doppler errors are >=4 m s-1. Our comparison also suggests that Kepler's detection efficiency relative to ideal calculations is less than unity. One possible source of incompleteness is target stars that are misclassified subgiants or giants, for which the transits of small planets would be impossible to detect. Our results are robust to systematic effects, and plausible errors in the estimated radii of Kepler stars have only moderate impact. Some data were obtained at the W. M. Keck Observatory, which is operated by the California Institute of Technology, the University of California, and NASA, and made possible by the financial support of the W. M. Keck Foundation.
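A rough sketch of the comparison logic (not the paper's full model, which includes the Kepler detection function and per-star properties): candidate radii are converted to masses with a power-law mass-radius relation, masses to radial-velocity semi-amplitudes, Doppler noise is added, and the simulated RV scatter distribution is compared with an "observed" one via a Kolmogorov-Smirnov test. The stellar mass, period range, noise level, and phase/inclination averaging are fixed, illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def rv_semi_amplitude(m_planet_earth, period_days, m_star_solar=0.6):
    """Circular-orbit, edge-on RV semi-amplitude in m/s (standard formula; ~0.09 m/s
    for an Earth-mass planet on a 1-yr orbit around a solar-mass star)."""
    return 0.0895 * m_planet_earth * m_star_solar ** (-2 / 3) * (period_days / 365.25) ** (-1 / 3)

def simulate_rv_scatter(alpha, n=150, doppler_err=2.0):
    radii = rng.uniform(1.0, 4.0, n)                 # Earth- to Neptune-size candidates
    masses = radii ** alpha                          # power-law mass-radius relation (Earth units)
    periods = rng.uniform(5.0, 50.0, n)
    k = rv_semi_amplitude(masses, periods)
    # crude stand-in for phase/inclination averaging, plus Doppler noise in quadrature
    return np.sqrt((k * rng.rayleigh(0.7, n)) ** 2 + rng.normal(0, doppler_err, n) ** 2)

observed = simulate_rv_scatter(alpha=3.85)           # pretend these are the measured scatters
for alpha in (2.0, 3.0, 3.85):
    stat, p = ks_2samp(observed, simulate_rv_scatter(alpha))
    print(f"alpha = {alpha:4.2f}: KS statistic = {stat:.3f}, p = {p:.3f}")
```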
Property-Based Monitoring of Analog and Mixed-Signal Systems
NASA Astrophysics Data System (ADS)
Havlicek, John; Little, Scott; Maler, Oded; Nickovic, Dejan
In the recent past, there has been a steady growth of the market for consumer embedded devices such as cell phones, GPS and portable multimedia systems. In embedded systems, digital, analog and software components are combined on a single chip, resulting in increasingly complex designs that introduce richer functionality on smaller devices. As a consequence, the potential insertion of errors into a design becomes higher, yielding an increasing need for automated analog and mixed-signal validation tools. In the purely digital setting, formal verification based on properties expressed in industrial specification languages such as PSL and SVA is nowadays successfully integrated in the design flow. On the other hand, the validation of analog and mixed-signal systems still largely depends on simulation-based, ad-hoc methods. In this tutorial, we consider some ingredients of the standard verification methodology that can be successfully exported from digital to analog and mixed-signal setting, in particular property-based monitoring techniques. Property-based monitoring is a lighter approach to the formal verification, where the system is seen as a "black-box" that generates sets of traces, whose correctness is checked against a property, that is its high-level specification. Although incomplete, monitoring is effectively used to catch faults in systems, without guaranteeing their full correctness.
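A minimal flavour of offline property-based monitoring over a sampled analog trace (not PSL/SVA or any particular tool): a bounded-response property, "whenever the signal exceeds a threshold it must return below it within a given time window", is checked against a simulated trace and the violating time points are reported. The threshold, window, and trace are illustrative.

```python
import numpy as np

def monitor_bounded_response(t, x, threshold, window):
    """Return times where x rises above threshold and fails to drop back within `window`."""
    violations = []
    above = x > threshold
    for i in np.flatnonzero(above & ~np.roll(above, 1)):      # rising crossings
        horizon = t <= t[i] + window
        segment = above[i:][horizon[i:]]
        if segment.all():                                      # never came back down in time
            violations.append(t[i])
    return violations

t = np.linspace(0.0, 10.0, 2001)
x = 1.0 + 0.4 * np.sin(2 * np.pi * 0.5 * t) + 0.8 * (t > 7.0)   # drifts high after t = 7
print("violations of 'x <= 1.3 restored within 1.0 s':",
      monitor_bounded_response(t, x, threshold=1.3, window=1.0))
```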
Student Use of Physics to Make Sense of Incomplete but Functional VPython Programs in a Lab Setting
NASA Astrophysics Data System (ADS)
Weatherford, Shawn A.
2011-12-01
Computational activities in Matter & Interactions, an introductory calculus-based physics course, have the instructional goal of providing students with the experience of applying the same set of a small number of fundamental principles to model a wide range of physical systems. However, there are significant instructional challenges for students to build computer programs under limited time constraints, especially for students who are unfamiliar with programming languages and concepts. Prior attempts at designing effective computational activities were successful at having students ultimately build working VPython programs under the tutelage of experienced teaching assistants in a studio lab setting. A pilot study revealed that students who completed these computational activities had significant difficulty repeating the exact same tasks and, further, had difficulty predicting the animation that would be produced by the example program after interpreting the program code. This study explores the interpretation and prediction tasks as part of an instructional sequence where students are asked to read and comprehend a functional, but incomplete program. Rather than asking students to begin their computational tasks with modifying program code, we explicitly ask students to interpret an existing program that is missing key lines of code. The missing lines of code correspond to the algebraic form of fundamental physics principles or the calculation of forces which would exist between analogous physical objects in the natural world. Students are then asked to draw a prediction of what they would see in the simulation produced by the VPython program and ultimately run the program to evaluate the students' prediction. This study specifically looks at how the participants use physics while interpreting the program code and creating a whiteboard prediction. This study also examines how students evaluate their understanding of the program and modification goals at the beginning of the modification task. While working in groups over the course of a semester, study participants were recorded while they completed three activities using these incomplete programs. Analysis of the video data showed that study participants had little difficulty interpreting physics quantities, generating a prediction, or determining how to modify the incomplete program. Participants did not base their predictions solely on the information in the incomplete program. When participants tried to predict the motion of the objects in the simulation, many turned to their knowledge of how the system would evolve if it represented an analogous real-world physical system. For example, participants attributed the real-world behavior of springs to helix objects even though the program did not include calculations for the spring to exert a force when stretched. Participants rarely interpreted lines of code in the computational loop during the first computational activity, but this changed during later computational activities, with most participants using their physics knowledge to interpret the computational loop. Computational activities in the Matter & Interactions curriculum were revised in light of these findings to include an instructional sequence of tasks to build a comprehension of the example program. The modified activities also ask students to create an additional whiteboard prediction for the time-evolution of the real-world phenomena which the example program will eventually model.
This thesis shows how comprehension tasks identified by Palincsar and Brown (1984) as effective in improving reading comprehension are also effective in helping students apply their physics knowledge to interpret a computer program which attempts to model a real-world phenomenon and identify errors in their understanding of the use, or omission, of fundamental physics principles in a computational model.
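To make the instructional design concrete, the following is a minimal sketch, not taken from the thesis, of the kind of incomplete VPython program described above: the iterative update loop is present, but the line computing the spring force is left out, so the helix exerts no force even though students tend to predict that it will. All object names and numerical values are illustrative.

```python
# Minimal sketch of an "incomplete" Matter & Interactions style VPython program.
# The physics line students are asked to supply is shown commented out.
from vpython import sphere, helix, vector, rate, mag, norm

ball = sphere(pos=vector(0.3, 0, 0), radius=0.03, make_trail=True)
spring = helix(pos=vector(0, 0, 0), axis=ball.pos, radius=0.02)

m = 0.5            # mass of the ball (kg), illustrative value
k = 10.0           # spring stiffness (N/m), illustrative value
L0 = 0.2           # relaxed spring length (m), illustrative value
p = m * vector(0, 0.1, 0)   # initial momentum of the ball

dt = 0.01
t = 0.0
while t < 10:
    rate(100)
    F = vector(0, 0, 0)   # MISSING PHYSICS: the spring force has been omitted, e.g.
    # F = -k * (mag(ball.pos) - L0) * norm(ball.pos)
    p = p + F * dt                      # momentum principle: dp = F_net * dt
    ball.pos = ball.pos + (p / m) * dt  # position update from momentum
    spring.axis = ball.pos              # keep the helix attached to the ball
    t = t + dt
```

With the force line absent, the animation shows the ball drifting at constant velocity, which is exactly the mismatch with students' real-world expectations for springs that the study documents.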
Long-term care physical environments--effect on medication errors.
Mahmood, Atiya; Chaudhury, Habib; Gaumont, Alana; Rust, Tiana
2012-01-01
Few studies examine physical environmental factors and their effects on staff health, effectiveness, work errors and job satisfaction. To address this gap, this study aims to examine environmental features and their role in medication and nursing errors in long-term care facilities. A mixed methodological strategy was used. Data were collected via focus groups, observing medication preparation and administration, and a nursing staff survey in four facilities. The paper reveals that, during the medication preparation phase, physical design, such as medication room layout, is a major source of potential errors. During medication administration, social environment is more likely to contribute to errors. Interruptions, noise and staff shortages were particular problems. The survey's relatively small sample size needs to be considered when interpreting the findings. Also, actual error data could not be included as existing records were incomplete. The study offers several relatively low-cost recommendations to help staff reduce medication errors. Physical environmental factors are important when addressing measures to reduce errors. The findings of this study underscore the fact that the physical environment's influence on the possibility of medication errors is often neglected. This study contributes to the scarce empirical literature examining the relationship between physical design and patient safety.
Articulation in schoolchildren and adults with neurofibromatosis type 1.
Cosyns, Marjan; Mortier, Geert; Janssens, Sandra; Bogaert, Famke; D'Hondt, Stephanie; Van Borsel, John
2012-01-01
Several authors mentioned the occurrence of articulation problems in the neurofibromatosis type 1 (NF1) population. However, few studies have undertaken a detailed analysis of the articulation skills of NF1 patients, especially in schoolchildren and adults. Therefore, the aim of the present study was to examine in depth the articulation skills of NF1 schoolchildren and adults, both phonetically and phonologically. Speech samples were collected from 43 Flemish NF1 patients (14 children and 29 adults), ranging in age between 7 and 53 years, using a standardized speech test in which all Flemish single speech sounds and most clusters occur in all their permissible syllable positions. Analyses concentrated on consonants only and included a phonetic inventory, a phonetic, and a phonological analysis. It was shown that phonetic inventories were incomplete in 16.28% (7/43) of participants, in which totally correct realizations of the sibilants /ʃ/ and/or /ʒ/ were missing. Phonetic analysis revealed that distortions were the predominant phonetic error type. Sigmatismus stridens, multiple ad- or interdentality, and, in children, rhotacismus non vibrans were frequently observed. From a phonological perspective, the most common error types were substitution and syllable structure errors. Particularly, devoicing, cluster simplification, and, in children, deletion of the final consonant of words were perceived. Further, it was demonstrated that significantly more men than women presented with an incomplete phonetic inventory, and that girls tended to display more articulation errors than boys. Additionally, children exhibited significantly more articulation errors than adults, suggesting that although the articulation skills of NF1 patients evolve positively with age, articulation problems do not resolve completely from childhood to adulthood. As such, the articulation errors made by NF1 adults may be regarded as residual articulation disorders. It can be concluded that the speech of NF1 patients is characterized by mild articulation disorders at an age where this is no longer expected. Readers will be able to describe neurofibromatosis type 1 (NF1) and explain the articulation errors displayed by schoolchildren and adults with this genetic syndrome. © 2011 Elsevier Inc. All rights reserved.
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L
2012-09-01
Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4%-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity.
Byrne, M D; Jordan, T R; Welle, T
2013-01-01
The objective of this study was to investigate and improve the use of automated data collection procedures for nursing research and quality assurance. A descriptive, correlational study analyzed 44 orthopedic surgical patients who were part of an evidence-based practice (EBP) project examining post-operative oxygen therapy at a Midwestern hospital. The automation work attempted to replicate a manually-collected data set from the EBP project. Automation was successful in replicating data collection for study data elements that were available in the clinical data repository. The automation procedures identified 32 "false negative" patients who met the inclusion criteria described in the EBP project but were not selected during the manual data collection. Automating data collection for certain data elements, such as oxygen saturation, proved challenging because of workflow and practice variations and the reliance on disparate sources for data abstraction. Automation also revealed instances of human error including computational and transcription errors as well as incomplete selection of eligible patients. Automated data collection for analysis of nursing-specific phenomenon is potentially superior to manual data collection methods. Creation of automated reports and analysis may require initial up-front investment with collaboration between clinicians, researchers and information technology specialists who can manage the ambiguities and challenges of research and quality assurance work in healthcare.
Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy
Hassan, Afifa Afifi
1981-01-01
A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 °C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)
ERIC Educational Resources Information Center
Gold, Michael S.; Bentler, Peter M.; Kim, Kevin H.
2003-01-01
This article describes a Monte Carlo study of 2 methods for treating incomplete nonnormal data. Skewed, kurtotic data sets conforming to a single structured model, but varying in sample size, percentage of data missing, and missing-data mechanism, were produced. An asymptotically distribution-free available-case (ADFAC) method and structured-model…
NASA Technical Reports Server (NTRS)
Snow, Frank; Harman, Richard; Garrick, Joseph
1988-01-01
The Gamma Ray Observatory (GRO) spacecraft needs highly accurate attitude knowledge to achieve its mission objectives. Utilizing the fixed-head star trackers (FHSTs) for observations and gyroscopes for attitude propagation, the discrete Kalman filter processes the attitude data to obtain an onboard accuracy of 86 arc seconds (3 sigma). A combination of linear analysis and simulations using the GRO Software Simulator (GROSS) is employed to investigate the Kalman filter for stability and for the effects of corrupted observations (misalignment, noise), incomplete dynamic modeling, and nonlinear errors on the filter. In the simulations, the on-board attitude is compared with the true attitude, the sensitivity of attitude error to model errors is graphed, and a statistical analysis is performed on the residuals of the Kalman filter. In this paper, the modeling and sensor errors that degrade the Kalman filter solution beyond mission requirements are studied, and methods are offered to identify the source of these errors.
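The filtering step described above lends itself to a compact illustration. Below is a hedged sketch (not the GRO flight software) of one cycle of a discrete Kalman filter: a propagation step driven by a dynamics model followed by a measurement update. All matrices and the two-state example are illustrative placeholders.

```python
import numpy as np

def kalman_step(x, P, F, Q, z, H, R):
    """One discrete Kalman filter cycle: propagate the state with the
    dynamics model, then update it with a (possibly noisy) observation."""
    # Predict: propagate state estimate and covariance with dynamics F and process noise Q
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement z via the Kalman gain
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    residual = z - H @ x_pred                # innovation (observation residual)
    x_new = x_pred + K @ residual
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, residual

# Illustrative 2-state example (say, an attitude error angle and a gyro bias)
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # placeholder dynamics
Q = 1e-6 * np.eye(2)                      # placeholder process noise
H = np.array([[1.0, 0.0]])                # the star tracker observes the angle only
R = np.array([[1e-4]])                    # placeholder measurement noise
x, P = np.zeros(2), np.eye(2)
x, P, r = kalman_step(x, P, F, Q, np.array([0.01]), H, R)
```

Collecting the innovation `residual` over many cycles supports the kind of statistical analysis of filter residuals mentioned in the abstract.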
Haegerstrom-Portnoy, G; Schneck, M E; Verdon, W A; Hewlett, S E
1996-07-01
Visual acuity, refractive error, and binocular status were determined in 43 autosomal recessive (AR) and 15 X-linked (XL) congenital achromats. The achromats were classified by color matching and spectral sensitivity data. Large interindividual variation in refractive error and visual acuity was present within each achromat group (complete AR, incomplete AR, and XL). However, the number of individuals with significant interocular acuity differences is very small. Most XLs are myopic; ARs show a wide range of refractive error from high myopia to high hyperopia. Acuity of the AR and XL groups was very similar. With-the-rule astigmatism of large amount is very common in achromats, particularly ARs. There is a close association between strabismus and interocular acuity differences in the ARs, with the fixating eye having better than average acuity. The large overlap of acuity and refractive error of XL and AR achromats suggests that these measures are less useful for differential diagnosis than generally indicated by the clinical literature.
Impact of Orbit Position Errors on Future Satellite Gravity Models
NASA Astrophysics Data System (ADS)
Encarnacao, J.; Ditmar, P.; Klees, R.
2015-12-01
We present the results of a study of the impact of orbit positioning noise (OPN), caused by incomplete knowledge of the Earth's gravity field, on gravity models estimated from satellite gravity data. The OPN is simulated as the difference between two sets of orbits integrated on the basis of different static gravity field models. The OPN is propagated into low-low satellite-to-satellite tracking (ll-SST) data, here computed as averaged inter-satellite accelerations projected onto the line-of-sight (LoS) vector between the two satellites. We consider the cartwheel formation (CF), pendulum formation (PF), and trailing formation (TF), as they produce different dominant orientations of the LoS vector. Given the polar orbits of the formations, the LoS vector is mainly aligned with the North-South direction in the TF, with the East-West direction in the PF (i.e. no along-track offset), and contains a radial component in the CF. An analytical analysis predicts that the CF suffers from a very high sensitivity to the OPN. This is a fundamental characteristic of this formation, which results from the amplification of this noise by the diagonal components of the gravity gradient tensor (defined in the local frame) during the propagation into satellite gravity data. In contrast, the OPN in the data from the PF and TF is only scaled by off-diagonal gravity gradient components, which are much smaller than the diagonal tensor components. A numerical analysis shows that the effect of the OPN is similar in the data collected by the TF and the PF. The amplification of the OPN errors for the CF leads to errors in the gravity model that are three orders of magnitude larger than those in the case of the PF. This means that any implementation of the CF will most likely produce data of relatively low quality, since this error dominates the error budget, especially at low frequencies. This is particularly critical for future gravimetric missions that will be equipped with highly accurate ranging sensors.
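As a small, hedged sketch of the observable described above (not the authors' simulation code), the ll-SST data type can be formed by projecting the relative acceleration of the satellite pair onto the line-of-sight unit vector; the positions and accelerations below are arbitrary placeholders.

```python
import numpy as np

def los_projected_acceleration(r1, r2, a1, a2):
    """Project the relative acceleration of a satellite pair onto the
    line-of-sight (LoS) unit vector pointing from satellite 1 to satellite 2."""
    los = (r2 - r1) / np.linalg.norm(r2 - r1)   # LoS unit vector
    return np.dot(a2 - a1, los)                  # scalar ll-SST observable

# Placeholder positions (m) and accelerations (m/s^2) for a trailing-type pair
r1 = np.array([7.0e6, 0.0, 0.0]); r2 = np.array([7.0e6, 2.0e5, 0.0])
a1 = np.array([-8.1, 0.0, 0.0]);  a2 = np.array([-8.1, -2.3e-4, 0.0])
print(los_projected_acceleration(r1, r2, a1, a2))
```

Orbit positioning noise enters through the position vectors, which is why a formation whose LoS picks up the large diagonal gravity-gradient components (the radial component in the cartwheel) amplifies it so strongly.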
Tudor Car, Lorainne; Papachristou, Nikolaos; Gallagher, Joseph; Samra, Rajvinder; Wazny, Kerri; El-Khatib, Mona; Bull, Adrian; Majeed, Azeem; Aylin, Paul; Atun, Rifat; Rudan, Igor; Car, Josip; Bell, Helen; Vincent, Charles; Franklin, Bryony Dean
2016-11-16
Medication error is a frequent, harmful and costly patient safety incident. Research to date has mostly focused on medication errors in hospitals. In this study, we aimed to identify the main causes of, and solutions to, medication error in primary care. We used a novel priority-setting method for identifying and ranking patient safety problems and solutions called PRIORITIZE. We invited 500 North West London primary care clinicians to complete an open-ended questionnaire to identify three main problems and solutions relating to medication error in primary care. 113 clinicians submitted responses, which we thematically synthesized into a composite list of 48 distinct problems and 45 solutions. A group of 57 clinicians randomly selected from the initial cohort scored these and an overall ranking was derived. The agreement between the clinicians' scores was presented using the average expert agreement (AEA). The study was conducted between September 2013 and November 2014. The top three problems were incomplete reconciliation of medication during patient 'hand-overs', inadequate patient education about their medication use and poor discharge summaries. The highest ranked solutions included development of a standardized discharge summary template, reduction of unnecessary prescribing, and minimisation of polypharmacy. Overall, better communication between the healthcare provider and patient, quality assurance approaches during medication prescribing and monitoring, and patient education on how to use their medication were considered the top priorities. The highest ranked suggestions received the strongest agreement among the clinicians, i.e. the highest AEA score. Clinicians identified a range of suggestions for better medication management, quality assurance procedures and patient education. According to clinicians, medication errors can be largely prevented with feasible and affordable interventions. PRIORITIZE is a new, convenient, systematic, and replicable method, and merits further exploration with a view to becoming a part of a routine preventative patient safety monitoring mechanism.
[Errors in prescriptions and their preparation at the outpatient pharmacy of a regional hospital].
Alvarado A, Carolina; Ossa G, Ximena; Bustos M, Luis
2017-01-01
Adverse effects of medications are an important cause of morbidity and hospital admissions. Errors in the prescription or preparation of medications by pharmacy personnel are a factor that may influence the occurrence of these adverse effects. Aim: To assess the frequency and type of errors in prescriptions and in their preparation at the pharmacy unit of a regional public hospital. Prescriptions received by ambulatory patients and by those being discharged from the hospital were reviewed using a 12-item checklist. The preparation of such prescriptions at the pharmacy unit was also reviewed using a seven-item checklist. Seventy two percent of prescriptions had at least one error. The most common mistake was the impossibility of determining the concentration of the prescribed drug. Prescriptions for patients being discharged from the hospital had the highest number of errors. When a prescription had more than two drugs, the risk of error increased 2.4 times. Twenty four percent of prescription preparations had at least one error. The most common mistake was the labeling of drugs with incomplete medical indications. When a preparation included more than three drugs, the risk of preparation error increased 1.8 times. Prescriptions and preparations of medications delivered to patients had frequent errors. The most important risk factor for errors was the number of drugs prescribed.
Rough Set Approach to Incomplete Multiscale Information System
Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu
2014-01-01
The multiscale information system is a new knowledge representation system for expressing knowledge at different levels of granulation. In this paper, by considering unknown values, which can be seen everywhere in real-world applications, the incomplete multiscale information system is first investigated. The descriptor technique is employed to construct rough sets at different scales for analyzing hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, reduct descriptors are formulated to simplify the decision rules that can be derived from different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852
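The following is a minimal, hedged sketch of rough-set approximations for an incomplete table, using a tolerance relation as a simple stand-in for the paper's descriptor-based construction: two objects are indiscernible if they agree on every attribute where both values are known, and the lower and upper approximations of a decision class follow from the tolerance classes. The toy table is invented.

```python
# Condition attributes only; '*' marks an unknown value (illustrative data).
cond = {
    1: ['high', '*'],
    2: ['high', 'low'],
    3: ['low',  'low'],
    4: ['*',    'high'],
}
decision = {1: 'yes', 2: 'yes', 3: 'no', 4: 'no'}

def tolerant(x, y):
    """Objects are tolerant if every known condition-attribute value agrees."""
    return all(a == b or a == '*' or b == '*' for a, b in zip(cond[x], cond[y]))

def tolerance_class(x):
    return {y for y in cond if tolerant(x, y)}

def approximations(target):
    """Lower and upper approximations of a set of objects."""
    lower = {x for x in cond if tolerance_class(x) <= target}
    upper = {x for x in cond if tolerance_class(x) & target}
    return lower, upper

yes_objects = {x for x, d in decision.items() if d == 'yes'}
print(approximations(yes_objects))   # objects certainly / possibly in class 'yes'
```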
NASA Astrophysics Data System (ADS)
Lee, Kyunghoon
To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as the factor-loading matrix, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis qualitatively and quantitatively scrutinizes the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. On the qualitative side, the theoretical relationship between POD and PPCA is transparent: the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is less transparent because the two methods approximate missing data in distinct ways, owing to their antithetical formulation perspectives: gappy POD solves a least-squares problem, whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. The unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that ultimately characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. The norm, which reflects the curve-fitting method, is found to affect estimation error reduction more significantly than the basis for two example test data sets: one missing data at only a single snapshot and the other missing data across all snapshots. From a numerical performance standpoint, the EM-PCA is computationally less efficient than POD for intact data, since it suffers from the slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other, because the computational cost of the coefficient evaluation depends on the choice of norm. For instance, gappy POD demands computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and that of the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended for selecting between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots, and the EM-PCA for an incomplete data set involving many data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data.
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the Numerical Propulsion System Simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit considerably good agreement with those obtained directly from NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost-effective than gappy POD at repairing spurious PIV measurements obtained from acoustically excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by a factor of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data scattered over the entire data set. (Abstract shortened by UMI.)
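To make the algorithm concrete, here is a compact, hedged sketch of the EM iteration for PPCA (the EM-PCA referred to above), written from the standard Tipping-Bishop update equations rather than the thesis code, and restricted to complete data for brevity.

```python
import numpy as np

def em_ppca(X, q, n_iter=100, sigma2=1.0, seed=0):
    """EM for probabilistic PCA on complete data X (n samples x d features).
    Returns the factor-loading matrix W (d x q) and the noise variance sigma2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xc = X - X.mean(axis=0)                      # work with centered data
    W = rng.standard_normal((d, q))
    for _ in range(n_iter):
        # E-step: posterior moments of the latent variables z
        M = W.T @ W + sigma2 * np.eye(q)
        Minv = np.linalg.inv(M)
        Ez = Xc @ W @ Minv                       # n x q, E[z_n]
        Ezz = n * sigma2 * Minv + Ez.T @ Ez      # sum_n E[z_n z_n^T]
        # M-step: update factor loadings and noise variance
        W = Xc.T @ Ez @ np.linalg.inv(Ezz)
        resid = np.sum(Xc ** 2) - 2 * np.sum((Xc @ W) * Ez) + np.trace(Ezz @ W.T @ W)
        sigma2 = resid / (n * d)
    return W, sigma2

# Tiny usage example with synthetic data
X = np.random.default_rng(1).standard_normal((200, 5))
W, s2 = em_ppca(X, q=2)
```

The span of the converged W coincides with the leading POD subspace, which is the qualitative POD/PPCA relationship the abstract refers to; handling gappy snapshots additionally requires taking the E-step expectation over the missing entries.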
Fixing Stellarator Magnetic Surfaces
NASA Astrophysics Data System (ADS)
Hanson, James D.
1999-11-01
Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
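A hedged linear-algebra sketch of the correction step (not the author's code): if a sensitivity matrix A maps the available correction knobs (for example, trim-coil currents) to island widths at the targeted rational surfaces, and b holds the island widths driven by the error field, a truncated-SVD pseudoinverse yields the minimal-norm correction that cancels them. All numbers are illustrative.

```python
import numpy as np

def minimal_correction(A, b, sv_cutoff=1e-8):
    """Smallest-norm knob settings c such that A @ c ~= -b, via truncated SVD.
    A: (n_surfaces x n_knobs) sensitivity of island width to each knob.
    b: (n_surfaces,) island widths driven by the error field."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > sv_cutoff * s[0]            # discard ill-conditioned directions
    s_inv = np.where(keep, 1.0 / s, 0.0)
    return -(Vt.T * s_inv) @ (U.T @ b)     # minimum-norm least-squares solution

# Illustrative numbers: 2 resonant surfaces, 4 correction knobs
A = np.array([[0.8, 0.1, -0.3, 0.0],
              [0.2, 0.7,  0.1, -0.4]])
b = np.array([1.5e-3, -0.9e-3])            # error-field island widths (arbitrary units)
c = minimal_correction(A, b)
print(c, A @ c + b)                        # residual island widths ~ 0
```

The singular-value cutoff plays the role of the SVD filtering mentioned above: directions that barely affect the targeted islands are excluded, so the correction stays as small as possible.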
ERIC Educational Resources Information Center
To, Son Thanh
2012-01-01
"Belief state" refers to the set of possible world states satisfying the agent's (usually imperfect) knowledge. The use of belief state allows the agent to reason about the world with incomplete information, by considering each possible state in the belief state individually, in the same way as if it had perfect knowledge. However, the…
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2014-01-01
This research note contributes to the discussion of methods that can be used to identify useful auxiliary variables for analyses of incomplete data sets. A latent variable approach is discussed, which is helpful in finding auxiliary variables with the property that if included in subsequent maximum likelihood analyses they may enhance considerably…
ERIC Educational Resources Information Center
Alarcon, Irma V.
2011-01-01
The present study explores knowledge of Spanish grammatical gender in both comprehension and production by heritage language speakers and second language (L2) learners, with native Spanish speakers as a baseline. Most L2 research has tended to interpret morphosyntactic variability in interlanguage production, such as errors in gender agreement, as…
Harnessing Sparse and Low-Dimensional Structures for Robust Clustering of Imagery Data
ERIC Educational Resources Information Center
Rao, Shankar Ramamohan
2009-01-01
We propose a robust framework for clustering data. In practice, data obtained from real measurement devices can be incomplete, corrupted by gross errors, or not correspond to any assumed model. We show that, by properly harnessing the intrinsic low-dimensional structure of the data, these kinds of practical problems can be dealt with in a uniform…
Code of Federal Regulations, 2013 CFR
2013-07-01
... that identifies the premanufacture notice number assigned to the new chemical substance and date on which the review period begins. The review period will begin on the date the notice is received by the...). (ix) The submitter does not submit data which the submitter believes show that the chemical substance...
Code of Federal Regulations, 2011 CFR
2011-07-01
... that identifies the premanufacture notice number assigned to the new chemical substance and date on which the review period begins. The review period will begin on the date the notice is received by the...). (ix) The submitter does not submit data which the submitter believes show that the chemical substance...
ERIC Educational Resources Information Center
Chiarini, Marc A.
2010-01-01
Traditional methods for system performance analysis have long relied on a mix of queuing theory, detailed system knowledge, intuition, and trial-and-error. These approaches often require construction of incomplete gray-box models that can be costly to build and difficult to scale or generalize. In this thesis, we present a black-box analysis…
Doubravsky, Karel; Dohnal, Mirko
2015-01-01
Complex decision making tasks of different natures, e.g. economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest much time in the study of complex formal theories. They require decisions that can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision making tasks is the incomplete data sets required by the chosen decision making algorithm. This paper presents a relatively simple algorithm by which some missing input information items (III) can be generated, using mainly decision tree topologies, and integrated into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (fuzzy probabilities), are usually available, which means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail. PMID:26158662
Recognizing and managing errors of cognitive underspecification.
Duthie, Elizabeth A
2014-03-01
James Reason describes cognitive underspecification as incomplete communication that creates a knowledge gap. Errors occur when an information mismatch occurs in bridging that gap with a resulting lack of shared mental models during the communication process. There is a paucity of studies in health care examining this cognitive error and the role it plays in patient harm. The goal of the following case analyses is to facilitate accurate recognition, identify how it contributes to patient harm, and suggest appropriate management strategies. Reason's human error theory is applied in case analyses of errors of cognitive underspecification. Sidney Dekker's theory of human incident investigation is applied to event investigation to facilitate identification of this little recognized error. Contributory factors leading to errors of cognitive underspecification include workload demands, interruptions, inexperienced practitioners, and lack of a shared mental model. Detecting errors of cognitive underspecification relies on blame-free listening and timely incident investigation. Strategies for interception include two-way interactive communication, standardization of communication processes, and technological support to ensure timely access to documented clinical information. Although errors of cognitive underspecification arise at the sharp end with the care provider, effective management is dependent upon system redesign that mitigates the latent contributory factors. Cognitive underspecification is ubiquitous whenever communication occurs. Accurate identification is essential if effective system redesign is to occur.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiyko, V V; Kislov, V I; Ofitserov, E N
2015-08-31
In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, in the case where the Hartmann sensor is used as a wavefront sensor, the required number of actuators in the absence of the noise component in the sensor error turns out to be 1.5-2.5 times less than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)
Matacchiera, F; Manes, C; Beaven, R P; Rees-White, T C; Boano, F; Mønster, J; Scheutz, C
2018-02-13
The measurement of methane emissions from landfills is important to understanding landfills' contribution to greenhouse gas emissions. The Tracer Dispersion Method (TDM) is becoming widely accepted as a technique that allows landfill emissions to be quantified accurately, provided that measurements are taken where the plumes of a released tracer gas and the landfill gas are well mixed. However, the distance at which full mixing of the gases occurs is generally unknown prior to any experimental campaign. To overcome this problem, the present paper demonstrates that, for any specific TDM application, a simple Gaussian dispersion model (AERMOD) can be run beforehand to help determine the distance from the source at which full mixing conditions occur, and the likely associated measurement errors. An AERMOD model was created to simulate a series of TDM trials carried out at a UK landfill, and was benchmarked against the experimental data obtained. The model was used to investigate the impact of different factors (e.g. tracer cylinder placements, wind directions, atmospheric stability parameters) on TDM results to identify appropriate experimental set-ups for different conditions. The contribution of incomplete vertical mixing of tracer and landfill gas to TDM measurement error was explored using the model. It was observed that full mixing conditions at ground level do not imply full mixing over the entire plume height. However, when full mixing conditions were satisfied at ground level, the error introduced by variations in mixing higher up was always less than 10%. Copyright © 2018. Published by Elsevier Ltd.
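For context, here is a hedged sketch of the tracer dispersion calculation itself, independent of AERMOD: once the tracer and methane plumes are well mixed, the methane emission rate follows from the ratio of plume-integrated downwind concentration enhancements, scaled by the known tracer release rate and the molar-mass ratio. The transect values are invented, and the default tracer molar mass (SF6) is an illustrative assumption, not necessarily the tracer used in the study.

```python
import numpy as np

def tdm_emission(x, c_ch4, c_tracer, q_tracer, mw_ch4=16.04, mw_tracer=146.06):
    """Tracer dispersion method estimate of a methane emission rate.
    x: positions along the downwind transect (m)
    c_ch4, c_tracer: background-corrected mole fractions (e.g. ppb) along x
    q_tracer: known tracer release rate (kg/h); default molar mass assumes SF6."""
    dx = np.diff(x)
    area_ch4 = np.sum(0.5 * (c_ch4[1:] + c_ch4[:-1]) * dx)        # plume-integrated CH4
    area_tracer = np.sum(0.5 * (c_tracer[1:] + c_tracer[:-1]) * dx)  # plume-integrated tracer
    return q_tracer * (area_ch4 / area_tracer) * (mw_ch4 / mw_tracer)

# Invented transect: roughly Gaussian plumes measured across a road downwind
x = np.linspace(-200, 200, 81)
c_tracer = 2.0 * np.exp(-(x / 60.0) ** 2)
c_ch4 = 90.0 * np.exp(-((x - 10.0) / 70.0) ** 2)
print(tdm_emission(x, c_ch4, c_tracer, q_tracer=1.0))   # kg CH4 per hour
```

The ratio is only meaningful where the two plumes are co-located and fully mixed, which is exactly the condition the AERMOD pre-screening described above is meant to verify.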
Ripberger, Joseph T; Silva, Carol L; Jenkins-Smith, Hank C; Carlson, Deven E; James, Mark; Herron, Kerry G
2015-01-01
Theory and conventional wisdom suggest that errors undermine the credibility of tornado warning systems and thus decrease the probability that individuals will comply (i.e., engage in protective action) when future warnings are issued. Unfortunately, empirical research on the influence of warning system accuracy on public responses to tornado warnings is incomplete and inconclusive. This study adds to existing research by analyzing two sets of relationships. First, we assess the relationship between perceptions of accuracy, credibility, and warning response. Using data collected via a large regional survey, we find that trust in the National Weather Service (NWS; the agency responsible for issuing tornado warnings) increases the likelihood that an individual will opt for protective action when responding to a hypothetical warning. More importantly, we find that subjective perceptions of warning system accuracy are, as theory suggests, systematically related to trust in the NWS and (by extension) stated responses to future warnings. The second half of the study matches survey data against NWS warning and event archives to investigate a critical follow-up question: why do some people perceive that their warning system is accurate, whereas others perceive that their system is error prone? We find that subjective perceptions are, in part, a function of objective experience, knowledge, and demographic characteristics. When considered in tandem, these findings support the proposition that errors influence perceptions about the accuracy of warning systems, which in turn impact the credibility that people assign to information provided by systems and, ultimately, public decisions about how to respond when warnings are issued. © 2014 Society for Risk Analysis.
Clustering redshift distributions for the Dark Energy Survey
NASA Astrophysics Data System (ADS)
Helsby, Jennifer
Accurate determination of photometric redshifts and their errors is critical for large scale structure and weak lensing studies for constraining cosmology from deep, wide imaging surveys. Current photometric redshift methods suffer from bias and scatter due to incomplete training sets. Exploiting the clustering between a sample of galaxies for which we have spectroscopic redshifts and a sample of galaxies for which the redshifts are unknown can allow us to reconstruct the true redshift distribution of the unknown sample. Here we use this method in both simulations and early data from the Dark Energy Survey (DES) to determine the true redshift distributions of galaxies in photometric redshift bins. We find that cross-correlating with the spectroscopic samples currently used for training provides a useful test of photometric redshifts and provides reliable estimates of the true redshift distribution in a photometric redshift bin. We discuss the use of the cross-correlation method in validating template- or learning-based approaches to redshift estimation and its future use in Stage IV surveys.
Classifying with confidence from incomplete information.
Parrish, Nathan; Anderson, Hyrum S.; Gupta, Maya R.; ...
2013-12-01
For this paper, we consider the problem of classifying a test sample given incomplete information. This problem arises naturally when data about a test sample is collected over time, or when costs must be incurred to compute the classification features. For example, in a distributed sensor network only a fraction of the sensors may have reported measurements at a certain time, and additional time, power, and bandwidth are needed to collect the complete data to classify. A practical goal is to assign a class label as soon as enough data is available to make a good decision. We formalize this goal through the notion of reliability: the probability that a label assigned given incomplete data would be the same as the label assigned given the complete data. We propose a method to classify incomplete data only if some reliability threshold is met. Our approach models the complete data as a random variable whose distribution is dependent on the current incomplete data and the (complete) training data. The method differs from standard imputation strategies in that our focus is on determining the reliability of the classification decision, rather than just the class label. We show that the method provides useful reliability estimates of the correctness of the imputed class labels on a set of experiments on time-series data sets, where the goal is to classify the time series as early as possible while still guaranteeing that the reliability threshold is met.
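A hedged sketch of the reliability idea described above (not the authors' implementation): draw Monte Carlo completions of the missing features, classify each completion, and emit a label only if the share of the most frequent label clears the reliability threshold. The classifier and the simple marginal Gaussian completion model are illustrative stand-ins; the paper conditions the completion model on the observed data and the training set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Train a classifier on complete, synthetic training data (4 features)
X_train = rng.standard_normal((500, 4))
y_train = (X_train[:, 0] + X_train[:, 2] + 0.5 * rng.standard_normal(500) > 0).astype(int)
clf = LogisticRegression().fit(X_train, y_train)

def classify_if_reliable(x_obs, obs_idx, threshold=0.9, n_draws=500):
    """Return a label only if the estimated reliability exceeds the threshold.
    Missing features are drawn from a simple marginal Gaussian fit (illustrative)."""
    d = X_train.shape[1]
    miss_idx = [j for j in range(d) if j not in obs_idx]
    mu, sd = X_train[:, miss_idx].mean(axis=0), X_train[:, miss_idx].std(axis=0)
    completions = np.zeros((n_draws, d))
    completions[:, obs_idx] = x_obs
    completions[:, miss_idx] = rng.normal(mu, sd, size=(n_draws, len(miss_idx)))
    labels = clf.predict(completions)
    top = np.bincount(labels).argmax()
    reliability = np.mean(labels == top)     # estimated agreement with the complete-data label
    return (top, reliability) if reliability >= threshold else (None, reliability)

print(classify_if_reliable(x_obs=np.array([2.0, 0.5]), obs_idx=[0, 1]))
```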
Developing a generalized allometric equation for aboveground biomass estimation
NASA Astrophysics Data System (ADS)
Xu, Q.; Balamuta, J. J.; Greenberg, J. A.; Li, B.; Man, A.; Xu, Z.
2015-12-01
A key potential uncertainty in estimating carbon stocks across multiple scales stems from the use of empirically calibrated allometric equations, which estimate aboveground biomass (AGB) from plant characteristics such as diameter at breast height (DBH) and/or height (H). The equations themselves contain significant and, at times, poorly characterized errors. Species-specific equations may be missing. Plant responses to their local biophysical environment may lead to spatially varying allometric relationships. The structural predictor may be difficult or impossible to measure accurately, particularly when derived from remote sensing data. All of these issues may lead to significant and spatially varying uncertainties in the estimation of AGB that are unexplored in the literature. We sought to quantify the errors in predicting AGB at the tree and plot level for vegetation plots in California. To accomplish this, we derived a generalized allometric equation (GAE) which we used to model AGB on a full set of tree information such as DBH, H, taxonomy, and biophysical environment. The GAE was derived using published allometric equations in the GlobAllomeTree database. The published equations are sparse in detail about their errors, since authors typically provide only the coefficient of determination (R2) and the sample size. A more realistic simulation of tree AGB should also contain the noise that was not captured by the allometric equation, so we derived an empirically corrected variance estimate for the amount of noise to represent the errors in the real biomass. We also accounted for the hierarchical relationship between different species by treating each taxonomic level as a covariate nested within a higher taxonomic level (e.g. species within genus). This approach provides estimation under incomplete tree information (e.g. missing species) or blurred information (e.g. conjecture of species), plus the biophysical environment. The GAE allowed us to quantify the contribution of each covariate to the estimation of tree AGB. Lastly, we applied the GAE to an existing vegetation plot database, the Forest Inventory and Analysis database, to derive per-tree and per-plot AGB estimates, their errors, and how much of the error could be attributed to the original equations, the plant's taxonomy, and the biophysical environment.
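As a hedged sketch of the kind of generalized allometric model described above (not the authors' exact GAE), a log-log allometry with taxonomy as a grouping factor can be fit as a linear mixed model; the column names, the single random intercept per genus, and the statsmodels-based formulation are all assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for a tree table with DBH (cm), height (m), and genus
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "dbh": rng.uniform(5, 80, n),
    "height": rng.uniform(3, 40, n),
    "genus": rng.choice(["Pinus", "Quercus", "Abies"], n),
})
df["log_dbh"] = np.log(df["dbh"])
df["log_height"] = np.log(df["height"])
# Fake AGB generated from a known log-log allometry plus noise, for illustration only
df["log_agb"] = (-2.0 + 2.2 * df["log_dbh"] + 0.6 * df["log_height"]
                 + rng.normal(0, 0.3, n))

# A random intercept per genus approximates the taxonomic nesting described above;
# deeper nesting (species within genus) would add further grouping levels.
model = smf.mixedlm("log_agb ~ log_dbh + log_height", df, groups=df["genus"])
fit = model.fit()
print(fit.params)   # fixed-effect allometric coefficients
```

Because the taxonomy enters only through group effects, a tree with a missing or uncertain species still receives a prediction from the fixed effects plus whatever group information is available, which mirrors the "incomplete or blurred taxonomy" behaviour described in the abstract.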
He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie
2010-11-22
In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and the insufficient surface measurements in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
Protein structure estimation from NMR data by matrix completion.
Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing
2017-09-01
Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
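As a hedged illustration of the second stage (a generic singular value thresholding scheme, not the authors' accelerated proximal gradient solver), low-rank completion of a partially observed matrix can be sketched as follows; the synthetic matrix is a rank-2 Gram-type matrix, standing in for the low-rank structure exploited in distance-matrix completion.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.2, n_iter=500):
    """Singular value thresholding for low-rank matrix completion.
    M_obs: matrix with observed entries (zeros elsewhere); mask: 1 where observed."""
    Y = np.zeros_like(M_obs)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt     # shrink the singular values
        Y = Y + step * mask * (M_obs - X)           # enforce the observed entries
    return X

# Synthetic rank-2 Gram-type matrix with roughly 60% of entries observed
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 2))
M = A @ A.T
mask = (rng.random(M.shape) < 0.6).astype(float)
mask = np.maximum(mask, mask.T)                     # keep the observation mask symmetric
X_hat = svt_complete(M * mask, mask)
print(np.linalg.norm((X_hat - M) * (1 - mask)) / np.linalg.norm(M * (1 - mask)))
```

The printed value is the relative error on the held-out entries. For real NMR data the completed object is usually tied to the Gram matrix of the coordinates (rank at most five for 3-D structures), and the triangle-inequality "guessing" stage described above supplies enough extra entries for the completion to be well posed.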
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
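A hedged toy example of the attenuation effect and its method-of-moments correction discussed above, ignoring autocorrelation, the Berkson component, and confounders for simplicity; the error variance is assumed known and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x_true = rng.normal(0, 1, n)              # true exposure (e.g. UFP concentration)
x_obs = x_true + rng.normal(0, 0.8, n)    # classical measurement error added
y = 0.5 * x_true + rng.normal(0, 1, n)    # outcome driven by the true exposure

beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)    # attenuated slope estimate
# Method-of-moments correction: divide by the reliability ratio
# lambda = var(true exposure) / var(observed exposure), error variance assumed known.
reliability = (np.var(x_obs) - 0.8 ** 2) / np.var(x_obs)
beta_corrected = beta_naive / reliability
print(beta_naive, beta_corrected)          # roughly 0.3 attenuated vs roughly 0.5 corrected
```

With autocorrelated errors or a classical/Berkson mixture, the reliability ratio is no longer this simple variance quotient, which is the complication the extended bias analysis above addresses.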
Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching.
Austin, Peter C
2017-02-01
Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term "bias due to incomplete matching" to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used.
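A hedged sketch of the two ingredients described above: nearest-neighbour matching of treated subjects on the propensity score, followed by covariate adjustment using the propensity score within the matched sample to impute each treated subject's untreated potential outcome. It is a simplified stand-in with synthetic data, not the paper's simulation code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))                                # confounders
p_treat = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
z = rng.binomial(1, p_treat)                                   # treatment assignment
y = 1.0 * z + X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, n)    # true treatment effect = 1

# Step 1: estimate propensity scores and nearest-neighbour match every treated subject
ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
treated, control = np.where(z == 1)[0], np.where(z == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# Step 2: within the matched sample, regress control outcomes on the propensity
# score and impute each treated subject's potential outcome under no treatment
reg = LinearRegression().fit(ps[matched_control].reshape(-1, 1), y[matched_control])
y0_imputed = reg.predict(ps[treated].reshape(-1, 1))
att = np.mean(y[treated] - y0_imputed)        # average treatment effect in the treated
print(att)
```

Because every treated subject is matched (no caliper exclusions) and the residual confounding is mopped up by the propensity-score regression, the estimate targets the full treated population, which is the point of the double adjustment.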
Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching
2016-01-01
Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term “bias due to incomplete matching” to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used. PMID:25038071
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jester, Sebastian; Schneider, Donald P.; Richards, Gordon T.
The authors investigate the extent to which the Palomar-Green (PG) Bright Quasar Survey (BQS) is complete and representative of the general quasar population by comparing with imaging and spectroscopy from the Sloan Digital Sky Survey (SDSS). A comparison of SDSS and PG photometry of both stars and quasars reveals the need to apply a color and magnitude recalibration to the PG data. Using the SDSS photometric catalog, they define the PG's parent sample of objects that are not main-sequence stars and simulate the selection of objects from this parent sample using the PG photometric criteria and errors. This simulation shows that the effective U-B cut in the PG survey is U-B < -0.71, implying a color-related incompleteness. As the color distribution of bright quasars peaks near U-B = -0.7 and the 2-sigma error in U-B is comparable to the full width of the color distribution of quasars, the color incompleteness of the BQS is approximately 50% and essentially random with respect to U-B color for z < 0.5. There is, however, a bias against bright quasars at 0.5 < z < 1, which is induced by the color-redshift relation of quasars (although quasars at z > 0.5 are inherently rare in bright surveys in any case). They find no evidence for any other systematic incompleteness when comparing the distributions in color, redshift, and FIRST radio properties of the BQS and a BQS-like subsample of the SDSS quasar sample. However, the application of a bright magnitude limit biases the BQS toward the inclusion of objects which are blue in g-i, in particular compared to the full range of g-i colors found among the i-band limited SDSS quasars, and even at i-band magnitudes comparable to those of the BQS objects.
NASA Astrophysics Data System (ADS)
Chung, Kee-Choo; Park, Hwangseo
2016-11-01
The performance of the extended solvent-contact model has been assessed in the SAMPL5 blind prediction challenge for the distribution coefficient (LogD) of drug-like molecules with respect to the cyclohexane/water partitioning system. All the atomic parameters defined for 41 atom types in the solvation free energy function were optimized by operating a standard genetic algorithm with respect to the water and cyclohexane solvents. In the parameterizations for cyclohexane, the experimental solvation free energy (ΔGsol) data of 15 molecules for 1-octanol were combined with those of 77 molecules for cyclohexane to construct a training set, because ΔGsol values of the former were unavailable for cyclohexane in publicly accessible databases. Using this hybrid training set, we established a LogD prediction model with a correlation coefficient (R), average error (AE), and root mean square error (RMSE) of 0.55, 1.53, and 3.03, respectively, for the comparison of experimental and computational results for 53 SAMPL5 molecules. The modest accuracy in LogD prediction could be attributed to the incomplete optimization of atomic solvation parameters for cyclohexane. With respect to the 31 SAMPL5 molecules containing only atom types for which experimental reference data for ΔGsol were available for both water and cyclohexane, the accuracy in LogD prediction increased remarkably, with R, AE, and RMSE values of 0.82, 0.89, and 1.60, respectively. This significant enhancement in performance stemmed from the better optimization of atomic solvation parameters obtained by limiting the training set to molecules with experimental ΔGsol data for cyclohexane. Due to the simplicity of model building and the low computational cost of parameterization, the extended solvent-contact model is anticipated to serve as a valuable computational tool for LogD prediction upon the enrichment of experimental ΔGsol data for organic solvents.
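As a hedged aside on the quantity being predicted: for a neutral solute, the cyclohexane/water log partition coefficient follows from the difference of the two solvation free energies, which is how a solvation-free-energy model of this kind can be turned into a LogD estimate (ionization corrections aside). The function name and the placeholder free energies below are illustrative.

```python
def log_p_cyclohexane_water(dg_water, dg_cyclohexane, temperature=298.15):
    """Log10 partition coefficient from solvation free energies in kcal/mol.
    A more negative solvation free energy in cyclohexane pushes the solute
    toward the organic phase, giving a higher LogP."""
    R = 1.987204e-3                      # gas constant, kcal/(mol*K)
    return (dg_water - dg_cyclohexane) / (2.303 * R * temperature)

print(log_p_cyclohexane_water(dg_water=-6.5, dg_cyclohexane=-8.0))  # placeholder values
```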
Barriers to Specialty Care and Specialty Referral Completion in the Community Health Center Setting
Zuckerman, Katharine E.; Perrin, James M.; Hobrecker, Karin; Donelan, Karen
2013-01-01
Objective: To assess the frequency of barriers to specialty care and to assess which barriers are associated with an incomplete specialty referral (not attending a specialty visit when referred by a primary care provider) among children seen in community health centers. Study design: Two months after their child's specialty referral, 341 parents completed telephone surveys assessing whether a specialty visit was completed and whether they experienced any of 10 barriers to care. Family/community barriers included difficulty leaving work, obtaining childcare, obtaining transportation, and inadequate insurance. Health care system barriers included getting appointments quickly, understanding doctors and nurses, communicating with doctors' offices, locating offices, accessing interpreters, and inconvenient office hours. We calculated barrier frequency and total barriers experienced. Using logistic regression, we assessed which barriers were associated with incomplete referral, and whether experiencing ≥4 barriers was associated with incomplete referral. Results: A total of 22.9% of families experienced incomplete referral. 42.0% of families encountered 1 or more barriers. The most frequent barriers were difficulty leaving work, obtaining childcare, and obtaining transportation. On multivariate analysis, difficulty getting appointments quickly, difficulty finding doctors' offices, and inconvenient office hours were associated with incomplete referral. Families experiencing ≥4 barriers were more likely than those experiencing ≤3 barriers to have incomplete referral. Conclusion: Barriers to specialty care were common and associated with incomplete referral. Families experiencing many barriers had greater risk of incomplete referral. Improving family/community factors may increase satisfaction with specialty care; however, improving health system factors may be the best way to reduce incomplete referrals. PMID:22929162
Reducing diagnostic errors in medicine: what's the goal?
Graber, Mark; Gordon, Ruthanna; Franklin, Nancy
2002-10-01
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs in which resources are merely shifted also guarantee that system errors will persist. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.
Byrne, M.D.; Jordan, T.R.; Welle, T.
2013-01-01
Objective The objective of this study was to investigate and improve the use of automated data collection procedures for nursing research and quality assurance. Methods A descriptive, correlational study analyzed 44 orthopedic surgical patients who were part of an evidence-based practice (EBP) project examining post-operative oxygen therapy at a Midwestern hospital. The automation work attempted to replicate a manually-collected data set from the EBP project. Results Automation was successful in replicating data collection for study data elements that were available in the clinical data repository. The automation procedures identified 32 “false negative” patients who met the inclusion criteria described in the EBP project but were not selected during the manual data collection. Automating data collection for certain data elements, such as oxygen saturation, proved challenging because of workflow and practice variations and the reliance on disparate sources for data abstraction. Automation also revealed instances of human error including computational and transcription errors as well as incomplete selection of eligible patients. Conclusion Automated data collection for analysis of nursing-specific phenomenon is potentially superior to manual data collection methods. Creation of automated reports and analysis may require initial up-front investment with collaboration between clinicians, researchers and information technology specialists who can manage the ambiguities and challenges of research and quality assurance work in healthcare. PMID:23650488
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.
2013-01-01
Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679
Edger, Patrick P; VanBuren, Robert; Colle, Marivi; Poorten, Thomas J; Wai, Ching Man; Niederhuth, Chad E; Alger, Elizabeth I; Ou, Shujun; Acharya, Charlotte B; Wang, Jie; Callow, Pete; McKain, Michael R; Shi, Jinghua; Collier, Chad; Xiong, Zhiyong; Mower, Jeffrey P; Slovin, Janet P; Hytönen, Timo; Jiang, Ning; Childs, Kevin L; Knapp, Steven J
2018-02-01
Although draft genomes are available for most agronomically important plant species, the majority are incomplete, highly fragmented, and often riddled with assembly and scaffolding errors. These assembly issues hinder advances in tool development for functional genomics and systems biology. Here we utilized a robust, cost-effective approach to produce high-quality reference genomes. We report a near-complete genome of diploid woodland strawberry (Fragaria vesca) using single-molecule real-time sequencing from Pacific Biosciences (PacBio). This assembly has a contig N50 length of ∼7.9 million base pairs (Mb), representing a ∼300-fold improvement of the previous version. The vast majority (>99.8%) of the assembly was anchored to 7 pseudomolecules using 2 sets of optical maps from Bionano Genomics. We obtained ∼24.96 Mb of sequence not present in the previous version of the F. vesca genome and produced an improved annotation that includes 1496 new genes. Comparative syntenic analyses uncovered numerous, large-scale scaffolding errors present in each chromosome in the previously published version of the F. vesca genome. Our results highlight the need to improve existing short-read based reference genomes. Furthermore, we demonstrate how genome quality impacts commonly used analyses for addressing both fundamental and applied biological questions. © The Authors 2017. Published by Oxford University Press.
Pfützner, Andreas; Schipper, Christina; Ramljak, Sanja; Flacke, Frank; Sieber, Jochen; Forst, Thomas; Musholt, Petra B
2013-11-01
Accuracy of blood glucose readings is (among other things) dependent on the test strip being completely filled with sufficient sample volume. The devices are supposed to display an error message in case of incomplete filling. This laboratory study was performed to test the performance of 31 commercially available devices in case of incomplete strip filling. Samples with two different glucose levels (60-90 and 300-350 mg/dl) were used to generate three different sample volumes: 0.20 µl (too low volume for any device), 0.32 µl (borderline volume), and 1.20 µl (low but supposedly sufficient volume for all devices). After a point-of-care capillary reference measurement (StatStrip, NovaBiomedical), the meter strip was filled (6x) with the respective volume, and the response of the meters (two devices) was documented (72 determinations/meter type). Correct response was defined as either an error message indicating incomplete filling or a correct reading (±20% compared with reference reading). Only five meters showed 100% correct responses [BGStar and iBGStar (both Sanofi), ACCU-CHEK Compact+ and ACCU-CHEK Mobile (both Roche Diagnostics), OneTouch Verio (LifeScan)]. The majority of the meters (17) had up to 10% incorrect reactions [predominantly incorrect readings with sufficient volume; Precision Xceed and Xtra, FreeStyle Lite, and Freedom Lite (all Abbott); GlucoCard+ and GlucoMen GM (both Menarini); Contour, Contour USB, and Breeze2 (all Bayer); OneTouch Ultra Easy, Ultra 2, and Ultra Smart (all LifeScan); Wellion Dialog and Premium (both MedTrust); FineTouch (Terumo); ACCU-CHEK Aviva (Roche); and GlucoTalk (Axis-Shield)]. Ten percent to 20% incorrect reactions were seen with OneTouch Vita (LifeScan), ACCU-CHEK Aviva Nano (Roche), OmniTest+ (BBraun), and AlphaChek+ (Berger Med). More than 20% incorrect reactions were obtained with Pura (Ypsomed), GlucoCard Meter and GlucoMen LX (both Menarini), Elite (Bayer), and MediTouch (Medisana). In summary, partial and incomplete blood filling of glucose meter strips is often associated with inaccurate reading. These findings underline the importance of appropriate patient education on this aspect of blood glucose self-monitoring. © 2013 Diabetes Technology Society.
Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc
2012-01-01
We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. According to the expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods.
Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C.; Backeljau, Thierry; De Meyer, Marc
2012-01-01
We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. According to the expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods. PMID:22359600
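To make the threshold-selection step concrete, the sketch below fits a simple linear regression of the estimated relative identification error against candidate distance thresholds and solves for the threshold that yields a target error of 0.05. The threshold grid, error values, and regression specification are illustrative assumptions, not the authors' tephritid library.

```python
import numpy as np

# Illustrative data: candidate distance thresholds and the relative
# identification error estimated at each threshold from a reference library.
thresholds = np.array([0.005, 0.010, 0.015, 0.020, 0.025, 0.030])
rel_error  = np.array([0.020, 0.035, 0.048, 0.061, 0.074, 0.090])

# Simple linear regression: error ~ a * threshold + b
a, b = np.polyfit(thresholds, rel_error, deg=1)

# Ad hoc threshold producing an expected relative identification error of 5%.
target = 0.05
ad_hoc_threshold = (target - b) / a
print(f"ad hoc threshold: {ad_hoc_threshold:.4f}")

# Queries whose distance to their best match exceeds this threshold would be
# discarded and passed to alternative/complementary identification methods.
```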
Criterion for estimation of stress-deformed state of SD-materials
NASA Astrophysics Data System (ADS)
Orekhov, Andrey V.
2018-05-01
A criterion is proposed that determines the moment when the growth pattern of a monotonic numerical sequence changes from linear to parabolic. The criterion is based on comparing the squared approximation errors of the linear and the incomplete quadratic approximations. The approximating functions are constructed locally, only at those points located near a possible change in the nature of the sequence's growth.
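A minimal numerical sketch of the comparison described above: over a local window of recent points, fit a linear model and an "incomplete quadratic" model by least squares and flag the moment at which the parabolic fit starts to win. The window size and the exact form of the incomplete quadratic (assumed here to omit the linear term) are illustrative assumptions.

```python
import numpy as np

def growth_changed(y, window=5):
    """Return True if, over the last `window` points, an incomplete quadratic
    fit (a*x**2 + c) has a smaller sum of squared errors than a linear fit."""
    y = np.asarray(y[-window:], dtype=float)
    x = np.arange(len(y), dtype=float)

    # Linear approximation: y ~ a1*x + c1
    A_lin = np.column_stack([x, np.ones_like(x)])
    res_lin = y - A_lin @ np.linalg.lstsq(A_lin, y, rcond=None)[0]

    # Incomplete quadratic approximation: y ~ a2*x**2 + c2 (no linear term)
    A_par = np.column_stack([x**2, np.ones_like(x)])
    res_par = y - A_par @ np.linalg.lstsq(A_par, y, rcond=None)[0]

    return np.sum(res_par**2) < np.sum(res_lin**2)

# Illustrative monotonic sequence that switches from linear to parabolic growth.
seq = [1, 2, 3, 4, 5, 7, 10, 14, 19]
print(growth_changed(seq))
```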
Information preserving coding for multispectral data
NASA Technical Reports Server (NTRS)
Duan, J. R.; Wintz, P. A.
1973-01-01
A general formulation of the data compression system is presented. For error-free coding of data with incomplete knowledge of the probability density function, a method of instantaneous expansion of the quantization levels is implemented by reserving two codewords in the codebook to perform a fold-over in quantization. Results for simple DPCM with folding and for an adaptive transform coding technique followed by DPCM are compared using ERTS-1 data.
Adversarial risk analysis with incomplete information: a level-k approach.
Rothschild, Casey; McLay, Laura; Guikema, Seth
2012-07-01
This article proposes, develops, and illustrates the application of level-k game theory to adversarial risk analysis. Level-k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend-attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack. © 2011 Society for Risk Analysis.
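The sketch below illustrates the level-k recursion on a toy defend-attack payoff matrix: level-0 players randomize uniformly, and each level-k player best responds to a level-(k-1) opponent. The payoff matrix, zero-sum simplification, and reasoning depth are illustrative assumptions; the article's model additionally handles asynchronous play and partial revelation of early moves.

```python
import numpy as np

# Toy payoffs: rows = defender actions, columns = attacker actions (illustrative).
defender_payoff = np.array([[ 2.0, -1.0],
                            [ 0.0,  1.0]])
attacker_payoff = -defender_payoff  # zero-sum only for simplicity

def level_k_action(k, own_payoff, opp_payoff):
    """Best response of a level-k player against a level-(k-1) opponent.
    Payoff matrices are indexed [own action, opponent action].
    Level 0 is modelled as a uniformly random (non-strategic) player."""
    n_own, _ = own_payoff.shape
    if k == 0:
        return np.full(n_own, 1.0 / n_own)          # uniform mixed strategy
    opp_strategy = level_k_action(k - 1, opp_payoff.T, own_payoff.T)
    expected = own_payoff @ opp_strategy             # expected payoff per action
    best = np.zeros(n_own)
    best[np.argmax(expected)] = 1.0                  # pure best response
    return best

print("level-2 defender strategy:", level_k_action(2, defender_payoff, attacker_payoff))
print("level-1 attacker strategy:", level_k_action(1, attacker_payoff.T, defender_payoff.T))
```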
Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.
2017-01-01
Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy. PMID:29200595
NASA Astrophysics Data System (ADS)
Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.
2016-03-01
Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy.
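A simplified Monte Carlo sketch in the spirit of the margin generation described above: model the combined registration/kinematic error at a structure point as a (possibly anisotropic) Gaussian and set the local margin so that the desired fraction of sampled errors stays inside it. The error model, covariance values, and 99% preservation target are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_safety_margin(cov, p_preserve=0.99, n_samples=20000):
    """Margin thickness (same units as cov) around a structure point such that
    a sampled 3-D error exceeds it with probability <= 1 - p_preserve.
    `cov` is the 3x3 covariance of the combined error at that point."""
    errors = rng.multivariate_normal(np.zeros(3), cov, size=n_samples)
    distances = np.linalg.norm(errors, axis=1)
    return np.quantile(distances, p_preserve)

# Anisotropic, spatially varying error: larger uncertainty along one axis (mm^2, illustrative).
cov_near_critical_structure = np.diag([0.20**2, 0.35**2, 0.50**2])
print(f"margin: {local_safety_margin(cov_near_critical_structure):.2f} mm")
```

Because the covariance varies over the anatomy, repeating this calculation per surface point yields the variable-thickness margin rather than a uniform offset.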
Bonilla, Manuel G.; Mark, Robert K.; Lienkaemper, James J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation in which the variance results primarily from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
Bonilla, M.G.; Mark, R.K.; Lienkaemper, J.J.
1984-01-01
In order to refine correlations of surface-wave magnitude, fault rupture length at the ground surface, and fault displacement at the surface by including the uncertainties in these variables, the existing data were critically reviewed and a new data base was compiled. Earthquake magnitudes were redetermined as necessary to make them as consistent as possible with the Gutenberg methods and results, which necessarily make up much of the data base. Measurement errors were estimated for the three variables for 58 moderate to large shallow-focus earthquakes. Regression analyses were then made utilizing the estimated measurement errors. The regression analysis demonstrates that the relations among the variables magnitude, length, and displacement are stochastic in nature. The stochastic variance, introduced in part by incomplete surface expression of seismogenic faulting, variation in shear modulus, and regional factors, dominates the estimated measurement errors. Thus, it is appropriate to use ordinary least squares for the regression models, rather than regression models based upon an underlying deterministic relation with the variance resulting from measurement errors. Significant differences exist in correlations of certain combinations of length, displacement, and magnitude when events are grouped by fault type or by region, including attenuation regions delineated by Evernden and others. Subdivision of the data results in too few data for some fault types and regions, and for these only regressions using all of the data as a group are reported. Estimates of the magnitude and the standard deviation of the magnitude of a prehistoric or future earthquake associated with a fault can be made by correlating Ms with the logarithms of rupture length, fault displacement, or the product of length and displacement. Fault rupture area could be reliably estimated for about 20 of the events in the data set. Regression of Ms on rupture area did not result in a marked improvement over regressions that did not involve rupture area. Because no subduction-zone earthquakes are included in this study, the reported results do not apply to such zones.
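A minimal sketch of the ordinary-least-squares correlation described above, regressing surface-wave magnitude on the logarithm of rupture length; the data points and resulting coefficients are illustrative, not the published regressions.

```python
import numpy as np

# Illustrative (rupture length in km, surface-wave magnitude Ms) pairs.
length_km = np.array([10.0, 25.0, 60.0, 120.0, 300.0])
ms        = np.array([6.0,  6.5,  7.0,  7.3,   7.9])

# Ordinary least squares: Ms = a + b * log10(L)
b, a = np.polyfit(np.log10(length_km), ms, deg=1)

def predict_ms(rupture_length_km):
    return a + b * np.log10(rupture_length_km)

residuals = ms - predict_ms(length_km)
sigma = residuals.std(ddof=2)  # standard deviation of the magnitude estimate
print(f"Ms = {a:.2f} + {b:.2f} * log10(L), sigma = {sigma:.2f}")
```

The same pattern applies to regressions on fault displacement or on the product of length and displacement; the reported standard deviation of the estimate comes from the regression residuals, which here are dominated by stochastic rather than measurement variance.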
Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA
NASA Astrophysics Data System (ADS)
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2018-04-01
External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation for estimating a more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. for model construction and for model testing, respectively. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce a reliable estimate. The aim of this preliminary work is to compare the performance of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF, error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (APF, error); (b) AP on the training set (APS, error); and (c) ET on the respective test set (ETS, error). A good PLS2-DA model is expected to produce APS, error and ETS, error similar to the APF, error. Bearing that in mind, the similarities between (a) APS, error vs. APF, error; (b) ETS, error vs. APF, error; and (c) APS, error vs. ETS, error were evaluated using correlation tests (i.e. Pearson and Spearman's rank tests) on series of PLS2-DA models computed from the KS-set and the IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarity between the internal and external error rates than the respective KS-set models, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
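For reference, here is a compact sketch of the Kennard-Stone selection rule used for the training/test split: start from the two mutually most distant samples, then repeatedly add the sample whose nearest already-selected neighbour is farthest away. This is a generic maximin implementation on Euclidean distances, not the code used in the study.

```python
import numpy as np

def kennard_stone(X, n_select):
    """Return indices of `n_select` rows of X chosen by the Kennard-Stone
    algorithm (maximin selection on Euclidean distances)."""
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Seed with the two mutually most distant samples.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [i, j]

    while len(selected) < n_select:
        remaining = [k for k in range(len(X)) if k not in selected]
        # Distance from each remaining sample to its closest selected sample.
        min_dist = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(min_dist))])
    return selected

# Illustrative 70/30 split of a small spectral matrix.
spectra = np.random.default_rng(1).normal(size=(20, 50))
train_idx = kennard_stone(spectra, n_select=14)
test_idx = [k for k in range(len(spectra)) if k not in train_idx]
```

Because KS deterministically places the most extreme samples in the training set, the held-out test set tends to be "easy", which is one plausible reason its external error estimates track the internal ones less closely than those of iterative random splits.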
Impact of an antiretroviral stewardship strategy on medication error rates.
Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E
2018-05-02
The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate ( p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error ( p < 0.001) and those with 2 or more errors ( p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention ( p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Estimation of Blood Flow Rates in Large Microvascular Networks
Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.
2012-01-01
Objective Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data and provides a basis for deducing functional properties of microvessel networks. PMID:22506980
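The estimation step can be pictured as a regularized least-squares problem: flow conservation at nodes and any known boundary flows act as constraints, while deviations of segment values from targets derived from typical pressures and wall shear stresses are penalized. The tiny network, weights, and penalty-based solver below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_flows(A, b, x_target, weights, constraint_weight=1e6):
    """Solve an underdetermined flow system A x = b (mass conservation plus any
    known boundary flows) while keeping x close to target values, via weighted
    least squares on the stacked system."""
    W = np.diag(weights)
    stacked_A = np.vstack([constraint_weight * A, W])
    stacked_b = np.concatenate([constraint_weight * b, W @ x_target])
    x, *_ = np.linalg.lstsq(stacked_A, stacked_b, rcond=None)
    return x

# Toy network: 3 segment flows, one conservation equation and one known inflow.
A = np.array([[1.0, -1.0, -1.0],   # flow into a bifurcation equals flow out
              [1.0,  0.0,  0.0]])  # boundary inflow is measured
b = np.array([0.0, 5.0])           # illustrative units (e.g. nl/min)
x_target = np.array([5.0, 3.0, 2.0])   # targets derived from typical shear/pressure
weights = np.array([1.0, 1.0, 1.0])

print(estimate_flows(A, b, x_target, weights))
```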
NASA Astrophysics Data System (ADS)
Catanzarite, Joseph; Jenkins, Jon Michael; McCauliff, Sean D.; Burke, Christopher; Bryson, Steve; Batalha, Natalie; Coughlin, Jeffrey; Rowe, Jason; Mullally, Fergal; Thompson, Susan; Seader, Shawn; Twicken, Joseph; Li, Jie; Morris, Robert; Smith, Jeffrey; Haas, Michael; Christiansen, Jessie; Clarke, Bruce
2015-08-01
NASA’s Kepler Space Telescope monitored the photometric variations of over 170,000 stars, at half-hour cadence, over its four-year prime mission. The Kepler pipeline calibrates the pixels of the target apertures for each star, produces light curves with simple aperture photometry, corrects for systematic error, and detects threshold-crossing events (TCEs) that may be due to transiting planets. The pipeline estimates planet parameters for all TCEs and computes diagnostics used by the Threshold Crossing Event Review Team (TCERT) to produce a catalog of objects that are deemed either likely transiting planet candidates or false positives. We created a training set from the Q1-Q12 and Q1-Q16 TCERT catalogs and an ensemble of synthetic transiting planets that were injected at the pixel level into all 17 quarters of data, and used it to train a random forest classifier. The classifier uniformly and consistently applies diagnostics developed by the Transiting Planet Search and Data Validation pipeline components and by TCERT to produce a robust catalog of planet candidates. The characteristics of the planet candidates detected by Kepler (planet radius and period) do not reflect the intrinsic planet population. Detection efficiency is a function of SNR, so the set of detected planet candidates is incomplete. Transit detection preferentially finds close-in planets with nearly edge-on orbits and misses planets whose orbital geometry precludes transits. Reliability of the planet candidates must also be considered, as they may be false positives. Errors in detected planet radius and in assumed star properties can also bias inference of intrinsic planet population characteristics. In this work we infer the intrinsic planet population, starting with the catalog of detected planet candidates produced by our random forest classifier, and accounting for detection biases and reliabilities as well as for radius errors in the detected population. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc), and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified, and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier España. All rights reserved.
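A minimal sketch of how X-bar/R control limits can be derived from sub-groups of three set-up measurements, using the standard control-chart constants for sub-group size 3 (A2 = 1.023, D3 = 0, D4 = 2.574). The data are illustrative, and the published protocol's exact charting choices may differ.

```python
import numpy as np

# Illustrative set-up errors (mm) in one dimension, sub-groups of 3 patients.
subgroups = np.array([
    [0.8, 1.2, 1.0],
    [1.1, 0.9, 1.3],
    [0.7, 1.0, 1.2],
    [1.4, 1.1, 0.9],
])

A2, D3, D4 = 1.023, 0.0, 2.574      # standard constants for sub-group size n = 3

xbar = subgroups.mean(axis=1)                           # sub-group means
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)  # sub-group ranges

xbarbar, rbar = xbar.mean(), ranges.mean()
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar  # X-bar chart limits
ucl_r, lcl_r = D4 * rbar, D3 * rbar                      # R chart limits

out_of_control = (xbar > ucl_x) | (xbar < lcl_x)
print(f"X-bar limits: [{lcl_x:.2f}, {ucl_x:.2f}] mm; flagged sub-groups: {np.where(out_of_control)[0]}")
```

New sub-groups plotted against these limits flag a loss of stability from an assignable cause, which is what triggers interruption and root-cause correction in the workflow described above.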
Execution monitoring for a mobile robot system
NASA Technical Reports Server (NTRS)
Miller, David P.
1990-01-01
Due to sensor errors, uncertainty, incomplete knowledge, and a dynamic world, robot plans will not always be executed exactly as planned. This paper describes an implemented robot planning system that enhances the traditional sense-think-act cycle in ways that allow the robot system to monitor its behavior and react to emergencies in real time. A proposal for how robot systems can completely break away from the traditional three-step cycle is also made.
Misclassification of childhood homicide on death certificates.
Lapidus, G D; Gregorio, D I; Hansen, H
1990-01-01
Suspect classification of homicide deaths of Connecticut residents under 20 years of age was noted for 29 percent of cases examined. Misclassification was attributed to incomplete or erroneous information recorded on the death certificates, rather than errors in the designation of ICD-9 homicide codes. The results have important implications in the interpretation of vital statistics when homicide is listed as the cause of death and underscore the value of record linkage systems. PMID:2297072
Gomes, Manuel; Hatfield, Laura; Normand, Sharon-Lise
2016-09-20
Meta-analysis of individual participant data (IPD) is increasingly utilised to improve the estimation of treatment effects, particularly among different participant subgroups. An important concern in IPD meta-analysis relates to partially or completely missing outcomes for some studies, a problem exacerbated when interest is on multiple discrete and continuous outcomes. When leveraging information from incomplete correlated outcomes across studies, the fully observed outcomes may provide important information about the incompleteness of the other outcomes. In this paper, we compare two models for handling incomplete continuous and binary outcomes in IPD meta-analysis: a joint hierarchical model and a sequence of full conditional mixed models. We illustrate how these approaches incorporate the correlation across the multiple outcomes and the between-study heterogeneity when addressing the missing data. Simulations characterise the performance of the methods across a range of scenarios which differ according to the proportion and type of missingness, strength of correlation between outcomes and the number of studies. The joint model provided confidence interval coverage consistently closer to nominal levels and lower mean squared error compared with the fully conditional approach across the scenarios considered. Methods are illustrated in a meta-analysis of randomised controlled trials comparing the effectiveness of implantable cardioverter-defibrillator devices alone to implantable cardioverter-defibrillator combined with cardiac resynchronisation therapy for treating patients with chronic heart failure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Shotgun Protein Sequencing with Meta-contig Assembly*
Guthals, Adrian; Clauser, Karl R.; Bandeira, Nuno
2012-01-01
Full-length de novo sequencing from tandem mass (MS/MS) spectra of unknown proteins such as antibodies or proteins from organisms with unsequenced genomes remains a challenging open problem. Conventional algorithms designed to individually sequence each MS/MS spectrum are limited by incomplete peptide fragmentation or low signal to noise ratios and tend to result in short de novo sequences at low sequencing accuracy. Our shotgun protein sequencing (SPS) approach was developed to ameliorate these limitations by first finding groups of unidentified spectra from the same peptides (contigs) and then deriving a consensus de novo sequence for each assembled set of spectra (contig sequences). But whereas SPS enables much more accurate reconstruction of de novo sequences longer than can be recovered from individual MS/MS spectra, it still requires error-tolerant matching to homologous proteins to group smaller contig sequences into full-length protein sequences, thus limiting its effectiveness on sequences from poorly annotated proteins. Using low and high resolution CID and high resolution HCD MS/MS spectra, we address this limitation with a Meta-SPS algorithm designed to overlap and further assemble SPS contigs into Meta-SPS de novo contig sequences extending as long as 100 amino acids at over 97% accuracy without requiring any knowledge of homologous protein sequences. We demonstrate Meta-SPS using distinct MS/MS data sets obtained with separate enzymatic digestions and discuss how the remaining de novo sequencing limitations relate to MS/MS acquisition settings. PMID:22798278
Shotgun protein sequencing with meta-contig assembly.
Guthals, Adrian; Clauser, Karl R; Bandeira, Nuno
2012-10-01
Full-length de novo sequencing from tandem mass (MS/MS) spectra of unknown proteins such as antibodies or proteins from organisms with unsequenced genomes remains a challenging open problem. Conventional algorithms designed to individually sequence each MS/MS spectrum are limited by incomplete peptide fragmentation or low signal to noise ratios and tend to result in short de novo sequences at low sequencing accuracy. Our shotgun protein sequencing (SPS) approach was developed to ameliorate these limitations by first finding groups of unidentified spectra from the same peptides (contigs) and then deriving a consensus de novo sequence for each assembled set of spectra (contig sequences). But whereas SPS enables much more accurate reconstruction of de novo sequences longer than can be recovered from individual MS/MS spectra, it still requires error-tolerant matching to homologous proteins to group smaller contig sequences into full-length protein sequences, thus limiting its effectiveness on sequences from poorly annotated proteins. Using low and high resolution CID and high resolution HCD MS/MS spectra, we address this limitation with a Meta-SPS algorithm designed to overlap and further assemble SPS contigs into Meta-SPS de novo contig sequences extending as long as 100 amino acids at over 97% accuracy without requiring any knowledge of homologous protein sequences. We demonstrate Meta-SPS using distinct MS/MS data sets obtained with separate enzymatic digestions and discuss how the remaining de novo sequencing limitations relate to MS/MS acquisition settings.
Braiding errors in interacting Majorana quantum wires
NASA Astrophysics Data System (ADS)
Sekania, Michael; Plugge, Stephan; Greiter, Martin; Thomale, Ronny; Schmitteckert, Peter
2017-09-01
Avenues of Majorana bound states (MBSs) have become one of the primary directions towards a possible realization of topological quantum computation. For a Y junction of Kitaev quantum wires, we numerically investigate the braiding of MBSs while considering the full quasiparticle background. The two central sources of braiding errors are found to be the fidelity loss due to the incomplete adiabaticity of the braiding operation as well as the finite hybridization of the MBSs. The explicit extraction of the braiding phase from the full many-particle states allows us to analyze the breakdown of the independent-particle picture of Majorana braiding. Furthermore, we find nearest-neighbor interactions to significantly affect the braiding performance for better or worse, depending on the sign and magnitude of the coupling.
Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses
NASA Astrophysics Data System (ADS)
Murphy, Christian E.
2018-05-01
Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well-known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point based uncertainty symbolization. The user can intuitively depict the centers of gravity, the major orientation of the point arrays as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore it is shown how applicable an adapted design of the error ellipse is to display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
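A compact sketch of how an error ellipse's semi-axes and orientation can be obtained from a 2x2 positional covariance matrix (here at a 95% confidence level); the covariance values are illustrative, and the cartographic redesigns described above are not reproduced.

```python
import numpy as np

def error_ellipse(cov, confidence_chi2=5.991):
    """Semi-major/semi-minor axes and orientation (radians) of the error
    ellipse for a 2x2 covariance matrix. 5.991 is the chi-square value for
    2 degrees of freedom at a 95% confidence level."""
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    semi_axes = np.sqrt(confidence_chi2 * eigvals)
    angle = np.arctan2(eigvecs[1, 0], eigvecs[0, 0])  # orientation of major axis
    return semi_axes[0], semi_axes[1], angle

cov = np.array([[4.0, 1.5],
                [1.5, 1.0]])  # illustrative positional covariance (m^2)
print(error_ellipse(cov))
```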
Limited Plasticity of Prismatic Visuomotor Adaptation
Wischhusen, Sven; Fahle, Manfred
2017-01-01
Movements toward an object displaced optically through prisms adapt quickly, a striking example of the plasticity of neuronal visuomotor programs. We investigated the degree and time course of this system’s plasticity. Participants performed goal-directed throwing or pointing movements with terminal feedback before, during, and after wearing prism goggles shifting the visual world laterally either to the right or to the left. Prism adaptation was incomplete even after 240 throwing movements, still deviating significantly laterally by an average of 0.8° (CI = 0.20°) at the end of the adaptation period. The remaining lateral deviation was significant for pointing movements only with left-shifting prisms. In both tasks, removal of the prisms led to an aftereffect which disappeared in the course of further training. This incomplete prism adaptation may be caused by movement variability combined with an adaptive neuronal control system exhibiting a finite capacity for evaluating movement errors. PMID:28473909
A Policy Representation Using Weighted Multiple Normal Distribution
NASA Astrophysics Data System (ADS)
Kimura, Hajime; Aramaki, Takeshi; Kobayashi, Shigenobu
In this paper, we tackle a reinforcement learning problem for a 5-linked ring robot in real time, so that the real robot can withstand the necessary trial and error. On this robot, incomplete perception problems are caused by noisy sensors and cheap position-control motor systems. This incomplete perception also causes the optimum actions to vary as learning progresses. To cope with this problem, we adopt an actor-critic method and propose a new hierarchical policy representation scheme that consists of discrete action selection on the top level and continuous action selection on the low level of the hierarchy. The proposed hierarchical scheme accelerates learning in continuous action spaces, and it can track the optimum actions as they vary over the course of learning on our robotics problem. This paper compares and discusses several learning algorithms through simulations, and demonstrates the proposed method in an application to the real robot.
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.
Pinchi, Vilma; Varvara, Giuseppe; Pradella, Francesco; Focardi, Martina; Donati, Michele D; Norelli, Gianaristide
2014-01-01
The aim of the study was to analyze the characteristics of implant dentistry claims in Italy based on insurance company technical reports for malpractice claims. One hundred twenty-one technical reports of cases of professional malpractice in implant dentistry between 2006 and 2010 were included in the study. Data included the sex and age of the patient and dentist, the kind of negligence claimed, and the damages awarded as a consequence of the alleged misconduct. Of the cases examined in this study, 9.9% went to court. The patients were female in 73.6% of the cases. Most of the technical errors were committed during implant insertion (82.6%). In 50.4% of cases, the technical error involved the surrounding structures, such as damage to the inferior alveolar nerve (32.2%) or the lingual nerve (2.5%), invasion of the maxillary sinus (9.1%), or pulpal dental necrosis in adjacent teeth (6.6%). Incomplete clinical documentation was apparent in 54.5% of cases. In 9.9% of cases, a civil suit had already been filed before a visit, and medicolegal advice from the insurance expert had been procured. The discrepancy between the total number of cases examined and those that went to court indicates that implant malpractice claims in Italy are most often settled out of court. The large number of intraoperative errors seen and the high proportion of injuries to surrounding structures suggest that implant dentists would benefit from further specific training. Also, clinical documentation vital to a defense against any claims relating to professional misconduct was incomplete or absent in more than half of the cases.
Induction of belief decision trees from data
NASA Astrophysics Data System (ADS)
AbuDahab, Khalil; Xu, Dong-ling; Keane, John
2012-09-01
In this paper, a method for acquiring belief rule-bases by inductive inference from data is described and evaluated. Existing methods extract traditional rules inductively from data, with consequents that are believed to be either 100% true or 100% false. Belief rules can capture uncertain or incomplete knowledge using uncertain belief degrees in consequents. Instead of using single-valued consequents, each belief rule deals with a set of collectively exhaustive and mutually exclusive consequents. The proposed method extracts belief rules from data which contain uncertain or incomplete knowledge.
Comprehensive characterization of atmospheric organic carbon at a forested site
NASA Astrophysics Data System (ADS)
Hunter, James F.; Day, Douglas A.; Palm, Brett B.; Yatavelli, Reddy L. N.; Chan, Arthur W. H.; Kaser, Lisa; Cappellin, Luca; Hayes, Patrick L.; Cross, Eben S.; Carrasquillo, Anthony J.; Campuzano-Jost, Pedro; Stark, Harald; Zhao, Yunliang; Hohaus, Thorsten; Smith, James N.; Hansel, Armin; Karl, Thomas; Goldstein, Allen H.; Guenther, Alex; Worsnop, Douglas R.; Thornton, Joel A.; Heald, Colette L.; Jimenez, Jose L.; Kroll, Jesse H.
2017-10-01
Atmospheric organic compounds are central to key chemical processes that influence air quality, ecological health, and climate. However, longstanding difficulties in predicting important quantities such as organic aerosol formation and oxidant lifetimes indicate that our understanding of atmospheric organic chemistry is fundamentally incomplete, probably due in part to the presence of organic species that are unmeasured using standard analytical techniques. Here we present measurements of a wide range of atmospheric organic compounds--including previously unmeasured species--taken concurrently at a single site (a ponderosa pine forest during summertime) by five state-of-the-art mass spectrometric instruments. The combined data set provides a comprehensive characterization of atmospheric organic carbon, covering a wide range in chemical properties (volatility, oxidation state, and molecular size), and exhibiting no obvious measurement gaps. This enables the first construction of a measurement-based local organic budget, highlighting the high emission, deposition, and oxidation fluxes in this environment. Moreover, previously unmeasured species, including semivolatile and intermediate-volatility organic species (S/IVOCs), account for one-third of the total organic carbon, and (within error) provide closure on both OH reactivity and potential secondary organic aerosol formation.
A model for incomplete longitudinal multivariate ordinal data.
Liu, Li C
2008-12-30
In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for the analysis of multivariate ordinal outcomes that are repeatedly measured. Under the MAR assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior are repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at any given time point. Copyright 2008 John Wiley & Sons, Ltd.
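The marginal likelihood for one subject can be approximated with Gauss-Hermite quadrature, as sketched below for a single normally distributed random intercept, binary items, and a probit link. The item parameters and responses are illustrative, and the actual model uses multidimensional quadrature, ordinal items, and multiple random effects.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm

def marginal_likelihood(responses, discrim, thresholds, sigma, n_nodes=15):
    """Approximate the integral over a N(0, sigma^2) random intercept of the
    product of probit item probabilities for one subject's binary responses."""
    nodes, weights = hermgauss(n_nodes)          # physicists' Hermite rule
    theta = np.sqrt(2.0) * sigma * nodes         # change of variable for N(0, sigma^2)
    total = 0.0
    for w, t in zip(weights, theta):
        p = norm.cdf(discrim * (t - thresholds))  # P(response = 1 | theta)
        lik = np.prod(np.where(responses == 1, p, 1.0 - p))
        total += w * lik
    return total / np.sqrt(np.pi)

# Illustrative: one subject, three binary items (a missing item would simply
# be dropped from the product, consistent with MAR).
responses = np.array([1, 0, 1])
print(marginal_likelihood(responses,
                          discrim=np.array([1.2, 0.8, 1.5]),
                          thresholds=np.array([0.0, 0.5, -0.3]),
                          sigma=1.0))
```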
Bertholet, Jenny; Worm, Esben; Høyer, Morten; Poulsen, Per
2017-06-01
Accurate patient positioning is crucial in stereotactic body radiation therapy (SBRT) due to a high dose regimen. Cone-beam computed tomography (CBCT) is often used for patient positioning based on radio-opaque markers. We compared six CBCT-based set-up strategies with or without rotational correction. Twenty-nine patients with three implanted markers received 3-6 fraction liver SBRT. The markers were delineated on the mid-ventilation phase of a 4D-planning-CT. One pretreatment CBCT was acquired per fraction. Set-up strategy 1 used only translational correction based on manual marker match between the CBCT and planning CT. Set-up strategy 2 used automatic 6 degrees-of-freedom registration of the vertebrae closest to the target. The 3D marker trajectories were also extracted from the projections and the mean position of each marker was calculated and used for set-up strategies 3-6. Translational correction only was used for strategy 3. Translational and rotational corrections were used for strategies 4-6 with the rotation being either vertebrae based (strategy 4), or marker based and constrained to ±3° (strategy 5) or unconstrained (strategy 6). The resulting set-up error was calculated as the 3D root-mean-square set-up error of the three markers. The set-up error of the spinal cord was calculated for all strategies. The bony anatomy set-up (2) had the largest set-up error (5.8 mm). The marker-based set-up with unconstrained rotations (6) had the smallest set-up error (0.8 mm) but the largest spinal cord set-up error (12.1 mm). The marker-based set-up with translational correction only (3) or with bony anatomy rotational correction (4) had equivalent set-up error (1.3 mm) but rotational correction reduced the spinal cord set-up error from 4.1 mm to 3.5 mm. Marker-based set-up was substantially better than bony-anatomy set-up. Rotational correction may improve the set-up, but further investigations are required to determine the optimal correction strategy.
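The set-up error metric used above reduces to a root-mean-square of the per-marker 3D displacements remaining after the couch correction; a minimal sketch follows, with illustrative marker coordinates.

```python
import numpy as np

def rms_setup_error(planned, actual):
    """3D root-mean-square set-up error over markers, in the same units as the
    input coordinates (rows = markers, columns = x, y, z)."""
    d = np.linalg.norm(np.asarray(actual) - np.asarray(planned), axis=1)
    return float(np.sqrt(np.mean(d**2)))

# Illustrative planned vs. post-correction marker positions (mm).
planned = np.array([[10.0, 25.0, -5.0], [12.0, 30.0, -2.0], [8.0, 28.0, 1.0]])
actual  = np.array([[10.6, 24.4, -5.3], [12.9, 30.5, -1.5], [8.4, 27.2,  1.8]])
print(f"{rms_setup_error(planned, actual):.2f} mm")
```

The same calculation applied to points along the spinal cord contour gives the spinal cord set-up error, which is why a strategy that minimizes the marker RMS (unconstrained rotation) can still perform worst for the cord.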
Do Social Conditions Affect Capuchin Monkeys' (Cebus apella) Choices in a Quantity Judgment Task?
Beran, Michael J; Perdue, Bonnie M; Parrish, Audrey E; Evans, Theodore A
2012-01-01
Beran et al. (2012) reported that capuchin monkeys closely matched the performance of humans in a quantity judgment test in which information was incomplete but a judgment still had to be made. In each test session, subjects first made quantity judgments between two known options. Then, they made choices where only one option was visible. Both humans and capuchin monkeys were guided by past outcomes, as they shifted from selecting a known option to selecting an unknown option at the point at which the known option went from being more than the average rate of return to less than the average rate of return from earlier choices in the test session. Here, we expanded this assessment of what guides quantity judgment choice behavior in the face of incomplete information to include manipulations to the unselected quantity. We manipulated the unchosen set in two ways: first, we showed the monkeys what they did not get (the unchosen set), anticipating that "losses" would weigh heavily on subsequent trials in which the same known quantity was presented. Second, we sometimes gave the unchosen set to another monkey, anticipating that this social manipulation might influence the risk-taking responses of the focal monkey when faced with incomplete information. However, neither manipulation caused difficulty for the monkeys who instead continued to use the rational strategy of choosing known sets when they were as large as or larger than the average rate of return in the session, and choosing the unknown (riskier) set when the known set was not sufficiently large. As in past experiments, this was true across a variety of daily ranges of quantities, indicating that monkeys were not using some absolute quantity as a threshold for selecting (or not) the known set, but instead continued to use the daily average rate of return to determine when to choose the known versus the unknown quantity.
2016-06-01
an effective system monitoring and display capability. The SOM, C-SSE, and resource managers access MUOS via a web portal called the MUOS Planning...and Provisioning Application (PlanProvApp). This web portal is their window into MUOS and is designed to provide them with a shared understanding of...including page loading errors, partially loaded web pages, incomplete reports, and inaccurate reports. For example, MUOS reported that there were
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1985-01-01
Synopses are given for NASA-supported work in computer science at the University of Virginia. Some areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distribution and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.
Heping Liu; James T. Randerson; Jamie Lindfors; William J. Massman; Thomas Foken
2006-01-01
We present an approach for assessing the impact of systematic biases in measured energy fluxes on CO2 flux estimates obtained from open-path eddy-covariance systems. In our analysis, we present equations to analyse the propagation of errors through the Webb, Pearman, and Leuning (WPL) algorithm [Quart. J. Roy. Meteorol. Soc. 106, 85–100, 1980] that is widely used to...
A novel hybrid forecasting model for PM₁₀ and SO₂ daily concentrations.
Wang, Ping; Liu, Yong; Qin, Zuodong; Zhang, Guisheng
2015-02-01
Air-quality forecasting in urban areas is difficult because of the uncertainties in describing both the emission and meteorological fields. The use of incomplete information in the training phase restricts practical air-quality forecasting. In this paper, we propose a hybrid artificial neural network and a hybrid support vector machine, which effectively enhance the forecasting accuracy of an artificial neural network (ANN) and support vector machine (SVM) by revising the error term of the traditional methods. The hybrid methodology can be described in two stages. First, we applied the ANN or SVM forecasting system with historical data and exogenous parameters, such as meteorological variables. Then, the forecasting target was revised by the Taylor expansion forecasting model using the residual information of the error term in the previous stage. The innovation of this approach is that it makes full and valid use of the residual information under incomplete input-variable conditions. The proposed method was evaluated by experiments using a 2-year dataset of daily PM₁₀ (particles with a diameter of 10 μm or less) concentrations and SO₂ (sulfur dioxide) concentrations from four air pollution monitoring stations located in Taiyuan, China. The theoretical analysis and experimental results demonstrated that the forecasting accuracy of the proposed model is very promising. Copyright © 2014 Elsevier B.V. All rights reserved.
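A hedged sketch of the two-stage residual-correction idea described above, using generic scikit-learn regressors as stand-ins for the paper's ANN/SVM base model and Taylor-expansion residual model; the data, features, and model choices here are assumptions for illustration only.

```python
# Illustrative two-stage sketch of the residual-correction idea described above.
# Stage 1 fits a base forecaster; stage 2 models the stage-1 residuals and adds
# the correction back.  The regressors here are generic scikit-learn stand-ins,
# not the paper's ANN/SVM or Taylor-expansion model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                     # e.g. lagged pollutant + meteorology features
y = X @ [1.5, -0.8, 0.3, 0.0] + 0.5 * np.sin(X[:, 0]) + rng.normal(0, 0.2, 500)

X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

stage1 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
stage1.fit(X_train, y_train)
residuals = y_train - stage1.predict(X_train)     # error term of the base model

stage2 = Ridge().fit(X_train, residuals)          # stand-in for the residual model
y_hat = stage1.predict(X_test) + stage2.predict(X_test)

print("RMSE base   :", np.sqrt(np.mean((stage1.predict(X_test) - y_test) ** 2)))
print("RMSE revised:", np.sqrt(np.mean((y_hat - y_test) ** 2)))
```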
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
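The error-simulation protocol described above can be illustrated with a short scikit-learn sketch: randomize the labels of a growing fraction of a synthetic modeling set and track five-fold cross-validation performance. The data set, learner, and error ratios below are illustrative assumptions, not the curated endpoints or QSAR workflow used in the study.

```python
# Hedged sketch of the error-simulation protocol described above: randomize the
# activities of a growing fraction of the modeling set and watch five-fold
# cross-validation performance deteriorate.  Data and model are synthetic
# stand-ins, not the curated endpoints used in the study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=600, n_features=30, random_state=42)

for error_ratio in (0.0, 0.1, 0.2, 0.4):
    y_noisy = y.copy()
    n_flip = int(error_ratio * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_noisy[idx] = rng.integers(0, 2, size=n_flip)     # simulated experimental error
    acc = cross_val_score(RandomForestClassifier(random_state=0),
                          X, y_noisy, cv=5).mean()
    print(f"simulated error ratio {error_ratio:.0%}: 5-fold CV accuracy {acc:.3f}")
```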
NASA Astrophysics Data System (ADS)
Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan
2017-06-01
Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10⁻⁶-10⁻²), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10⁻¹-2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10⁻²-10⁻¹, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.
Image-processing algorithms for inspecting characteristics of hybrid rice seed
NASA Astrophysics Data System (ADS)
Cheng, Fang; Ying, Yibin
2004-03-01
Incompletely closed glumes, germ and disease are three characteristics of hybrid rice seed. Image-processing algorithms developed to detect these seed characteristics were presented in this paper. The rice seed used for this study involved five varieties: Jinyou402, Shanyou10, Zhongyou207, Jiayou and IIyou. The algorithms were implemented with a 5×600 image set, a 4×400 image set and another 5×600 image set, respectively. The image sets included black-background images, white-background images and both-side images of rice seed. Results show that the algorithm for inspecting seeds with incompletely closed glumes, based on the Radon transform, achieved an accuracy of 96% for normal seeds, 92% for seeds with fine fissures and 87% for seeds with unclosed glumes; the algorithm for inspecting germinated seeds on the panicle, based on PCA and ANN, achieved an average accuracy of 98% for normal seeds and 88% for germinated seeds on the panicle; and the algorithm for inspecting diseased seeds, based on color features, achieved an accuracy of 92% for normal and healthy seeds, 95% for spot-diseased seeds and 83% for severely diseased seeds.
Yu, Liang; Wang, Bingbo; Ma, Xiaoke; Gao, Lin
2016-12-23
Extracting drug-disease correlations is crucial in unveiling disease mechanisms, as well as in discovering new indications of available drugs, or drug repositioning. Both the interactome and the knowledge of disease-associated and drug-associated genes remain incomplete. We present a new method to predict the associations between drugs and diseases. Our method is based on a module distance, which was originally proposed to calculate distances between modules in the incomplete human interactome. We first map all the disease genes and drug genes to a combined protein interaction network. Then, based on the module distance, we calculate the distances between drug gene sets and disease gene sets, and take these distances as the relationships of drug-disease pairs. We also filter possible false positive drug-disease correlations by p-value. Finally, we validate the top-100 drug-disease associations related to six drugs in the predicted results. The overlap between our predicted correlations and those reported in the Comparative Toxicogenomics Database (CTD) and the literature, together with their enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, demonstrates that our approach can not only effectively identify new drug indications but also provide new insight into drug-disease discovery.
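To make the idea of a network-based distance between gene sets concrete, the sketch below computes a simple closest-gene distance on a toy networkx graph. This is one common choice for incomplete interactomes and is not claimed to be the exact module distance used in this study; the graph and gene sets are hypothetical.

```python
# Illustrative sketch of a network-based distance between a drug gene set and a
# disease gene set: mean shortest path from each gene in one set to its closest
# gene in the other set, averaged over both directions.  Not the paper's exact
# module distance; the network and gene sets are toy examples.
import networkx as nx

G = nx.Graph()                                   # toy protein-interaction network
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("B", "F")])

drug_genes = {"A", "F"}
disease_genes = {"D", "E"}

def closest_distances(sources, targets, graph):
    out = []
    for s in sources:
        lengths = nx.single_source_shortest_path_length(graph, s)
        reachable = [lengths[t] for t in targets if t in lengths]
        if reachable:
            out.append(min(reachable))
    return out

d_st = closest_distances(drug_genes, disease_genes, G)
d_ts = closest_distances(disease_genes, drug_genes, G)
module_distance = sum(d_st + d_ts) / len(d_st + d_ts)
print("drug-disease module distance:", module_distance)
```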
General relaxation schemes in multigrid algorithms for higher order singularity methods
NASA Technical Reports Server (NTRS)
Oskam, B.; Fray, J. M. J.
1981-01-01
Relaxation schemes based on approximate and incomplete factorization technique (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and comparison with similar results from the second kind of equations are a novel item. Application of the MD algorithm shows convergence to the level of truncation error of a second order accurate panel method.
Emission and absorption x-ray edges of Li
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callcott, T A; Arakawa, E T; Ederer, D L
1977-01-01
Measurements of the K X-ray absorption and emission edges of Li are reported. They were made with the same spectrometer at the NBS storage ring and serve to establish a 0.1 eV separation between the edges with no possibility of instrument calibration error. These results are compared with recent theories of Almbladh and Mahan describing the effects of incomplete phonon relaxation about the core hole. It is concluded that these theories give a satisfactory explanation of the data.
Adaptive management: Chapter 1
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
Photonic entanglement-assisted quantum low-density parity-check encoders and decoders.
Djordjevic, Ivan B
2010-05-01
I propose encoder and decoder architectures for entanglement-assisted (EA) quantum low-density parity-check (LDPC) codes suitable for all-optical implementation. I show that two basic gates needed for EA quantum error correction, namely, controlled-NOT (CNOT) and Hadamard gates can be implemented based on Mach-Zehnder interferometer. In addition, I show that EA quantum LDPC codes from balanced incomplete block designs of unitary index require only one entanglement qubit to be shared between source and destination.
NASA Astrophysics Data System (ADS)
Kougioumtzoglou, Ioannis A.; dos Santos, Ketson R. M.; Comerford, Liam
2017-09-01
Various system identification techniques exist in the literature that can handle non-stationary measured time-histories, or cases of incomplete data, or address systems following a fractional calculus modeling. However, there are not many (if any) techniques that can address all three aforementioned challenges simultaneously in a consistent manner. In this paper, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear and time-variant oscillators with fractional derivative terms subject to incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear sub-systems. In this regard, the oscillator is transformed into an equivalent MISO system in the wavelet domain. Next, a recently developed L1-norm minimization procedure based on compressive sensing theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Finally, these wavelet coefficients are utilized to determine appropriately defined time- and frequency-dependent wavelet based frequency response functions and related oscillator parameters. Several linear and nonlinear time-variant systems with fractional derivative elements are used as numerical examples to demonstrate the reliability of the technique even in cases of noise corrupted and incomplete data.
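The compressive-sensing step referenced above can be illustrated with a small sketch that recovers sparse transform-domain coefficients from an incomplete, noisy set of samples via L1-penalized regression. Lasso is used here as a convenient stand-in for a formal L1-norm minimization, and the DCT basis, sparsity pattern, and noise level are assumptions; this is not the authors' wavelet-domain procedure.

```python
# Sketch of sparse recovery from incomplete data via L1-penalized regression.
# A signal that is sparse in the DCT domain is observed at ~30% of its samples;
# Lasso stands in for a formal L1-norm minimization.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
N = 256
coeffs = np.zeros(N)
coeffs[[5, 23, 60]] = [1.0, -0.7, 0.5]            # sparse in the DCT domain
Phi = idct(np.eye(N), axis=0, norm="ortho")       # signal = Phi @ coeffs
signal = Phi @ coeffs

keep = rng.choice(N, size=80, replace=False)      # incomplete measurements (~30%)
A, y = Phi[keep, :], signal[keep] + rng.normal(0, 0.01, size=80)

recovered = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(A, y).coef_
print("largest recovered coefficients at indices:",
      np.sort(np.argsort(np.abs(recovered))[-3:]))   # expected: 5, 23, 60
```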
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
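As a minimal illustration of the bootstrap confidence-interval step mentioned above, the sketch below computes a percentile bootstrap interval for an odds ratio from a fully observed 2x2 table. The counts are hypothetical, and the paper's derivation of the joint sampling distribution for tables with incompletely observed margins is not reproduced here.

```python
# Minimal percentile-bootstrap sketch for a contingency-table parameter (here,
# an odds ratio from a fully observed 2x2 table).  Only the bootstrap CI step
# is illustrated; the incomplete-margin machinery of the paper is not.
import numpy as np

rng = np.random.default_rng(7)
counts = np.array([[40, 10],                     # hypothetical 2x2 counts
                   [15, 35]])

# Expand the table into one categorical observation per subject (cell index 0-3)
# so that ordinary nonparametric resampling can be applied.
data = np.repeat(np.arange(4), counts.flatten())

def odds_ratio(sample):
    n = np.bincount(sample, minlength=4).reshape(2, 2).astype(float)
    n += 0.5                                     # continuity correction for empty cells
    return (n[0, 0] * n[1, 1]) / (n[0, 1] * n[1, 0])

boot = [odds_ratio(rng.choice(data, size=data.size, replace=True))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"odds ratio {odds_ratio(data):.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```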
Detecting Role Errors in the Gene Hierarchy of the NCI Thesaurus
Min, Hua; Cohen, Barry; Halper, Michael; Oren, Marc; Perl, Yehoshua
2008-01-01
Gene terminologies are playing an increasingly important role in the ever-growing field of genomic research. While errors in large, complex terminologies are inevitable, gene terminologies are even more susceptible to them due to the rapid growth of genomic knowledge and the nature of its discovery. It is therefore very important to establish quality-assurance protocols for such genomic-knowledge repositories. Different kinds of terminologies oftentimes require auditing methodologies adapted to their particular structures. In light of this, an auditing methodology tailored to the characteristics of the NCI Thesaurus’s (NCIT’s) Gene hierarchy is presented. The Gene hierarchy is of particular interest to the NCIT’s designers due to the primary role of genomics in current cancer research. This multiphase methodology focuses on detecting role-errors, such as missing roles or roles with incorrect or incomplete target structures, occurring within that hierarchy. The methodology is based on two kinds of abstraction networks, called taxonomies, that highlight the role distribution among concepts within the IS-A (subsumption) hierarchy. These abstract views tend to highlight portions of the hierarchy having a higher concentration of errors. The errors found during an application of the methodology are reported. Hypotheses pertaining to the efficacy of our methodology are investigated. PMID:19221606
Factors associated with disclosure of medical errors by housestaff.
Kronman, Andrea C; Paasche-Orlow, Michael; Orlander, Jay D
2012-04-01
Attributes of the organisational culture of residency training programmes may impact patient safety. Training environments are complex, composed of clinical teams, residency programmes, and clinical units. We examined the relationship between residents' perceptions of their training environment and disclosure of or apology for their worst error. Anonymous, self-administered surveys were distributed to Medicine and Surgery residents at Boston Medical Center in 2005. Surveys asked residents to describe their worst medical error, and to answer selected questions from validated surveys measuring elements of working environments that promote learning from error. Subscales measured the microenvironments of the clinical team, residency programme, and clinical unit. Univariate and bivariate statistical analyses examined relationships between trainee characteristics, their perceived learning environment(s), and their responses to the error. Out of 109 surveys distributed to residents, 99 surveys were returned (91% overall response rate), two incomplete surveys were excluded, leaving 97: 61% internal medicine, 39% surgery, 59% male residents. While 31% reported apologising for the situation associated with the error, only 17% reported disclosing the error to patients and/or family. More male residents disclosed the error than female residents (p=0.04). Surgery residents scored higher on the subscales of safety culture pertaining to the residency programme (p=0.02) and managerial commitment to safety (p=0.05). Our Medical Culture Summary score was positively associated with disclosure (p=0.04) and apology (p=0.05). Factors in the learning environments of residents are associated with responses to medical errors. Organisational safety culture can be measured, and used to evaluate environmental attributes of clinical training that are associated with disclosure of, and apology for, medical error.
Group prioritisation with unknown expert weights in incomplete linguistic context
NASA Astrophysics Data System (ADS)
Cheng, Dong; Cheng, Faxin; Zhou, Zhili; Wang, Juan
2017-09-01
In this paper, we study a group prioritisation problem in situations when the expert weights are completely unknown and their judgement preferences are linguistic and incomplete. Starting from the theory of relative entropy (RE) and multiplicative consistency, an optimisation model is provided for deriving an individual priority vector without estimating the missing value(s) of an incomplete linguistic preference relation. In order to address the unknown expert weights in the group aggregating process, we define two new kinds of expert weight indicators based on RE: proximity entropy weight and similarity entropy weight. Furthermore, a dynamic-adjusting algorithm (DAA) is proposed to obtain an objective expert weight vector and capture the dynamic properties involved in it. Unlike the extant literature of group prioritisation, the proposed RE approach does not require pre-allocation of expert weights and can solve incomplete preference relations. An interesting finding is that once all the experts express their preference relations, the final expert weight vector derived from the DAA is fixed irrespective of the initial settings of expert weights. Finally, an application example is conducted to validate the effectiveness and robustness of the RE approach.
On the Fallibility of Principal Components in Research
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong
2017-01-01
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Ly, Thomas; Pamer, Carol; Dang, Oanh; Brajovic, Sonja; Haider, Shahrukh; Botsis, Taxiarchis; Milward, David; Winter, Andrew; Lu, Susan; Ball, Robert
2018-05-31
The FDA Adverse Event Reporting System (FAERS) is a primary data source for identifying unlabeled adverse events (AEs) in a drug or biologic drug product's postmarketing phase. Many AE reports must be reviewed by drug safety experts to identify unlabeled AEs, even if the reported AEs are previously identified, labeled AEs. Integrating the labeling status of drug product AEs into FAERS could increase report triage and review efficiency. Medical Dictionary for Regulatory Activities (MedDRA) is the standard for coding AE terms in FAERS cases. However, drug manufacturers are not required to use MedDRA to describe AEs in product labels. We hypothesized that natural language processing (NLP) tools could assist in automating the extraction and MedDRA mapping of AE terms in drug product labels. We evaluated the performance of three NLP systems, (ETHER, I2E, MetaMap) for their ability to extract AE terms from drug labels and translate the terms to MedDRA Preferred Terms (PTs). Pharmacovigilance-based annotation guidelines for extracting AE terms from drug labels were developed for this study. We compared each system's output to MedDRA PT AE lists, manually mapped by FDA pharmacovigilance experts using the guidelines, for ten drug product labels known as the "gold standard AE list" (GSL) dataset. Strict time and configuration conditions were imposed in order to test each system's capabilities under conditions of no human intervention and minimal system configuration. Each NLP system's output was evaluated for precision, recall and F measure in comparison to the GSL. A qualitative error analysis (QEA) was conducted to categorize a random sample of each NLP system's false positive and false negative errors. A total of 417, 278, and 250 false positive errors occurred in the ETHER, I2E, and MetaMap outputs, respectively. A total of 100, 80, and 187 false negative errors occurred in ETHER, I2E, and MetaMap outputs, respectively. Precision ranged from 64% to 77%, recall from 64% to 83% and F measure from 67% to 79%. I2E had the highest precision (77%), recall (83%) and F measure (79%). ETHER had the lowest precision (64%). MetaMap had the lowest recall (64%). The QEA found that the most prevalent false positive errors were context errors such as "Context error/General term", "Context error/Instructions or monitoring parameters", "Context error/Medical history preexisting condition underlying condition risk factor or contraindication", and "Context error/AE manifestations or secondary complication". The most prevalent false negative errors were in the "Incomplete or missed extraction" error category. Missing AE terms were typically due to long terms, or terms containing non-contiguous words which do not correspond exactly to MedDRA synonyms. MedDRA mapping errors were a minority of errors for ETHER and I2E but were the most prevalent false positive errors for MetaMap. The results demonstrate that it may be feasible to use NLP tools to extract and map AE terms to MedDRA PTs. However, the NLP tools we tested would need to be modified or reconfigured to lower the error rates to support their use in a regulatory setting. Tools specific for extracting AE terms from drug labels and mapping the terms to MedDRA PTs may need to be developed to support pharmacovigilance. Conducting research using additional NLP systems on a larger, diverse GSL would also be informative. Copyright © 2018. Published by Elsevier Inc.
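The precision, recall, and F measure figures quoted above can be reproduced for any system output with a few lines of Python; the sketch below compares a hypothetical set of extracted MedDRA Preferred Terms against a hypothetical gold-standard list (neither is FAERS or GSL data).

```python
# Sketch of the evaluation metrics described above: compare an NLP system's
# extracted MedDRA Preferred Terms for one label against the expert gold
# standard list.  Term sets are hypothetical examples, not FAERS/GSL data.
def precision_recall_f1(extracted, gold):
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                   # true positives
    fp = len(extracted - gold)                   # false positives
    fn = len(gold - extracted)                   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {"Nausea", "Headache", "Hepatotoxicity", "Rash"}
system_output = {"Nausea", "Rash", "Dizziness"}
p, r, f = precision_recall_f1(system_output, gold)
print(f"precision {p:.2f}, recall {r:.2f}, F1 {f:.2f}")
```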
An investigation of error correcting techniques for OMV and AXAF
NASA Technical Reports Server (NTRS)
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and number of uncorrectable errors were calculated for each data set before testing.
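A hedged sketch of the error-injection scheme described above: pseudo-random bursts with Poisson-distributed gaps between bursts and Gaussian burst lengths, flipping bits in a data block before Reed-Solomon decoding. The parameters and block size are illustrative assumptions, not the values used in the test apparatus.

```python
# Illustrative error-injection sketch: Poisson-distributed gaps between error
# bursts, Gaussian burst lengths, bits flipped in a data block.  Parameters are
# assumptions, not those used in the original test system.
import numpy as np

rng = np.random.default_rng(3)

def inject_burst_errors(data_bits, mean_gap=200, burst_mean=8, burst_std=3):
    corrupted = data_bits.copy()
    pos = rng.poisson(mean_gap)                       # gap to the first burst
    while pos < corrupted.size:
        burst_len = max(1, int(round(rng.normal(burst_mean, burst_std))))
        end = min(pos + burst_len, corrupted.size)
        corrupted[pos:end] ^= 1                       # flip a burst of bits
        pos = end + rng.poisson(mean_gap)             # Poisson gap to next burst
    return corrupted

block = rng.integers(0, 2, size=255 * 8, dtype=np.uint8)   # one RS(255,223) block
noisy = inject_burst_errors(block)
print("bit errors injected:", int((noisy ^ block).sum()))
```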
Self-reports of induced abortion: an empathetic setting can improve the quality of data.
Rasch, V; Muhammad, H; Urassa, E; Bergström, S
2000-01-01
OBJECTIVES: This study estimated the proportion of incomplete abortions that are induced in hospital-based settings in Tanzania. METHODS: A cross-sectional questionnaire study was conducted in 2 phases at 3 hospitals in Tanzania. Phase 1 included 302 patients with a diagnosis of incomplete abortion, and phase 2 included 823 such patients. RESULTS: In phase 1, in which cases were classified by clinical criteria and information from the patient, 3.9% to 16.1% of the cases were classified as induced abortion. In phase 2, in which the structured interview was changed to an empathetic dialogue and previously used clinical criteria were omitted, 30.9% to 60.0% of the cases were classified as induced abortion. CONCLUSIONS: An empathetic dialogue improves the quality of data collected among women with induced abortion. PMID:10897196
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
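The conditional-versus-marginal behaviour of the sample average discussed above can be seen in a toy simulation: under a data-dependent stopping rule, the sample average shows a marked apparent bias when summarised conditionally on the realised sample size, while its marginal bias is comparatively small and vanishes asymptotically. The stopping rule, sample sizes, and distribution below are illustrative assumptions, not the paper's setting.

```python
# Toy simulation of the sample average after a data-dependent stopping rule.
# Conditional on the realised sample size the average looks biased; marginally
# the bias is much smaller (and shrinks as the stage sizes grow).
import numpy as np

rng = np.random.default_rng(2024)
mu, n1, n2, n_trials = 0.0, 20, 40, 50_000
means, sizes = [], []

for _ in range(n_trials):
    stage1 = rng.normal(mu, 1.0, size=n1)
    if stage1.mean() > 0:                      # toy rule: stop early if "promising"
        data = stage1
    else:                                      # otherwise continue to N = n2
        data = np.concatenate([stage1, rng.normal(mu, 1.0, size=n2 - n1)])
    means.append(data.mean())
    sizes.append(data.size)

means, sizes = np.array(means), np.array(sizes)
print("marginal mean of sample average:", round(means.mean(), 4))
print("mean conditional on N = 20     :", round(means[sizes == n1].mean(), 4))
print("mean conditional on N = 40     :", round(means[sizes == n2].mean(), 4))
```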
Heneka, Nicole; Shaw, Tim; Rowett, Debra; Phillips, Jane L
2016-06-01
Opioids are the primary pharmacological treatment for cancer pain and, in the palliative care setting, are routinely used to manage symptoms at the end of life. Opioids are one of the most frequently reported drug classes in medication errors causing patient harm. Despite their widespread use, little is known about the incidence and impact of opioid medication errors in oncology and palliative care settings. To determine the incidence, types and impact of reported opioid medication errors in adult oncology and palliative care patient settings. A systematic review. Five electronic databases and the grey literature were searched from 1980 to August 2014. Empirical studies published in English, reporting data on opioid medication error incidence, types or patient impact, within adult oncology and/or palliative care services, were included. Popay's narrative synthesis approach was used to analyse data. Five empirical studies were included in this review. Opioid error incidence rate was difficult to ascertain as each study focussed on a single narrow area of error. The predominant error type related to deviation from opioid prescribing guidelines, such as incorrect dosing intervals. None of the included studies reported the degree of patient harm resulting from opioid errors. This review has highlighted the paucity of the literature examining opioid error incidence, types and patient impact in adult oncology and palliative care settings. Defining, identifying and quantifying error reporting practices for these populations should be an essential component of future oncology and palliative care quality and safety initiatives. © The Author(s) 2015.
Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?
Kiernan, D; Hosking, J; O'Brien, T
2016-03-01
Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that errors in absolute HJC position comparable to those in children may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects, with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance, while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in the anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects, suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.
Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W
2015-08-01
This article proposes quantitative analysis tools and digital phantoms to quantify intrinsic errors of deformable image registration (DIR) systems and establish quality assurance (QA) procedures for clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set R', T, and DVFref thus forms a realistic truth set and can therefore be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis, that is, calculating and delineating differences between DVFs, two methods were used: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice, and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA was evaluated using the head and neck case. © The Author(s) 2014.
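A minimal sketch of the two analysis tools described above, applied to synthetic deformation vector fields: a voxel-wise error-magnitude map (the local analysis, suitable for colour display on each slice) and a cumulative error summary within one structure (the global analysis). Array shapes, field values, and the ROI mask are assumptions for illustration.

```python
# Synthetic illustration of DVF-comparison QA: local error-magnitude map plus a
# cumulative error summary within one structure.  All shapes, fields, and the
# ROI mask are toy assumptions, not clinical data.
import numpy as np

rng = np.random.default_rng(0)
shape = (16, 64, 64)                                   # (slices, rows, cols)

dvf_ref = rng.normal(0.0, 1.0, size=shape + (3,))      # "truth" DVF in mm
dvf_test = dvf_ref + rng.normal(0.0, 0.5, size=shape + (3,))   # DIR system output

# Local analysis: per-voxel magnitude of the DVF difference (for colour maps).
error_map = np.linalg.norm(dvf_test - dvf_ref, axis=-1)

# Global analysis: cumulative probability of error restricted to one structure,
# here a hypothetical box-shaped ROI.
roi = np.zeros(shape, dtype=bool)
roi[4:12, 20:40, 20:40] = True
errors = np.sort(error_map[roi])

for threshold in (0.5, 1.0, 2.0):                      # mm
    fraction = np.searchsorted(errors, threshold) / errors.size
    print(f"fraction of ROI voxels with error < {threshold:.1f} mm: {fraction:.2f}")
```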
NASA Astrophysics Data System (ADS)
Terray, P.; Sooraj, K. P.; Masson, S.; Krishna, R. P. M.; Samson, G.; Prajeesh, A. G.
2017-07-01
State-of-the-art global coupled models used in seasonal prediction systems and climate projections still have important deficiencies in representing the boreal summer tropical rainfall climatology. These errors include prominently a severe dry bias over all the Northern Hemisphere monsoon regions, excessive rainfall over the ocean and an unrealistic double inter-tropical convergence zone (ITCZ) structure in the tropical Pacific. While these systematic errors can be partly reduced by increasing the horizontal atmospheric resolution of the models, they also illustrate our incomplete understanding of the key mechanisms controlling the position of the ITCZ during boreal summer. Using a large collection of coupled models and dedicated coupled experiments, we show that these tropical rainfall errors are partly associated with insufficient surface thermal forcing and incorrect representation of the surface albedo over the Northern Hemisphere continents. Improving the parameterization of the land albedo in two global coupled models leads to a large reduction of these systematic errors and further demonstrates that the Northern Hemisphere subtropical deserts play a seminal role in these improvements through a heat low mechanism.
Comparison of Oral Reading Errors between Contextual Sentences and Random Words among Schoolchildren
ERIC Educational Resources Information Center
Khalid, Nursyairah Mohd; Buari, Noor Halilah; Chen, Ai-Hong
2017-01-01
This paper compares oral reading errors between contextual sentences and random words among schoolchildren. Two sets of reading materials were developed to test oral reading errors in 30 schoolchildren (10.00±1.44 years). Set A comprised contextual sentences while Set B encompassed random words. The schoolchildren were asked to…
Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model
NASA Astrophysics Data System (ADS)
Kuznetsov, A. V.; Makaryants, G. M.
2018-01-01
Many studies have addressed gas turbine engine identification via dynamic neural network models. The identification process should minimize errors between the model and the real object. However, the question of how the training data set for such neural networks is constructed is usually overlooked. This article presents a study of the influence of data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine. The thermodynamic model's input signal is the fuel consumption and its output signal is the engine rotor rotation frequency. Four types of input signal were used for creating the training and testing data sets of the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created based on these types of training data sets. Each neural network was tested against the four types of test data sets. As a result, 16 transition processes from the four neural networks and four test data sets were compared with the corresponding solutions of the thermodynamic model. Errors were compared across all neural networks for each test data set, yielding the error value range of each test data set. These error value ranges are small; therefore, the influence of data set type on identification accuracy is low.
Evaluation of ship-based sediment flux measurements by ADCPs in tidal flows
NASA Astrophysics Data System (ADS)
Becker, Marius; Maushake, Christian; Grünler, Steffen; Winter, Christian
2017-04-01
In the past decades, acoustic backscatter calibration has developed into a frequently applied technique to measure fluxes of suspended sediments in rivers and estuaries. Data is mainly acquired using single-frequency profiling devices, such as ADCPs. In this case, variations of acoustic particle properties may have a significant impact on the calibration with respect to suspended sediment concentration, but associated effects are rarely considered. Further challenges regarding flux determination arise from incomplete vertical and lateral coverage of the cross-section, and the small ratio of the residual transport to the tidal transport, depending on the tidal prism. We analyzed four sets of 13h cross-sectional ADCP data, collected at different locations in the range of the turbidity zone of the Weser estuary, North Sea, Germany. Vertical LISST, OBS and CTD measurements were taken every hour. During the calibration, sediment absorption was taken into account. First, acoustic properties were estimated using LISST particle size distributions. Due to the tidal excursion and displacement of the turbidity zone, acoustic properties of particles changed during the tidal cycle, at all locations. Applying empirical functions, the lowest backscattering cross-section and highest sediment absorption coefficient were found in the center of the turbidity zone. Outside the tidally averaged location of the turbidity zone, changes of acoustic parameters were caused mainly by advection. In the turbidity zone, these properties were also affected by settling and entrainment, inducing vertical differences and systematic errors in concentration. In general, due to the iterative correction of sediment absorption along the acoustic path, local errors in concentration propagate and amplify exponentially. Based on reference concentration obtained from water samples and OBS data, we quantified these errors and their effect on cross-sectional averaged concentration and sediment flux. We found that errors are effectively decreased by applying calibration parameters interpolated in time, and by an optimization of the sediment absorption coefficient. We further discuss practical aspects of residual flux determination in tidal environments and of measuring strategies in relation to site-specific tidal dynamics.
Healing assessment of tile sets for error tolerance in DNA self-assembly.
Hashempour, M; Mashreghian Arani, Z; Lombardi, F
2008-12-01
An assessment of the effectiveness of healing for error tolerance in DNA self-assembly tile sets for algorithmic/nano-manufacturing applications is presented. Initially, the conditions for correct binding of a tile to an existing aggregate are analysed using a Markovian approach; based on this analysis, it is proved that correct aggregation (as identified with a so-called ideal tile set) is not always met for the existing tile sets for nano-manufacturing. A metric for assessing tile sets for healing by utilising punctures is proposed. Tile sets are investigated and assessed with respect to features such as error (mismatched tile) movement, punctured area and bond types. Subsequently, it is shown that the proposed metric can comprehensively assess the healing effectiveness of a puncture type for a tile set and its capability to attain error tolerance for the desired pattern. Extensive simulation results are provided.
Özdemir, Vural; Springer, Simon
2018-03-01
Diversity is increasingly at stake in the early 21st century. Diversity is often conceptualized across ethnicity, gender, socioeconomic status, sexual preference, and professional credentials, among other categories of difference. These are important and relevant considerations, and yet they are incomplete. Diversity also rests in the way we frame questions long before answers are sought. Such diversity in the framing (epistemology) of scientific and societal questions is important because such framings influence the types of data, results, and impacts produced by research. Errors in the framing of a research question, whether in technical science or social science, are known as type III errors, as opposed to the better known type I (false positives) and type II errors (false negatives). Kimball defined "error of the third kind" as giving the right answer to the wrong problem. Raiffa described the type III error as correctly solving the wrong problem. Type III errors are upstream or design flaws, often driven by unchecked human values and power, and can adversely impact an entire innovation ecosystem, waste money, time, careers, and precious resources by focusing on the wrong or incorrectly framed question and hypothesis. Decades may pass while technology experts, scientists, social scientists, funding agencies and management consultants continue to tackle questions that suffer from type III errors. We propose a new diversity metric, the Frame Diversity Index (FDI), based on the hitherto neglected diversities in knowledge framing. The FDI would be positively correlated with epistemological diversity and technological democracy, and inversely correlated with prevalence of type III errors in innovation ecosystems, consortia, and knowledge networks. We suggest that the FDI can usefully measure (and prevent) type III error risks in innovation ecosystems, and help broaden the concepts and practices of diversity and inclusion in science, technology, innovation and society.
Park, S B; Kim, H; Yao, M; Ellis, R; Machtay, M; Sohn, J W
2012-06-01
To quantify the systematic error of a Deformable Image Registration (DIR) system and establish a Quality Assurance (QA) procedure. To address the shortfall of the landmark approach, which is only available at significant visible feature points, we adapted a Deformation Vector Map (DVM) comparison approach. We used two CT image sets (R and T image sets) taken for the same patient at different times and generated a DVM, which includes the DIR systematic error. The DVM was calculated using fine-tuned B-Spline DIR and an L-BFGS optimizer. By utilizing this DVM we generated an R' image set to eliminate the systematic error in the DVM. Thus, we have a truth data set, the R' and T image sets, and the truth DVM. To test a DIR system, we input the R' and T image sets into that system. We compare the test DVM to the truth DVM. If there is no systematic error, they should be identical. We built a Deformation Error Histogram (DEH) for quantitative analysis. The test registration was performed with an in-house B-Spline DIR system using a stochastic gradient descent optimizer. Our example data set was generated with a head and neck patient case. We also tested CT to CBCT deformable registration. We found that skin regions which interface with the air have relatively larger errors. Also, mobile joints such as shoulders had larger errors. Average errors for ROIs were as follows: CTV: 0.4mm, Brain stem: 1.4mm, Shoulders: 1.6mm, and Normal tissues: 0.7mm. We succeeded in building the DEH approach to quantify the DVM uncertainty. Our data sets are available for testing other systems on our web page. Utilizing the DEH, users can decide how much systematic error they would accept. The DEH and our data can be a tool for an AAPM task group to compose a DIR system QA guideline. This project is partially supported by the Agency for Healthcare Research and Quality (AHRQ) grant 1R18HS017424-01A2. © 2012 American Association of Physicists in Medicine.
Data on empirically estimated corporate survival rate in Russia.
Kuzmin, Evgeny A
2018-02-01
The article presents data on the corporate survival rate in Russia in 1991-2014. The empirical survey was based on a random sample with the average number of non-repeated observations (number of companies) for the survey each year equal to 75,958 (24,236 minimum and 126,953 maximum). The actual limiting mean error ∆ p was 2.24% with 99% integrity. The survey methodology was based on a cross joining of various formal periods in the corporate life cycles (legal and business), which makes it possible to talk about a conventionally active time life of companies' existence with a number of assumptions. The empirical survey values were grouped by Russian regions and industries according to the classifier and consolidated into a single database for analysing the corporate life cycle and their survival rate and searching for deviation dependencies in calculated parameters. Preliminary and incomplete figures were available in the paper entitled "Survival Rate and Lifecycle in Terms of Uncertainty: Review of Companies from Russia and Eastern Europe" (Kuzmin and Guseva, 2016) [3]. The further survey led to filtered processed data with clerical errors excluded. These particular values are available in the article. The survey intended to fill a fact-based gap in various fundamental surveys that involved matters of the corporate life cycle in Russia within the insufficient statistical framework. The data are of interest for an analysis of Russian entrepreneurship, assessment of the market development and incorporation risks in the current business environment. A further heuristic potential is achievable through an ability of forecasted changes in business demography and model building based on the representative data set.
Moritz, Steffen; Voigt, Miriam; Köther, Ulf; Leighton, Lucy; Kjahili, Besiane; Babur, Zehra; Jungclaussen, David; Veckenstedt, Ruth; Grzella, Karsten
2014-06-01
There is emerging evidence that the induction of doubt can reduce positive symptoms in patients with schizophrenia. Based on prior investigations indicating that brief psychological interventions may attenuate core aspects of delusions, we set up a proof of concept study using a virtual reality experiment. We explored whether feedback for false judgments positively influences delusion severity. A total of 33 patients with schizophrenia participated in the experiment. Following a short practice trial, patients were instructed to navigate through a virtual street on two occasions (noise versus no noise), where they met six different pedestrians in each condition. Subsequently, patients were asked to recollect the pedestrians and their corresponding facial affect in a recognition task graded for confidence. Before and after the experiment, the Paranoia Checklist (frequency subscale) was administered. The Paranoia Checklist score declined significantly from pre to post at a medium effect size. We split the sample into those with some improvement versus those that either showed no improvement, or worsened. Improvement was associated with lower confidence ratings (both during the experiment, particularly for incorrect responses, and according to retrospect assessment). No control condition, unclear if improvement is sustained. The study tentatively suggests that a brief virtual reality experiment involving error feedback may ameliorate delusional ideas. Randomized controlled trials and dismantling studies are now needed to substantiate the findings and to pinpoint the underlying therapeutic mechanisms, for example error feedback or fostering attenuation of confidence judgments in the face of incomplete evidence. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bauer, Amy M.; Alegría, Margarita
2010-01-01
Objective To determine the effects of limited English proficiency and use of interpreters on the quality of psychiatric care. Methods A systematic literature search for English-language publications was conducted in PubMed, PsycInfo, and CINAHL and by review of the reference lists of included articles and expert sources. Of 321 citations, 26 peer-reviewed articles met inclusion criteria by reporting primary data on the clinical care for psychiatric disorders among patients with limited proficiency in English or in the providers’ language. Results Little systematic research has addressed the impact of language proficiency or interpreter use on the quality of psychiatric care in contemporary US settings. Therefore, the literature to date is insufficient to inform evidence-based guidelines for improving quality of care among patients with limited English proficiency. Nonetheless, evaluation in a patient’s non-primary language can lead to incomplete or distorted mental status assessment whereas assessments conducted via untrained interpreters may contain interpreting errors. Consequences of interpreter errors include clinicians’ failure to identify disordered thought or delusional content. Use of professional interpreters may improve disclosure and attenuate some difficulties. Diagnostic agreement, collaborative treatment planning, and referral for specialty care may be compromised. Conclusions Clinicians should become aware of the types of quality problems that may occur when evaluating patients in a non-primary language or via an interpreter. Given demographic trends in the US, future research should aim to address the deficit in the evidence base to guide clinical practice and policy. PMID:20675834
NASA Astrophysics Data System (ADS)
He, Lidong; Anderson, Lissa C.; Barnidge, David R.; Murray, David L.; Hendrickson, Christopher L.; Marshall, Alan G.
2017-05-01
With the rapid growth of therapeutic monoclonal antibodies (mAbs), stringent quality control is needed to ensure clinical safety and efficacy. Monoclonal antibody primary sequence and post-translational modifications (PTM) are conventionally analyzed with labor-intensive, bottom-up tandem mass spectrometry (MS/MS), which is limited by incomplete peptide sequence coverage and introduction of artifacts during the lengthy analysis procedure. Here, we describe top-down and middle-down approaches with the advantages of fast sample preparation with minimal artifacts, ultrahigh mass accuracy, and extensive residue cleavages by use of 21 tesla FT-ICR MS/MS. The ultrahigh mass accuracy yields an RMS error of 0.2-0.4 ppm for antibody light chain, heavy chain, heavy chain Fc/2, and Fd subunits. The corresponding sequence coverages are 81%, 38%, 72%, and 65% with MS/MS RMS error 4 ppm. Extension to a monoclonal antibody in human serum as a monoclonal gammopathy model yielded 53% sequence coverage from two nano-LC MS/MS runs. A blind analysis of five therapeutic monoclonal antibodies at clinically relevant concentrations in human serum resulted in correct identification of all five antibodies. Nano-LC 21 T FT-ICR MS/MS provides nonpareil mass resolution, mass accuracy, and sequence coverage for mAbs, and sets a benchmark for MS/MS analysis of multiple mAbs in serum. This is the first time that extensive cleavages for both variable and constant regions have been achieved for mAbs in a human serum background.
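As a quick worked example of the parts-per-million mass-error figure quoted above (ppm error = (measured - theoretical) / theoretical x 10^6), the sketch below uses a hypothetical light-chain mass; a 0.005 Da deviation on roughly 23 kDa corresponds to about 0.2 ppm.

```python
def ppm_error(measured_da, theoretical_da):
    """Mass error in parts per million between a measured and theoretical mass (Da)."""
    return (measured_da - theoretical_da) / theoretical_da * 1e6

# Hypothetical light-chain mass (~23 kDa); a 0.005 Da deviation is ~0.21 ppm.
print(round(ppm_error(23_433.105, 23_433.100), 2))
```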
Barcoding Neotropical birds: assessing the impact of nonmonophyly in a highly diverse group.
Chaves, Bárbara R N; Chaves, Anderson V; Nascimento, Augusto C A; Chevitarese, Juliana; Vasconcelos, Marcelo F; Santos, Fabrício R
2015-07-01
In this study, we verified the power of DNA barcodes to discriminate Neotropical birds using Bayesian tree reconstructions of a total of 7404 COI sequences from 1521 species, including 55 Brazilian species with no previous barcode data. We found that 10.4% of species were nonmonophyletic, most likely due to inaccurate taxonomy, incomplete lineage sorting or hybridization. At least 0.5% of the sequences (2.5% of the sampled species) retrieved from GenBank were associated with database errors (poor-quality sequences, NuMTs, misidentification or unnoticed hybridization). Paraphyletic species (5.8% of the total) can be related to rapid speciation events leading to nonreciprocal monophyly between recently diverged sister species, or to absence of synapomorphies in the small COI region analysed. We also performed two series of genetic distance calculations under the K2P model for intraspecific and interspecific comparisons: the first included all COI sequences, and the second included only monophyletic taxa observed in the Bayesian trees. As expected, the mean and median pairwise distances were smaller for intraspecific than for interspecific comparisons. However, there was no precise 'barcode gap', which was shown to be larger in the monophyletic taxon data set than for the data from all species, as expected. Our results indicated that although database errors may explain some of the difficulties in the species discrimination of Neotropical birds, distance-based barcode assignment may also be compromised because of the high diversity of bird species and more complex speciation events in the Neotropics. © 2014 John Wiley & Sons Ltd.
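The K2P (Kimura two-parameter) distances referred to above are computed from the proportions of transitions (P) and transversions (Q) between aligned sequences as d = -0.5 ln[(1 - 2P - Q) sqrt(1 - 2Q)]. The sketch below implements this formula on a toy 20-bp alignment; the sequences are hypothetical, not COI data.

```python
import math

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned, equal-length sequences."""
    purines, pyrimidines = {"A", "G"}, {"C", "T"}
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a in "ACGT" and b in "ACGT"]
    transitions = sum(1 for a, b in pairs
                      if a != b and ({a, b} <= purines or {a, b} <= pyrimidines))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / len(pairs), transversions / len(pairs)
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# Two toy 20-bp fragments differing by two transitions (C->T and G->A).
print(round(k2p_distance("ACGTACGTACGTACGTACGT",
                         "ACGTACGTATGTACGTACAT"), 4))   # ~0.1116
```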
Analyzing contentious relationships and outlier genes in phylogenomics.
Walker, Joseph F; Brown, Joseph W; Smith, Stephen A
2018-06-08
Recent studies have demonstrated that conflict is common among gene trees in phylogenomic studies, and that less than one percent of genes may ultimately drive species tree inference in supermatrix analyses. Here, we examined two datasets where supermatrix and coalescent-based species trees conflict. We identified two highly influential "outlier" genes in each dataset. When removed from each dataset, the inferred supermatrix trees matched the topologies obtained from coalescent analyses. We also demonstrate that, while the outlier genes in the vertebrate dataset have been shown in a previous study to be the result of errors in orthology detection, the outlier genes from a plant dataset did not exhibit any obvious systematic error and therefore may be the result of some biological process yet to be determined. While topological comparisons among a small set of alternate topologies can be helpful in discovering outlier genes, they can be limited in several ways, such as assuming all genes share the same topology. Coalescent species tree methods relax this assumption but do not explicitly facilitate the examination of specific edges. Coalescent methods often also assume that conflict is the result of incomplete lineage sorting (ILS). Here we explored a framework that allows for quickly examining alternative edges and support for large phylogenomic datasets that does not assume a single topology for all genes. For both datasets, these analyses provided detailed results confirming the support for coalescent-based topologies. This framework suggests that we can improve our understanding of the underlying signal in phylogenomic datasets by asking more targeted edge-based questions.
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
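To make the three variability sources concrete, here is a minimal simulation sketch, assuming a one-compartment model with first-order input and, for simplicity, linear rather than nonlinear elimination; parameter values are illustrative and this is not the authors' FOCE/EKF estimation code.

```python
# Hedged sketch: simulate the three variability sources (inter-individual
# variability, system noise via an SDE, and measurement error) with an
# Euler-Maruyama scheme. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulate_subject(dose=100.0, ka=1.0, cl_pop=5.0, v=20.0,
                     omega_cl=0.3, sigma_sde=0.05, sigma_meas=0.1,
                     t_end=24.0, dt=0.01, n_obs=12):
    cl = cl_pop * np.exp(omega_cl * rng.standard_normal())  # inter-individual variability
    n_steps = int(t_end / dt)
    a_depot, conc = dose, 0.0
    concs = []
    for _ in range(n_steps):
        da = -ka * a_depot * dt
        dc = (ka * a_depot / v - cl / v * conc) * dt \
             + sigma_sde * np.sqrt(dt) * rng.standard_normal()  # system noise (SDE)
        a_depot += da
        conc = max(conc + dc, 0.0)
        concs.append(conc)
    idx = np.linspace(0, n_steps - 1, n_obs, dtype=int)
    times = (idx + 1) * dt
    obs = np.array(concs)[idx] + sigma_meas * rng.standard_normal(n_obs)  # measurement error
    return times, obs

t, y = simulate_subject()
print(np.round(t, 2), np.round(y, 3))
```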
An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.
Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes
2017-10-01
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach, together with the proposed accident development framework, is applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.
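For orientation, the sketch below implements only the basic (deterministic) CREAM screening step that the article extends: the nine common performance conditions (CPCs) are scored, a contextual control mode is chosen, and a generic human error probability interval is returned. The control-mode boundaries used here are a simplified approximation of Hollnagel's published diagram, and the evidential-reasoning extension is not modelled.

```python
# Hedged sketch of basic CREAM screening (generic HEP intervals per control
# mode follow Hollnagel's basic CREAM; the mode-selection rule below is a
# simplified assumption, not the paper's evidential-reasoning model).

HEP_INTERVALS = {            # generic failure-probability intervals per mode
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3,   1e-1),
    "opportunistic": (1e-2,   0.5),
    "scrambled":     (1e-1,   1.0),
}

def control_mode(cpc_effects):
    """cpc_effects: 'improved' / 'not significant' / 'reduced' for the 9 CPCs."""
    n_reduced = cpc_effects.count("reduced")
    n_improved = cpc_effects.count("improved")
    # Simplified mapping (assumption): more reducing conditions push the
    # operator towards less orderly control modes.
    if n_reduced >= 7:
        return "scrambled"
    if n_reduced >= 4:
        return "opportunistic"
    if n_reduced > n_improved:
        return "tactical"
    return "strategic"

effects = ["reduced", "reduced", "not significant", "improved", "reduced",
           "not significant", "reduced", "not significant", "improved"]
mode = control_mode(effects)
print(mode, HEP_INTERVALS[mode])
```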
The accuracy of MEDLINE and Journal contents pages for papers published in Clinical Otolaryngology.
De, S; Jones, T; Brazier, H; Jones, A S; Fenton, J E
2001-02-01
MEDLINE is widely used as a source for identifying and reviewing medical journal literature. Its accuracy is generally taken for granted, as is that of the contents pages published by the journals themselves. In this study of citation accuracy we examined the articles published in Clinical Otolaryngology and Allied Sciences from 1976 to 1998. The entries in MEDLINE were compared with the entries in the Journal's contents pages, and with the actual articles. Of 1651 articles published in the journal, one was omitted from MEDLINE and 25 (1.5%) were incorrectly cited, while 88 (5.3%) were incorrectly cited in the contents pages. Twenty-one (84%) of the errors in MEDLINE involved names of authors. Apart from incomplete retrieval of information for practice and research, errors could result in an author not getting credit for publications.
Arney, Jennifer; Rafalovich, Adam
2007-01-01
The researchers collected a data set of consumer-directed print advertisements for antidepressant medications from three female-directed magazines, three male-directed magazines, and four common readership magazines published between 1997 and 2003. They evaluated these data for advertising techniques that enable drug advertisements to function as agents of medicalization. The investigators discuss the use of incomplete syllogisms in drug advertisements and identify strategies that might lead readers to frame personal physical and/or emotional conditions medically. Key features in advertisements function as the particular and general premises of a syllogism, and the concluding premise--that the reader has a mood disorder--is unarticulated but implied. The researchers examine the implications of incomplete syllogisms in advertisements and suggest that their use might lead readers to redefine their physical and/or emotional problems to fit medical models of mental distress.
NASA Technical Reports Server (NTRS)
Elishakoff, Isaac; Lin, Y. K.; Zhu, Li-Ping; Fang, Jian-Jie; Cai, G. Q.
1994-01-01
This report supplements a previous report of the same title submitted in June, 1992. It summarizes additional analytical techniques which have been developed for predicting the response of linear and nonlinear structures to noise excitations generated by large propulsion power plants. The report is divided into nine chapters. The first two deal with incomplete knowledge of boundary conditions of engineering structures. The incomplete knowledge is characterized by a convex set, and its diagnosis is formulated as a multi-hypothesis discrete decision-making algorithm with attendant criteria of adaptive termination.
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background: Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose: In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods: We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results: When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations: The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions: In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
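As a hedged illustration of the multiple-imputation idea (not the authors' estimators or simulation design), the sketch below simulates a randomized trial with one error-prone covariate, imputes true covariate values for unaudited records from a measurement-error regression fitted on the audit subsample, and pools the refitted estimates with Rubin's rules.

```python
# Hedged sketch: audit-based multiple imputation for an error-prone covariate.
# Data, sample sizes and model choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, n_audit, m_imputations = 2000, 200, 20

# Simulate a randomized trial with classical measurement error on x.
treat = rng.integers(0, 2, n)
x_true = rng.normal(0, 1, n)
x_obs = x_true + rng.normal(0, 0.8, n)
y = 1.0 + 2.0 * treat + 1.5 * x_true + rng.normal(0, 1, n)
audited = np.zeros(n, bool)
audited[rng.choice(n, n_audit, replace=False)] = True

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return beta, sigma2 * np.linalg.inv(X.T @ X)

# Measurement-error model fitted on the audit: x_true ~ x_obs + treatment.
Xa = np.column_stack([np.ones(n_audit), x_obs[audited], treat[audited]])
gamma, gcov = ols(Xa, x_true[audited])
res_sd = np.std(x_true[audited] - Xa @ gamma, ddof=Xa.shape[1])

estimates, variances = [], []
for _ in range(m_imputations):
    x_imp = x_true.copy()
    Xu = np.column_stack([np.ones(n), x_obs, treat])[~audited]
    x_imp[~audited] = Xu @ rng.multivariate_normal(gamma, gcov) \
                      + rng.normal(0, res_sd, (~audited).sum())
    beta, cov = ols(np.column_stack([np.ones(n), treat, x_imp]), y)
    estimates.append(beta)
    variances.append(np.diag(cov))

est = np.mean(estimates, axis=0)                            # Rubin's rules
between = np.var(estimates, axis=0, ddof=1)
total_var = np.mean(variances, axis=0) + (1 + 1 / m_imputations) * between
print("treatment effect:", round(est[1], 3), "+/-", round(np.sqrt(total_var[1]), 3))
print("covariate effect:", round(est[2], 3), "+/-", round(np.sqrt(total_var[2]), 3))
```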
Decorrelation of the true and estimated classifier errors in high-dimensional settings.
Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R
2007-01-01
The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
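A small simulation sketch of the basic design, assuming illustrative dimensions and a simple Gaussian class-shift model rather than the paper's settings: for repeated training samples, the cross-validated error estimate is compared with the "true" error measured on a large held-out test set, and the correlation across repetitions is reported.

```python
# Hedged sketch: correlation between true and cross-validated error when
# feature selection is performed inside the cross-validation pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_features, n_informative, n_train, n_test, n_reps = 1000, 20, 50, 5000, 100

def draw(n):
    y = rng.integers(0, 2, n)
    X = rng.normal(0, 1, (n, n_features))
    X[:, :n_informative] += 0.8 * y[:, None]      # small class-mean shift
    return X, y

X_test, y_test = draw(n_test)
true_err, cv_err = [], []
for _ in range(n_reps):
    X, y = draw(n_train)
    clf = make_pipeline(SelectKBest(f_classif, k=10), LinearDiscriminantAnalysis())
    cv_err.append(1 - cross_val_score(clf, X, y, cv=5).mean())
    true_err.append(1 - clf.fit(X, y).score(X_test, y_test))

print("corr(true, estimated) =", round(np.corrcoef(true_err, cv_err)[0, 1], 3))
```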
Errors in imaging patients in the emergency setting.
Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca
2016-01-01
Emergency and trauma care produces a "perfect storm" for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting.
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.
Dynamic Financial Constraints: Distinguishing Mechanism Design from Exogenously Incomplete Regimes*
Karaivanov, Alexander; Townsend, Robert M.
2014-01-01
We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting best combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured. PMID:25246710
Linger, Michele L; Ray, Glen E; Zachar, Peter; Underhill, Andrea T; LoBello, Steven G
2007-10-01
Studies of graduate students learning to administer the Wechsler scales have generally shown that training is not associated with the development of scoring proficiency. Many studies report on the reduction of aggregated administration and scoring errors, a strategy that does not highlight the reduction of errors on subtests identified as most prone to error. This study evaluated the development of scoring proficiency specifically on the Wechsler (WISC-IV and WAIS-III) Vocabulary, Comprehension, and Similarities subtests during training by comparing a set of 'early test administrations' to 'later test administrations.' Twelve graduate students enrolled in an intelligence-testing course participated in the study. Scoring errors (e.g., incorrect point assignment) were evaluated on the students' actual practice administration test protocols. Errors on all three subtests declined significantly when scoring errors on 'early' sets of Wechsler scales were compared to those made on 'later' sets. However, correcting these subtest scoring errors did not cause significant changes in subtest scaled scores. Implications for clinical instruction and future research are discussed.
Categorical Working Memory Representations are used in Delayed Estimation of Continuous Colors
Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J
2016-01-01
In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In two experiments we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. PMID:27797548
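For reference, a minimal sketch of the "standard" two-component model mentioned above (a von Mises memory component plus a uniform guessing component), fitted by maximum likelihood to simulated response errors; the categorical extension proposed by the authors is not implemented, and all values are illustrative.

```python
# Hedged sketch: fit the standard mixture model (von Mises memory component
# plus uniform guessing) to simulated color-response errors in radians.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

rng = np.random.default_rng(3)

# Simulate errors: 70% remembered (concentration kappa = 8), 30% random guesses.
n = 500
remembered = rng.random(n) < 0.7
errors = np.where(remembered,
                  rng.vonmises(0.0, 8.0, n),
                  rng.uniform(-np.pi, np.pi, n))

def neg_log_lik(params):
    p_mem, log_kappa = params
    kappa = np.exp(log_kappa)
    dens = p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)
    return -np.sum(np.log(dens))

fit = minimize(neg_log_lik, x0=[0.5, np.log(5.0)],
               bounds=[(1e-3, 1 - 1e-3), (np.log(0.1), np.log(100.0))])
p_mem_hat, kappa_hat = fit.x[0], np.exp(fit.x[1])
print("p_mem =", round(p_mem_hat, 2), "kappa =", round(kappa_hat, 2))
```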
Automated segmentation and tracking for large-scale analysis of focal adhesion dynamics.
Würflinger, T; Gamper, I; Aach, T; Sechi, A S
2011-01-01
Cell adhesion, a process mediated by the formation of discrete structures known as focal adhesions (FAs), is pivotal to many biological events including cell motility. Much is known about the molecular composition of FAs, although our knowledge of the spatio-temporal recruitment and the relative occupancy of the individual components present in the FAs is still incomplete. To fill this gap, an essential prerequisite is a highly reliable procedure for the recognition, segmentation and tracking of FAs. Although manual segmentation and tracking may provide some advantages when done by an expert, its performance is usually hampered by subjective judgement and the long time required in analysing large data sets. Here, we developed a model-based segmentation and tracking algorithm that overcomes these problems. In addition, we developed a dedicated computational approach to correct segmentation errors that may arise from the analysis of poorly defined FAs. Thus, by achieving accurate and consistent FA segmentation and tracking, our work establishes the basis for a comprehensive analysis of FA dynamics under various experimental regimes and the future development of mathematical models that simulate FA behaviour. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
Earthquake location in transversely isotropic media with a tilted symmetry axis
NASA Astrophysics Data System (ADS)
Zhao, Aihua; Ding, Zhifeng
2009-04-01
The conventional intersection method for earthquake location in isotropic media is developed in the case of transversely isotropic media with a tilted symmetry axis (TTI media). The hypocenter is determined using its loci, which are calculated through a minimum travel time tree algorithm for ray tracing in TTI media. There are no restrictions on the structural complexity of the model or on the anisotropy strength of the medium. The location method is validated by its application to determine the hypocenter and origin time of an event in a complex TTI structure, in accordance with four hypotheses or study cases: (a) accurate model and arrival times, (b) perturbed model with randomly variable elastic parameter, (c) noisy arrival time data, and (d) incomplete set of observations from the seismic stations. Furthermore, several numerical tests demonstrate that the orientation of the symmetry axis has a significant effect on the hypocenter location when the seismic anisotropy is not very weak. Moreover, if the hypocentral determination is based on an isotropic reference model while the real medium is anisotropic, the resultant location errors can be considerable even though the anisotropy strength does not exceed 6.10%.
Categorical working memory representations are used in delayed estimation of continuous colors.
Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J
2017-01-01
In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember, and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work, we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In 2 experiments, we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates
NASA Astrophysics Data System (ADS)
Moore, Christopher J.; Gair, Jonathan R.
2014-12-01
Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over using a prior distribution constructed by applying Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
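The central interpolation ingredient can be sketched as follows, assuming a synthetic one-dimensional parameter axis and a scalar stand-in for the waveform difference; this is not the authors' pipeline, only an illustration of training a Gaussian process on a small set of differences and querying its mean and uncertainty elsewhere.

```python
# Hedged sketch: Gaussian process regression interpolating a synthetic
# "waveform difference" from a small training set of accurate templates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)

# Training set: a handful of parameter values where accurate (e.g., numerical)
# templates exist, and the measured difference to the approximate template.
theta_train = np.linspace(0.0, 1.0, 8)[:, None]
diff_train = np.sin(3.0 * theta_train).ravel() * 0.1 + rng.normal(0, 0.005, 8)

gp = GaussianProcessRegressor(kernel=ConstantKernel(0.1) * RBF(length_scale=0.2),
                              alpha=1e-4, normalize_y=True)
gp.fit(theta_train, diff_train)

theta_query = np.linspace(0.0, 1.0, 5)[:, None]
mean, std = gp.predict(theta_query, return_std=True)   # mean and uncertainty
print(np.round(mean, 3), np.round(std, 3))
```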
NASA Astrophysics Data System (ADS)
Schlegel, N.-J.; Larour, E.; Seroussi, H.; Morlighem, M.; Box, J. E.
2013-06-01
The behavior of the Greenland Ice Sheet, which is considered a major contributor to sea level changes, is best understood on century and longer time scales. However, on decadal time scales, its response is less predictable due to the difficulty of modeling surface climate, as well as incomplete understanding of the dynamic processes responsible for ice flow. Therefore, it is imperative to understand how modeling advancements, such as increased spatial resolution or more comprehensive ice flow equations, might improve projections of ice sheet response to climatic trends. Here we examine how a finely resolved climate forcing influences a high-resolution ice stream model that considers longitudinal stresses. We simulate ice flow using a two-dimensional Shelfy-Stream Approximation implemented within the Ice Sheet System Model (ISSM) and use uncertainty quantification tools embedded within the model to calculate the sensitivity of ice flow within the Northeast Greenland Ice Stream to errors in surface mass balance (SMB) forcing. Our results suggest that the model tends to smooth ice velocities even when forced with extreme errors in SMB. Indeed, errors propagate linearly through the model, resulting in discharge uncertainty of 16% or 1.9 Gt/yr. We find that mass flux is most sensitive to local errors but is also affected by errors hundreds of kilometers away; thus, an accurate SMB map of the entire basin is critical for realistic simulation. Furthermore, sensitivity analyses indicate that SMB forcing needs to be provided at a resolution of at least 40 km.
NASA Astrophysics Data System (ADS)
Merker, Claire; Ament, Felix; Clemens, Marco
2017-04-01
The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.
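A toy sketch of the spread-as-uncertainty idea, assuming a synthetic reflectivity field and perturbed advection vectors in place of a real nowcasting system; the LETKF assimilation step of the actual method is omitted.

```python
# Hedged sketch: ensemble spread as a per-pixel, flow-dependent error estimate
# for an advected reflectivity field. All fields and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(11)

# Synthetic reflectivity field (dBZ) with a single rain cell.
ny, nx = 60, 60
yy, xx = np.mgrid[0:ny, 0:nx]
field = 40.0 * np.exp(-(((yy - 30) / 6.0) ** 2 + ((xx - 20) / 6.0) ** 2))

def advect(f, dy, dx):
    """Shift the field by integer displacements (simple stand-in for advection)."""
    return np.roll(np.roll(f, int(round(dy)), axis=0), int(round(dx)), axis=1)

n_members, mean_motion = 20, (0.0, 5.0)          # mean motion: 5 pixels eastwards
ensemble = np.stack([
    advect(field,
           mean_motion[0] + rng.normal(0, 1.0),
           mean_motion[1] + rng.normal(0, 1.0))
    for _ in range(n_members)
])

spread = ensemble.std(axis=0)                    # per-pixel uncertainty estimate
print("max spread (dBZ):", round(float(spread.max()), 1),
      "at pixel", np.unravel_index(spread.argmax(), spread.shape))
```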
Is Single-Port Laparoscopy More Precise and Faster with the Robot?
Fransen, Sofie A F; van den Bos, Jacqueline; Stassen, Laurents P S; Bouvy, Nicole D
2016-11-01
Single-port laparoscopy is a step forward toward nearly scarless surgery. Concern has been raised that single-incision laparoscopic surgery (SILS) is technically more challenging because of the lack of triangulation and the clashing of instruments. Robotic single-incision laparoscopic surgery (RSILS) in a chopstick setting might overcome these problems. This study evaluated the outcome in time and errors of two tasks of the Fundamentals of Laparoscopic Surgery on a dry platform, in two settings: SILS versus RSILS. Nine experienced laparoscopic surgeons performed two tasks, peg transfer and a suturing task, on a standard box trainer. All participants practiced each task three times in both settings, SILS and RSILS. The assessment scores (time and errors) were recorded. For the first task of peg transfer, RSILS was significantly better in time (124 versus 230 seconds, P = .0004) and errors (0.80 errors versus 2.60 errors, P = .024) at the first run, compared to the SILS setting. At the third and final run, RSILS still proved to be significantly better in errors (0.10 errors versus 0.80 errors, P = .025) compared to the SILS group. RSILS was faster in the third run, but not significantly so (116 versus 157 seconds, P = .08). For the second task, a suturing task, only 3 participants in the SILS group were able to perform the task within the set time frame of 600 seconds. There was no significant difference in time across the three runs between SILS and RSILS for the 3 participants who fulfilled both tasks within the 600 seconds. This study shows that robotic single-port surgery seems easier, faster, and more precise for performing basic tasks of the Fundamentals of Laparoscopic Surgery. For the more complex task of suturing, only the single-port robotic setting enabled all participants to fulfill the task within the set time frame.
Tokuda, Yasuharu; Kishida, Naoki; Konishi, Ryota; Koizumi, Shunzo
2011-03-01
Cognitive errors in the course of clinical decision-making are prevalent in many cases of medical injury. We used information on the verdicts' judgments from closed claims files to determine the important cognitive factors associated with cases of medical injury. Data were collected from claims closed between 2001 and 2005 at district courts in Tokyo and Osaka, Japan. In each case, we recorded all the contributory cognitive, systemic, and patient-related factors judged in the verdicts to be causally related to the medical injury. We also analyzed the association between cognitive factors and cases involving paid compensation using a multivariable logistic regression model. Among 274 cases (mean age 49 years; 45% women), there were 122 (45%) deaths and 67 (24%) major injuries (incomplete recovery within a year). In 103 cases (38%), the verdicts ordered hospitals to pay compensation (median, 8,000,000 Japanese Yen). An error in judgment (199/274, 73%) and failure of vigilance (177/274, 65%) were the most prevalent causative cognitive factors, and error in judgment was also significantly associated with paid compensation (odds ratio, 1.9; 95% confidence interval [CI], 1.0-3.4). Systemic causative factors, including poor teamwork (11/274, 4%) and technology failure (5/274, 2%), were less common. The closed claims analysis based on the verdicts' judgments showed that cognitive errors were common in cases of medical injury, with an error in judgment being most prevalent and closely associated with compensation payment. Reduction of this type of error is required to produce safer healthcare. 2010 Society of Hospital Medicine.
Sequence data - Magnitude and implications of some ambiguities.
NASA Technical Reports Server (NTRS)
Holmquist, R.; Jukes, T. H.
1972-01-01
A stochastic model is applied to the divergence of the horse-pig lineage from a common ancestor in terms of the alpha and beta chains of hemoglobin and fibrinopeptides. The results are compared with those based on the minimum mutation distance model of Fitch (1972). Buckwheat and cauliflower cytochrome c sequences are analyzed to demonstrate their ambiguities. A comparative analysis of evolutionary rates for various proteins of horses and pigs shows that errors of considerable magnitude are introduced by Glx and Asx ambiguities into evolutionary conclusions drawn from sequences of incompletely analyzed proteins.
Stability of continuous-time quantum filters with measurement imperfections
NASA Astrophysics Data System (ADS)
Amini, H.; Pellegrini, C.; Rouchon, P.
2014-07-01
The fidelity between the state of a continuously observed quantum system and the state of its associated quantum filter, is shown to be always a submartingale. The observed system is assumed to be governed by a continuous-time Stochastic Master Equation (SME), driven simultaneously by Wiener and Poisson processes and that takes into account incompleteness and errors in measurements. This stability result is the continuous-time counterpart of a similar stability result already established for discrete-time quantum systems and where the measurement imperfections are modelled by a left stochastic matrix.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1984-01-01
Several short summaries of the work performed during this reporting period are presented. Topics discussed in this document include: (1) resilient seeded errors via simple techniques; (2) knowledge representation for engineering design; (3) analysis of faults in a multiversion software experiment; (4) implementation of parallel programming environment; (5) symbolic execution of concurrent programs; (6) two computer graphics systems for visualization of pressure distribution and convective density particles; (7) design of a source code management system; (8) vectorizing incomplete conjugate gradient on the Cyber 203/205; (9) extensions of domain testing theory and; (10) performance analyzer for the pisces system.
A study of the luminosity function for field galaxies. [non-rich-cluster galaxies
NASA Technical Reports Server (NTRS)
Felten, J. E.
1977-01-01
Nine determinations of the luminosity function (LF) for field galaxies are analyzed and compared. Corrections for differences in Hubble constants, magnitude systems, galactic absorption functions, and definitions of the LF are necessary prior to comparison. Errors in previous comparisons are pointed out. After these corrections, eight of the nine determinations are in fairly good agreement. The discrepancy in the ninth appears to be mainly an incompleteness effect. The LF data suggest that there is little if any distinction between field galaxies and those in small groups.
Abrams, Robert M
2015-09-01
Sleep deprivation occurs when inadequate sleep leads to decreased performance, inadequate alertness, and deterioration in health. It is incompletely understood why humans need sleep, although some theories include energy conservation, restoration, and information processing. Sleep deprivation has many deleterious health effects. Residency programs have enacted strict work restrictions because of medically related errors due to sleep deprivation. Because obstetrics is an unpredictable specialty with long irregular hours, enacting a hospitalist program enhances patient safety, decreases malpractice risk, and improves the physician's quality of life by allowing obstetricians to get sufficient rest. Copyright © 2015 Elsevier Inc. All rights reserved.
de Freitas, Carolina P.; Cabot, Florence; Manns, Fabrice; Culbertson, William; Yoo, Sonia H.; Parel, Jean-Marie
2015-01-01
Purpose. To assess if a change in refractive index of the anterior chamber during femtosecond laser-assisted cataract surgery can affect the laser beam focus position. Methods. The index of refraction and chromatic dispersion of six ophthalmic viscoelastic devices (OVDs) was measured with an Abbe refractometer. Using the Gullstrand eye model, the index values were used to predict the error in the depth of a femtosecond laser cut when the anterior chamber is filled with OVD. Two sources of error produced by the change in refractive index were evaluated: the error in anterior capsule position measured with optical coherence tomography biometry and the shift in femtosecond laser beam focus depth. Results. The refractive indices of the OVDs measured ranged from 1.335 to 1.341 in the visible light (at 587 nm). The error in depth measurement of the refilled anterior chamber ranged from −5 to +7 μm. The OVD produced a shift of the femtosecond laser focus ranging from −1 to +6 μm. Replacement of the aqueous humor with OVDs with the densest compound produced a predicted error in cut depth of 13 μm anterior to the expected cut. Conclusions. Our calculations show that the change in refractive index due to anterior chamber refilling does not sufficiently shift the laser beam focus position to cause the incomplete capsulotomies reported during femtosecond laser–assisted cataract surgery. PMID:25626971
Palmero, David; Di Paolo, Ermindo R; Beauport, Lydie; Pannatier, André; Tolsa, Jean-François
2016-01-01
The objective of this study was to assess whether the introduction of a new preformatted medical order sheet coupled with an introductory course affected prescription quality and the frequency of errors during the prescription stage in a neonatal intensive care unit (NICU). This was a two-phase observational study consisting of two consecutive 4-month phases, pre-intervention (phase 0) and post-intervention (phase I), conducted in an 11-bed NICU in a Swiss university hospital. Interventions consisted of the introduction of a new preformatted medical order sheet with explicit information supplied, coupled with a staff introductory course on appropriate prescription and medication errors. The main outcomes measured were formal aspects of prescription and the frequency and nature of prescription errors. Eighty-three and 81 patients were included in phase 0 and phase I, respectively. A total of 505 handwritten prescriptions in phase 0 and 525 in phase I were analysed. The rate of prescription errors decreased significantly from 28.9% in phase 0 to 13.5% in phase I (p < 0.05). Compared with phase 0, dose errors, name confusion, errors in frequency, and errors in the rate of drug administration decreased in phase I, from 5.4 to 2.7% (p < 0.05), 5.9 to 0.2% (p < 0.05), 3.6 to 0.2% (p < 0.05), and 4.7 to 2.1% (p < 0.05), respectively. The rate of incomplete and ambiguous prescriptions decreased from 44.2 to 25.7% and from 8.5 to 3.2% (p < 0.05), respectively. Inexpensive and simple interventions can improve the intelligibility of prescriptions and reduce medication errors. Medication errors are frequent in NICUs, and prescription is one of the most critical steps. Computerized physician order entry (CPOE) reduces prescription errors, but its implementation is not available everywhere. A preformatted medical order sheet coupled with an introductory course decreases medication errors in a NICU and is an inexpensive and readily implemented alternative to CPOE.
Severe infectious diseases of childhood as monogenic inborn errors of immunity
Casanova, Jean-Laurent
2015-01-01
This paper reviews the developments that have occurred in the field of human genetics of infectious diseases from the second half of the 20th century onward. In particular, it stresses and explains the importance of the recently described monogenic inborn errors of immunity underlying resistance or susceptibility to specific infections. The monogenic component of the genetic theory provides a plausible explanation for the occurrence of severe infectious diseases during primary infection. Over the last 20 y, increasing numbers of life-threatening infectious diseases striking otherwise healthy children, adolescents, and even young adults have been attributed to single-gene inborn errors of immunity. These studies were inspired by seminal but neglected findings in plant and animal infections. Infectious diseases typically manifest as sporadic traits because human genotypes often display incomplete penetrance (most genetically predisposed individuals remain healthy) and variable expressivity (different infections can be allelic at the same locus). Infectious diseases of childhood, once thought to be archetypal environmental diseases, actually may be among the most genetically determined conditions of mankind. This nascent and testable notion has interesting medical and biological implications. PMID:26621750
Vallejo, Guillermo; Ato, Manuel; Fernández García, Paula; Livacic Rojas, Pablo E; Tuero Herrero, Ellián
2016-08-01
S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable, and changes in the expected value of the observed responses are not linear. The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of statistically processed indexes. The main conclusion of the study is the accuracy of the proposed method to calculate sample size in the described situations with the stipulated power criteria.
Kasaie, Parastu; Mathema, Barun; Kelton, W. David; Azman, Andrew S.; Pennington, Jeff; Dowdy, David W.
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission (“recent transmission proportion”), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional ‘n-1’ approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the ‘n-1’ technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the ‘n-1’ model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models’ performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499
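For comparison, the traditional 'n-1' estimate referred to above can be computed directly from genotyping data: the recent transmission proportion is approximated as (number of clustered cases minus number of clusters) divided by the total number of genotyped cases. A minimal sketch with toy data follows; the regression-based corrections proposed by the authors are not reproduced.

```python
# Hedged sketch of the traditional 'n-1' estimator of the recent
# transmission proportion. Genotype labels below are toy data.
from collections import Counter

def n_minus_1(genotypes):
    """genotypes: one fingerprint/genotype label per observed TB case."""
    counts = Counter(genotypes)
    clusters = [c for c in counts.values() if c >= 2]
    n_clustered = sum(clusters)
    return (n_clustered - len(clusters)) / len(genotypes)

# Toy example: 10 cases, two clusters (sizes 3 and 2) and 5 unique genotypes.
cases = ["g1", "g1", "g1", "g2", "g2", "g3", "g4", "g5", "g6", "g7"]
print(round(n_minus_1(cases), 2))   # (5 - 2) / 10 = 0.30
```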
Modelling regulation of decomposition and related root/mycorrhizal processes in arctic tundra soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linkins, A.E.
1992-01-01
Since this was the final year of the project, principal activities were directed towards either collecting the data needed to complete existing incomplete data sets or writing manuscripts. Data sets on the Imnaviat Creek watershed basin are functionally complete, and data were finalized on cellulose mineralization and the impact of dust on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.
Carnes, Debra; Kilpatrick, Sue; Iedema, Rick
2015-12-01
This study aims to determine the likelihood that rural nurses perceive a hypothetical medication error would be reported in their workplace. It employed a cross-sectional survey using hypothetical error scenarios with varying levels of harm, conducted in clinical settings in rural Tasmania. A total of 116 eligible surveys were received from registered and enrolled nurses. The main outcome measure was the frequency of responses indicating that a severe, moderate or near-miss (no harm) scenario would 'always' be reported or disclosed. Eighty per cent of nurses viewed that a severe error would 'always' be reported, 64.8% a moderate error and 45.7% a near-miss error. In regard to disclosure, 54.7% felt this was 'always' likely to occur for a severe error, 44.8% for a moderate error and 26.4% for a near miss. Across all levels of severity, aged-care nurses were more likely than nurses in other settings to view that errors would 'always' be reported (ranging from 72-96%, P = 0.010 to 0.042) and disclosed (68-88%, P = 0.000). Those in a management role were more likely to view that errors would 'always' be disclosed compared with those in a clinical role (50-77.3%, P = 0.008-0.024). Further research in rural clinical settings is needed to improve the understanding of error management and disclosure. © 2015 The Authors. Australian Journal of Rural Health published by Wiley Publishing Asia Pty Ltd on behalf of National Rural Health Alliance.
Automated extraction of Biomarker information from pathology reports.
Lee, Jeongeun; Song, Hyun-Je; Yoon, Eunsil; Park, Seong-Bae; Park, Sung-Hye; Seo, Jeong-Wook; Park, Peom; Choi, Jinwook
2018-05-21
Pathology reports are written in free-text form, which precludes efficient data gathering. We aimed to overcome this limitation and design an automated system for extracting biomarker profiles from accumulated pathology reports. We designed a new data model for representing biomarker knowledge. The automated system parses immunohistochemistry reports based on a "slide paragraph" unit defined as a set of immunohistochemistry findings obtained for the same tissue slide. Pathology reports are parsed using context-free grammar for immunohistochemistry, and using a tree-like structure for surgical pathology. The performance of the approach was validated on manually annotated pathology reports of 100 randomly selected patients managed at Seoul National University Hospital. High F-scores were obtained for parsing biomarker name and corresponding test results (0.999 and 0.998, respectively) from the immunohistochemistry reports, compared to relatively poor performance for parsing surgical pathology findings. However, applying the proposed approach to our single-center dataset revealed information on 221 unique biomarkers, which represents a richer result than biomarker profiles obtained based on the published literature. Owing to the data representation model, the proposed approach can associate biomarker profiles extracted from an immunohistochemistry report with corresponding pathology findings listed in one or more surgical pathology reports. Term variations are resolved by normalization to corresponding preferred terms determined by expanded dictionary look-up and text similarity-based search. Our proposed approach for biomarker data extraction addresses key limitations regarding data representation and can handle reports prepared in the clinical setting, which often contain incomplete sentences, typographical errors, and inconsistent formatting.
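As a rough sketch of the "slide paragraph" parsing idea, the snippet below assumes a hypothetical simplified report format (one "marker: result" pair per line within each slide block) and a tiny normalization map; the actual system uses a context-free grammar and a richer data model, so the format, marker names and mappings here are assumptions only.

```python
# Hedged sketch: extract {slide, marker, result} findings from a simplified,
# hypothetical immunohistochemistry report format. Not the authors' grammar.
import re

PREFERRED_TERMS = {"er": "Estrogen receptor", "pr": "Progesterone receptor",
                   "her2": "HER2", "ki-67": "Ki-67", "ki67": "Ki-67"}

SLIDE_RE = re.compile(r"^\s*Slide\s+(?P<slide>\S+):", re.IGNORECASE)
FINDING_RE = re.compile(r"^\s*(?P<marker>[\w\-/ ]+?)\s*:\s*(?P<result>.+?)\s*$")

def parse_ihc(report_text):
    """Return a list of {slide, marker, result} dicts from a free-text report."""
    findings, current_slide = [], None
    for line in report_text.splitlines():
        m = SLIDE_RE.match(line)
        if m:
            current_slide = m.group("slide")
            continue
        m = FINDING_RE.match(line)
        if m and current_slide is not None:
            raw = m.group("marker").strip().lower()
            findings.append({"slide": current_slide,
                             "marker": PREFERRED_TERMS.get(raw, m.group("marker").strip()),
                             "result": m.group("result")})
    return findings

example = """Slide A1:
ER: positive (90%)
PR: negative
Slide B2:
Ki-67: 15%"""
for finding in parse_ihc(example):
    print(finding)
```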
A comparative study of two hazard handling training methods for novice drivers.
Wang, Y B; Zhang, W; Salvendy, G
2010-10-01
The effectiveness of two hazard perception training methods, simulation-based error training (SET) and video-based guided error training (VGET), for novice drivers' hazard handling performance was tested, compared, and analyzed. Thirty-two novice drivers participated in the hazard perception training. Half of the participants were trained using SET by making errors and/or experiencing accidents while driving with a desktop simulator. The other half were trained using VGET by watching prerecorded video clips of errors and accidents that were made by other people. The two groups had exposure to equal numbers of errors for each training scenario. All the participants were tested and evaluated for hazard handling on a full cockpit driving simulator one week after training. Hazard handling performance and hazard response were measured in this transfer test. Both hazard handling performance scores and hazard response distances were significantly better for the SET group than the VGET group. Furthermore, the SET group had more metacognitive activities and intrinsic motivation. SET also seemed more effective in changing participants' confidence, but the result did not reach the significance level. SET exhibited a higher training effectiveness of hazard response and handling than VGET in the simulated transfer test. The superiority of SET might benefit from the higher levels of metacognition and intrinsic motivation during training, which was observed in the experiment. Future research should be conducted to assess whether the advantages of error training are still effective under real road conditions.
Automatic cortical segmentation in the developing brain.
Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V
2007-01-01
The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with the gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).
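The prior-adjustment idea can be illustrated with a single ambiguous voxel, assuming invented Gaussian intensity models and priors rather than the paper's atlas-based values: the voxel is classified by combining class likelihoods with priors, and boosting the local prior of the class suggested by surrounding anatomy flips the label.

```python
# Hedged sketch: intensity classification with adjustable local priors.
# Intensity models, priors and the flagged voxel are illustrative assumptions.
import numpy as np
from scipy.stats import norm

CLASSES = ["CSF", "GM", "WM"]
MEANS, SDS = np.array([30.0, 70.0, 110.0]), np.array([10.0, 12.0, 12.0])

def classify(intensity, priors):
    lik = norm.pdf(intensity, MEANS, SDS)
    post = priors * lik
    post /= post.sum()
    return CLASSES[int(np.argmax(post))], post

intensity = 48.0                       # ambiguous voxel at the GM/CSF interface
flat_prior = np.array([1 / 3, 1 / 3, 1 / 3])
label, post = classify(intensity, flat_prior)
print("flat prior:", label, np.round(post, 2))

# Knowledge-based correction (assumption): neighbouring labels suggest GM, so
# the local GM prior is boosted before re-classification.
adjusted_prior = np.array([0.2, 0.6, 0.2])
label, post = classify(intensity, adjusted_prior)
print("adjusted prior:", label, np.round(post, 2))
```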
Douali, Nassim; Csaba, Huszka; De Roo, Jos; Papageorgiou, Elpiniki I; Jaulent, Marie-Christine
2014-01-01
Several studies have described the prevalence and severity of diagnostic errors. Diagnostic errors can arise from cognitive, training, educational and other issues. Examples of cognitive issues include flawed reasoning, incomplete knowledge, faulty information gathering or interpretation, and inappropriate use of decision-making heuristics. We describe a new approach, case-based fuzzy cognitive maps, for medical diagnosis and evaluate it by comparison with Bayesian belief networks. We created a semantic web framework that supports the two reasoning methods. We used a database of 174 anonymous patients from several European hospitals: 80 of the patients were female and 94 male, with an average age of 45±16 years (average±stdev). Thirty of the 80 female patients were pregnant. For each patient, signs/symptoms/observables/age/sex were taken into account by the system. We used a statistical approach to compare the two methods. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
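As background, here is a minimal sketch of plain fuzzy cognitive map inference, the mechanism underlying the case-based approach described above; the concepts, weights and convergence settings are invented for illustration and do not come from the paper.

```python
# Hedged sketch: basic fuzzy cognitive map (FCM) inference with a sigmoid
# squashing function. Concepts and causal weights are invented for illustration.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_infer(weights, initial_state, clamp, lam=1.0, tol=1e-5, max_iter=100):
    """Iterate A(t+1) = sigmoid(A(t) @ W), keeping observed concepts clamped."""
    state = initial_state.copy()
    for _ in range(max_iter):
        new_state = sigmoid(state @ weights, lam)
        new_state[clamp] = initial_state[clamp]      # keep observed findings fixed
        if np.max(np.abs(new_state - state)) < tol:
            break
        state = new_state
    return state

# Concepts: 0 fever, 1 dysuria, 2 flank pain, 3 "urinary tract infection" (output)
weights = np.array([[0.0, 0.0, 0.0, 0.4],
                    [0.0, 0.0, 0.0, 0.7],
                    [0.0, 0.0, 0.0, 0.5],
                    [0.0, 0.0, 0.0, 0.0]])
observed = np.array([1.0, 1.0, 0.0, 0.0])            # fever and dysuria present
result = fcm_infer(weights, observed, clamp=[0, 1, 2])
print("UTI activation:", round(float(result[3]), 3))
```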
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
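The kind of calculation involved can be sketched with the classical first-order (normal stress/strength) safety index, solving numerically for the design factor that meets a specified reliability; the coefficients of variation and target reliability below are assumptions, and the paper's uncertainty-error terms are not included.

```python
# Hedged sketch: first-order safety index for normally distributed stress and
# strength, and the central factor needed to hit a target reliability.
import math
from scipy.optimize import brentq
from scipy.stats import norm

def safety_index(factor, v_strength, v_stress):
    """beta for mean-strength/mean-stress ratio `factor`, normal R and S."""
    return (factor - 1.0) / math.sqrt((factor * v_strength) ** 2 + v_stress ** 2)

v_strength, v_stress = 0.08, 0.15        # coefficients of variation (assumed)
target_reliability = 0.9999
beta_target = norm.ppf(target_reliability)

required_factor = brentq(lambda n: safety_index(n, v_strength, v_stress) - beta_target,
                         1.0 + 1e-9, 10.0)
print("target beta:", round(beta_target, 3),
      "required factor:", round(required_factor, 3),
      "P(failure):", f"{norm.sf(beta_target):.1e}")
```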
Insight into organic reactions from the direct random phase approximation and its corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruzsinszky, Adrienn; Zhang, Igor Ying; Scheffler, Matthias
2015-10-14
The performance of the random phase approximation (RPA) and beyond-RPA approximations for the treatment of electron correlation is benchmarked on three different molecular test sets. The test sets are chosen to represent three typical sources of error which can contribute to the failure of most density functional approximations in chemical reactions. The first test set (atomization and n-homodesmotic reactions) offers a gradually increasing balance of error from the chemical environment. The second test set (Diels-Alder reaction cycloaddition = DARC) reflects more the effect of weak dispersion interactions in chemical reactions. Finally, the third test set (self-interaction error 11 = SIE11) represents reactions which are exposed to noticeable self-interaction errors. This work seeks to answer whether any one of the many-body approximations considered here successfully addresses all these challenges.
Ensemble positive unlabeled learning for disease gene identification.
Yang, Peng; Li, Xiaoli; Chua, Hon-Nian; Kwoh, Chee-Keong; Ng, See-Kiong
2014-01-01
An increasing number of genes have been experimentally confirmed in recent years as causative genes for various human diseases. The newly available knowledge can be exploited by machine learning methods to discover additional unknown genes that are likely to be associated with diseases. In particular, positive unlabeled learning (PU learning) methods, which require only a positive training set P (confirmed disease genes) and an unlabeled set U (the unknown candidate genes) instead of a negative training set N, have been shown to be effective in uncovering new disease genes in this scenario. Predictions based on a single source of data are susceptible to bias due to incompleteness and noise in the genomic data, and a single machine learning predictor is prone to bias caused by the inherent limitations of individual methods. In this paper, we propose an effective PU learning framework that integrates multiple biological data sources and an ensemble of powerful machine learning classifiers for disease gene identification. Our proposed method integrates data from multiple biological sources for training PU learning classifiers. A novel ensemble-based PU learning method, EPU, is then used to integrate multiple PU learning classifiers to achieve accurate and robust disease gene predictions. Our evaluation experiments across six disease groups showed that EPU achieved significantly better results compared with various state-of-the-art prediction methods as well as ensemble learning classifiers. Through integrating multiple biological data sources for training and the outputs of an ensemble of PU learning classifiers for prediction, we are able to minimize the potential bias and errors in individual data sources and machine learning algorithms to achieve more accurate and robust disease gene predictions. Our EPU method also provides an effective framework for integrating additional biological and computational resources to further improve disease gene predictions.
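For readers unfamiliar with PU learning, a common baseline is bagging-style PU learning, which repeatedly treats a random subsample of the unlabeled set as temporary negatives and averages the resulting classifiers. The sketch below illustrates only that generic idea, not the authors' EPU ensemble or their biological data sources; the features, sample sizes and the logistic-regression base learner are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bagging_pu_scores(X_pos, X_unlabeled, n_rounds=50, random_state=0):
    """Average scores of classifiers trained on P vs. random subsamples of U."""
    rng = np.random.default_rng(random_state)
    scores = np.zeros(len(X_unlabeled))
    for _ in range(n_rounds):
        idx = rng.choice(len(X_unlabeled), size=len(X_pos), replace=False)
        X = np.vstack([X_pos, X_unlabeled[idx]])
        y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_pos))]
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        scores += clf.predict_proba(X_unlabeled)[:, 1]
    return scores / n_rounds  # higher score = more likely a hidden positive

# toy data: 2-D features, 30 known positives, 300 unlabeled examples
rng = np.random.default_rng(1)
X_pos = rng.normal(loc=[2, 2], size=(30, 2))
X_unl = np.vstack([rng.normal(loc=[2, 2], size=(30, 2)),    # hidden positives
                   rng.normal(loc=[0, 0], size=(270, 2))])  # likely negatives
ranking = np.argsort(-bagging_pu_scores(X_pos, X_unl))      # candidate gene ranking
```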
Jørgensen, Vivien; Roaldsen, Kirsti Skavberg
2016-01-01
Objective: Explore and describe experiences and perceptions of falls, risk of falling, and fall-related consequences in individuals with incomplete spinal cord injury (SCI) who are still walking. Design: A qualitative interview study applying interpretive content analysis with an inductive approach. Setting: Specialized rehabilitation hospital. Subjects: A purposeful sample of 15 individuals (10 men), 23 to 78 years old, 2-34 years post injury with chronic incomplete traumatic SCI, and walking ⩾75% of time for mobility needs. Methods: Individual, semi-structured face-to-face interviews were recorded, condensed, and coded to find themes and subthemes. Results: One overarching theme was revealed: “Falling challenges identity and self-image as normal” which comprised two main themes “Walking with incomplete SCI involves minimizing fall risk and fall-related concerns without compromising identity as normal” and “Walking with incomplete SCI implies willingness to increase fall risk in order to maintain identity as normal”. Informants were aware of their increased fall risk and took precautions, but willingly exposed themselves to risky situations when important to self-identity. All informants expressed some conditional fall-related concerns, and a few experienced concerns limiting activity and participation. Conclusion: Ambulatory individuals with incomplete SCI considered falls to be a part of life. However, falls interfered with the informants’ identities and self-images as normal, healthy, and well-functioning. A few expressed dysfunctional concerns about falling, and interventions should target these. PMID:27170274
NASA Astrophysics Data System (ADS)
Smith, L. A.
2012-04-01
There is, at present, no attractive foundation for quantitative probabilistic decision support in the face of model inadequacy, or given ambiguity (deep uncertainty) regarding the relative likelihood of various outcomes, known or unknown. True model error arguably precludes the extraction of objective probabilities from an ensemble of model runs drawn from an available (inadequate) model class, while the acknowledgement of incomplete understanding precludes the justified use of (if not the very formation of) an individual's subjective probabilities. An alternative approach based on Sustainable Odds is proposed and investigated. Sustainable Odds differ from "fair odds" (and are easily distinguished from any claim implying well-defined probabilities) in that the probabilities implied by sustainable odds, summed over all outcomes, are expected to exceed one. Traditionally, a person's fair odds are found by identifying the probability level at which one would happily accept either side of a bet; thus the probabilities implied by fair odds always sum to one. Knowing that one has incomplete information and perhaps even erroneous beliefs, there is no compelling reason a rational agent should accept the constraint implied by "fair odds" in any bet. Rather, a rational agent might insist on longer odds both on the event and against the event in order to account for acknowledged ignorance. Let probabilistic odds denote any set of odds for which the implied probabilities sum to one; once model error is acknowledged, can one rationally demand non-probabilistic odds? The danger of using fair odds (or probabilities) in decision making is illustrated by considering the risk of ruin to which a cooperative insurance scheme using probabilistic odds is exposed. Cases are presented where merely knowing that the insurer's model is imperfect, and nothing else, is sufficient to place bets which drive the insurer to an unexpectedly early ruin. Methodologies which allow the insurer to avoid this early ruin are explored; those which prevent early ruin are said to provide "sustainable odds", and it is suggested that these must be non-probabilistic. The aim here is not for the insurance cooperative to make a profit in the long run (or to form a book in any one round) but rather to increase the chance that the cooperative will not go bust, merely breaking even in the long run and thereby continuing to provide a service. In the perfect model scenario, with complete knowledge of all uncertainties and unlimited computational resources, fair odds may prove to be sustainable. The implications these results hold in the case of games against nature, which is perhaps a more relevant context for decision makers concerned with geophysical systems, are discussed. The claim that acknowledged model error makes fair (probabilistic) odds an irrational aim is considered, as are the challenges of working within the framework of sustainable (but non-probabilistic) odds.
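A toy numerical illustration of the ruin argument (entirely schematic; the event probabilities, premium rule, starting capital and horizon are invented and are not taken from the abstract): an insurer charges the fair premium implied by its own, slightly wrong, model probability of a unit loss, while losses actually occur with a higher probability, and the resulting books are simulated to estimate how often the capital is exhausted. Quoting longer (sustainable) odds corresponds to pricing with a model probability above the truth, which sharply lowers the simulated ruin frequency.

```python
import numpy as np

def ruin_probability(p_true, p_model, capital=20.0, payout=1.0,
                     n_rounds=500, n_sims=20_000, seed=0):
    """Fraction of simulated books in which the insurer's capital hits zero.

    The insurer charges the 'fair' premium implied by its own model,
    p_model * payout, on every unit bet, but losses occur with p_true.
    """
    rng = np.random.default_rng(seed)
    losses = rng.random((n_sims, n_rounds)) < p_true
    premium = p_model * payout
    capital_path = capital + np.cumsum(premium - payout * losses, axis=1)
    return np.mean((capital_path <= 0).any(axis=1))

print(ruin_probability(p_true=0.10, p_model=0.08))  # underestimated risk: frequent ruin
print(ruin_probability(p_true=0.10, p_model=0.12))  # longer odds: ruin becomes rare
```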
Uncertainties in climate data sets
NASA Technical Reports Server (NTRS)
Mcguirk, James P.
1992-01-01
Climate diagnostics are constructed from either analyzed fields or observational data sets. Those that have been commonly used are normally considered ground truth. However, in most of these collections, errors and uncertainties exist which are generally ignored due to the consistency of usage over time. Examples of uncertainties and errors are described in NMC and ECMWF analyses and in satellite observational data sets (OLR, TOVS, and SMMR). It is suggested that these errors can be large, systematic, and not negligible in climate analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linkins, A.E.
1992-09-01
Since this was the final year of the project, principal activities were directed towards either collecting data needed to complete existing incomplete data sets or writing manuscripts. Data sets on the Imnaviat Creek watershed basin are functionally complete, and data were finalized on the cellulose mineralization and dust impact on soil organic carbon and phosphorus decomposition. Seven manuscripts were prepared and are briefly outlined.
Computerized N-acetylcysteine physician order entry by template protocol for acetaminophen toxicity.
Thompson, Trevonne M; Lu, Jenny J; Blackwood, Louisa; Leikin, Jerrold B
2011-01-01
Some medication dosing protocols are logistically complex for traditional physician ordering. The use of computerized physician order entry (CPOE) with templates, or order sets, may be useful to reduce medication administration errors. This study evaluated the rate of medication administration errors using CPOE order sets for N-acetylcysteine (NAC) use in treating acetaminophen poisoning. An 18-month retrospective review of computerized inpatient pharmacy records for NAC use was performed. All patients who received NAC for the treatment of acetaminophen poisoning were included. Each record was analyzed to determine the form of NAC given and whether an administration error occurred. In the 82 cases of acetaminophen poisoning in which NAC was given, no medication administration errors were identified. Oral NAC was given in 31 (38%) cases; intravenous NAC was given in 51 (62%) cases. In this retrospective analysis of N-acetylcysteine administration using computerized physician order entry and order sets, no medication administration errors occurred. CPOE is an effective tool in safely executing complicated protocols in an inpatient setting.
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
49 CFR 529.6 - Requirements for final-stage manufacturers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... section, each final-stage manufacturer whose manufacturing operations on an incomplete automobile cause the completed automobile to exceed the maximum curb weight or maximum frontal area set forth in the...
On the inherent competition between valid and spurious inductive inferences in Boolean data
NASA Astrophysics Data System (ADS)
Andrecut, M.
Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules as combinations of Boolean covariates that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules from a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunctive normal form and, respectively, a sparse generalized algebraic normal form of the variables from the observation data, and we evaluate their performance numerically.
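To make the synthesis task concrete, the sketch below is a generic sequential-covering rule learner that greedily assembles a sparse disjunctive normal form from Boolean data; it is not either of the two algorithms formulated in the paper, and the data, term limit and tie-breaking rule are arbitrary. Applied to random Boolean data, any DNF it returns is by construction a spurious inference.

```python
import numpy as np

def greedy_dnf(X, y, max_terms=10):
    """Greedy synthesis of a sparse DNF covering the positive rows of (X, y).

    Each term is a conjunction of literals, grown until it excludes all
    negative rows; terms are added until the positives are covered."""
    n, m = X.shape
    literals = [(j, v) for j in range(m) for v in (1, 0)]   # x_j or NOT x_j
    uncovered = y.astype(bool)
    neg = ~y.astype(bool)
    dnf = []
    while uncovered.any() and len(dnf) < max_terms:
        covered = np.ones(n, dtype=bool)       # rows satisfying the current term
        term = []
        candidates = list(literals)
        while (covered & neg).any() and candidates:
            # keep the most uncovered positives, break ties by excluding negatives
            best = max(candidates, key=lambda lit: (
                (covered & uncovered & (X[:, lit[0]] == lit[1])).sum(),
                -(covered & neg & (X[:, lit[0]] == lit[1])).sum()))
            candidates = [l for l in candidates if l[0] != best[0]]
            term.append(best)
            covered &= (X[:, best[0]] == best[1])
        if (covered & neg).any() or not (covered & uncovered).any():
            break   # contradictory data or no progress; stop adding terms
        dnf.append(term)
        uncovered &= ~covered
    return dnf

# random Boolean data as a null model: any DNF found here is spurious
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 12))
y = rng.integers(0, 2, size=200)
print(greedy_dnf(X, y))
```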
NASA Astrophysics Data System (ADS)
McCook, L. J.; Almany, G. R.; Berumen, M. L.; Day, J. C.; Green, A. L.; Jones, G. P.; Leis, J. M.; Planes, S.; Russ, G. R.; Sale, P. F.; Thorrold, S. R.
2009-06-01
The global decline in coral reefs demands urgent management strategies to protect resilience. Protecting ecological connectivity, within and among reefs, and between reefs and other ecosystems is critical to resilience. However, connectivity science is not yet able to clearly identify the specific measures for effective protection of connectivity. This article aims to provide a set of principles or practical guidelines that can be applied currently to protect connectivity. These ‘rules of thumb’ are based on current knowledge and expert opinion, and on the philosophy that, given the urgency, it is better to act with incomplete knowledge than to wait for detailed understanding that may come too late. The principles, many of which are not unique to connectivity, include: (1) allow margins of error in extent and nature of protection, as insurance against unforeseen or incompletely understood threats or critical processes; (2) spread risks among areas; (3) aim for networks of protected areas which are: (a) comprehensive and spread—protect all biotypes, habitats and processes, etc., to capture as many possible connections, known and unknown; (b) adequate—maximise extent of protection for each habitat type, and for the entire region; (c) representative—maximise likelihood of protecting the full range of processes and spatial requirements; (d) replicated—multiple examples of biotypes or processes enhances risk spreading; (4) protect entire biological units where possible (e.g. whole reefs), including buffers around core areas. Otherwise, choose bigger rather than smaller areas; (5) provide for connectivity at a wide range of dispersal distances (within and between patches), emphasising distances <20-30 km; and (6) use a portfolio of approaches, including but not limited to MPAs. Three case studies illustrating the application of these principles to coral reef management in the Bohol Sea (Philippines), the Great Barrier Reef (Australia) and Kimbe Bay (Papua New Guinea) are described.
A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz
2018-04-01
For the first time, we introduced the probabilistic principal component analysis (pPCA) regarding the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series to estimate and remove Common Mode Error (CME) without the interpolation of missing values. We used data from the International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, and then the CME was estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset have fewer than 100 common epochs of observations. The first Principal Component (PC) explained more than 36% of the total variance of the time series residuals (series with the deterministic model removed), which, compared with the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, which is reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of station velocities estimated from the filtered residuals, by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analysed in the context of environmental mass loading influences on the filtering results. Subtraction of the environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
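As a simplified picture of the filtering step, the common mode can be estimated as the leading principal component of the stacked residual matrix; the sketch below uses a plain EM-style PCA imputation loop to cope with missing epochs, which is not the authors' probabilistic PCA (pPCA avoids explicit interpolation), and the station count, noise level and gap fraction are synthetic.

```python
import numpy as np

def common_mode_em_pca(residuals, n_iter=50):
    """Estimate a rank-1 common mode error from an incomplete residual matrix.

    residuals : (n_epochs, n_stations) array with NaNs marking missing epochs.
    Missing entries are re-imputed from the current rank-1 reconstruction
    (a simple EM-PCA loop), and the leading component is returned as the CME."""
    mask = np.isnan(residuals)
    filled = np.where(mask, np.nanmean(residuals, axis=0), residuals)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled - filled.mean(axis=0), full_matrices=False)
        recon = filled.mean(axis=0) + s[0] * np.outer(U[:, 0], Vt[0])
        filled = np.where(mask, recon, residuals)
    cme = s[0] * np.outer(U[:, 0], Vt[0])       # rank-1 common mode estimate
    return cme, residuals - cme                  # filtered residuals keep their NaNs

# synthetic example: 5 stations sharing one signal, 30% of epochs missing
rng = np.random.default_rng(0)
common = rng.normal(size=(400, 1))
data = common @ rng.uniform(0.5, 1.5, size=(1, 5)) + 0.3 * rng.normal(size=(400, 5))
data[rng.random(data.shape) < 0.3] = np.nan
cme, filtered = common_mode_em_pca(data)
```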
Lee, Nam-Ju; Cho, Eunhee; Bakken, Suzanne
2010-03-01
The purposes of this study were to develop a taxonomy for detection of errors related to hypertension management and to apply the taxonomy to retrospectively analyze the documentation of nurses in Advanced Practice Nurse (APN) training. We developed the Hypertension Diagnosis and Management Error Taxonomy and applied it in a sample of adult patient encounters (N = 15,862) that were documented in a personal digital assistant-based clinical log by registered nurses in APN training. We used Structured Query Language (SQL) queries to retrieve hypertension-related data from the central database. The data were summarized using descriptive statistics. Blood pressure was documented in 77.5% (n = 12,297) of encounters; 21% had high blood pressure values. Missed diagnosis, incomplete diagnosis and misdiagnosis rates were 63.7%, 6.8% and 7.5%, respectively. In terms of treatment, the omission rates were 17.9% for essential medications and 69.9% for essential patient teaching. Contraindicated anti-hypertensive medications were documented in 12% of encounters with co-occurring diagnoses of hypertension and asthma. The Hypertension Diagnosis and Management Error Taxonomy was useful for identifying errors based on documentation in a clinical log. The results provide an initial understanding of the nature of errors associated with hypertension diagnosis and management by nurses in APN training. The information gained from this study can contribute to educational interventions that promote APN competencies in identification and management of hypertension as well as overall patient safety and informatics competencies. Copyright © 2010 Korean Society of Nursing Science. All rights reserved.
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
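For context, the low-rank assumption that the paper relaxes corresponds to classical matrix completion; the sketch below is a generic soft-impute style baseline (iterative singular-value soft-thresholding), not the proposed Sparse Representation with Missing Entries and Matrix Completion algorithm, and the matrix sizes, rank and threshold are arbitrary.

```python
import numpy as np

def soft_impute(M, mask, tau=5.0, n_iter=100):
    """Fill missing entries of M (where mask is False) by iterative
    singular value soft-thresholding, a classic low-rank completion baseline."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        X = np.where(mask, M, X_low)                          # keep observed entries
    return X

# toy example: a rank-2 matrix with roughly 40% of its entries removed
rng = np.random.default_rng(0)
M = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(M.shape) > 0.4
M_hat = soft_impute(M, mask)
print(np.abs(M_hat - M)[~mask].mean())   # reconstruction error on missing entries
```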
Social Interactions under Incomplete Information: Games, Equilibria, and Expectations
NASA Astrophysics Data System (ADS)
Yang, Chao
My dissertation research investigates interactions of agents' behaviors through social networks when some information is not shared publicly, focusing on solutions to a series of challenging problems in empirical research, including heterogeneous expectations and multiple equilibria. The first chapter, "Social Interactions under Incomplete Information with Heterogeneous Expectations", extends the current literature in social interactions by devising econometric models and estimation tools with private information in not only the idiosyncratic shocks but also some exogenous covariates. For example, when analyzing peer effects in class performance, it was previously assumed that all control variables, including individual IQ and SAT scores, are known to the whole class, which is unrealistic. This chapter allows such exogenous variables to be private information and models agents' behaviors as outcomes of a Bayesian Nash Equilibrium in an incomplete information game. The distribution of equilibrium outcomes can be described by the equilibrium conditional expectations, which is unique when the parameters are within a reasonable range according to the contraction mapping theorem in function spaces. The equilibrium conditional expectations are heterogeneous in both exogenous characteristics and the private information, which makes estimation in this model more demanding than in previous ones. This problem is solved in a computationally efficient way by combining the quadrature method and the nested fixed point maximum likelihood estimation. In Monte Carlo experiments, if some exogenous characteristics are private information and the model is estimated under the mis-specified hypothesis that they are known to the public, estimates will be biased. Applying this model to municipal public spending in North Carolina, significant negative correlations between contiguous municipalities are found, showing free-riding effects. The second chapter, "A Tobit Model with Social Interactions under Incomplete Information", is an application of the first chapter to censored outcomes, corresponding to the situation where agents' behaviors are subject to some binding restrictions. In an empirical analysis of property tax rates set by North Carolina municipal governments, it is found that there is a significant positive correlation among nearby municipalities. Additionally, some private information about its own residents is used by a municipal government to predict others' tax rates, which enriches current empirical work on tax competition. The third chapter, "Social Interactions under Incomplete Information with Multiple Equilibria", extends the first chapter by investigating effective estimation methods when the condition for a unique equilibrium may not be satisfied. With multiple equilibria, the previous model is incomplete due to the unobservable equilibrium selection. Neither conventional likelihoods nor moment conditions can be used to estimate parameters without further specifications. Although there are some solutions to this issue in the current literature, they are based on strong assumptions, such as assuming that agents with the same observable characteristics play the same strategy.
This paper relaxes those assumptions and extends the all-solution method used to estimate discrete choice games to a setting with both discrete and continuous choices, bounded and unbounded outcomes, and a general form of incomplete information, where the existence of a pure strategy equilibrium has been an open question for a long time. By the use of differential topology and functional analysis, it is found that when all exogenous characteristics are public information, there are a finite number of equilibria. With privately known exogenous characteristics, the equilibria can be represented by a compact set in a Banach space and be approximated by a finite set. As a result, a finite-state probability mass function can be used to specify a probability measure for equilibrium selection, which completes the model. From Monte Carlo experiments on two types of binary choice models, it is found that assuming equilibrium uniqueness can introduce estimation biases when the true value of the interaction intensity is large and there are multiple equilibria in the data generating process.
Holland-Letz, Tim; Endres, Heinz G; Biedermann, Stefanie; Mahn, Matthias; Kunert, Joachim; Groh, Sabine; Pittrow, David; von Bilderling, Peter; Sternitzky, Reinhardt; Diehm, Curt
2007-05-01
The reliability of ankle-brachial index (ABI) measurements performed by different observer groups in primary care has not yet been determined. The aims of the study were to provide precise estimates for all effects influencing the variability of the ABI (patients' individual variability, intra- and inter-observer variability), with particular focus on the performance of different observer groups. Using a partially balanced incomplete block design, 144 unselected individuals aged ≥ 65 years underwent double ABI measurements by one vascular surgeon or vascular physician, one family physician and one nurse with training in Doppler sonography. Three groups comprising a total of 108 individuals were analyzed (only two with ABI < 0.90). Errors for two repeated measurements for all three observer groups did not differ (experts 8.5%, family physicians 7.7%, and nurses 7.5%, p = 0.39). There was no relevant bias among observer groups. Intra-observer variability expressed as standard deviation divided by the mean was 8%, and inter-observer variability was 9%. In conclusion, reproducibility of the ABI measurement was good in this cohort of elderly patients, who almost all had values in the normal range. The mean error of 8-9% within or between observers is smaller than with established screening measures. Since there were no differences among observers with different training backgrounds, our study confirms the appropriateness of ABI assessment for screening peripheral arterial disease (PAD) and generalized atherosclerosis in the primary care setting. Given the importance of the early detection and management of PAD, this diagnostic tool should be used routinely as a standard for PAD screening. Additional studies will be required to confirm our observations in patients with PAD of various severities.
Lauffer, A; Solé, L; Bernstein, S; Lopes, M H; Francisconi, C F
2013-01-01
The development and validation of questionnaires for evaluating quality of life (QoL) has become an important area of research. However, there is a proliferation of non-validated measuring instruments in the health setting that do not contribute to advances in scientific knowledge. To present, through the analysis of available validated questionnaires, a checklist of the practical aspects of how to carry out the cross-cultural adaptation of QoL questionnaires (generic or disease-specific) so that no step is overlooked in the evaluation process, and thus help prevent the elaboration of insufficient or incomplete validations. We consulted basic textbooks and the PubMed database using the following keywords: quality of life, questionnaires, and gastroenterology, confined to «validation studies» in English, Spanish, and Portuguese, and with no time limit, for the purpose of analyzing the translation and validation of the questionnaires available through the Mapi Institute and PROQOLID websites. A checklist is presented to aid in the planning and carrying out of the cross-cultural adaptation of QoL questionnaires, in conjunction with a glossary of key terms in the area of knowledge. The acronym DSTAC was used, which refers to each of the 5 stages involved in the recommended procedure. In addition, we provide a table of the QoL instruments that have been validated in Spanish. This article provides information on how to adapt QoL questionnaires from a cross-cultural perspective, as well as how to minimize common errors. Copyright © 2012 Asociación Mexicana de Gastroenterología. Published by Masson Doyma México S.A. All rights reserved.
Lessons learnt from Dental Patient Safety Case Reports
Obadan, Enihomo M.; Ramoni, Rachel B.; Kalenderian, Elsbeth
2015-01-01
Background: Errors are commonplace in dentistry; it is therefore imperative for dental professionals to intercept them before they lead to an adverse event, and/or to mitigate their effects when an adverse event occurs. This requires a systematic approach at both the profession level, encapsulated in the Agency for Healthcare Research and Quality's Patient Safety Initiative structure, and the practice level, where Crew Resource Management is a tested paradigm. Supporting patient safety at both the dental practice and profession levels relies on understanding the types and causes of errors, an area in which little is known. Methods: A retrospective review of dental adverse events reported in the literature was performed. Electronic bibliographic databases were searched and data were extracted on background characteristics, incident description, case characteristics, the clinic setting where the adverse event originated, the phase of patient care in which the adverse event was detected, proximal cause, type of patient harm, degree of harm and recovery actions. Results: 182 publications (containing 270 cases) were identified through our search. Delayed and unnecessary treatment/disease progression after misdiagnosis was the largest type of harm reported. 24.4% of reviewed cases were reported to have experienced permanent harm. One of every ten case reports reviewed (11.1%) reported that the adverse event resulted in the death of the affected patient. Conclusions: Published case reports provide a window into understanding the nature and extent of dental adverse events, but for as much as the findings revealed about adverse events, they also identified the need for more broad-based contributions to our collective body of knowledge about adverse events in the dental office and their causes. Practical Implications: Siloed and incomplete contributions to our understanding of adverse events in the dental office are threats to dental patients' safety. PMID:25925524
On effective and optical resolutions of diffraction data sets.
Urzhumtseva, Ludmila; Klaholz, Bruno; Urzhumtsev, Alexandre
2013-10-01
In macromolecular X-ray crystallography, diffraction data sets are traditionally characterized by the highest resolution dhigh of the reflections that they contain. This measure is sensitive to individual reflections and does not refer to the eventual data incompleteness and anisotropy; it therefore does not describe the data well. A physically relevant and robust measure that provides a universal way to define the `actual' effective resolution deff of a data set is introduced. This measure is based on the accurate calculation of the minimum distance between two immobile point scatterers resolved as separate peaks in the Fourier map calculated with a given set of reflections. This measure is applicable to any data set, whether complete or incomplete. It also allows characterization of the anisotropy of diffraction data sets in which deff strongly depends on the direction. Describing mathematical objects, the effective resolution deff characterizes the `geometry' of the set of measured reflections and does not depend on the diffraction intensities. At the same time, the diffraction intensities reflect the composition of the structure from physical entities: the atoms. The minimum distance for the atoms typical of a given structure is a measure that is different from and complementary to deff; it is also a characteristic that is complementary to conventional measures of the data-set quality. Following the previously introduced terms, this value is called the optical resolution, dopt. The optical resolution as defined here describes the separation of the atomic images in the `ideal' crystallographic Fourier map that would be calculated if the exact phases were known. The effective and optical resolution, as formally introduced in this work, are of general interest, giving a common `ruler' for all kinds of crystallographic diffraction data sets.
Approximate error conjugate gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
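A schematic of the ray-subsampling idea for a least-squares reconstruction problem follows. It is a generic projected steepest-descent sketch rather than a full conjugate-gradient implementation, and it is not the patented method; the system matrix, subset fraction and nonnegativity constraint are invented for illustration.

```python
import numpy as np

def subsampled_gradient_descent(A, b, x0, n_iter=200, subset_frac=0.2, seed=0):
    """Minimise ||A x - b||^2 using, at each step, an error term computed from
    only a random subset of the rays (rows of A); the patent applies the same
    approximate-error idea within a constrained conjugate gradient scheme."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    m = A.shape[0]
    k = max(1, int(subset_frac * m))
    for _ in range(n_iter):
        rows = rng.choice(m, size=k, replace=False)       # subset of rays
        A_s, b_s = A[rows], b[rows]
        grad = 2.0 * A_s.T @ (A_s @ x - b_s)              # approximate error gradient
        # exact line search along -grad for the subsampled quadratic
        Ag = A_s @ grad
        step = (grad @ grad) / (2.0 * (Ag @ Ag) + 1e-12)
        x = np.clip(x - step * grad, 0.0, None)           # simple nonnegativity constraint
    return x

# toy 'tomography' problem: 400 rays, 100 unknowns, nonnegative ground truth
rng = np.random.default_rng(1)
A = rng.random((400, 100))
x_true = np.abs(rng.normal(size=100))
b = A @ x_true + 0.01 * rng.normal(size=400)
x_hat = subsampled_gradient_descent(A, b, np.zeros(100))
```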
Impacts of uncertainties in European gridded precipitation observations on regional climate analysis
Prein, Andreas F; Gobiet, Andreas
2017-01-01
Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497
Speech errors of amnesic H.M.: unlike everyday slips-of-the-tongue.
MacKay, Donald G; James, Lori E; Hadley, Christopher B; Fogler, Kethera A
2011-03-01
Three language production studies indicate that amnesic H.M. produces speech errors unlike everyday slips-of-the-tongue. Study 1 was a naturalistic task: H.M. and six controls closely matched for age, education, background and IQ described what makes captioned cartoons funny. Nine judges rated the descriptions blind to speaker identity and gave reliably more negative ratings for coherence, vagueness, comprehensibility, grammaticality, and adequacy of humor-description for H.M. than the controls. Study 2 examined "major errors", a novel type of speech error that is uncorrected and reduces the coherence, grammaticality, accuracy and/or comprehensibility of an utterance. The results indicated that H.M. produced seven types of major errors reliably more often than controls: substitutions, omissions, additions, transpositions, reading errors, free associations, and accuracy errors. These results contradict recent claims that H.M. retains unconscious or implicit language abilities and produces spoken discourse that is "sophisticated," "intact" and "without major errors." Study 3 examined whether three classical types of errors (omissions, additions, and substitutions of words and phrases) differed for H.M. versus controls in basic nature and relative frequency by error type. The results indicated that omissions, and especially multi-word omissions, were relatively more common for H.M. than the controls; and substitutions violated the syntactic class regularity (whereby, e.g., nouns substitute with nouns but not verbs) relatively more often for H.M. than the controls. These results suggest that H.M.'s medial temporal lobe damage impaired his ability to rapidly form new connections between units in the cortex, a process necessary to form complete and coherent internal representations for novel sentence-level plans. In short, different brain mechanisms underlie H.M.'s major errors (which reflect incomplete and incoherent sentence-level plans) versus everyday slips-of-the-tongue (which reflect errors in activating pre-planned units in fully intact sentence-level plans). Implications of the results of Studies 1-3 are discussed for systems theory, binding theory and relational memory theories. Copyright © 2010 Elsevier Srl. All rights reserved.
Performance Analysis: Work Control Events Identified January - August 2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Grange, C E; Freeman, J W; Kerr, C E
2011-01-14
This performance analysis evaluated 24 events that occurred at LLNL from January through August 2010. The analysis identified areas of potential work control process and/or implementation weaknesses and several common underlying causes. Human performance improvement and safety culture factors were part of the causal analysis of each event and were analyzed. The collective significance of all events in 2010, as measured by the occurrence reporting significance category and by the proportion of events that have been reported to the DOE ORPS under the ''management concerns'' reporting criteria, does not appear to have increased in 2010. The frequency of reporting in each of the significance categories has not changed in 2010 compared to the previous four years. There is no change indicating a trend in the significance category and there has been no increase in the proportion of occurrences reported in the higher significance category. Also, the frequency of events, 42 events reported through August 2010, is not greater than in previous years and is below the average of 63 occurrences per year at LLNL since 2006. Over the previous four years, an average of 43% of the LLNL's reported occurrences have been reported as either ''management concerns'' or ''near misses.'' In 2010, 29% of the occurrences have been reported as ''management concerns'' or ''near misses.'' This rate indicates that LLNL is now reporting fewer ''management concern'' and ''near miss'' occurrences compared to the previous four years. From 2008 to the present, LLNL senior management has undertaken a series of initiatives to strengthen the work planning and control system with the primary objective to improve worker safety. In 2008, the LLNL Deputy Director established the Work Control Integrated Project Team to develop the core requirements and graded elements of an institutional work planning and control system. By the end of that year this system was documented and implementation had begun. In 2009, training of the workforce began and as of the time of this report more than 50% of authorized Integration Work Sheets (IWS) use the activity-based planning process. In 2010, LSO independently reviewed the work planning and control process and confirmed to the Laboratory that the Integrated Safety Management (ISM) System was implemented. LLNL conducted a cross-directorate management self-assessment of work planning and control and is developing actions to respond to the issues identified. Ongoing efforts to strengthen the work planning and control process and to improve the quality of LLNL work packages are in progress: completion of remaining actions in response to the 2009 DOE Office of Health, Safety, and Security (HSS) evaluation of LLNL's ISM System; scheduling more than 14 work planning and control self-assessments in FY11; continuing to align subcontractor work control with the Institutional work planning and control system; and continuing to maintain the electronic IWS application. The 24 events included in this analysis were caused by errors in the first four of the five ISMS functions. The most frequent cause was errors in analyzing the hazards (Function 2). The second most frequent cause was errors occurring when defining the work (Function 1), followed by errors during the performance of work (Function 4). Interestingly, very few errors in developing controls (Function 3) resulted in events.
This leads one to conclude that if improvements are made to defining the scope of work and analyzing the potential hazards, LLNL may reduce the frequency or severity of events. Analysis of the 24 events resulted in the identification of ten common causes. Some events had multiple causes, resulting in the mention of 39 causes being identified for the 24 events. The most frequent cause was workers, supervisors, or experts believing they understood the work and the hazards but their understanding was incomplete. The second most frequent cause was unclear, incomplete or confusing documents directing the work. Together, these two causes were mentioned 17 times and contributed to 13 of the events. All of the events with the cause of ''workers, supervisors, or experts believing they understood the work and the hazards but their understanding was incomplete'' had this error in the first two ISMS functions: define the work and analyze the hazard. This means that these causes result in the scope of work being ill-defined or the hazard(s) improperly analyzed. Incomplete implementation of these functional steps leads to the hazards not being controlled. The causes are then manifested in events when the work is conducted. The process to operate safely relies on accurately defining the scope of work. This review has identified a number of examples of latent organizational weakness in the execution of work control processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
ERIC Educational Resources Information Center
Bowen, J. Philip; Sorensen, Jennifer B.; Kirschner, Karl N.
2007-01-01
The analysis explains the basis set superposition error (BSSE) and fragment relaxation involved in calculating the interaction energies using various first principle theories. Interacting the correlated fragment and increasing the size of the basis set can help in decreasing the BSSE to a great extent.
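For reference, the BSSE is conventionally estimated and removed with the Boys-Bernardi counterpoise correction, in which each monomer is recomputed in the full dimer basis (ghost functions on the partner fragment). In standard textbook notation, not specific to this article:

\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{\,\alpha\beta} - E_{A}^{\,\alpha\beta} - E_{B}^{\,\alpha\beta},

where the subscript denotes the system evaluated and the superscript \alpha\beta the combined dimer basis; fragment relaxation (deformation) adds, for each monomer, the energy difference between its geometry in the complex and its fully relaxed geometry.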
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Lee, Hong-Tao
1989-01-01
A new approach for the determination of machine-tool settings for spiral bevel gears is proposed. The proposed settings provide a predesigned parabolic function of transmission errors and the desired location and orientation of the bearing contact. The predesigned parabolic function of transmission errors is able to absorb the piecewise linear functions of transmission errors that are caused by gear misalignment, and thereby reduce gear noise. The gears are face-milled by head cutters with conical surfaces or surfaces of revolution. A computer program for simulation of meshing and bearing contact and for determination of transmission errors for misaligned gears has been developed.
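The noise-absorption argument can be stated compactly: if the transmission error function is predesigned as a parabola, an added linear term caused by misalignment merely shifts that parabola. In generic symbols (a standard identity, not the paper's exact derivation),

\Delta\phi_2(\phi_1) = -a\phi_1^{2}, \qquad -a\phi_1^{2} + b\phi_1 = -a\left(\phi_1 - \frac{b}{2a}\right)^{2} + \frac{b^{2}}{4a},

so a linear error of slope b is absorbed into another parabola with the same coefficient a, keeping the transmission error smooth instead of introducing the discontinuities that excite noise and vibration.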
NASA Astrophysics Data System (ADS)
Al-Mudhafar, W. J.
2013-12-01
Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate the properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution needed to build an accurate reservoir model for optimal future reservoir performance. In this paper, the facies estimation has been done through multinomial logistic regression (MLR) with respect to the well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. Firstly, a robust sequential imputation algorithm has been used to impute the missing data. This algorithm starts from a complete subset of the dataset and estimates sequentially the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix. Then, the observation is added to the complete data matrix and the algorithm continues with the next observation with missing values. The MLR has been chosen to estimate the maximum likelihood and minimize the standard error of the nonlinear relationships between facies and the core and log data. The MLR is used to predict the probabilities of the different possible facies given each independent variable by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables using a dot product. A beta distribution of facies has been considered as prior knowledge, and the resulting predicted probability (posterior) has been estimated from the MLR based on Bayes' theorem, which relates the posterior probability to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap is carried out to estimate the extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and the procedure can be repeated N times to produce N bootstrap datasets used to re-fit the model, decreasing the squared difference between the estimated and observed categorical variables (facies) and thereby the degree of uncertainty.
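A minimal illustration of the multinomial logistic regression step (a scikit-learn sketch with synthetic log curves and invented facies labels; the sequential imputation, the beta prior and the bootstrap stages described above are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# hypothetical predictors: gamma ray, density, water saturation, shale volume, porosity
rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 5))
facies = rng.integers(0, 3, size=n)              # 3 invented facies classes

X_train, X_test, y_train, y_test = train_test_split(X, facies, random_state=0)

# multinomial logit: one linear predictor per class passed through a softmax
model = make_pipeline(StandardScaler(),
                      LogisticRegression(max_iter=2000))
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)              # per-class facies probabilities
print("held-out accuracy:", model.score(X_test, y_test))
```

In recent scikit-learn versions the lbfgs solver fits the multinomial (softmax) formulation for multiclass targets, so predict_proba returns the per-class facies probabilities described in the abstract.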
The consequences of hospital autonomization in Colombia: a transaction cost economics analysis.
Castano, Ramon; Mills, Anne
2013-03-01
Granting autonomy to public hospitals in developing countries has been common over recent decades, and implies a shift from hierarchical to contract-based relationships with health authorities. Theory on transactions costs in contractual relationships suggests they stem from relationship-specific investments and contract incompleteness. Transaction cost economics argues that the parties involved in exchanges seek to reduce transaction costs. The objective of this research was to analyse the relationships observed between purchasers and the 22 public hospitals of the city of Bogota, Colombia, in order to understand the role of relationship-specific investments and contract incompleteness as sources of transaction costs, through a largely qualitative study. We found that contract-based relationships showed relevant transaction costs associated mainly with contract incompleteness, not with relationship-specific investments. Regarding relationships between insurers and local hospitals for primary care services, compulsory contracting regulations locked-in the parties to the contracts. For high-complexity services (e.g. inpatient care), no restrictions applied and relationships suggested transaction-cost minimizing behaviour. Contract incompleteness was found to be a source of transaction costs on its own. We conclude that transaction costs seemed to play a key role in contract-based relationships, and contract incompleteness by itself appeared to be a source of transaction costs. The same findings are likely in other contexts because of difficulties in defining, observing and verifying the contracted products and the underlying information asymmetries. The role of compulsory contracting might be context-specific, although it is likely to emerge in other settings due to the safety-net role of public hospitals.
SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS II: SENSITIVITY ANALYSIS IN THE CONUS DOMAIN
In this study, we present the response of model results to different scientific treatments in an effort to quantify the uncertainties caused by the incomplete understanding of mercury science and by model assumptions in atmospheric mercury models. Two sets of sensitivity simulati...
UTLS water vapour from SCIAMACHY limb measurementsV3.01 (2002-2012).
Weigel, K; Rozanov, A; Azam, F; Bramstedt, K; Damadeo, R; Eichmann, K-U; Gebhardt, C; Hurst, D; Kraemer, M; Lossow, S; Read, W; Spelten, N; Stiller, G P; Walker, K A; Weber, M; Bovensmann, H; Burrows, J P
2016-01-01
The SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) aboard the Envisat satellite provided measurements from August 2002 until April 2012. SCIAMACHY measured the scattered or direct sunlight using different observation geometries. The limb viewing geometry allows the retrieval of water vapour at about 10-25 km height from the near-infrared spectral range (1353-1410 nm). These data cover the upper troposphere and lower stratosphere (UTLS), a region in the atmosphere which is of special interest for a variety of dynamical and chemical processes as well as for the radiative forcing. Here, the latest data version of water vapour (V3.01) from SCIAMACHY limb measurements is presented and validated by comparisons with data sets from other satellite and in situ measurements. Considering retrieval tests and the results of these comparisons, the V3.01 data are reliable from about 11 to 23 km and the best results are found in the middle of the profiles between about 14 and 20 km. Above 20 km in the extra tropics V3.01 is drier than all other data sets. Additionally, for altitudes above about 19 km, the vertical resolution of the retrieved profile is not sufficient to resolve signals with a short vertical structure like the tape recorder. Below 14 km, SCIAMACHY water vapour V3.01 is wetter than most collocated data sets, but the high variability of water vapour in the troposphere complicates the comparison. For 14-20 km height, the expected errors from the retrieval and simulations and the mean differences to collocated data sets are usually smaller than 10 % when the resolution of the SCIAMACHY data is taken into account. In general, the temporal changes agree well with collocated data sets except for the Northern Hemisphere extratropical stratosphere, where larger differences are observed. This indicates a possible drift in V3.01 most probably caused by the incomplete treatment of volcanic aerosols in the retrieval. In all other regions a good temporal stability is shown. In the tropical stratosphere an increase in water vapour is found between 2002 and 2012, which is in agreement with other satellite data sets for overlapping time periods.
The challenges in defining and measuring diagnostic error.
Zwaan, Laura; Singh, Hardeep
2015-06-01
Diagnostic errors have emerged as a serious patient safety problem but they are hard to detect and complex to define. At the research summit of the 2013 Diagnostic Error in Medicine 6th International Conference, we convened a multidisciplinary expert panel to discuss challenges in defining and measuring diagnostic errors in real-world settings. In this paper, we synthesize these discussions and outline key research challenges in operationalizing the definition and measurement of diagnostic error. Some of these challenges include 1) difficulties in determining error when the disease or diagnosis is evolving over time and in different care settings, 2) accounting for a balance between underdiagnosis and overaggressive diagnostic pursuits, and 3) determining disease diagnosis likelihood and severity in hindsight. We also build on these discussions to describe how some of these challenges can be addressed while conducting research on measuring diagnostic error.
Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error
ERIC Educational Resources Information Center
Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam
2009-01-01
Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…
Virtual occlusal definition for orthognathic surgery.
Liu, X J; Li, Q Q; Zhang, Z; Li, T T; Xie, Z; Zhang, Y
2016-03-01
Computer-assisted surgical simulation is being used increasingly in orthognathic surgery. However, occlusal definition is still undertaken using model surgery with subsequent digitization via surface scanning or cone beam computed tomography. A software tool has been developed and a workflow set up in order to achieve a virtual occlusal definition. The results of a validation study carried out on 60 models of normal occlusion are presented. Inter- and intra-user correlation tests were used to investigate the reproducibility of the manual setting point procedure. The errors between the virtually set positions (test) and the digitized manually set positions (gold standard) were compared. The consistency in virtual set positions performed by three individual users was investigated by one way analysis of variance test. Inter- and intra-observer correlation coefficients for manual setting points were all greater than 0.95. Overall, the median error between the test and the gold standard positions was 1.06mm. Errors did not differ among teeth (F=0.371, P>0.05). The errors were not significantly different from 1mm (P>0.05). There were no significant differences in the errors made by the three independent users (P>0.05). In conclusion, this workflow for virtual occlusal definition was found to be reliable and accurate. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Kim, ChungYun; Mazan, Jennifer L; Quiñones-Boex, Ana C
To determine pharmacists' attitudes and behaviors on medication errors and their disclosure and to compare community and hospital pharmacists on such views. An online questionnaire was developed from previous studies on physicians' disclosure of errors. Questionnaire items included demographics, environment, personal experiences, and attitudes on medication errors and the disclosure process. An invitation to participate along with the link to the questionnaire was electronically distributed to members of two Illinois pharmacy associations. A follow-up reminder was sent 4 weeks after the original message. Data were collected for 3 months, and statistical analyses were performed with the use of IBM SPSS version 22.0. The overall response rate was 23.3% (n = 422). The average employed respondent was a 51-year-old white woman with a BS Pharmacy degree working in a hospital pharmacy as a clinical staff member. Regardless of practice settings, pharmacist respondents agreed that medication errors were inevitable and that a disclosure process is necessary. Respondents from community and hospital settings were further analyzed to assess any differences. Community pharmacist respondents were more likely to agree that medication errors were inevitable and that pharmacists should address the patient's emotions when disclosing an error. Community pharmacist respondents were also more likely to agree that the health care professional most closely involved with the error should disclose the error to the patient and thought that it was the pharmacists' responsibility to disclose the error. Hospital pharmacist respondents were more likely to agree that it was important to include all details in a disclosure process and more likely to disagree on putting a "positive spin" on the event. Regardless of practice setting, responding pharmacists generally agreed that errors should be disclosed to patients. There were, however, significant differences in their attitudes and behaviors depending on their particular practice setting. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
On basis set superposition error corrected stabilization energies for large n-body clusters.
Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael
2011-10-07
In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
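For orientation, a minimal sketch of the underlying two-body Boys-Bernardi counterpoise idea that such approximate schemes generalize to n-body clusters; the notation is generic and not taken from the paper:

$$ \Delta E^{\mathrm{CP}}_{\mathrm{int}} \;=\; E^{\alpha_{AB}}_{AB}(AB) \;-\; E^{\alpha_{AB}}_{A}(A) \;-\; E^{\alpha_{AB}}_{B}(B), $$

where $E^{\alpha_{AB}}_{X}(X)$ is the energy of fragment $X$ computed in the full dimer basis $\alpha_{AB}$. Evaluating every fragment in the full cluster basis is what becomes prohibitively expensive for large n-body clusters, which motivates approximate site-site or Valiron-Mayer-type corrections of the kind studied above.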
Complexity of life via collective mind
NASA Technical Reports Server (NTRS)
Zak, Michail
2004-01-01
The collective mind is introduced as a set of simple intelligent units (say, neurons, or interacting agents) that can communicate by exchanging information without explicit global control. Incomplete information is compensated by a sequence of random guesses symmetrically distributed around expectations with prescribed variances. Both the expectations and the variances are invariants characterizing the whole class of agents. These invariants are stored as parameters of the collective mind, and they contribute to the dynamical formalism of the agents' evolution, in particular to the reflective chains of their nested abstract images of selves and non-selves. The proposed model consists of a system of stochastic differential equations in Langevin form representing the motor dynamics, and the corresponding Fokker-Planck equation representing the mental dynamics (motor dynamics describes the motion in physical space, while mental dynamics simulates the evolution of initial errors in terms of the probability density). The main departure of this model from Newtonian and statistical physics is a feedback from the mental to the motor dynamics, which makes the Fokker-Planck equation nonlinear. Interpretations of this model from mathematical and physical viewpoints, as well as possible interpretations from biological, psychological, and social viewpoints, are discussed. The model is illustrated by the dynamics of a dialog.
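As a generic illustration of the motor/mental pairing described above (a one-dimensional sketch with an assumed drift f and noise strength σ, not the paper's specific model):

$$ \mathrm{d}x \;=\; f(x,\rho)\,\mathrm{d}t \;+\; \sigma\,\mathrm{d}W_t \quad \text{(motor dynamics, Langevin form)}, $$
$$ \partial_t \rho(x,t) \;=\; -\,\partial_x\!\bigl[f(x,\rho)\,\rho\bigr] \;+\; \tfrac{1}{2}\sigma^{2}\,\partial_x^{2}\rho \quad \text{(mental dynamics, Fokker-Planck form)}. $$

The feedback enters through the dependence of the drift f on the density ρ itself, which is what renders the Fokker-Planck equation nonlinear, as noted in the abstract.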
NASA Astrophysics Data System (ADS)
Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas
2010-03-01
Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information-gradient, intensity distributions, and regional-property terms-are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.
Elfering, A; Semmer, N K; Grebner, S
This study investigates the link between workplace stress and the 'non-singularity' of patient safety-related incidents in the hospital setting. Over a period of 2 working weeks 23 young nurses from 19 hospitals in Switzerland documented 314 daily stressful events using a self-observation method (pocket diaries); 62 events were related to patient safety. Familiarity of safety-related events and probability of recurrence, as indicators of non-singularity, were the dependent variables in multilevel regression analyses. Predictor variables were both situational (self-reported situational control, safety compliance) and chronic variables (job stressors such as time pressure, or concentration demands and job control). Chronic work characteristics were rated by trained observers. The most frequent safety-related stressful events included incomplete or incorrect documentation (40.3%), medication errors (near misses 21%), delays in delivery of patient care (9.7%), and violent patients (9.7%). Familiarity of events and probability of recurrence were significantly predicted by chronic job stressors and low job control in multilevel regression analyses. Job stressors and low job control were shown to be risk factors for patient safety. The results suggest that job redesign to enhance job control and decrease job stressors may be an important intervention to increase patient safety.
Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Xue, Junpeng; Gao, Bo
2017-06-14
Zonal reconstruction methods are widely used in slope-based metrology because of their good capability of reconstructing local details of a surface profile. It has been noticed in the literature that large reconstruction errors occur when zonal reconstruction methods designed for rectangular geometry are used to process slopes in a quadrilateral geometry, which is the more general geometry in phase measuring deflectometry. In this paper, we present a new idea for zonal methods in quadrilateral geometry. Instead of employing intermediate slopes to set up height-slope equations, we consider the height increment as a more general connector to establish the height-slope relations for least-squares regression. The classical zonal methods and interpolation-assisted zonal methods are compared with our proposal. Results of both simulation and experiment demonstrate the effectiveness of the proposed idea. In implementation, the modification of the classical zonal methods is addressed. The new methods preserve many good aspects of the classical ones, such as the ability to handle a large incomplete slope dataset in an arbitrary aperture and a computational complexity comparable to that of the classical zonal method, while their accuracy is much higher when integrating slopes in quadrilateral geometry.
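As a concrete illustration of height-increment/slope least-squares integration, here is a minimal Southwell-style zonal sketch on a plain rectangular grid; the quadrilateral-geometry formulation of the paper is more general, and the function and variable names below are illustrative only:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def zonal_reconstruct(sx, sy, dx=1.0, dy=1.0):
    """Least-squares height map from x/y slope maps on a rectangular grid.

    Each height increment between neighbouring points is matched to the
    trapezoidal average of the measured slopes along that link.
    """
    ny, nx = sx.shape
    n = nx * ny
    idx = lambda i, j: i * nx + j

    A = lil_matrix((2 * n, n))   # generous upper bound on the number of equations
    b = []
    k = 0
    for i in range(ny):          # x-direction height increments
        for j in range(nx - 1):
            A[k, idx(i, j + 1)] = 1.0
            A[k, idx(i, j)] = -1.0
            b.append(0.5 * (sx[i, j] + sx[i, j + 1]) * dx)
            k += 1
    for i in range(ny - 1):      # y-direction height increments
        for j in range(nx):
            A[k, idx(i + 1, j)] = 1.0
            A[k, idx(i, j)] = -1.0
            b.append(0.5 * (sy[i, j] + sy[i + 1, j]) * dy)
            k += 1

    z = lsqr(A.tocsr()[:k], np.asarray(b))[0]
    z -= z.mean()                # remove the unconstrained piston term
    return z.reshape(ny, nx)

# toy check: slopes of z = x**2 + y**2 on a small grid
x = np.linspace(-1.0, 1.0, 16)
X, Y = np.meshgrid(x, x)
z_rec = zonal_reconstruct(2 * X, 2 * Y, dx=x[1] - x[0], dy=x[1] - x[0])
```

An incomplete slope set in an arbitrary aperture can be handled in the same framework by simply omitting the equations for links with invalid slope samples.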
Risky business: Behaviors associated with indoor tanning in US high school students.
Chapman, Stephanie; Ashack, Kurt; Bell, Eric; Sendelweck, Myra Ann; Dellavalle, Robert
2017-09-15
Understanding of associations between indoor tanning and risky health related behaviors such as sexual activity and substance abuse among high school students across the United States is incomplete. To identify risky health related behaviors among high school students utilizing indoor tanning and analyze differences between state specific data. Results from the Youth Risk Behavior Surveillance System (YRBSS) 2013 in 14 different states were analyzed. Participants were 90,414 high school students. Responses to questions assessing indoor tanning habits, sexual activity, and use of substances were analyzed. Sexual activity was associated with indoor tanning in 10 of 14 states, with Nebraska having the strongest association (adjusted odds ratio, 3.8; 95% CI, 2.4-6.2; p<0.001). Indoor tanning was also associated with use of alcohol, marijuana, ecstasy, cocaine, prescription medications, and cigarettes. Only 15 states asked students about their personal history of indoor tanning use, and Minnesota was excluded from our analysis as they administered a non-YRBS questionnaire. Additionally, our study only analyzed results from the 2013 YRBS. Lastly, our data was analyzed in 14 individual data sets, giving a high likelihood of Type 1 error. High school students utilizing indoor tanning are more likely to engage in sexual activity and substance abuse as compared to students who do not utilize indoor tanning.
NASA Astrophysics Data System (ADS)
Oh, Moonseong
Most brachytherapy planning systems are based on a dose calculation algorithm that assumes an infinite scatter environment surrounding the target volume and applicator. In intra-operative high dose rate brachytherapy (IOHDR), where treatment catheters are typically laid either directly on a tumor bed or within applicators that may have little or no scatter material above them, the lack of scatter from one side of the applicator can result in serious underdosage during treatment. Therefore, full analyses of the physical processes that contribute to dosimetric errors, such as the photoelectric effect, Rayleigh scattering, and Compton scattering, have to be investigated and documented to enable more accurate treatment delivery to patients undergoing IOHDR procedures. Monte Carlo simulation results showed that Compton scattering is about 40 times more probable than the photoelectric effect for treated areas of a single source, 4 x 4, and 2 x 4 cm2. Also, the dose variations with and without the photoelectric effect were 0.3-0.7%, which is within the uncertainty of the Monte Carlo simulations. Monte Carlo simulation studies were also done to verify the following experimental results for quantification of dosimetric errors in clinical IOHDR brachytherapy. The first experimental study was performed to quantify the inaccuracy in clinical dose delivery due to the incomplete scatter conditions inherent in IOHDR brachytherapy. Treatment plans were developed for 3 different treatment surface areas (4 x 4, 7 x 7, 12 x 12 cm2), each with prescription points located at 3 distances (0.5 cm, 1.0 cm, and 1.5 cm) from the source dwell positions. Measurements showed that the magnitude of the underdosage varies from about 8% to 13% of the prescription dose as the prescription depth is increased from 0.5 cm to 1.5 cm. This treatment error was found to be independent of the irradiated area and strongly dependent on the prescription distance. The study was extended to confirm the underdosage for various shapes of treated areas (especially irregular shapes), which can be applied in clinical cases. Treatment plans of 10 patients previously treated at Roswell Park Cancer Institute in Buffalo, which had irregular shapes of treated areas, were used. In IOHDR brachytherapy, a 2-dimensional (2-D) planar geometry is typically used without considering the curved shape of target surfaces. In clinical cases, this assumption of planar geometry may cause serious dose delivery errors to target volumes. The second study was performed to investigate the dose errors to curved surfaces. Seven rectangular shaped plans (five for 1.0 cm and two for 0.5 cm prescription depth) and archived irregular shaped plans of 2 patients were analyzed. Cylindrical phantoms with six radii (ranging from 1.35 to 12.5 cm) were used to simulate the treatment planning geometries, which were calculated in 2-D plans. Actual doses delivered to prescription points were overestimated by up to 15% on the concave side of curved applicators for all cylindrical phantoms with 1.0 cm prescription depth. Also, delivered doses decreased by up to 10% on the convex side of curved applicators for small treated areas (≤ 5 catheters), but interestingly, no dose dependence was shown for large treated areas. Our measurements have shown inaccuracy in dose delivery when the original planar treatment plan was delivered in a curved applicator setting. Dose errors arising due to tumor curvature may be significant in a clinical setup and merit attention during planning.
Concept Learning and Heuristic Classification in Weak-Theory Domains
1990-03-01
[Text-extraction residue from the original report: fragments of a figure listing audiological diagnostic categories (age- and noise-induced cochlear loss, acoustic neuroma) and of a reference list, including R. T. Duran, "Concept learning with incomplete data sets," Master's thesis.]
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2011 CFR
2011-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2012 CFR
2012-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2014 CFR
2014-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2010 CFR
2010-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2013 CFR
2013-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
49 CFR 529.5 - Requirements for intermediate manufacturers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION MANUFACTURERS OF MULTISTAGE AUTOMOBILES § 529... automobile cause it to exceed the maximum curb weight or maximum frontal area set forth in the document furnished it by the incomplete automobile manufacturer under § 529.4(c)(1) or by a previous intermediate...
Black carbon (BC), light absorbing particles emitted primarily from incomplete combustion, is operationally defined through a variety of instrumental measurements rather than with a universal definition set forth by the research or regulatory communities. To examine the consiste...
NASA Technical Reports Server (NTRS)
Chien, S.; Gratch, J.; Burl, M.
1994-01-01
In this report we consider the decision-making problem of selecting a strategy from a set of alternatives on the basis of incomplete information (e.g., a finite number of observations); the system can, however, gather additional information at some cost.
Efficiently Ranking Hypotheses in Machine Learning
NASA Technical Reports Server (NTRS)
Chien, Steve
1997-01-01
This paper considers the problem of learning the ranking of a set of alternatives based upon incomplete information (e.g. a limited number of observations). At each decision cycle, the system can output a complete ordering on the hypotheses or decide to gather additional information (e.g. observation) at some cost.
Detecting and overcoming systematic errors in genome-scale phylogenies.
Rodríguez-Ezpeleta, Naiara; Brinkmann, Henner; Roure, Béatrice; Lartillot, Nicolas; Lang, B Franz; Philippe, Hervé
2007-06-01
Genome-scale data sets result in an enhanced resolution of the phylogenetic inference by reducing stochastic errors. However, there is also an increase of systematic errors due to model violations, which can lead to erroneous phylogenies. Here, we explore the impact of systematic errors on the resolution of the eukaryotic phylogeny using a data set of 143 nuclear-encoded proteins from 37 species. The initial observation was that, despite the impressive amount of data, some branches had no significant statistical support. To demonstrate that this lack of resolution is due to a mutual annihilation of phylogenetic and nonphylogenetic signals, we created a series of data sets with slightly different taxon sampling. As expected, these data sets yielded strongly supported but mutually exclusive trees, thus confirming the presence of conflicting phylogenetic and nonphylogenetic signals in the original data set. To decide on the correct tree, we applied several methods expected to reduce the impact of some kinds of systematic error. Briefly, we show that (i) removing fast-evolving positions, (ii) recoding amino acids into functional categories, and (iii) using a site-heterogeneous mixture model (CAT) are three effective means of increasing the ratio of phylogenetic to nonphylogenetic signal. Finally, our results allow us to formulate guidelines for detecting and overcoming phylogenetic artefacts in genome-scale phylogenetic analyses.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-11-01
Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-01-01
Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806
An exploration of Australian hospital pharmacists' attitudes to patient safety.
Lalor, Daniel J; Chen, Timothy F; Walpola, Ramesh; George, Rachel A; Ashcroft, Darren M; Fois, Romano A
2015-02-01
To explore the attitudes of Australian hospital pharmacists towards patient safety in their work settings. A safety climate questionnaire was administered to all 2347 active members of the Society of Hospital Pharmacists of Australia in 2010. Part of the survey elicited free-text comments about patient safety, error and incident reporting. The comments were subjected to thematic analysis to determine the attitudes held by respondents in relation to patient safety and its quality management in their work settings. Two hundred and ten (210) of 643 survey respondents provided comments on safety and quality issues related to their work settings. The responses contained a number of dominant themes including issues of workforce and working conditions, incident reporting systems, the response when errors occur, the presence or absence of a blame culture, hospital management support for safety initiatives, openness about errors and the value of teamwork. A number of pharmacists described the development of a mature patient-safety culture - one that is open about reporting errors and active in reducing their occurrence. Others described work settings in which a culture of blame persists, stifling error reporting and ultimately compromising patient safety. Australian hospital pharmacists hold a variety of attitudes that reflect diverse workplace cultures towards patient safety, error and incident reporting. This study has provided an insight into these attitudes and the actions that are needed to improve the patient-safety culture within Australian hospital pharmacy work settings. © 2014 Royal Pharmaceutical Society.
Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction
NASA Astrophysics Data System (ADS)
Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun
2018-07-01
People are increasingly becoming accustomed to taking photos of everyday life in modern cities and uploading them on major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces huge technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selecting reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographical information of the crowd-sourced pictures, resulting in the proposed method improving the median error from 256.7 m to 69.0 m, and the percentage of the geo-localized query pictures under a 50 m error from 17.2% to 43.2% compared with the previous method. Another discovery using the proposed method is that, in respect of the causes of reconstruction error, closer distances from the cameras to the main objects in query pictures tend to produce lower errors and the component of error parallel to the road makes a more significant contribution to the Total Error. The proposed method is not limited to small areas, and could be expanded to cities and larger areas owing to its flexible parameters.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-12-18
This paper presents four algorithms to generate random forecast error time series, including a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, for use in variable generation integration studies. A comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. The paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
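As an illustrative sketch of one of the four generator families mentioned above, here is a zero-mean ARMA(1,1) forecast-error generator scaled to a target standard deviation; the parameter values are assumptions for demonstration, not those fitted in the study:

```python
import numpy as np

def arma_error_series(n, phi=0.8, theta=0.2, target_std=0.03, seed=0):
    """Zero-mean ARMA(1,1) forecast-error series, rescaled to target_std.

    e[t] = phi * e[t-1] + w[t] + theta * w[t-1], with white noise w.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + w[t] + theta * w[t - 1]
    return e * (target_std / e.std())

# example: one week of hourly day-ahead errors added to a flat 1.0 p.u. forecast
forecast = np.ones(24 * 7)
actual = forecast + arma_error_series(24 * 7)
```

A seasonal variant would additionally condition phi, theta, and target_std on the hour of day, which is one way the historical error statistics can be matched more closely.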
Diagnostic Errors in Ambulatory Care: Dimensions and Preventive Strategies
ERIC Educational Resources Information Center
Singh, Hardeep; Weingart, Saul N.
2009-01-01
Despite an increasing focus on patient safety in ambulatory care, progress in understanding and reducing diagnostic errors in this setting lag behind many other safety concerns such as medication errors. To explore the extent and nature of diagnostic errors in ambulatory care, we identified five dimensions of ambulatory care from which errors may…
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...
2017-02-15
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10−4).
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter
2017-01-01
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Finally, we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10−4).
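For reference, the diamond-norm error referred to in the records above is conventionally defined for the difference between the implemented channel Ẽ and the ideal gate E as

$$ \tfrac{1}{2}\bigl\|\tilde{\mathcal{E}}-\mathcal{E}\bigr\|_{\diamond} \;=\; \tfrac{1}{2}\,\max_{\rho}\,\bigl\|\bigl[(\tilde{\mathcal{E}}-\mathcal{E})\otimes\mathrm{id}\bigr](\rho)\bigr\|_{1}, $$

where the maximization is over density operators on the system extended by an ancilla of equal dimension and ‖·‖₁ is the trace norm (conventions differ by the factor 1/2). This is the quantity bounded by 6.7 × 10−4 above; unlike the average error rate reported by randomized benchmarking, it captures worst-case and coherent errors.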
Verifying Parentage and Confirming Identity in Blackberry with a Fingerprinting Set
USDA-ARS?s Scientific Manuscript database
Parentage and identity confirmation is an important aspect of clonally propagated outcrossing crops. Potential errors resulting in misidentification include off-type pollination events, labeling errors, or sports of clones. DNA fingerprinting sets are an excellent solution to quickly identify off-type ...
Field guide to malformations of frogs and toads: with radiographic interpretations
Meteyer, Carol U.
2000-01-01
In 1995, students found numerous malformed frogs on a field trip to a Minnesota pond. Since that time, reports of malformed frogs have increased dramatically. Malformed frogs have now been reported in 44 states in 38 species of frogs and 19 species of toads. Malformations have been estimated in as much as 60% of the newly metamorphosed frog population at some ponds (NARCAM, '99). The wide geographic distribution of malformed frogs and the variety of malformations are a concern to resource managers, research scientists and public health officials. The potential for malformations to serve as a signal of ecosystem disruption, and the effect this potential disruption might have on other organisms that share those ecosystems, has not been resolved. Malformations represent an error that occurred early in development. The event that caused the developmental error is temporally distant from the malformation we see in the fully developed animal. Knowledge of normal developmental principles is necessary to design thoughtful investigations that will define the events involved in abnormal development in wild frog populations. Development begins at the time an egg is fertilized and progresses by chemical communication between cells and cell layers. This communication is programmed through gene expression. Malformations represent primary errors in development, errors in chemical communication or translation of genetic information. Deformations arise later in development and usually result from the influence of mechanical factors (such as amputation) that alter the shape or anatomy of a structure that has developed normally. The occurrence and the type of malformations are influenced by the type of error or insult as well as the timing of the error (the developmental stage at which the error occurred). The appearance of the malformation can therefore provide clues that suggest when the error may have occurred. If the malformation is an incomplete organ, such as an incomplete limb, the factor or insult acted during a susceptible period prior to organ completion. Although defining the anatomy of the malformed metamorphosed frog can give us an idea of the approximate window during which the developmental insult was initiated, and might even suggest the type of insult that may have occurred, the morphology of the malformation does not define the cause. To define causes and mechanisms of frog malformations we need to use well designed investigations that are different from traditional tests used in acute toxicity or disease pathogenicity studies. When investigating malformations in metamorphosed frogs, we are looking at the effect of exposure to an agent that occurred early in tadpole development. Therefore, investigations to determine causes of malformations need to look at agents that are present in the tadpoles or their environments at these early developmental times. Laboratory experiments need to expose embryos and tadpoles to suspect agents at appropriate developmental stages and look at acute results, such as toxicity and death, as well as follow the developmental process to completion to determine the impact of the agent on the developing tadpole and the fully developed frog. This means holding animals past metamorphic climax to assure that the anatomy and physiology of the adult have developed normally. As we look at field collections of abnormal frogs, we need to keep in mind that these collections reflect survivors only. We are looking at malformations that were not fatal to tadpoles.
We cannot assume that because we do not collect other malformations, they did not exist. More work needs to be done on the developing tadpole, in the field and in the laboratory, to better elucidate the range, frequency, character and causes of anuran malformations.
Incomplete Multisource Transfer Learning.
Ding, Zhengming; Shao, Ming; Fu, Yun
2018-02-01
Transfer learning is generally exploited to adapt well-established source knowledge for learning tasks in a weakly labeled or unlabeled target domain. Nowadays, it is common to see multiple sources available for knowledge transfer, each of which, however, may not include complete class information for the target domain. Naively merging multiple sources together would lead to inferior results due to the large divergence among them. In this paper, we attempt to utilize incomplete multiple sources for effective knowledge transfer to facilitate the learning task in the target domain. To this end, we propose incomplete multisource transfer learning through two directions of knowledge transfer, i.e., cross-domain transfer from each source to the target, and cross-source transfer. In particular, in the cross-domain direction, we deploy latent low-rank transfer learning guided by iterative structure learning to transfer knowledge from each single source to the target domain. This compensates for any missing data in each source with the complete target data. In the cross-source direction, an unsupervised manifold regularizer and effective multisource alignment are explored to jointly compensate for missing data from one portion of a source with another. In this way, both marginal and conditional distribution discrepancies in the two directions are mitigated. Experimental results on standard cross-domain benchmarks and synthetic data sets demonstrate the effectiveness of our proposed model in knowledge transfer from incomplete multiple sources.
Shawahna, Ramzi; Masri, Dina; Al-Gharabeh, Rawan; Deek, Rawan; Al-Thayba, Lama; Halaweh, Masa
2016-02-01
To develop and achieve formal consensus on a definition of medication administration errors and on scenarios that should or should not be considered medication administration errors in hospitalised patient settings. Medication administration errors occur frequently in hospitalised patient settings. Currently, there is no formal consensus on a definition of medication administration errors or on scenarios that should or should not be considered medication administration errors. This was a descriptive study using the Delphi technique. A panel of experts (n = 50) recruited from major hospitals, nursing schools and universities in Palestine took part in the study. Three Delphi rounds were followed to achieve consensus on a proposed definition of medication administration errors and on a series of 61 scenarios representing potential medication administration error situations formulated into a questionnaire. In the first Delphi round, key contact nurses' views on medication administration errors were explored. In the second Delphi round, consensus was achieved to accept the proposed definition of medication administration errors and to include 36 (59%) scenarios and exclude 1 (1·6%) as medication administration errors. In the third Delphi round, consensus was achieved to consider a further 14 (23%) and exclude 2 (3·3%) as medication administration errors, while the remaining eight (13·1%) were considered equivocal. Of the 61 scenarios included in the Delphi process, experts decided to include 50 scenarios as medication administration errors, exclude three scenarios, and include or exclude eight scenarios depending on the individual clinical situation. Consensus on a definition and on scenarios representing medication administration errors can be achieved using formal consensus techniques. Researchers should be aware that using different definitions of medication administration errors, or including or excluding different medication administration error situations, could significantly affect the rate of medication administration errors reported in their studies. Consensual definitions and medication administration error situations can be used in future epidemiology studies investigating medication administration errors in hospitalised patient settings, which may permit and promote direct comparisons of different studies. © 2015 John Wiley & Sons Ltd.
A novel multisensor traffic state assessment system based on incomplete data.
Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Jiang, Yaoliang
2014-01-01
A novel multisensor system with incomplete data is presented for traffic state assessment. The system comprises probe vehicle detection sensors, fixed detection sensors, and a traffic state assessment algorithm. First, validity checking of the traffic flow data is performed as preprocessing. Then a new method based on historical data is proposed to fuse and recover the incomplete data. Exploiting the spatial complementarity of the data from the probe vehicle detectors and the fixed detectors, a fusion model based on space matching is presented to estimate the mean travel speed of the road. Finally, the traffic flow data (flow, speed, and occupancy rate) detected between the Beijing Deshengmen bridge and Drum Tower bridge are fused to assess the traffic state of the road using a fusion decision model based on rough sets and the cloud model. The accuracy of the experimental results can reach more than 98%, and the results are in accordance with the actual road traffic state. The system is effective for assessing traffic state and is suitable for urban intelligent transportation systems.
A Novel Multisensor Traffic State Assessment System Based on Incomplete Data
Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Jiang, Yaoliang
2014-01-01
A novel multisensor system with incomplete data is presented for traffic state assessment. The system comprises probe vehicle detection sensors, fixed detection sensors, and a traffic state assessment algorithm. First, validity checking of the traffic flow data is performed as preprocessing. Then a new method based on historical data is proposed to fuse and recover the incomplete data. Exploiting the spatial complementarity of the data from the probe vehicle detectors and the fixed detectors, a fusion model based on space matching is presented to estimate the mean travel speed of the road. Finally, the traffic flow data (flow, speed, and occupancy rate) detected between the Beijing Deshengmen bridge and Drum Tower bridge are fused to assess the traffic state of the road using a fusion decision model based on rough sets and the cloud model. The accuracy of the experimental results can reach more than 98%, and the results are in accordance with the actual road traffic state. The system is effective for assessing traffic state and is suitable for urban intelligent transportation systems. PMID:25162055
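As a minimal illustration of the history-based recovery step described above (imputing a missing reading with the historical mean for the same detector and time-of-day slot; the actual recovery and rough-set/cloud fusion models in the paper are more elaborate):

```python
import numpy as np

def impute_from_history(current, history):
    """Fill missing (NaN) readings using the historical mean of each time slot.

    current : 1-D array of today's readings per time slot (NaN marks missing data)
    history : 2-D array (past days x time slots) for the same detector
    """
    filled = current.copy()
    slot_means = np.nanmean(history, axis=0)   # per-slot historical mean
    missing = np.isnan(filled)
    filled[missing] = slot_means[missing]
    return filled

# toy usage: four days of history, one missing slot today
history = np.array([[40.0, 42.0, 55.0],
                    [38.0, 41.0, 52.0],
                    [41.0, 44.0, 53.0],
                    [39.0, 43.0, 54.0]])
today = np.array([40.0, np.nan, 50.0])
print(impute_from_history(today, history))    # -> [40.  42.5 50. ]
```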
Bias in error estimation when using cross-validation for model selection.
Varma, Sudhir; Simon, Richard
2006-02-23
Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers; Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data was created to estimate the true error. With "null" and "non null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training data-sets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" data-sets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating true error of a classifier developed using a well defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
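A minimal sketch of the nested cross-validation procedure recommended above, using scikit-learn: the inner loop tunes the SVM regularization parameter, and the outer loop estimates the error of the entire tuning procedure. The synthetic "null" data below are an assumption for demonstration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 500))   # "null" data: features carry no class signal
y = np.repeat([0, 1], 30)

# inner CV loop: tune C by grid search
inner = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)

# outer CV loop: estimate the generalization error of the whole tuned pipeline
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())           # expected to hover near chance (0.5)
```

Reporting the inner-loop CV accuracy of the best parameters instead of the outer scores would reproduce exactly the optimistic bias the abstract warns about.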
Automatic alignment for three-dimensional tomographic reconstruction
NASA Astrophysics Data System (ADS)
van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.
2018-02-01
In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
Atmospheric opacity in the Schumann-Runge bands and the aeronomic dissociation of water vapor
NASA Technical Reports Server (NTRS)
Frederick, J. E.; Hudson, R. D.
1980-01-01
Knowledge of the aeronomic production of odd hydrogen in the dissociation of water vapor is limited by uncertainties in the penetration of solar irradiance in the Schumann-Runge bands of O2 and by incomplete information concerning the products of photolysis at Lyman alpha. Consideration of all error sources involved in computing the H2O dissociation rate in the wavelength region 175-200 nm leads to an estimated uncertainty of plus or minus 35% at an altitude of 90 km for an overhead sun. The uncertainty increases with decreasing altitude such that the true dissociation rate at 60 km for an overhead sun lies between 0.45 and 1.55 times the results computed using the best input parameters currently available. Calculations of the H2O dissociation rate by Lyman alpha should include the variation in O2 opacity across the solar line width. Neglect of this can lead to errors as large as 50% at altitudes where the process is the major source of odd hydrogen.
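For context, the quantity whose uncertainty is being assessed has the standard photolysis-rate form (generic notation, not taken from the paper):

$$ J_{\mathrm{H_2O}}(z) \;=\; \int \sigma_{\mathrm{H_2O}}(\lambda)\,\phi(\lambda)\,F_{\infty}(\lambda)\,e^{-\tau(\lambda,z)}\,\mathrm{d}\lambda , $$

with σ the H2O absorption cross section, φ the quantum yield of the relevant photolysis channel, F∞ the solar irradiance at the top of the atmosphere, and τ(λ, z) the overlying opacity, dominated in the 175-200 nm window by O2 absorption in the Schumann-Runge bands; the uncertainties discussed above enter mainly through τ and through φ at Lyman alpha.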
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
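For orientation, the classical first-order safety-index expression on which such a method builds can be sketched, for normally distributed strength R and stress S, as

$$ \beta \;=\; \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^{2} + \sigma_S^{2}}}, \qquad P_f \approx \Phi(-\beta), $$

where μ and σ denote means and standard deviations and Φ is the standard normal distribution function. Augmenting this expression with accumulated and propagated uncertainty errors and then solving for the design factor that achieves a specified β is the substitution for the conventional safety factor described above; the specific error terms are the paper's and are not reproduced here.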
Horrey, William J; Lesch, Mary F; Mitsopoulos-Rubens, Eve; Lee, John D
2015-03-01
Humans often make inflated or erroneous estimates of their own ability or performance. Such errors in calibration can be due to incomplete processing, neglect of available information, or improper weighting or integration of the information, and they can impact our decision-making, risk tolerance, and behaviors. In the driving context, these outcomes can have important implications for safety. The current paper discusses the notion of calibration in the context of self-appraisals and self-competence as well as in models of self-regulation in driving. We further develop a conceptual framework for calibration in the driving context, borrowing from earlier models of momentary demand regulation, information processing, and lens models for information selection and utilization. Finally, using the model we describe the implications of calibration (or, more specifically, errors in calibration) for our understanding of driver distraction, in-vehicle automation and autonomous vehicles, and the training of novice and inexperienced drivers. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra
2015-01-01
Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins, by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques’ inherent sources of errors such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10 – 30nm, varying across the detected molecules, mainly depending on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors in cluster analysis and to correct for them. These methods, based on the Ripley’s L(r) – r or Pair Correlation Function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
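For reference, the cluster statistics named above are conventionally defined as (standard definitions, not the paper's corrected estimators):

$$ K(r) \;=\; \lambda^{-1}\,\mathbb{E}\bigl[\text{number of further points within distance } r \text{ of a typical point}\bigr], \qquad L(r) \;=\; \sqrt{K(r)/\pi}, $$

with λ the mean point density, so that L(r) − r ≈ 0 under complete spatial randomness and L(r) − r > 0 indicates clustering at scale r. Limited detection efficiency and finite, heterogeneous localization precision both distort this statistic, which is the bias the methods above estimate and correct.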
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
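Schematically, the optimization described above can be abstracted as (an illustrative formulation; the symbols are generic and not the authors'):

$$ \max_{x,\;\mathcal{U}} \;\; \mathbb{P}\bigl(e \in \mathcal{U}\bigr) \quad \text{s.t.} \quad g_k\bigl(d(x; e)\bigr) \le 0 \;\;\; \forall\, e \in \mathcal{U},\; k = 1,\dots,K, $$

where x collects the treatment variables, e is the setup error, U the variable uncertainty set, d(x; e) the dose distribution delivered under error e, and g_k ≤ 0 the clinical goals. Because the constraints hold for every error in U, maximizing the probability that e falls in U is equivalent to maximizing the probability that all goals are met.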
Medication reconciliation service in Tan Tock Seng Hospital.
Yi, Sia Beng; Shan, Janice Chan Pei; Hong, Goh Lay
2013-01-01
Medication reconciliation is integral to every hospital. Approximately 60 percent of all hospital medication errors occur at admission, intra-hospital transfer or discharge. Effectively and consistently performing medication reconciliation at care interfaces continues to be a challenge. Tan Tock Seng Hospital (TTSH) averages 4,700 admissions monthly. Many patients are elderly (>65 years old) and at risk from polypharmacy. As part of a medication safety initiative, pharmacy staff started a medication reconciliation service in 2007, which expanded to include all patients in October 2009. This article aims to describe the TTSH medication reconciliation system and to highlight common medication errors occurring following incomplete medication reconciliation. Where possible, patients admitted into TTSH are seen by pharmacy staff within 24 hours of admission. A form was created to document their medications, which is filed into the case sheets for referencing purposes. Any discrepancies in medicines are brought to doctors' attention. Patients are also counseled about changes to their medications. Errors picked up were captured in an Excel database. The most common medication error was prescribers omitting medications. The second most common was recording different doses and regimens, mainly because doctors transcribed medications inaccurately. This is a descriptive study and no statistical tests were carried out. Data entry was done by different pharmacy staff rather than a dedicated person; hence, data might be under-reported. The findings demonstrate the importance of medication reconciliation on admission. Accurate medication reconciliation can help to reduce transcription errors and improve service quality. The article highlights medication reconciliation's importance and has implications for healthcare professionals in all countries.
NASA Astrophysics Data System (ADS)
Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping
2017-01-01
In lattice Boltzmann simulations involving moving solid boundaries, the momentum exchange between the solid and fluid phases was recently found to be not fully consistent with the principle of local Galilean invariance (GI) when the bounce-back schemes (BBS) and the momentum exchange method (MEM) are used. In the past, this inconsistency was resolved by introducing modified MEM schemes so that the overall moving-boundary algorithm could be more consistent with GI. However, in this paper we argue that the true origin of this violation of Galilean invariance (VGI) in the presence of a moving solid-fluid interface is due to the BBS itself, as the VGI error not only exists in the hydrodynamic force acting on the solid phase, but also in the boundary force exerted on the fluid phase, according to Newton's Third Law. The latter, however, has so far gone unnoticed in previously proposed modified MEM schemes. Based on this argument, we conclude that the previous modifications to the momentum exchange method are incomplete solutions to the VGI error in the lattice Boltzmann method (LBM). An implicit remedy to the VGI error in the LBM and its limitation is then revealed. To address the VGI error for a case when this implicit remedy does not exist, a bounce-back scheme based on coordinate transformation is proposed. Numerical tests in both laminar and turbulent flows show that the proposed scheme can effectively eliminate the errors associated with the usual bounce-back implementations on a no-slip solid boundary, and it can maintain an accurate momentum exchange calculation with minimal computational overhead.
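For orientation, the following Python fragment sketches the conventional halfway bounce-back rule with the standard moving-wall correction and the per-link momentum-exchange contribution that the paper identifies as the source of the Galilean-invariance error; the coordinate-transformation scheme proposed by the authors is not reproduced, and all variable names are illustrative.

```python
import numpy as np

def bounce_back_moving_wall(f_post, i, i_bar, w, c, rho_w, u_wall, cs2=1.0 / 3.0):
    """Conventional halfway bounce-back at a moving wall for one boundary link.

    f_post : post-collision distributions at the fluid node, shape (Q,)
    i      : index of the direction pointing from the fluid node into the wall
    i_bar  : index of the opposite direction
    w, c   : lattice weights (Q,) and lattice velocities (Q, d)
    rho_w  : fluid density used in the wall-velocity correction
    u_wall : wall velocity at the link intersection, shape (d,)
    Returns the reflected distribution f_{i_bar} and the momentum exchanged on
    this link according to the conventional momentum exchange method.
    """
    # standard moving-wall bounce-back: reflect and add the wall-velocity term
    f_in = f_post[i] - 2.0 * w[i] * rho_w * np.dot(c[i], u_wall) / cs2
    # momentum transferred to the solid along this link (c_{i_bar} = -c_i)
    dp_link = (f_post[i] + f_in) * c[i]
    return f_in, dp_link
```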
Sure, Rebecca; Brandenburg, Jan Gerit
2015-01-01
In quantum chemical computations the combination of Hartree-Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double-zeta quality is still widely used, for example, in the popular B3LYP/6-31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean-field methods. PMID:27308221
U.S. Maternally Linked Birth Records May Be Biased for Hispanics and Other Population Groups
Leiss, Jack K.; Giles, Denise; Sullivan, Kristin M.; Mathews, Rahel; Sentelle, Glenda; Tomashek, Kay M.
2010-01-01
Purpose: To advance understanding of linkage error in U.S. maternally linked datasets, and how the error may affect results of studies based on the linked data. Methods: North Carolina birth and fetal death records for 1988-1997 were maternally linked (n=1,030,029). The maternal set probability, defined as the probability that all records assigned to the same maternal set do in fact represent events to the same woman, was used to assess differential maternal linkage error across race/ethnic groups. Results: Maternal set probabilities were lower for records specifying Asian or Hispanic race/ethnicity, suggesting greater maternal linkage error. The lower probabilities for Hispanics were concentrated in women of Mexican origin who were not born in the United States. Conclusions: Differential maternal linkage error may be a source of bias in studies using U.S. maternally linked datasets to make comparisons between Hispanics and other groups or among Hispanic subgroups. Methods to quantify and adjust for this potential bias are needed. PMID:20006273
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-09
The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings; on the other hand, inpatient errors are more severe than outpatient errors.
Medication errors in the obstetrics emergency ward in a low resource setting.
Kandil, Mohamed; Sayyed, Tarek; Emarh, Mohamed; Ellakwa, Hamed; Masood, Alaa
2012-08-01
To investigate the patterns of medication errors in the obstetric emergency ward in a low-resource setting. This prospective observational study included 10,000 women who presented at the obstetric emergency ward, Department of Obstetrics and Gynecology, Menofyia University Hospital, Egypt, between March and December 2010. All medications prescribed in the emergency ward were monitored for different types of errors. The head nurse in each shift was asked to monitor each pharmacologic order from the moment of prescribing until its administration. Retrospective review of the patients' charts and nurses' notes was carried out by the authors of this paper. Results were tabulated and statistically analyzed. A total of 1976 medication errors were detected. Administration errors were the most common error reported. Omission errors ranked second, followed by unauthorized and prescription errors. Three administration errors resulted in three Cesarean sections being performed for fetal distress because of wrong doses of oxytocin infusion. The remaining errors did not cause patient harm but may have led to increased monitoring. Most errors occurred during night shifts. The availability of automated infusion pumps would probably decrease administration errors significantly. There is a need for more obstetricians and nurses during the night shifts to minimize errors resulting from working under stressful conditions.
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
Effects of incomplete mixing on reactive transport in flows through heterogeneous porous media
NASA Astrophysics Data System (ADS)
Wright, Elise E.; Richter, David H.; Bolster, Diogo
2017-11-01
The phenomenon of incomplete mixing reduces bulk effective reaction rates in reactive transport. Many existing models do not account for these effects, resulting in the overestimation of reaction rates in laboratory and field settings. To date, most studies on incomplete mixing have focused on diffusive systems; here, we extend these to explore the role that flow heterogeneity has on incomplete mixing. To do this, we examine reactive transport using a Lagrangian reactive particle tracking algorithm in two-dimensional idealized heterogeneous porous media. Contingent on the nondimensional Peclet and Damköhler numbers in the system, it was found that near well-mixed behavior could be observed at late times in the heterogeneous flow field simulations. We look at three common flow deformation metrics that describe the enhancement of mixing in the flow due to velocity gradients: the Okubo-Weiss parameter (θ), the largest eigenvalue of the Cauchy-Green strain tensor (λ_C), and the finite-time Lyapunov exponent (Λ). Strong mixing regions in the heterogeneous flow field identified by these metrics were found to correspond to regions with higher numbers of reactions, but the infrequency of these regions compared to the large numbers of reactions occurring elsewhere in the domain implies that these strong mixing regions are insufficient to explain the observed near well-mixed behavior. Since it was found that reactive transport in these heterogeneous flows could overcome the effects of incomplete mixing, we also search for a closure for the mean concentration. The conservative quantity ⟨u²⟩, where u = C_A - C_B and ⟨·⟩ denotes the mean, was found to predict the late-time scaling of the mean concentration, i.e., ⟨C_i⟩ ~ ⟨u²⟩.
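As a small illustration of one of the deformation metrics mentioned above, the sketch below evaluates the Okubo-Weiss parameter on a gridded 2D velocity field; the grid layout and sign convention (positive values indicate strain-dominated regions) are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """Okubo-Weiss parameter of a 2D velocity field on a regular grid.

    u, v   : 2D arrays of the x- and y-velocity components (axis 0 = y, axis 1 = x)
    dx, dy : grid spacings
    Positive values mark strain-dominated (mixing-enhancing) regions,
    negative values vorticity-dominated regions.
    """
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    s_n = dudx - dvdy        # normal strain
    s_s = dvdx + dudy        # shear strain
    omega = dvdx - dudy      # vorticity
    return s_n**2 + s_s**2 - omega**2
```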
High-accuracy resolver-to-digital conversion via phase locked loop based on PID controller
NASA Astrophysics Data System (ADS)
Li, Yaoling; Wu, Zhong
2018-03-01
The problem of resolver-to-digital conversion (RDC) is transformed into a problem of angle tracking control, and a phase locked loop (PLL) method based on a PID controller is proposed in this paper. The controller comprises a typical PI controller plus an incomplete differential, which avoids the amplification of high-frequency noise components by filtering the phase detection error with a low-pass filter. Compared with conventional ones, the proposed PLL method makes the converter a type III system, and thus the conversion accuracy can be improved. Experimental results demonstrate the effectiveness of the proposed method.
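A minimal Python sketch of the controller structure described here, a PI term plus a low-pass-filtered ("incomplete") derivative, is given below; the gains, filter constant, and discretization are illustrative assumptions, not the paper's values.

```python
class PIDIncompleteDerivative:
    """Discrete PID controller with an 'incomplete' (low-pass filtered) derivative.

    The derivative path is passed through a first-order low-pass filter so that
    high-frequency noise in the phase-detection error is not amplified. Gains
    (kp, ki, kd), the filter time constant tau_f, and the sample time dt are
    illustrative parameters of this sketch.
    """

    def __init__(self, kp, ki, kd, tau_f, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = dt / (tau_f + dt)   # first-order low-pass coefficient
        self.dt = dt
        self.integral = 0.0
        self.d_filtered = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        raw_derivative = (error - self.prev_error) / self.dt
        # low-pass filter the derivative: the "incomplete differential"
        self.d_filtered += self.alpha * (raw_derivative - self.d_filtered)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * self.d_filtered
```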
An evaluation of computer assisted clinical classification algorithms.
Chute, C G; Yang, Y; Buntrock, J
1994-01-01
The Mayo Clinic has a long tradition of indexing patient records in high resolution and volume. Several algorithms have been developed which promise to help human coders in the classification process. We evaluate variations on code browsers and free-text indexing systems with respect to their speed and error rates in our production environment. The more sophisticated indexing systems save measurable time in the coding process, but suffer from incompleteness, which requires a back-up system or human verification. Expert Network does the best job of rank-ordering clinical text, potentially enabling the creation of thresholds for the pass-through of computer-coded data without human review.
Radiocardiography in clinical cardiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierson, R.N. Jr.; Alam, S.; Kemp, H.G.
1977-01-01
Quantitative radiocardiography provides a variety of noninvasive measurements of value in cardiology. A gamma camera and computer processing are required for most of these measurements. The advantages of ease, economy, and safety of these procedures are, in part, offset by the complexity of as yet unstandardized methods and incomplete validation of results. The expansion of these techniques will inevitably be rapid. Their careful performance requires, for the moment, a major and perhaps dedicated effort by at least one member of the professional team, if the pitfalls that lead to unrecognized error are to be avoided. We may anticipate more automated and reliable results with increased experience and validation.
NASA Astrophysics Data System (ADS)
Bailey, Jon A.; Jang, Yong-Chull; Lee, Weonjong; Leem, Jaehoon
2018-03-01
The CKM matrix element |Vcb| can be extracted by combining data from experiments with lattice QCD results for the semileptonic form factors for the B̄ → D^(*)ℓν̄ decays. The Oktay-Kronfeld (OK) action was designed to reduce heavy-quark discretization errors to below 1%, or through O(λ³) in HQET power counting. Here we describe recent progress on bottom-to-charm currents improved to the same order in HQET as the OK action, and correct formerly reported results of our matching calculations, in which the operator basis was incomplete.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
Shin splints. Diagnosis, management, prevention.
Moore, M P
1988-01-01
Our knowledge of the etiology of shin splints is incomplete. Biomechanical abnormalities are likely to be major factors in predisposing certain persons to such injury. Also, training errors are major etiologic factors. Because shin splints result from mechanical overload of various elements of the musculoskeletal system of the leg that exceed their adaptive remodeling capacity, rest and recovery should be emphasized as an important aspect of sports training. Accurate and prompt diagnosis reduces the severity and duration of the injury. Management should consist of measures to reduce inflammation and pain and to identify possible biomechanical factors that may be correctable by strengthening and flexibility exercises or by the use of an orthotic device.
Free will as relative freedom with conscious component.
Hájícek, P
2009-03-01
The general notion of relative freedom is introduced. It is a kind of freedom that is observed everywhere in nature. In biology, incomplete knowledge is a condition shared by all organisms. They cope with the problem through Popper's trial-and-error processes. One source of their success is the relative freedom of choice from the basic option ranges: mutations, motions and neuron connections. After the conjecture is adopted that communicability can be used as a criterion of consciousness, free will is defined as a conscious version of relative freedom. The resulting notion is logically self-consistent and it describes an observable phenomenon that agrees with our experience.
SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ju, S; Hong, C; Kim, M
Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful for evaluating the MLC leaf position error at various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.
Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers
Wang, Jun; Luo, Ray
2009-01-01
CPU time and memory usage are two vital issues that any numerical solver for the Poisson-Boltzmann equation has to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the numbers of grids. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion at a very similar rate. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if the electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271
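To make the solver comparison concrete, the sketch below solves a sparse symmetric positive-definite system with preconditioned conjugate gradient using SciPy. SciPy does not provide a modified incomplete Cholesky factorization, so an incomplete LU factorization is substituted as the preconditioner; this is only an approximation of the ICCG solver discussed in the abstract, not the solvers benchmarked there.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson_like(A, b):
    """Preconditioned conjugate gradient for a sparse SPD system.

    A : sparse symmetric positive-definite matrix (e.g., a finite-difference
        discretization of the linearized Poisson-Boltzmann operator)
    b : right-hand side vector
    Incomplete LU (spilu) is used as a stand-in for modified incomplete
    Cholesky; drop tolerance and fill factor are illustrative values.
    """
    ilu = spla.spilu(sp.csc_matrix(A), drop_tol=1e-4, fill_factor=10)
    M = spla.LinearOperator(A.shape, ilu.solve)   # preconditioner as an operator
    x, info = spla.cg(A, b, M=M)
    if info != 0:
        raise RuntimeError(f"CG did not converge (info={info})")
    return x
```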
Improved Quality in Aerospace Testing Through the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.
2000-01-01
This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
2013-09-01
[Figure-list fragment; only the captions are recoverable: two-dimensional domains cropped out of three-dimensional numerically generated realizations; 3D PCE-NAPL realizations generated by UTCHEM; absolute error vs. relative error scatter plots of pM and gM from the SGS data set 4 and from the TP/MC data set using multi-task manifold regression.]
A greedy algorithm for species selection in dimension reduction of combustion chemistry
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.
2010-09-01
Computations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce a lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species, and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
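The greedy idea can be summarized in a few lines of Python. The sketch below assumes a user-supplied function that returns the dimension-reduction error for a candidate constraint set (in the paper this would come from PaSR test simulations); it is an illustration of the selection loop, not the authors' implementation.

```python
def greedy_constrained_species(candidates, n_constraints, reduction_error):
    """Greedy selection of constrained species for RCCE-style dimension reduction.

    candidates      : list of candidate species names
    n_constraints   : number of constrained species to select
    reduction_error : callable taking a list of species and returning the
                      dimension-reduction error for that constraint set
                      (assumed to be supplied by the user)
    At each step the species whose addition lowers the error the most is
    appended to the constraint set.
    """
    selected = []
    for _ in range(n_constraints):
        remaining = [s for s in candidates if s not in selected]
        best = min(remaining, key=lambda s: reduction_error(selected + [s]))
        selected.append(best)
    return selected
```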
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imura, K; Fujibuchi, T; Hirata, H
Purpose: Patient set-up skills in the radiotherapy treatment room have a great influence on the treatment effect of image-guided radiotherapy. In this study, we developed a training system for improving practical set-up skills, including rotational correction, in a virtual environment away from the pressure of the actual treatment room, using a three-dimensional computer graphics (3DCG) engine. Methods: The treatment room for external beam radiotherapy was reproduced in the virtual environment using the 3DCG engine (Unity). The viewpoints for performing patient set-up in the virtual treatment room were arranged on both sides of the virtual operable treatment couch to mimic the actual performance by two clinical staff members. The position errors relative to the mechanical isocenter, based on the alignment between the skin marker and the laser on the virtual patient model, were displayed as numerical values expressed in SI units together with arrow marks indicating direction. The rotational errors, calculated with a point on the virtual body axis as the center of each rotation axis in the virtual environment, were corrected by adjusting the rotational position of a body phantom wearing a belt with a gyroscope prepared on the table in real space. These rotational errors were evaluated by describing vector cross-product operations and trigonometric functions in the script for the patient set-up technique. Results: The viewpoints in the virtual environment allowed the individual user to visually recognize the position discrepancy relative to the mechanical isocenter until positional errors of several millimeters were eliminated. The rotational errors between the two points calculated with the center point could be efficiently corrected, with the script displaying the mathematically minimal correction. Conclusion: By utilizing the script to correct the rotational errors, together with accurate positional recognition for the patient set-up technique, the training system developed for improving patient set-up skills enabled the individual user to identify efficient positional correction methods easily.
Buetow, Stephen; Henshaw, Jenny; Bryant, Linda; O'Sullivan, Deirdre
2010-01-01
Background. Common but seldom published are Parkinson's disease (PD) medication errors involving late, extra, or missed doses. These errors can reduce medication effectiveness and the quality of life of people with PD and their caregivers. Objective. To explore lay perspectives of factors contributing to medication timing errors for PD in hospital and community settings. Design and Methods. This qualitative research purposively sampled individuals with PD, or a proxy of their choice, throughout New Zealand during 2008-2009. Data collection involved 20 semistructured, personal interviews by telephone. A general inductive analysis of the data identified core insights consistent with the study objective. Results. Five themes help to account for possible timing adherence errors by people with PD, their caregivers or professionals. The themes are the abrupt withdrawal of PD medication; wrong, vague or misread instructions; devaluation of the lay role in managing PD medications; deficits in professional knowledge and in caring behavior around PD in formal health care settings; and lay forgetfulness. Conclusions. The results add to the limited published research on medication errors in PD and help to confirm anecdotal experience internationally. They indicate opportunities for professionals and lay people to work together to reduce errors in the timing of medication for PD in hospital and community settings. PMID:20975777
Ferreira, Carlos R.; Gahl, William A.
2017-01-01
Trace elements are chemical elements needed in minute amounts for normal physiology. Some of the physiologically relevant trace elements include iodine, copper, iron, manganese, zinc, selenium, cobalt and molybdenum. Of these, some are metals, and in particular, transition metals. The different electron shells of an atom carry different energy levels, with those closest to the nucleus being lowest in energy. The number of electrons in the outermost shell determines the reactivity of such an atom. The electron shells are divided into sub-shells, and in particular the third shell has s, p and d sub-shells. Transition metals are strictly defined as elements whose atom has an incomplete d sub-shell. This incomplete d sub-shell makes them prone to chemical reactions, particularly redox reactions. Transition metals of biologic importance include copper, iron, manganese, cobalt and molybdenum. Zinc is not a transition metal, since it has a complete d sub-shell. Selenium, on the other hand, is strictly speaking a nonmetal, although given its chemical properties between those of metals and nonmetals, it is sometimes considered a metalloid. In this review, we summarize the current knowledge on the inborn errors of metal and metalloid metabolism. PMID:29354481
On the asteroidal jet-stream Flora A
NASA Technical Reports Server (NTRS)
Klacka, Jozef
1992-01-01
The problems of the virtual existence of Flora 1, separated from the rest of the Flora family, and of the jet-stream Flora A (Alfven 1969) are discussed in connection with observational selection effects. It is shown that observational selection effects operate as a whole and can be important in an incomplete observational data set.
Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2010-01-01
A latent variable modeling approach for examining population similarities and differences in observed variable relationship and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…
Partial and Incomplete Voices: The Political and Three Early Childhood Teachers' Learning
ERIC Educational Resources Information Center
Henderson, Linda
2014-01-01
The early childhood-school relationship is reported as having points of separation and difference. In particular, early childhood teachers located in a school setting report experiencing a push-down effect. This paper reports on a participatory action research project involving three early childhood teachers working within an independent school.…
On Testability of Missing Data Mechanisms in Incomplete Data Sets
ERIC Educational Resources Information Center
Raykov, Tenko
2011-01-01
This article is concerned with the question of whether the missing data mechanism routinely referred to as missing completely at random (MCAR) is statistically examinable via a test for lack of distributional differences between groups with observed and missing data, and related consequences. A discussion is initially provided, from a formal logic…
Communication: Listening and Responding. Affective 4.0.
ERIC Educational Resources Information Center
Borgers, Sherry B., Comp.; Ward, G. Robert, Comp.
This module is designed to provide practice in listening effectively and in responding to messages sent by another. The module is divided into two sets of activities, the first being the formation of a triad enabling the student to investigate the following: do you listen, listening and the unrelated response, incomplete listening, listening for…
Alternative models of recreational off-highway vehicle site demand
Jeffrey Englin; Thomas Holmes; Rebecca Niell
2006-01-01
Off-highway vehicle use is a controversial recreation activity because it is incompatible with most other activities and is extremely hard on natural ecosystems. This study estimates utility-theoretic incomplete demand systems for four off-highway vehicle sites. Since two sets of restrictions are equally consistent with...
"Antelope": a hybrid-logic model checker for branching-time Boolean GRN analysis
2011-01-01
Background: In Thomas' formalism for modeling gene regulatory networks (GRNs), branching time, where a state can have more than one possible future, plays a prominent role. By representing a certain degree of unpredictability, branching time can model several important phenomena, such as (a) asynchrony, (b) incompletely specified behavior, and (c) interaction with the environment. Introducing more than one possible future for a state, however, creates a difficulty for ordinary simulators, because infinitely many paths may appear, limiting ordinary simulators to statistical conclusions. Model checkers for branching time, by contrast, are able to prove properties in the presence of infinitely many paths. Results: We have developed Antelope ("Analysis of Networks through TEmporal-LOgic sPEcifications", http://turing.iimas.unam.mx:8080/AntelopeWEB/), a model checker for analyzing and constructing Boolean GRNs. Currently, software systems for Boolean GRNs use branching time almost exclusively for asynchrony. Antelope, by contrast, also uses branching time for incompletely specified behavior and environment interaction. We show the usefulness of modeling these two phenomena in the development of a Boolean GRN of the Arabidopsis thaliana root stem cell niche. There are two obstacles to a direct approach when applying model checking to Boolean GRN analysis. First, ordinary model checkers normally only verify whether or not a given set of model states has a given property. In comparison, a model checker for Boolean GRNs is preferable if it reports the set of states having a desired property. Second, for efficiency, the expressiveness of many model checkers is limited, resulting in the inability to express some interesting properties of Boolean GRNs. Antelope tries to overcome these two drawbacks: apart from reporting the set of all states having a given property, our model checker can express, at the expense of efficiency, some properties that ordinary model checkers (e.g., NuSMV) cannot. This additional expressiveness is achieved by employing a logic extending the standard Computation-Tree Logic (CTL) with hybrid-logic operators. Conclusions: We illustrate the advantages of Antelope when (a) modeling incomplete networks and environment interaction, (b) exhibiting the set of all states having a given property, and (c) representing Boolean GRN properties with hybrid CTL. PMID:22192526
Error sources in passive and active microwave satellite soil moisture over Australia
USDA-ARS?s Scientific Manuscript database
Development of a long-term climate record of soil moisture (SM) involves combining historic and present satellite-retrieved SM data sets. This in turn requires a consistent characterization and deep understanding of the systematic differences and errors in the individual data sets, which vary due to...
Robustness of Type I Error and Power in Set Correlation Analysis of Contingency Tables.
ERIC Educational Resources Information Center
Cohen, Jacob; Nee, John C. M.
1990-01-01
The analysis of contingency tables via set correlation allows the assessment of subhypotheses involving contrast functions of the categories of the nominal scales. The robustness of such methods with regard to Type I error and statistical power was studied via a Monte Carlo experiment. (TJH)
Patient disclosure of medical errors in paediatrics: A systematic literature review
Koller, Donna; Rummens, Anneke; Le Pouesard, Morgane; Espin, Sherry; Friedman, Jeremy; Coffey, Maitreya; Kenneally, Noah
2016-01-01
Medical errors are common within paediatrics; however, little research has examined the process of disclosing medical errors in paediatric settings. The present systematic review of current research and policy initiatives examined evidence regarding the disclosure of medical errors involving paediatric patients. Peer-reviewed research from a range of scientific journals from the past 10 years is presented, and an overview of Canadian and international policies regarding disclosure in paediatric settings are provided. The purpose of the present review was to scope the existing literature and policy, and to synthesize findings into an integrated and accessible report. Future research priorities and policy implications are then identified. PMID:27429578
Rolland, Jannick; Ha, Yonggang; Fidopiastis, Cali
2004-06-01
A theoretical investigation of rendered depth and angular errors, or Albertian errors, linked to natural eye movements in binocular head-mounted displays (HMDs) is presented for three possible eye-point locations: the center of the entrance pupil, the nodal point, and the center of rotation of the eye. A numerical quantification was conducted for both the pupil and the center of rotation of the eye under the assumption that the user will operate solely in either the near field under an associated instrumentation setting or the far field under a different setting. Under these conditions, the eyes are taken to gaze in the plane of the stereoscopic images. Across conditions, results show that the center of the entrance pupil minimizes rendered angular errors, while the center of rotation minimizes rendered position errors. Significantly, this investigation quantifies that, under proper setting of the HMD and correct choice of the eye points, rendered depth and angular errors can be made either negligible or within the specifications of even the most stringent applications for tasks performed in either the near field or the far field.
Goede, Simon L; Leow, Melvin Khee-Shing
2013-01-01
This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (a rate-independent hysteresis effect). When the main uncertainties in thyroid function tests are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.
Reconstruction of incomplete cell paths through a 3D-2D level set segmentation
NASA Astrophysics Data System (ADS)
Hariri, Maia; Wan, Justin W. L.
2012-02-01
Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
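A highly simplified Python sketch of the slice-wise data term is shown below: the inside/outside means are computed per 2D slice while the level set itself remains 3D. Using per-slice means instead of the projected-contour means, and omitting the curvature regularization, are simplifications of this sketch rather than features of the paper's model.

```python
import numpy as np

def slicewise_chan_vese_term(volume, phi):
    """Slice-by-slice Chan-Vese data term for a 3D level set.

    volume : 3D image stack (slices along axis 0)
    phi    : 3D level set function of the same shape (phi < 0 inside)
    For each slice the inside/outside means are computed from that slice only,
    so the data fidelity acts in 2D while the surface (and its mean-curvature
    regularization, not shown) stays 3D.
    """
    term = np.zeros_like(volume, dtype=float)
    for z in range(volume.shape[0]):
        img, ph = volume[z], phi[z]
        inside, outside = ph < 0, ph >= 0
        c_in = img[inside].mean() if inside.any() else 0.0
        c_out = img[outside].mean() if outside.any() else 0.0
        # positive values push the contour outward, negative pull it inward
        term[z] = (img - c_in) ** 2 - (img - c_out) ** 2
    return term
```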
NASA Astrophysics Data System (ADS)
Park, H.; Han, C.; Gould, A.; Udalski, A.; Sumi, T.; Fouqué, P.; Choi, J.-Y.; Christie, G.; Depoy, D. L.; Dong, Subo; Gaudi, B. S.; Hwang, K.-H.; Jung, Y. K.; Kavka, A.; Lee, C.-U.; Monard, L. A. G.; Natusch, T.; Ngan, H.; Pogge, R. W.; Shin, I.-G.; Yee, J. C.; μFUN Collaboration; Szymański, M. K.; Kubiak, M.; Soszyński, I.; Pietrzyński, G.; Poleski, R.; Ulaczyk, K.; Pietrukowicz, P.; Kozłowski, S.; Skowron, J.; Wyrzykowski, Ł.; OGLE Collaboration; Abe, F.; Bennett, D. P.; Bond, I. A.; Botzler, C. S.; Chote, P.; Freeman, M.; Fukui, A.; Fukunaga, D.; Harris, P.; Itow, Y.; Koshimoto, N.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Muraki, Y.; Namba, S.; Ohnishi, K.; Rattenbury, N. J.; Saito, To.; Sullivan, D. J.; Sweatman, W. L.; Suzuki, D.; Tristram, P. J.; Wada, K.; Yamai, N.; Yock, P. C. M.; Yonehara, A.; MOA Collaboration
2014-05-01
Characterizing a microlensing planet is done by modeling an observed lensing light curve. In this process, one is often confronted with solutions of different lensing parameters that result in similar light curves, causing difficulties in uniquely interpreting the lens system; understanding the causes of the different types of degeneracy is thus important. In this work, we show that incomplete coverage of a planetary perturbation can result in degenerate solutions even for events where the planetary signal is detected with a high level of statistical significance. We demonstrate the degeneracy for an actually observed event, OGLE-2012-BLG-0455/MOA-2012-BLG-206. The peak of this high-magnification event (A_max ~ 400) exhibits very strong deviation from a point-lens model with Δχ² ≳ 4000 for data sets with a total of 6963 measurements. From detailed modeling of the light curve, we find that the deviation can be explained by four distinct solutions, i.e., two very different sets of solutions, each with a twofold degeneracy. While the twofold (so-called close/wide) degeneracy is well understood, the degeneracy between the radically different solutions was not previously known. The model light curves of this degeneracy differ substantially in the parts that were not covered by observation, indicating that the degeneracy is caused by the incomplete coverage of the perturbation. It is expected that the frequency of the degeneracy introduced in this work will be greatly reduced with the improvement of the current lensing survey and follow-up experiments and the advent of new surveys.
Real-Time Data Collection Using Text Messaging in a Primary Care Clinic.
Rai, Manisha; Moniz, Michelle H; Blaszczak, Julie; Richardson, Caroline R; Chang, Tammy
2017-12-01
The use of text messaging is nearly ubiquitous and represents a promising method of collecting data from diverse populations. The purpose of this study was to assess the feasibility and acceptability of text message surveys in a clinical setting and to describe key lessons to minimize attrition. We obtained a convenience sample of individuals who entered the waiting room of a low-income, primary care clinic. Participants were asked to answer between 17 and 30 survey questions on a variety of health-related topics, including both open- and closed-ended questions. Descriptive statistics were used to characterize the participants and determine the response rates. Bivariate analyses were used to identify predictors of incomplete surveys. Our convenience sample consisted of 461 individuals. Of those who attempted the survey, 80% (370/461) completed it in full. The mean age of respondents was 35.4 years (standard deviation = 12.4). Respondents were predominantly non-Hispanic black (42%) or non-Hispanic white (41%), female (75%), and with at least some college education (70%). Of those who completed the survey, 84% (312/370) reported willingness to do another text message survey. Those with incomplete surveys answered a median of nine questions before stopping. Smartphone users were less likely to leave the survey incomplete compared with non-smartphone users (p = 0.004). Text-message surveys are a feasible and acceptable method to collect real-time data among low-income, clinic-based populations. Offering participants a setting for immediate survey completion, minimizing survey length, simplifying questions, and allowing "free text" responses for all questions may optimize response rates.
Errors, Error, and Text in Multidialect Setting.
ERIC Educational Resources Information Center
Candler, W. J.
1979-01-01
This article discusses the various dialects of English spoken in Liberia and analyzes the problems of Liberian students in writing compositions in English. Errors arise mainly from differences in culture and cognition, not from superficial linguistic problems. (CFM)
Use of scan overlap redundancy to enhance multispectral aircraft scanner data
NASA Technical Reports Server (NTRS)
Lindenlaub, J. C.; Keat, J.
1973-01-01
Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.
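The following minimal Python sketch shows the basic line-averaging operation with normalized weighting coefficients; choosing the number of lines n and the coefficient values according to either of the two criteria above is left to the caller, and nothing here reproduces the report's optimization programs.

```python
import numpy as np

def average_redundant_lines(lines, weights=None):
    """Combine redundant, overlapping scan lines into one enhanced line.

    lines   : (n, m) array of n overlapping scan lines with m samples each
    weights : optional (n,) array of averaging coefficients; equal weights
              correspond to the first criterion described above, while an
              optimized, unequal set corresponds to the second.
    The choice of n and of the weights trades resolution error against noise.
    """
    lines = np.asarray(lines, dtype=float)
    n = lines.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()          # coefficients normalized to sum to one
    return w @ lines         # weighted average across the n lines
```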
Mendiburu, Andrés Z; de Carvalho, João A; Coronado, Christian R
2015-03-21
Estimation of the lower flammability limits of C-H compounds at 25 °C and 1 atm, at moderate temperatures, and in the presence of a diluent was the objective of this study. A set of 120 C-H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error for the total set was 7.89%; for the correlation set, it was 6.09%; and for the prediction set it was 9.68%. However, it was shown that by considering different sources of experimental data the values were reduced to 6.5% for the prediction set and to 6.29% for the total set. The method showed consistency with Le Chatelier's law for binary mixtures of C-H compounds. When tested for a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane; 4.78% for propane; 0.29% for iso-butane and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane; 5.13% for propane; 0.11% for iso-butane and 0.15% for propylene. When carbon dioxide was added, the absolute relative errors were 1.80% for methane; 5.38% for propane; 0.86% for iso-butane and 1.06% for propylene. Copyright © 2014 Elsevier B.V. All rights reserved.
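For reference, Le Chatelier's mixing rule used in the consistency check above can be written in a few lines. The sketch below is the classical rule itself, not the estimation method proposed in the paper, and the example values in the follow-up are textbook pure-component LFLs rather than data from this study.

```python
def le_chatelier_lfl(mole_fractions, lfl_values):
    """Lower flammability limit of a fuel mixture by Le Chatelier's rule.

    mole_fractions : fuel-basis mole fractions of each combustible (sum to 1)
    lfl_values     : pure-component LFLs in vol.% (same order)
    Returns the mixture LFL in vol.%.
    """
    if abs(sum(mole_fractions) - 1.0) > 1e-6:
        raise ValueError("fuel mole fractions must sum to 1")
    return 1.0 / sum(y / lfl for y, lfl in zip(mole_fractions, lfl_values))
```

For example, an equimolar methane/propane mixture with pure-component LFLs of 5.0 and 2.1 vol.% gives a mixture LFL of roughly 3.0 vol.%.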
EAC: A program for the error analysis of STAGS results for plates
NASA Technical Reports Server (NTRS)
Sistla, Rajaram; Thurston, Gaylen A.; Bains, Nancy Jane C.
1989-01-01
A computer code is now available for estimating the error in results from the STAGS finite element code for a shell unit consisting of a rectangular orthotropic plate. This memorandum contains basic information about the computer code EAC (Error Analysis and Correction) and describes the connection between the input data for the STAGS shell units and the input data necessary to run the error analysis code. The STAGS code returns a set of nodal displacements and a discrete set of stress resultants; the EAC code returns a continuous solution for displacements and stress resultants. The continuous solution is defined by a set of generalized coordinates computed in EAC. The theory and the assumptions that determine the continuous solution are also outlined in this memorandum. An example of application of the code is presented and instructions on its usage on the Cyber and the VAX machines have been provided.
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
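The detection step can be illustrated with a toy syndrome computation over GF(2); real chip-kill codes operate on multi-bit symbols over larger Galois fields and add the discriminator logic described above, so the sketch below only conveys the "all-zero syndrome means no detected error" idea, with illustrative names.

```python
import numpy as np

def syndromes_gf2(parity_check, received):
    """Syndrome computation for a binary linear code.

    parity_check : (r, n) parity-check matrix H over GF(2)
    received     : length-n received word (user data plus check bits)
    Returns the syndrome vector and True if no error is detected. A nonzero
    syndrome would then be examined further (in the patented scheme, via
    discriminator expressions over a symbol alphabet) to classify single- or
    double-symbol errors.
    """
    H = np.asarray(parity_check) % 2
    r = np.asarray(received) % 2
    s = H.dot(r) % 2
    return s, not s.any()
```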
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
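A minimal Python sketch of the composite step is given below: forward forecasts and reverse-order backcasts for the same gap are blended with weights that fall linearly with forecast lead. The linear weighting is an assumption of this sketch; the abstract does not state the exact weights used in the report.

```python
import numpy as np

def composite_estimate(forecast, backcast):
    """Blend forward forecasts and reverse-order backcasts across a gap.

    forecast : length-L array of lead-1..L forecasts (e.g., from a TFN model)
    backcast : length-L array of estimates for the same days obtained by
               forecasting the reverse-ordered series (e.g., from ARIMA)
    Weights fall linearly with forecast lead, so each end of the gap leans on
    the model anchored at the nearer measured flow.
    """
    L = len(forecast)
    lead = np.arange(1, L + 1)
    w_fore = (L + 1 - lead) / (L + 1)   # high weight near the start of the gap
    w_back = 1.0 - w_fore               # high weight near the end of the gap
    return w_fore * np.asarray(forecast) + w_back * np.asarray(backcast)
```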
Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy
NASA Astrophysics Data System (ADS)
Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.
2008-04-01
Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.
Robert-Lachaine, Xavier; Mecheri, Hakim; Larue, Christian; Plamondon, André
2017-04-01
The potential of inertial measurement units (IMUs) for ergonomics applications appears promising. However, previous IMU validation studies have been incomplete regarding the joints analysed, the complexity of movements and the duration of trials. The objective was to determine the technological error and biomechanical model differences between IMUs and an optoelectronic system and to evaluate the effect of task complexity and duration. Whole-body kinematics from 12 participants was recorded simultaneously with a full-body Xsens system in which an Optotrak cluster was fixed on every IMU. Short functional movements and long manual material handling tasks were performed, and joint angles were compared between the two systems. The differences attributed to the biomechanical model showed significantly greater (P ≤ .001) RMSE than the technological error. RMSE was systematically higher (P ≤ .001) for the long complex task, with a mean over all joints of 2.8° compared to 1.2° during short functional movements. Definition of local coordinate systems based on anatomical landmarks or a single posture was the most influential difference between the two systems. Additionally, IMU accuracy was affected by the complexity and duration of the tasks. Nevertheless, technological error remained under 5° RMSE during handling tasks, which shows potential for tracking workers during their daily labour.
Photodiode-based cutting interruption sensor for near-infrared lasers.
Adelmann, B; Schleier, M; Neumeier, B; Hellmann, R
2016-03-01
We report on a photodiode-based sensor system to detect cutting interruptions during laser cutting with a fiber laser. An InGaAs diode records the thermal radiation from the process zone with a ring mirror and optical filter arrangement mounted between a collimation unit and a cutting head. The photodiode current is digitized with a sample rate of 20 kHz and filtered with a Chebyshev Type I filter. From the measured signal during piercing, a threshold value is calculated. When the diode signal exceeds this threshold during cutting, a cutting interruption is indicated. This method is applied to sensor signals from cutting mild steel, stainless steel, and aluminum, with different material thicknesses and also laser flame cutting, showing that cutting interruptions can be detected in a broad variety of applications. In a series of 83 incomplete cuts, every cutting interruption is successfully detected (alpha error of 0%), while no cutting interruption is reported in 266 complete cuts (beta error of 0%). This remarkably high detection rate and low error rate, together with the ability to handle different materials and thicknesses and the easy mounting of the sensor unit on existing cutting machines, highlight the enormous potential of this sensor system for industrial applications.
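A rough illustration of the signal chain described above (not the authors' implementation) is sketched below: a simulated 20 kHz photodiode trace is low-pass filtered with a Chebyshev Type I filter, a threshold is derived from the piercing phase, and samples during cutting that exceed it are flagged. The filter order, cut-off frequency and threshold rule are placeholder assumptions.

```python
import numpy as np
from scipy.signal import cheby1, lfilter

fs = 20_000                           # sample rate (Hz), as quoted above
t = np.arange(0.0, 2.0, 1.0 / fs)

# Synthetic diode current: bright piercing phase, stable cut, then an interruption at 1.5 s.
rng = np.random.default_rng(0)
diode = 0.5 + 0.05 * rng.standard_normal(t.size)
diode[t < 0.2] += 0.8                 # piercing
diode[t > 1.5] += 1.2                 # thermal radiation rise when the cut is lost

# Chebyshev Type I low-pass filter (order, ripple and cut-off are example choices).
b, a = cheby1(N=4, rp=1.0, Wn=500, btype="low", fs=fs)
filtered = lfilter(b, a, diode)

# Threshold derived from the piercing phase; the 0.9 factor is a placeholder rule.
piercing = filtered[(t > 0.05) & (t < 0.2)]
threshold = 0.9 * piercing.mean()

interrupted = (t > 0.2) & (filtered > threshold)
if interrupted.any():
    print(f"cutting interruption flagged at t = {t[interrupted][0]:.3f} s")
```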
Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.
Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N
2011-04-15
To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.
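To make the sampling-volume argument concrete, a minimal sketch is given below: assuming organisms are Poisson-distributed in the discharge, it computes the probability that a count from a given sample volume exceeds a decision threshold chosen to cap the type I error for a ship exactly at the standard, when the true concentration is above the standard. The standard, concentrations and decision rule are illustrative assumptions, not the authors' model.

```python
from scipy.stats import poisson

standard = 10.0      # discharge standard (organisms per m^3), illustrative
true_conc = 30.0     # actual concentration of a noncompliant discharge
alpha = 0.05         # tolerated type I error for a ship exactly at the standard

for volume_m3 in (0.1, 1.0, 3.0, 10.0):
    # Smallest count c such that P(count >= c | concentration at the standard) <= alpha.
    c = int(poisson.ppf(1.0 - alpha, standard * volume_m3)) + 1
    # Statistical power: probability of exceeding c when the discharge is truly noncompliant.
    power = poisson.sf(c - 1, true_conc * volume_m3)
    print(f"{volume_m3:5.1f} m^3 sampled -> decision threshold {c:3d} organisms, power {power:.2f}")
```

Larger sample volumes raise the expected count and therefore the power to flag a noncompliant discharge, which is the lower-limit-of-volume argument made above; recovery errors would shrink the effective volume further.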
Network reconstruction via graph blending
NASA Astrophysics Data System (ADS)
Estrada, Rolando
2016-05-01
Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.
Generated spiral bevel gears: Optimal machine-tool settings and tooth contact analysis
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Tsung, W. J.; Coy, J. J.; Heine, C.
1985-01-01
Geometry and kinematic errors were studied for Gleason generated spiral bevel gears. A new method was devised for choosing optimal machine settings. These settings provide zero kinematic errors and an improved bearing contact. The kinematic errors are a major source of noise and vibration in spiral bevel gears. The improved bearing contact gives improved conditions for lubrication. A computer program for tooth contact analysis was developed, and thereby the new generation process was confirmed. The new process is governed by the requirement that during the generation process there is directional constancy of the common normal of the contacting surfaces for generator and generated surfaces of pinion and gear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, JY; Hong, DL
Purpose: The purpose of this study is to investigate patient set-up error and interfraction target coverage in cervical cancer using image-guided adaptive radiotherapy (IGART) with cone-beam computed tomography (CBCT). Methods: Twenty cervical cancer patients undergoing intensity modulated radiotherapy (IMRT) were randomly selected. All patients were matched to the isocenter using lasers and the skin markers. Three dimensional CBCT projections were acquired by the Varian Truebeam treatment system. Set-up errors were evaluated by radiation oncologists after CBCT correction. The clinical target volume (CTV) was delineated on each CBCT, and the planning target volume (PTV) coverage of each CBCT-CTV was analyzed. Results: A total of 152 CBCT scans were acquired from the twenty cervical cancer patients; the mean set-up errors in the longitudinal, vertical, and lateral directions were 3.57, 2.74 and 2.5 mm, respectively, without CBCT corrections. After corrections, these decreased to 1.83, 1.44 and 0.97 mm. For target coverage, CBCT-CTV coverage without CBCT correction was 94% (143/152), and 98% (149/152) with correction. Conclusion: Use of CBCT verification to measure patient set-up errors can be applied to improve treatment accuracy. In addition, the set-up error corrections significantly improve the CTV coverage for cervical cancer patients.
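For readers who want to reproduce such set-up statistics, a common convention (following van Herk) takes the group systematic error Σ as the standard deviation of the per-patient mean shifts and the random error σ as the root-mean-square of the per-patient standard deviations. The sketch below applies that convention to a hypothetical table of couch corrections; it is not the authors' analysis and the numbers are random.

```python
import numpy as np

# shifts[p, f]: couch correction (mm) for patient p at fraction f in one direction; random example data.
rng = np.random.default_rng(1)
systematic_offsets = rng.normal(0.0, 2.0, size=20)                 # hypothetical per-patient offsets
shifts = systematic_offsets[:, None] + rng.normal(0.0, 3.0, size=(20, 8))

group_mean = shifts.mean()                                         # overall mean error M
sigma_sys = shifts.mean(axis=1).std(ddof=1)                        # Sigma: SD of per-patient means
sigma_rand = np.sqrt((shifts.std(axis=1, ddof=1) ** 2).mean())     # sigma: RMS of per-patient SDs

margin = 2.5 * sigma_sys + 0.7 * sigma_rand                        # van Herk CTV-to-PTV margin recipe
print(f"M = {group_mean:.2f} mm, Sigma = {sigma_sys:.2f} mm, sigma = {sigma_rand:.2f} mm, "
      f"suggested margin = {margin:.1f} mm")
```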
Bittel, Daniel C; Bittel, Adam J; Williams, Christine; Elazzazi, Ashraf
2017-05-01
Proper exercise form is critical for the safety and efficacy of therapeutic exercise. This research examines if a novel smartphone application, designed to monitor and provide real-time corrections during resistance training, can reduce performance errors and elicit a motor learning response. Forty-two participants aged 18 to 65 years were randomly assigned to treatment and control groups. Both groups were tested for the number of movement errors made during a 10-repetition set completed at baseline, immediately after, and 1 to 2 weeks after a single training session of knee extensions. The treatment group trained with real-time, smartphone-generated feedback, whereas the control subjects did not. Group performance (number of errors) was compared across test sets using a 2-factor mixed-model analysis of variance. No differences were observed between groups for age, sex, or resistance training experience. There was a significant interaction between test set and group. The treatment group demonstrated fewer errors on posttests 1 and 2 compared with pretest (P < 0.05). There was no reduction in the number of errors on any posttest for control subjects. Smartphone apps, such as the one used in this study, may enhance patient supervision, safety, and exercise efficacy across rehabilitation settings. A single training session with the app promoted motor learning and improved exercise performance.
[The error, source of learning].
Joyeux, Stéphanie; Bohic, Valérie
2016-05-01
An error is not in itself considered a fault; it is intentionality that differentiates an error from a fault. An error is unintentional, while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Timing of silicone stent removal in patients with post-tuberculosis bronchial stenosis
Eom, Jung Seop; Kim, Hojoong; Park, Hye Yun; Jeon, Kyeongman; Um, Sang-Won; Koh, Won-Jung; Suh, Gee Young; Chung, Man Pyo; Kwon, O. Jung
2013-01-01
CONTEXT: In patients with post-tuberculosis bronchial stenosis (PTBS), the severity of bronchial stenosis affects the restenosis rate after the silicone stent is removed. In PTBS patients with incomplete bronchial obstruction, who had a favorable prognosis, the timing of stent removal to ensure airway patency is not clear. AIMS: We evaluated the time for silicone stent removal in patients with incomplete PTBS. SETTINGS AND DESIGN: A retrospective study examined PTBS patients who underwent stenting and removal of a silicone stent. METHODS: Incomplete bronchial stenosis was defined as PTBS other than total bronchial obstruction, which had a luminal opening at the stenotic segment on bronchoscopic intervention. The duration of stenting was defined as the interval from stent insertion to removal. The study included 44 PTBS patients and the patients were grouped at intervals of 6 months according to the duration of stenting. RESULTS: Patients stented for more than 12 months had a significantly lower restenosis rate than those stented for less than 12 months (4% vs. 35%, P = 0.009). Multiple logistic regression revealed an association between stenting for more than 12 months and a low restenosis rate (odds ratio 12.095; 95% confidence interval 1.097-133.377). Moreover, no restenosis was observed in PTBS patients when the stent was placed more than 14 months previously. CONCLUSIONS: In patients with incomplete PTBS, stent placement for longer than 12 months reduced restenosis after stent removal. PMID:24250736
Metheny, Leland; Eid, Saada; Lingas, Karen; Ofir, Racheli; Pinzur, Lena; Meyerson, Howard; Lazarus, Hillard M.; Huang, Alex Y.
2018-01-01
Late-term complications of hematopoietic cell transplantation (HCT) are numerous and include incomplete engraftment. One possible mechanism of incomplete engraftment after HCT is cytokine-mediated suppression or dysfunction of the bone marrow microenvironment. Mesenchymal stromal cells (MSCs) elaborate cytokines that nurture or stimulate the marrow microenvironment by several mechanisms. We hypothesize that the administration of exogenous MSCs may modulate the bone marrow milieu and improve peripheral blood count recovery in the setting of incomplete engraftment. In the current study, we demonstrated that posttransplant intramuscular administration of human placental derived mesenchymal-like adherent stromal cells [PLacental eXpanded (PLX)-R18] harvested from a three-dimensional in vitro culture system improved posttransplant engraftment of human immune compartment in an immune-deficient murine transplantation model. As measured by the percentage of CD45+ cell recovery, we observed improvement in the peripheral blood counts at weeks 6 (8.4 vs. 24.1%, p < 0.001) and 8 (7.3 vs. 13.1%, p < 0.05) and in the bone marrow at week 8 (28 vs. 40.0%, p < 0.01) in the PLX-R18 cohort. As measured by percentage of CD19+ cell recovery, there was improvement at weeks 6 (12.6 vs. 3.8%) and 8 (10.1 vs. 4.1%). These results suggest that PLX-R18 may have a therapeutic role in improving incomplete engraftment after HCT. PMID:29520362
Should genes with missing data be excluded from phylogenetic analyses?
Jiang, Wei; Chen, Si-Yun; Wang, Hong; Li, De-Zhu; Wiens, John J
2014-11-01
Phylogeneticists often design their studies to maximize the number of genes included but minimize the overall amount of missing data. However, few studies have addressed the costs and benefits of adding characters with missing data, especially for likelihood analyses of multiple loci. In this paper, we address this topic using two empirical data sets (in yeast and plants) with well-resolved phylogenies. We introduce varying amounts of missing data into varying numbers of genes and test whether the benefits of excluding genes with missing data outweigh the costs of excluding the non-missing data that are associated with them. We also test if there is a proportion of missing data in the incomplete genes at which they cease to be beneficial or harmful, and whether missing data consistently bias branch length estimates. Our results indicate that adding incomplete genes generally increases the accuracy of phylogenetic analyses relative to excluding them, especially when there is a high proportion of incomplete genes in the overall dataset (and thus few complete genes). Detailed analyses suggest that adding incomplete genes is especially helpful for resolving poorly supported nodes. Given that we find that excluding genes with missing data often decreases accuracy relative to including these genes (and that decreases are generally of greater magnitude than increases), there is little basis for assuming that excluding these genes is necessarily the safer or more conservative approach. We also find no evidence that missing data consistently bias branch length estimates. Copyright © 2014 Elsevier Inc. All rights reserved.
Effects of Crew Resource Management Training on Medical Errors in a Simulated Prehospital Setting
ERIC Educational Resources Information Center
Carhart, Elliot D.
2012-01-01
This applied dissertation investigated the effect of crew resource management (CRM) training on medical errors in a simulated prehospital setting. Specific areas addressed by this program included situational awareness, decision making, task management, teamwork, and communication. This study is believed to be the first investigation of CRM…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briscoe, M; Ploquin, N; Voroney, JP
2015-06-15
Purpose: To quantify the effect of patient rotation in stereotactic radiation therapy and establish a threshold where rotational patient set-up errors have a significant impact on target coverage. Methods: To simulate rotational patient set-up errors, a Matlab code was created to rotate the patient dose distribution around the treatment isocentre, located centrally in the lesion, while keeping the structure contours in the original locations on the CT and MRI. Rotations of 1°, 3°, and 5° for each of the pitch, roll, and yaw, as well as simultaneous rotations of 1°, 3°, and 5° around all three axes, were applied to two types of brain lesions: brain metastasis and acoustic neuroma. In order to analyze multiple tumour shapes, these plans included small spherical (metastasis), elliptical (acoustic neuroma), and large irregular (metastasis) tumour structures. Dose-volume histograms and planning target volumes were compared between the planned patient positions and those with simulated rotational set-up errors. The RTOG conformity index for patient rotation was also investigated. Results: Examining the tumour volumes that received 80% of the prescription dose in the planned and rotated patient positions showed decreases in prescription dose coverage of up to 2.3%. Conformity indices for treatments with simulated rotational errors showed decreases of up to 3% compared to the original plan. For irregular lesions, degradation of 1% of the target coverage can be seen for rotations as low as 3°. Conclusions: These data show that for elliptical or spherical targets, rotational patient set-up errors of less than 3° around any or all axes do not have a significant impact on the dose delivered to the target volume or the conformity index of the plan. However, the same rotational errors would have an impact on plans for irregular tumours.
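A minimal version of the simulation strategy described above (rotating the dose grid about the isocentre while leaving the contours fixed) can be sketched with scipy, assuming the isocentre sits at the centre of the dose array and applying a single yaw rotation. The toy dose and target below are invented and this is not the authors' Matlab code.

```python
import numpy as np
from scipy.ndimage import rotate

def rotated_coverage(dose, target_mask, angle_deg, prescription, axes=(1, 2)):
    """Fraction of the target receiving >= 80% of the prescription after rotating
    the dose grid about the array centre (taken to be the isocentre)."""
    dose_rot = rotate(dose, angle_deg, axes=axes, reshape=False, order=1, mode="nearest")
    return (dose_rot[target_mask] >= 0.8 * prescription).mean()

# Toy elongated high-dose region and target inside a cubic dose grid (values invented).
z, y, x = np.mgrid[-30:31, -30:31, -30:31].astype(float)
rho = np.sqrt((x / 16.5) ** 2 + (y / 6.5) ** 2 + (z / 6.5) ** 2)
dose = np.where(rho < 1.0, 20.0, 20.0 * np.exp(-(rho - 1.0) * 6.0))   # Gy
target = (x / 16.0) ** 2 + (y / 6.0) ** 2 + (z / 6.0) ** 2 < 1.0

for angle in (0, 1, 3, 5):                                            # yaw rotations (degrees)
    print(f"{angle} deg: coverage = {rotated_coverage(dose, target, angle, 20.0):.3f}")
```

For elongated targets such as this one, coverage typically degrades slightly as the rotation grows, while a perfectly spherical, centred target would be unaffected.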
A Modified MinMax k-Means Algorithm Based on PSO.
Wang, Xiaoyan; Bai, Yanping
The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intraclustering error. Two parameters, the exponent parameter and the memory parameter, are involved in the executive process. Since different parameters yield different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given. Such a framework extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that if the maximum exponent parameter has been set, then the programme can reach the lowest intraclustering errors. However, our experiments show that this is not always correct. In this paper, we modified the MinMax k-means algorithm by PSO to determine the proper values of the parameters that allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several commonly used data sets in different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically.
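For context, the criterion that distinguishes MinMax k-means from ordinary k-means is the maximum intra-cluster sum of squared errors rather than their total. The sketch below merely evaluates that criterion for a given clustering, using scikit-learn k-means assignments as a stand-in; it implements neither the weighted MinMax updates nor the PSO parameter search proposed above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.2, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

def intra_cluster_errors(X, labels):
    """Sum of squared distances to the cluster mean, one value per cluster."""
    return np.array([((X[labels == k] - X[labels == k].mean(axis=0)) ** 2).sum()
                     for k in np.unique(labels)])

errors = intra_cluster_errors(X, labels)
print("sum of intra-cluster errors:", errors.sum())   # ordinary k-means objective
print("max intra-cluster error    :", errors.max())   # quantity MinMax k-means tries to minimize
```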
Beer, Idal; Hoppe-Tichy, Torsten; Trbovich, Patricia
2017-01-01
Objective To examine published evidence on intravenous admixture preparation errors (IAPEs) in healthcare settings. Methods Searches were conducted in three electronic databases (January 2005 to April 2017). Publications reporting rates of IAPEs and error types were reviewed and categorised into the following groups: component errors, dose/calculation errors, aseptic technique errors and composite errors. The methodological rigour of each study was assessed using the Hawker method. Results Of the 34 articles that met inclusion criteria, 28 reported the site of IAPEs: central pharmacies (n=8), nursing wards (n=14), both settings (n=4) and other sites (n=3). Using the Hawker criteria, 14% of the articles were of good quality, 74% were of fair quality and 12% were of poor quality. Error types and reported rates varied substantially, including wrong drug (~0% to 4.7%), wrong diluent solution (0% to 49.0%), wrong label (0% to 99.0%), wrong dose (0% to 32.6%), wrong concentration (0.3% to 88.6%), wrong diluent volume (0.06% to 49.0%) and inadequate aseptic technique (0% to 92.7%). Four studies directly compared incidence by preparation site and/or method, finding error incidence to be lower for doses prepared within a central pharmacy versus the nursing ward and lower for automated preparation versus manual preparation. Although eight studies (24%) reported ≥1 error with the potential to cause patient harm, no study directly linked IAPE occurrences to specific adverse patient outcomes. Conclusions The available data suggest a need to continue to optimise the intravenous preparation process, focus on improving preparation workflow, design and implement preventive strategies, train staff on optimal admixture protocols and implement standardisation. Future research should focus on the development of consistent error subtype definitions, standardised reporting methodology and reliable, reproducible methods to track and link risk factors with the burden of harm associated with these errors. PMID:29288174
ERIC Educational Resources Information Center
Parcover, Jason; Mays, Sally; McCarthy, Amy
2015-01-01
The mental health needs of college students are placing increasing demands on counseling center resources, and traditional outreach efforts may be outdated or incomplete. The public health model provides an approach for reaching more students, decreasing stigma, and addressing mental health concerns before they reach crisis levels. Implementing a…
Fire potential rating for wildland fuelbeds using the Fuel Characteristic Classification System.
David V. Sandberg; Cynthia L. Riccardi; Mark D. Schaff
2007-01-01
The Fuel Characteristic Classification System (FCCS) is a systematic catalog of inherent physical properties of wildland fuelbeds that allows land managers, policymakers, and scientists to build and calculate fuel characteristics with complete or incomplete information. The FCCS is equipped with a set of equations to calculate the potential of any real-world or...
Sex-oriented stable matchings of the marriage problem with correlated and incomplete information
NASA Astrophysics Data System (ADS)
Caldarelli, Guido; Capocci, Andrea; Laureti, Paolo
2001-10-01
In the stable marriage problem two sets of agents must be paired according to mutual preferences, which may happen to conflict. We present two generalizations of its sex-oriented version, aiming to take into account correlations between the preferences of agents and costly information. Their effects are investigated both numerically and analytically.
General linear codes for fault-tolerant matrix operations on processor arrays
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Abraham, J. A.
1988-01-01
Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
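The classical row/column checksum encoding (in the spirit of Huang and Abraham) that this family of codes generalizes can be illustrated in a few lines: append a column-sum row to A and a row-sum column to B, multiply, and verify the product's checksums within a tolerance, which allows for the roundoff concerns noted above. This is a generic sketch, not the specific linear codes identified in the paper.

```python
import numpy as np

def checksum_product(A, B):
    """A @ B computed with a column-checksum row on A and a row-checksum column on B."""
    Ac = np.vstack([A, A.sum(axis=0)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    return Ac @ Br

def consistent(C_full, tol=1e-9):
    """Check that the checksum row/column of the full product match the data block."""
    core = C_full[:-1, :-1]
    return (np.allclose(C_full[-1, :-1], core.sum(axis=0), atol=tol) and
            np.allclose(C_full[:-1, -1], core.sum(axis=1), atol=tol))

rng = np.random.default_rng(0)
A, B = rng.random((4, 5)), rng.random((5, 3))

C_full = checksum_product(A, B)
print("fault-free product consistent:", consistent(C_full))

C_full[1, 2] += 0.5                      # simulate a transient fault in one processing element
print("injected fault detected      :", not consistent(C_full))
```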
Defining robustness protocols: a method to include and evaluate robustness in clinical plans
NASA Astrophysics Data System (ADS)
McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.
2015-04-01
We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to establish protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a treatment of greater individuality. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.
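As a rough illustration of an error-bar dose distribution, the sketch below takes a nominal dose array and doses recomputed under a set of error scenarios and reports a per-voxel error bar, here assumed to be half the spread between the highest and lowest scenario dose. The scenario generation itself (shifted isocentres, scaled ranges) is outside the scope of the sketch and the arrays are placeholders.

```python
import numpy as np

def error_bar_dose(nominal, scenario_doses):
    """Per-voxel error bar, taken here as half the spread across all scenario doses."""
    stack = np.stack([nominal, *scenario_doses])
    return 0.5 * (stack.max(axis=0) - stack.min(axis=0))

# Placeholder data: a nominal dose cube and three perturbed recalculations.
rng = np.random.default_rng(0)
nominal = rng.uniform(0.0, 60.0, size=(40, 40, 40))
scenarios = [nominal + rng.normal(0.0, s, nominal.shape) for s in (0.5, 1.0, 2.0)]

ebdd = error_bar_dose(nominal, scenarios)
target = np.zeros(nominal.shape, dtype=bool)
target[15:25, 15:25, 15:25] = True
print(f"median error bar in target: {np.median(ebdd[target]):.2f} Gy")
print(f"95th-percentile error bar : {np.percentile(ebdd[target], 95):.2f} Gy")
```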
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed. A database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric is computed for each element at each level of detail that is independent of parameters defining the view space. The multiresolution database and associated strict error metrics are then processed in real time to produce real-time frame representations. View parameters for a view volume comprising a view location and field of view are selected. Using the view parameters, the strict error metric is converted to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. First elements that are at least partially within the view volume are selected from the initial representation data set. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. A determination is then made as to whether the number of first elements in the queue meets or exceeds a predetermined number of elements or the largest error metric is less than or equal to a selected upper error metric bound; if not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting is continued until the determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then outputted as reduced-resolution view space data representing the terrain features.
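The refinement loop described above can be sketched with a max-priority queue: repeatedly force split the element with the largest view-dependent error and reinsert its children until the element budget is reached or the largest error drops below the bound. The error model and binary split below are placeholders, not the patented method; Python's heapq is a min-heap, so errors are stored negated.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Element:
    neg_error: float                      # negated so the min-heap pops the largest error first
    level: int = field(compare=False)
    ident: tuple = field(compare=False)

def view_dependent_error(level):
    return 16.0 / (2 ** level)            # placeholder error model: halves with each split

def split(elem):
    """Placeholder split: each element yields two children one level finer."""
    for i in range(2):
        lvl = elem.level + 1
        yield Element(-view_dependent_error(lvl), lvl, elem.ident + (i,))

def refine(initial_elements, max_elements=64, error_bound=1.0):
    heap = list(initial_elements)
    heapq.heapify(heap)
    while len(heap) < max_elements and -heap[0].neg_error > error_bound:
        worst = heapq.heappop(heap)       # element at the head of the queue
        for child in split(worst):        # force split and re-insert the pieces
            heapq.heappush(heap, child)
    return heap

coarse = [Element(-view_dependent_error(0), 0, (i,)) for i in range(4)]
mesh = refine(coarse)
print(len(mesh), "elements; largest remaining error =", -min(mesh).neg_error)
```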
Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-06-01
The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organ-at-risk (OAR) coverage was assessed using calculation of dose-volume histograms, gEUD, TCP and NTCP. For this purpose, in-house software was developed and used. The standard deviations (1 SD) of the systematic and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: Σ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors showed increased values for the tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The in-house software implementing the gEUD, TCP and NTCP biological models was successfully used in this study. It can also be used to optimize the treatment plans established for our patients. The gEUD, TCP and NTCP may be more suitable tools to assess treatment plans before treating the patients.
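For reference, gEUD is conventionally computed as (Σ_i v_i D_i^a)^(1/a) over the dose bins D_i of a differential DVH with fractional volumes v_i, where a is a tissue-specific parameter (a = 1 gives the mean dose and large positive a approaches the maximum dose). The sketch below shows that calculation on illustrative DVH values; it is not the in-house software described above.

```python
import numpy as np

def geud(dose_bins, rel_volumes, a):
    """Generalized equivalent uniform dose from a differential DVH."""
    v = np.asarray(rel_volumes, dtype=float)
    v = v / v.sum()                                   # normalize fractional volumes
    d = np.asarray(dose_bins, dtype=float)
    return np.sum(v * d ** a) ** (1.0 / a)

# Illustrative differential DVH (dose bins in Gy, fractional volumes).
dose_bins = [10, 20, 30, 40, 50, 54]
rel_vol = [0.30, 0.25, 0.20, 0.15, 0.08, 0.02]

for a in (1, 4, 10):                                  # a is tissue-specific; values are examples
    print(f"a = {a:2d}: gEUD = {geud(dose_bins, rel_vol, a):.1f} Gy")
```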
Hamiltonian lattice field theory: Computer calculations using variational methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zako, Robert L.
1991-12-03
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
Nour-Eldein, Hebatallah
2016-01-01
Background: With the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. Objectives: To determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. Methods: This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Results: Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, the deficiencies found were: no prior sample size calculation in 19/60 (31.7%), application of wrong statistical tests in 17/60 (28.3%), incomplete documentation of statistics in 59/60 (98.3%), reporting of P values without test statistics in 32/60 (53.3%), no confidence intervals reported with effect size measures in 12/60 (20.0%), use of the mean (standard deviation) to describe ordinal/nonnormal data in 8/60 (13.3%), and interpretation errors, mainly conclusions not supported by the study data, in 5/60 (8.3%). Conclusion: Inferential statistics were used in the majority of FM articles. Data analysis and reporting of statistics are areas for improvement in FM research articles. PMID:27453839
An error taxonomy system for analysis of haemodialysis incidents.
Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi
2014-12-01
This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The taxonomy was developed by adapting a specialty-independent error taxonomy system to haemodialysis situations. It was applied to 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to the dialyser, circuit, medication and setting of dialysis conditions. Approximately 70% of errors took place immediately before and after the four hours of haemodialysis therapy. The error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified under staff human factors, communication, task and organisational factors were found in most dialysis incidents. Devices/equipment/materials, medicines and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicines and documents, whereas dialysis technologists made more errors with devices/equipment/materials. This error taxonomy system is able not only to investigate incidents and adverse events occurring in the dialysis setting but also to estimate the safety-related status of an organisation, such as its reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute in propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy, incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth, electron energy, ~0.5cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria. 
PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
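The roughly 2% per cm sensitivity to an SSD set-up error quoted above follows, to first order, from the inverse-square law; the short check below assumes a nominal 100 cm SSD and a 10 cm reference depth purely for illustration.

```python
# Inverse-square estimate of dose error from an SSD set-up error (illustrative numbers).
ssd_nominal = 100.0   # cm
depth = 10.0          # cm, reference depth
for ssd_error_cm in (1.0, 2.0, 3.0):          # patient set up too far from the source
    factor = ((ssd_nominal + depth) / (ssd_nominal + ssd_error_cm + depth)) ** 2
    print(f"{ssd_error_cm:.0f} cm SSD error -> dose changes by {100.0 * (factor - 1.0):.1f}%")
```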
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James O. (Technical Monitor)
1999-01-01
The atomization energy of Mg4 is determined using the MP2 and CCSD(T) levels of theory. Basis set incompleteness, basis set extrapolation, and core-valence effects are discussed. Our best atomization energy, including the zero-point energy and scalar relativistic effects, is 24.6 ± 1.6 kcal/mol. Our computed and extrapolated values are compared with previous results, where it is observed that our extrapolated MP2 value is in good agreement with the MP2-R12 value. The CCSD(T) and MP2 core effects are found to have opposite signs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.
The precision of double-beta (ββ) decay experimental half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure and decay data sets. The first-digit distribution trend for the two-neutrino ββ-decay half-lives, T1/2(2ν), is consistent with large nuclear reaction and structure data sets and provides validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of statistical samples and the incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
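The first-digit (Benford) analysis applied above can be reproduced generically: extract the leading digit of each value, tabulate frequencies, and compare with the Benford expectation log10(1 + 1/d). The sample below is a synthetic log-uniform placeholder, not the evaluated nuclear data.

```python
import numpy as np

def first_digits(values):
    """Leading non-zero digit of each positive value, independent of decimal scale."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    return (v / 10.0 ** np.floor(np.log10(v))).astype(int)

benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

# Synthetic stand-in for a half-life compilation: log-uniform over many decades.
rng = np.random.default_rng(0)
sample = 10.0 ** rng.uniform(0.0, 25.0, size=5000)

digits = first_digits(sample)
observed = np.array([(digits == d).mean() for d in range(1, 10)])
for d, (obs, exp) in enumerate(zip(observed, benford), start=1):
    print(f"digit {d}: observed {obs:.3f}  Benford {exp:.3f}")
```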
The Lung Image Database Consortium (LIDC): Ensuring the integrity of expert-defined “truth”
Armato, Samuel G.; Roberts, Rachael Y.; McNitt-Gray, Michael F.; Meyer, Charles R.; Reeves, Anthony P.; McLennan, Geoffrey; Engelmann, Roger M.; Bland, Peyton H.; Aberle, Denise R.; Kazerooni, Ella A.; MacMahon, Heber; van Beek, Edwin J.R.; Yankelevitz, David; Croft, Barbara Y.; Clarke, Laurence P.
2007-01-01
Rationale and Objectives Computer-aided diagnostic (CAD) systems fundamentally require the opinions of expert human observers to establish “truth” for algorithm development, training, and testing. The integrity of this “truth,” however, must be established before investigators commit to this “gold standard” as the basis for their research. The purpose of this study was to develop a quality assurance (QA) model as an integral component of the “truth” collection process concerning the location and spatial extent of lung nodules observed on computed tomography (CT) scans to be included in the Lung Image Database Consortium (LIDC) public database. Materials and Methods One hundred CT scans were interpreted by four radiologists through a two-phase process. For the first of these reads (the “blinded read phase”), radiologists independently identified and annotated lesions, assigning each to one of three categories: “nodule ≥ 3mm,” “nodule < 3mm,” or “non-nodule ≥ 3mm.” For the second read (the “unblinded read phase”), the same radiologists independently evaluated the same CT scans but with all of the annotations from the previously performed blinded reads presented; each radiologist could add marks, edit or delete their own marks, change the lesion category of their own marks, or leave their marks unchanged. The post-unblinded-read set of marks was grouped into discrete nodules and subjected to the QA process, which consisted of (1) identification of potential errors introduced during the complete image annotation process (such as two marks on what appears to be a single lesion or an incomplete nodule contour) and (2) correction of those errors. Seven categories of potential error were defined; any nodule with a mark that satisfied the criterion for one of these categories was referred to the radiologist who assigned that mark for either correction or confirmation that the mark was intentional. Results A total of 105 QA issues were identified across 45 (45.0%) of the 100 CT scans. Radiologist review resulted in modifications to 101 (96.2%) of these potential errors. Twenty-one lesions erroneously marked as lung nodules after the unblinded reads had this designation removed through the QA process. Conclusion The establishment of “truth” must incorporate a QA process to guarantee the integrity of the datasets that will provide the basis for the development, training, and testing of CAD systems. PMID:18035275
Nuclear Forensics Analysis with Missing and Uncertain Data
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
2015-10-05
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
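The imputation idea described above, drawing each missing value from the empirical distribution of the observed values of that property and generating several completed copies for training, can be sketched as follows. This is a simplified stand-in for MCBDG (no Bayesian machinery or explicit error model) applied to a random toy table rather than SFCOMPO.

```python
import numpy as np
import pandas as pd

def impute_from_empirical(df, rng):
    """Fill each missing entry by sampling from the observed values of its column."""
    filled = df.copy()
    for col in df.columns:
        observed = df[col].dropna().to_numpy()
        n_missing = int(df[col].isna().sum())
        if n_missing and observed.size:
            filled.loc[df[col].isna(), col] = rng.choice(observed, size=n_missing)
    return filled

# Toy property table with roughly 60% of entries missing, echoing the sparsity noted above.
rng = np.random.default_rng(0)
full = pd.DataFrame(rng.normal(loc=[1.0, 5.0, 20.0], scale=[0.1, 0.5, 2.0], size=(200, 3)),
                    columns=["prop_a", "prop_b", "prop_c"])
sparse = full.mask(rng.random(full.shape) < 0.6)

# Multiple completed instances, e.g. for training a machine learning algorithm.
completed = [impute_from_empirical(sparse, rng) for _ in range(5)]
print(completed[0].describe().loc[["mean", "std"]])
```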
Misconduct accounts for the majority of retracted scientific publications
Fang, Ferric C.; Steen, R. Grant; Casadevall, Arturo
2012-01-01
A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes. PMID:23027971
Addition and subtraction by students with Down syndrome
NASA Astrophysics Data System (ADS)
Noda Herrera, Aurelia; Bruno, Alicia; González, Carina; Moreno, Lorenzo; Sanabria, Hilda
2011-01-01
We present a research report on addition and subtraction conducted with Down syndrome students between the ages of 12 and 31. We interviewed a group of students with Down syndrome who executed algorithms and solved problems using specific materials and paper and pencil. The results show that students with Down syndrome progress through the same procedural levels as those without disabilities though they have difficulties in reaching the most abstract level (numerical facts). The use of fingers or concrete representations (balls) appears as a fundamental process among these students. As for errors, these vary widely depending on the students, and can be attributed mostly to an incomplete knowledge of the decimal number system.
Referential first mention in narratives by mildly mentally retarded adults.
Kernan, K T; Sabsay, S
1987-01-01
Referential first mentions in narrative reports of a short film by 40 mildly mentally retarded adults and 20 nonretarded adults were compared. The mentally retarded sample included equal numbers of male and female, and black and white speakers. The mentally retarded speakers made significantly fewer first mentions and significantly more errors in the form of the first mentions than did nonretarded speakers. A pattern of better performance by black males than by other mentally retarded speakers was found. It is suggested that task difficulty and incomplete mastery of the use of definite and indefinite forms for encoding old and new information, rather than some global type of egocentrism, accounted for the poorer performance by mentally retarded speakers.
ERIC Educational Resources Information Center
Kolitsoe Moru, Eunice; Qhobela, Makomosela
2013-01-01
The study investigated teachers' pedagogical content knowledge of common students' errors and misconceptions in sets. Five mathematics teachers from one Lesotho secondary school were the sample of the study. Questionnaires and interviews were used for data collection. The results show that teachers were able to identify the following students'…
NASA Astrophysics Data System (ADS)
Ziemba, Alexander; El Serafy, Ghada
2016-04-01
Ecological modeling and water quality investigations are complex processes which can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models come from a wide range of sources such as remote sensing, earth observation, and in-situ measurements, resulting in a high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive singular set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product contains a unique valuation of error resulting from the method of measurement and the application of pre-processing techniques. The uncertainty and error are further compounded when the data being fused are of varying temporal and spatial resolution. In order to have a reliable fusion-based model and data set, the uncertainty of the results and the confidence interval of the data being reported must be effectively communicated to those who would utilize the data product or model outputs in a decision-making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains vary in spatial and temporal resolution. The data sets examined are combined in a manner such that the various classifications of data (complementary, redundant, and cooperative) are all assessed to determine each classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets containing a known confidence interval and quality rating. We conclude with a quantification of the performance of the data fusion techniques and a recommendation on the feasibility of applying the fused products in operational forecast systems and modeling scenarios. The error bands and confidence intervals derived can be used to clarify the error and confidence of water quality variables produced by prediction and forecasting models. References: [1] F. Castanedo, "A Review of Data Fusion Techniques", The Scientific World Journal, vol. 2013, pp. 1-19, 2013. [2] T. Keenan, M. Carbone, M. Reichstein and A. Richardson, "The model-data fusion pitfall: assuming certainty in an uncertain world", Oecologia, vol. 167, no. 3, pp. 587-597, 2011.
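One of the simplest rules for fusing redundant observations, and a convenient way to see how per-source error estimates propagate into a fused confidence interval, is inverse-variance weighting. The sketch below fuses a satellite-derived and an in-situ value of the same water quality variable; it is a generic illustration with invented numbers, not one of the specific techniques reviewed above.

```python
import numpy as np

def inverse_variance_fusion(values, sigmas):
    """Fuse redundant observations of one quantity; returns the fused value and its 1-sigma error."""
    values, sigmas = np.asarray(values, float), np.asarray(sigmas, float)
    weights = 1.0 / sigmas ** 2
    fused = np.sum(weights * values) / np.sum(weights)
    return fused, np.sqrt(1.0 / np.sum(weights))

# Chlorophyll-a at one location and time from two sources (mg m^-3, illustrative values).
remote_value, remote_sigma = 4.2, 1.0
in_situ_value, in_situ_sigma = 3.6, 0.3

value, sigma = inverse_variance_fusion([remote_value, in_situ_value],
                                       [remote_sigma, in_situ_sigma])
print(f"fused estimate: {value:.2f} +/- {sigma:.2f} mg m^-3 (95% CI ~ +/- {1.96 * sigma:.2f})")
```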
Patient Safety: Moving the Bar in Prison Health Care Standards
Greifinger, Robert B.; Mellow, Jeff
2010-01-01
Improvements in community health care quality through error reduction have been slow to transfer to correctional settings. We convened a panel of correctional experts, which recommended 60 patient safety standards focusing on such issues as creating safety cultures at organizational, supervisory, and staff levels through changes to policy and training and by ensuring staff competency, reducing medication errors, encouraging the seamless transfer of information between and within practice settings, and developing mechanisms to detect errors or near misses and to shift the emphasis from blaming staff to fixing systems. To our knowledge, this is the first published set of standards focusing on patient safety in prisons, adapted from the emerging literature on quality improvement in the community. PMID:20864714
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
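The aggregation behaviour described above, with random error shrinking roughly as 1/sqrt(N) under averaging while correlated error persists, can be illustrated with a small Monte Carlo sketch on a synthetic, intermittent rain field. All distributions and error magnitudes below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_boxes, n_fine = 2000, 64                 # grid boxes and fine-scale samples per box

# Intermittent 'truth': mostly zero, occasionally heavy rain (highly skewed distribution).
raining = rng.random((n_boxes, n_fine)) < 0.15
truth = np.where(raining, rng.gamma(shape=0.8, scale=5.0, size=(n_boxes, n_fine)), 0.0)

# Estimate = truth + uncorrelated retrieval noise + a correlated per-box component.
random_err = rng.normal(0.0, 2.0, truth.shape) * raining
correlated_err = rng.normal(0.0, 0.5, (n_boxes, 1))     # shared within a box; does not average out
estimate = np.clip(truth + random_err + correlated_err, 0.0, None)

for n_avg in (1, 4, 16, 64):                            # progressively coarser aggregation
    t = truth[:, :n_avg].mean(axis=1)
    e = estimate[:, :n_avg].mean(axis=1)
    rmse = np.sqrt(np.mean((e - t) ** 2))
    corr = np.corrcoef(t, e)[0, 1]
    print(f"average of {n_avg:2d} fine samples: RMSE = {rmse:.2f}, correlation = {corr:.2f}")
```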
Molecular dynamics force-field refinement against quasi-elastic neutron scattering data
Borreguero Calvo, Jose M.; Lynch, Vickie E.
2015-11-23
Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods fitting against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely, the disparity between the simulation and experiment environments, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value to within a 5% difference, well within experimental error. We believe the method will find applicability to other types of diffusive motion and other representations of the system, such as coarse-grained models, where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.
NASA Astrophysics Data System (ADS)
Shepherd, James J.; López Ríos, Pablo; Needs, Richard J.; Drummond, Neil D.; Mohr, Jennifer A.-F.; Booth, George H.; Grüneis, Andreas; Kresse, Georg; Alavi, Ali
2013-03-01
Full configuration interaction quantum Monte Carlo [1] (FCIQMC) and its initiator adaptation [2] allow for exact solutions to the Schrödinger equation to be obtained within a finite-basis wavefunction ansatz. In this talk, we explore an application of FCIQMC to the homogeneous electron gas (HEG). In particular, we use these exact finite-basis energies to compare with approximate quantum chemical calculations from the VASP code [3]. After removing the basis set incompleteness error by extrapolation [4,5], we compare our energies with state-of-the-art diffusion Monte Carlo calculations from the CASINO package [6]. Using a combined approach of the two quantum Monte Carlo methods, we present the highest-accuracy thermodynamic (infinite-particle) limit energies for the HEG achieved to date. [1] G. H. Booth, A. Thom, and A. Alavi, J. Chem. Phys. 131, 054106 (2009). [2] D. Cleland, G. H. Booth, and A. Alavi, J. Chem. Phys. 132, 041103 (2010). [3] www.vasp.at (2012). [4] J. J. Shepherd, A. Grüneis, G. H. Booth, G. Kresse, and A. Alavi, Phys. Rev. B 86, 035111 (2012). [5] J. J. Shepherd, G. H. Booth, and A. Alavi, J. Chem. Phys. 136, 244101 (2012). [6] R. Needs, M. Towler, N. Drummond, and P. L. Ríos, J. Phys.: Condens. Matter 22, 023201 (2010).
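For orientation, the basis-set extrapolation referred to above (refs. [4], [5]) is commonly written as a leading-order 1/M dependence of the correlation energy on the number of basis states M; the form below is a hedged restatement for illustration, not a quotation of the talk.

```latex
% Hedged restatement of the finite-basis extrapolation (refs. [4], [5]):
% the correlation energy in a basis of M states is assumed to approach the
% complete-basis-set (CBS) value with a leading 1/M error term, so a linear
% fit of E_corr(M) against 1/M yields the CBS estimate.
E_{\mathrm{corr}}(M) \approx E_{\mathrm{corr}}^{\mathrm{CBS}} + \frac{a}{M},
\qquad E_{\mathrm{corr}}^{\mathrm{CBS}} = \lim_{M \to \infty} E_{\mathrm{corr}}(M)
```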
Reading Profiles in Multi-Site Data With Missingness.
Eckert, Mark A; Vaden, Kenneth I; Gebregziabher, Mulugeta
2018-01-01
Children with reading disability exhibit varied deficits in reading and cognitive abilities that contribute to their reading comprehension problems. Some children exhibit primary deficits in phonological processing, while others can exhibit deficits in oral language and executive functions that affect comprehension. This behavioral heterogeneity is problematic when missing data prevent the characterization of different reading profiles, which often occurs in retrospective data sharing initiatives without coordinated data collection. Here we show that reading profiles can be reliably identified based on Random Forest classification of incomplete behavioral datasets, after the missForest method is used to multiply impute missing values. Results from simulation analyses showed that reading profiles could be accurately classified across degrees of missingness (e.g., ∼5% classification error for 30% missingness across the sample). The application of missForest to a real multi-site dataset with missingness ( n = 924) showed that reading disability profiles significantly and consistently differed in reading and cognitive abilities for cases with and without missing data. The results of validation analyses indicated that the reading profiles (cases with and without missing data) exhibited significant differences for an independent set of behavioral variables that were not used to classify reading profiles. Together, the results show how multiple imputation can be applied to the classification of cases with missing data and can increase the integrity of results from multi-site open access datasets.
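A hedged sketch of the impute-then-classify workflow described above is given below in Python, using scikit-learn's IterativeImputer with a random-forest regressor as a stand-in for the R missForest package and a RandomForestClassifier for the profile classification. The feature matrix, labels, and missingness rate are hypothetical placeholders, not the multi-site reading data.

```python
# Hedged sketch of impute-then-classify: IterativeImputer with a random-forest
# regressor stands in for missForest, followed by Random Forest classification.
# The data below are synthetic placeholders.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                  # behavioral scores (placeholder)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # reading-profile labels (placeholder)
X[rng.random(X.shape) < 0.3] = np.nan          # ~30% missingness across the sample

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(clf, X_imputed, y, cv=5).mean())
```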
[Detection and classification of medication errors at Joan XXIII University Hospital].
Jornet Montaña, S; Canadell Vilarrasa, L; Calabuig Muñoz, M; Riera Sendra, G; Vuelta Arce, M; Bardají Ruiz, A; Gallart Mora, M J
2004-01-01
Medication errors are multifactorial and multidisciplinary, and may originate in processes such as drug prescription, transcription, dispensation, preparation and administration. The goal of this work was to measure the incidence of detectable medication errors that arise within a unit dose drug distribution and control system, from drug prescription to drug administration, by means of an observational method confined to the Pharmacy Department, as well as a voluntary, anonymous report system. The acceptance of this voluntary report system's implementation was also assessed. A prospective descriptive study was conducted. Data collection was performed at the Pharmacy Department from a review of prescribed medical orders, a review of pharmaceutical transcriptions, a review of dispensed medication and a review of medication returned in unit dose medication carts. A voluntary, anonymous report system centralized in the Pharmacy Department was also set up to detect medication errors. Prescription errors were the most frequent (1.12%), closely followed by dispensation errors (1.04%). Transcription errors (0.42%) and administration errors (0.69%) had the lowest overall incidence. Voluntary report involved only 4.25% of all detected errors, whereas unit dose medication cart review contributed the most to error detection. Recognizing the incidence and types of medication errors that occur in a health-care setting allows us to analyze their causes and effect changes in different stages of the process in order to ensure maximal patient safety.
Evaluation of 4D-CT lung registration.
Kabus, Sven; Klinder, Tobias; Murphy, Keelin; van Ginneken, Bram; van Lorenz, Cristian; Pluim, Josien P W
2009-01-01
Non-rigid registration accuracy assessment is typically performed by evaluating the target registration error at manually placed landmarks. For 4D-CT lung data, we compare two sets of landmark distributions: a smaller set primarily defined on vessel bifurcations as commonly described in the literature and a larger set being well-distributed throughout the lung volume. For six different registration schemes (three in-house schemes and three schemes frequently used by the community) the landmark error is evaluated and found to depend significantly on the distribution of the landmarks. In particular, lung regions near to the pleura show a target registration error three times larger than near-mediastinal regions. While the inter-method variability on the landmark positions is rather small, the methods show discriminating differences with respect to consistency and local volume change. In conclusion, both a well-distributed set of landmarks and a deformation vector field analysis are necessary for reliable non-rigid registration accuracy assessment.
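The landmark-based evaluation above rests on the target registration error, the Euclidean distance between each landmark mapped by the estimated deformation and its manually annotated counterpart. A minimal sketch, with hypothetical landmarks and a placeholder transform, is given below.

```python
# Minimal sketch of landmark-based target registration error (TRE): Euclidean
# distance between landmarks mapped by the estimated deformation and their
# manually annotated counterparts in the target phase. `deformation` is a
# placeholder for whatever transform a registration scheme produces.
import numpy as np

def target_registration_error(landmarks_moving, landmarks_target, deformation):
    """Return per-landmark TRE in the same units as the coordinates (e.g., mm)."""
    mapped = np.asarray([deformation(p) for p in landmarks_moving])
    return np.linalg.norm(mapped - np.asarray(landmarks_target), axis=1)

# Example with an identity "registration" and three hypothetical 3D landmarks (mm):
moving = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0], [5.0, 5.0, 5.0]])
target = moving + np.array([1.0, 0.0, -0.5])   # pretend residual misalignment
tre = target_registration_error(moving, target, lambda p: p)
print("mean TRE: %.2f mm" % tre.mean())
```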
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2017-06-01
With the aim of mitigating the basis set error in density functional theory (DFT) calculations employing local basis sets, we herein develop two empirical corrections for basis set superposition error (BSSE) in the def2-SVPD basis, a basis which—when stripped of BSSE—is capable of providing near-complete-basis DFT results for non-covalent interactions. Specifically, we adapt the existing pairwise geometrical counterpoise (gCP) approach to the def2-SVPD basis, and we develop a beyond-pairwise approach, DFT-C, which we parameterize across a small set of intermolecular interactions. Both gCP and DFT-C are evaluated against the traditional Boys-Bernardi counterpoise correction across a set of 3402 non-covalent binding energies and isomerization energies. We find that the DFT-C method represents a significant improvement over gCP, particularly for non-covalently-interacting molecular clusters. Moreover, DFT-C is transferable among density functionals and can be combined with existing functionals—such as B97M-V—to recover large-basis results at a fraction of the cost.
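For reference, the Boys-Bernardi counterpoise correction against which gCP and DFT-C are evaluated computes the interaction energy of a dimer AB with each monomer evaluated in the full dimer basis; the standard textbook form is shown here for orientation.

```latex
% Boys–Bernardi counterpoise-corrected interaction energy of a dimer AB:
% superscripts denote the basis in which each energy is evaluated, so both
% monomers are computed in the full dimer basis.
E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB}
```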
Assessment of meteorological uncertainties as they apply to the ASCENDS mission
NASA Astrophysics Data System (ADS)
Snell, H. E.; Zaccheo, S.; Chase, A.; Eluszkiewicz, J.; Ott, L. E.; Pawson, S.
2011-12-01
Many environment-oriented remote sensing and modeling applications require precise knowledge of the atmospheric state (temperature, pressure, water vapor, surface pressure, etc.) on a fine spatial grid with a comprehensive understanding of the associated errors. Coincident atmospheric state measurements may be obtained via co-located remote sensing instruments or by extracting these data from ancillary models. The appropriate technique for a given application depends upon the required accuracy. State-of-the-art mesoscale/regional numerical weather prediction (NWP) models operate on spatial scales of a few kilometers resolution, and global-scale NWP models operate on scales of tens of kilometers. Remote sensing measurements may be made on a spatial scale comparable to the measurement of interest. These measurements normally require a separate sensor, which increases the overall size, weight, power and complexity of the satellite payload. Thus, a comprehensive understanding of the errors associated with each of these approaches is a critical part of the design/characterization of a remote-sensing system whose measurement accuracy depends on knowledge of the atmospheric state. One of the requirements of the overall ASCENDS (Active Sensing of CO2 Emissions over Nights, Days, and Seasons) mission development is to develop a consistent set of atmospheric state variables (vertical temperature and water vapor profiles, and surface pressure) for use in constraining the overall retrieval error budget. If the error budget requires tighter uncertainties on ancillary atmospheric parameters than can be provided by NWP models and analyses, additional sensors may be required to reduce the overall measurement error and meet mission requirements. To this end we have used NWP models and reanalysis information to generate a set of atmospheric profiles which contain reasonable variability. These data consist of a "truth" set and a companion "measured" set of profiles. The truth set contains climatologically relevant profiles of pressure, temperature and humidity with an accompanying surface pressure. The measured set consists of some number of instances of the truth set which have been perturbed, using measurement error covariance matrices, to represent realistic measurement uncertainty for the truth profile. The primary focus has been to develop matrices derived using information about the profile retrieval accuracy as documented for on-orbit sensor systems including AIRS, AMSU, ATMS, and CrIS. Surface pressure variability and uncertainty were derived from globally compiled station pressure information. We generated an additional measurement set of profiles which represents the overall error within NWP models. These profile sets allow for comprehensive trade studies for sensor system design, provide a basis for setting measurement requirements for co-located temperature and humidity sounders, help determine the utility of NWP data to either replace or supplement co-located measurements, and support assessment of the overall end-to-end performance of the sensor system. In this presentation we discuss the process by which we created these data sets and show their utility in performing trade studies for sensor system concepts and designs.
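One way to build such a "measured" companion set is to perturb each "truth" profile with correlated noise drawn from a measurement-error covariance matrix. The Python sketch below illustrates that step only; the exponential-correlation covariance, the toy temperature profile, and all numbers are assumptions, not the AIRS/AMSU/ATMS/CrIS-derived matrices described above.

```python
# Hedged sketch of perturbing a "truth" temperature profile with correlated
# noise drawn from a measurement-error covariance matrix. The covariance here
# is a simple exponential-correlation placeholder.
import numpy as np

rng = np.random.default_rng(42)
n_levels = 30
levels = np.arange(n_levels)

# Placeholder error covariance: 1 K standard deviation, correlation length ~3 levels
sigma = 1.0
corr = np.exp(-np.abs(levels[:, None] - levels[None, :]) / 3.0)
cov = sigma ** 2 * corr

truth_profile = 280.0 - 0.5 * levels          # toy temperature profile [K]
n_realizations = 100
perturbations = rng.multivariate_normal(np.zeros(n_levels), cov, size=n_realizations)
measured_profiles = truth_profile + perturbations   # (100, 30) "measured" set
print(measured_profiles.shape, measured_profiles.std(axis=0).round(2)[:5])
```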
UTLS water vapour from SCIAMACHY limb measurements V3.01 (2002–2012)
Weigel, K.; Rozanov, A.; Azam, F.; Bramstedt, K.; Damadeo, R.; Eichmann, K.-U.; Gebhardt, C.; Hurst, D.; Kraemer, M.; Lossow, S.; Read, W.; Spelten, N.; Stiller, G. P.; Walker, K. A.; Weber, M.; Bovensmann, H.; Burrows, J. P.
2017-01-01
The SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) aboard the Envisat satellite provided measurements from August 2002 until April 2012. SCIAMACHY measured the scattered or direct sunlight using different observation geometries. The limb viewing geometry allows the retrieval of water vapour at about 10–25 km height from the near-infrared spectral range (1353–1410 nm). These data cover the upper troposphere and lower stratosphere (UTLS), a region in the atmosphere which is of special interest for a variety of dynamical and chemical processes as well as for the radiative forcing. Here, the latest data version of water vapour (V3.01) from SCIAMACHY limb measurements is presented and validated by comparisons with data sets from other satellite and in situ measurements. Considering retrieval tests and the results of these comparisons, the V3.01 data are reliable from about 11 to 23 km and the best results are found in the middle of the profiles between about 14 and 20 km. Above 20 km in the extra tropics V3.01 is drier than all other data sets. Additionally, for altitudes above about 19 km, the vertical resolution of the retrieved profile is not sufficient to resolve signals with a short vertical structure like the tape recorder. Below 14 km, SCIAMACHY water vapour V3.01 is wetter than most collocated data sets, but the high variability of water vapour in the troposphere complicates the comparison. For 14–20 km height, the expected errors from the retrieval and simulations and the mean differences to collocated data sets are usually smaller than 10 % when the resolution of the SCIAMACHY data is taken into account. In general, the temporal changes agree well with collocated data sets except for the Northern Hemisphere extratropical stratosphere, where larger differences are observed. This indicates a possible drift in V3.01 most probably caused by the incomplete treatment of volcanic aerosols in the retrieval. In all other regions a good temporal stability is shown. In the tropical stratosphere an increase in water vapour is found between 2002 and 2012, which is in agreement with other satellite data sets for overlapping time periods. PMID:29263764
NASA Astrophysics Data System (ADS)
Sakata, Shojiro; Fujisawa, Masaya
It is a well-known fact [7], [9] that the BMS algorithm with majority voting can decode up to half the Feng-Rao designed distance d_FR. Since d_FR is not smaller than the Goppa designed distance d_G, that algorithm can correct up to ⌊(d_G-1)/2⌋ errors. On the other hand, it has been considered to be evident that the original BMS algorithm (without voting) [1], [2] can correct up to ⌊(d_G-g-1)/2⌋ errors similarly to the basic algorithm by Skorobogatov-Vladut. But, is it true? In this short paper, we show that it is true, although we need a few remarks and some additional procedures for determining the Groebner basis of the error locator ideal exactly. In fact, as the basic algorithm gives a set of polynomials whose zero set contains the error locators as a subset, it cannot always give the exact error locators, unless the syndrome equation is solved to find the error values in addition.
Interval Neutrosophic Sets and Their Application in Multicriteria Decision Making Problems
Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong
2014-01-01
As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. Interval neutrosophic sets (INSs) have been proposed to address these issues with a set of numbers in the real unit interval rather than a single specific number. However, few reliable operations have been defined for INSs, and INS aggregation operators and decision-making methods are likewise lacking. To address this, the operations for INSs are defined in this paper and a comparison approach is put forward based on related research on interval-valued intuitionistic fuzzy sets (IVIFSs). On the basis of these operations and the comparison approach, two interval neutrosophic number aggregation operators are developed. A method for multicriteria decision-making problems is then explored using the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method. PMID:24695916
"Apologies" from pathologists: why, when, and how to say "sorry" after committing a medical error.
Dewar, Rajan; Parkash, Vinita; Forrow, Lachlan; Truog, Robert D
2014-05-01
How pathologists communicate an error is complicated by the absence of a direct physician-patient relationship. Using 2 examples, we elaborate on how other physician colleagues routinely play an intermediary role in our day-to-day transactions and in the communication of a pathologist error to the patient. The concept of a "dual-hybrid" mind-set in the intermediary physician and its role in representing the pathologists' viewpoint adequately is considered. In a dual-hybrid mind-set, the intermediary physician can align with the patients' philosophy and like the patient, consider the smallest deviation from norm to be an error. Alternatively, they might embrace the traditional physician philosophy and communicate only those errors that resulted in a clinically inappropriate outcome. Neither may effectively reflect the pathologists' interests. We propose that pathologists develop strategies to communicate errors that include considerations of meeting with the patients directly. Such interactions promote healing for the patient and are relieving to the well-intentioned pathologist.
Nishiura, K
1998-08-01
With the use of rapid serial visual presentation (RSVP), the present study investigated the cause of target intrusion errors and the functioning of monitoring processes. Eighteen students participated in Experiment 1, and 24 in Experiment 2. In Experiment 1, different target intrusion errors were found depending on the kind of letters: romaji, hiragana, and kanji. In Experiment 2, stimulus set size and context information were manipulated in an attempt to explore the cause of post-target intrusion errors. Results showed that as stimulus set size increased, post-target intrusion errors also increased, but contextual information did not affect the errors. Results concerning mean report probability indicated that increased allocation of attentional resources to the response-defining dimension was the cause of the errors. In addition, results concerning confidence ratings showed that monitoring of temporal and contextual information was extremely accurate, but this was not the case for stimulus information. These results suggest that attentional resources are different from monitoring resources.
Survival analysis with error-prone time-varying covariates: a risk set calibration approach
Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna
2010-01-01
Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, which demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard’s Health Professionals Follow-up Study (HPFS). PMID:20486928
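The ordinary regression-calibration step that the risk-set version refits within each risk set can be sketched as follows: in a validation sample where both the true exposure X and its error-prone surrogate W are observed, regress X on W and substitute E[X | W] for W in the main-study model. The Python sketch below illustrates only this calibration step, with simulated data and placeholder names; it is not the RRC algorithm or the HPFS analysis.

```python
# Hedged sketch of ordinary regression calibration: fit E[X | W] in a
# validation subsample, then replace the mismeasured exposure W by the
# calibrated expectation in the main-study analysis. Data are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Validation study: true exposure X and mismeasured surrogate W = X + error
n_val = 300
X_val = rng.gamma(shape=2.0, scale=1.0, size=n_val)
W_val = X_val + rng.normal(0.0, 0.7, size=n_val)

calib = LinearRegression().fit(W_val.reshape(-1, 1), X_val)

# Main study: only W observed; substitute the calibrated exposure E[X | W]
W_main = rng.gamma(shape=2.0, scale=1.0, size=1000) + rng.normal(0.0, 0.7, size=1000)
X_hat = calib.predict(W_main.reshape(-1, 1))   # this would feed into the Cox model
print("calibration slope: %.2f" % calib.coef_[0])
```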
Attard, Catherine R M; Beheregaray, Luciano B; Möller, Luciana M
2018-05-01
There has been remarkably little attention to using the high resolution provided by genotyping-by-sequencing (i.e., RADseq and similar methods) for assessing relatedness in wildlife populations. A major hurdle is the genotyping error, especially allelic dropout, often found in this type of data that could lead to downward-biased, yet precise, estimates of relatedness. Here, we assess the applicability of genotyping-by-sequencing for relatedness inferences given its relatively high genotyping error rate. Individuals of known relatedness were simulated under genotyping error, allelic dropout and missing data scenarios based on an empirical ddRAD data set, and their true relatedness was compared to that estimated by seven relatedness estimators. We found that an estimator chosen through such analyses can circumvent the influence of genotyping error, with the estimator of Ritland (Genetics Research, 67, 175) shown to be unaffected by allelic dropout and to be the most accurate when there is genotyping error. We also found that the choice of estimator should not rely solely on the strength of correlation between estimated and true relatedness as a strong correlation does not necessarily mean estimates are close to true relatedness. We also demonstrated how even a large SNP data set with genotyping error (allelic dropout or otherwise) or missing data still performs better than a perfectly genotyped microsatellite data set of tens of markers. The simulation-based approach used here can be easily implemented by others on their own genotyping-by-sequencing data sets to confirm the most appropriate and powerful estimator for their data.
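The dropout simulation described above can be sketched as follows: with some probability, one allele of a heterozygous SNP call fails, so the genotype is misread as a homozygote. The Python sketch below is an illustration under an assumed 0/1/2 genotype coding and an assumed dropout rate; it is not the authors' simulation pipeline.

```python
# Hedged sketch of simulating allelic dropout in a SNP genotype matrix before
# relatedness estimation: with probability `dropout_rate`, one allele of a
# heterozygote fails, so the genotype is misread as a homozygote. Genotypes
# are coded 0/1/2 (count of alternative alleles); rates are illustrative.
import numpy as np

def add_allelic_dropout(genotypes, dropout_rate, rng):
    g = genotypes.copy()
    het = g == 1
    dropped = het & (rng.random(g.shape) < dropout_rate)
    # The surviving allele is the reference or the alternative one with equal chance
    g[dropped] = rng.choice([0, 2], size=dropped.sum())
    return g

rng = np.random.default_rng(3)
true_geno = rng.choice([0, 1, 2], size=(50, 5000), p=[0.25, 0.5, 0.25])
noisy_geno = add_allelic_dropout(true_geno, dropout_rate=0.1, rng=rng)
print("heterozygotes lost:", int((true_geno == 1).sum() - (noisy_geno == 1).sum()))
```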
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
Mulliken, John B; LaBrie, Richard A
2012-02-01
Repair of unilateral cleft lip requires three-dimensional craftsmanship and an understanding of four-dimensional changes. Ninety-nine children with unilateral complete or incomplete cleft lip were measured by direct anthropometry following rotation-advancement repair (intraoperatively) and again in childhood. Changes in heminasal width, labial height, and labial width were analyzed, and measures were compared depending on whether the cleft was incomplete or complete and whether it involved the left or right side. Average heminasal width (sn-al) was set 1 mm less on the cleft side and measured only 0.7 mm less at 6 years. Labial height (sn-cphi) was slightly greater on the cleft side at repair and matched the noncleft side at follow-up. Vertical dimension (sbal-cphi) was slightly less at operation; the percent change was the same on both sides. Transverse labial width (cphi-ch) was set short on the cleft side and lengthened disproportionately, resulting in less than 1 mm difference at 6 years. All anthropometric dimensions grew less in complete cleft lips compared with incomplete forms; however, only labial height and width were significantly different. There were no disparities in nasolabial growth between left- and right-sided cleft lips. The cleft-side alar base drifts laterally and should be positioned slightly more medially and secured to nasalis or periosteum. Growth in labial height lags and, therefore, the repaired side should be equal to or slightly greater than the normal side, particularly in a complete labial cleft. Transverse labial width grows more on the cleft side; thus, the lateral Cupid's bow peak point can be marked closer to the commissure to match the labial height on the noncleft side. Therapeutic, IV.
Tavakkoli, Anna; Law, Ryan J; Bedi, Aarti O; Prabhu, Anoop; Hiatt, Tadd; Anderson, Michelle A; Wamsteker, Erik J; Elmunzer, B Joseph; Piraka, Cyrus R; Scheiman, James M; Elta, Grace H; Kwon, Richard S
2017-09-01
Endoscopic experience is known to correlate with outcomes of endoscopic mucosal resection (EMR), particularly complete resection of the polyp tissue. Whether specialist endoscopists can protect against incomplete polypectomy in the setting of known risk factors for incomplete resection (IR) is unknown. We aimed to characterize how specialist endoscopists may help to mitigate the risk of IR of large sessile polyps. This is a retrospective cohort study of patients who underwent EMR at the University of Michigan from January 1, 2006, to November 15, 2015. The primary outcome was endoscopist-reported polyp tissue remaining at the end of the initial EMR attempt. Specialist endoscopists were defined as endoscopists who receive tertiary referrals for difficult colonoscopy cases and completed at least 20 EMR colonic polyp resections over the study period. A total of 257 patients with 269 polyps were included in the study. IR occurred in 40 (16%) cases. IR was associated with polyp size ≥ 40 mm [adjusted odds ratio (aOR) 3.31, 95% confidence interval (CI) 1.38-7.93], flat/laterally spreading polyps (aOR 2.61, 95% CI 1.24-5.48), and difficulty lifting the polyp (aOR 11.0, 95% CI 2.66-45.3). A specialist endoscopist performing the initial EMR was protective against IR, even in the setting of risk factors for IR (aOR 0.13, 95% CI 0.04-0.41). IR is associated with polyp size ≥ 40 mm, flat and/or laterally spreading polyps, and difficulty lifting the polyp. A specialist endoscopist initiating the EMR was protective of IR.
Shi, Cheng-Min; Yang, Ziheng
2018-01-01
The phylogenetic relationships among extant gibbon species remain unresolved despite numerous efforts using morphological, behavioral, and genetic data and the sequencing of whole genomes. A major challenge in reconstructing the gibbon phylogeny is the radiative speciation process, which resulted in extremely short internal branches in the species phylogeny and extensive incomplete lineage sorting, with substantial gene-tree heterogeneity across the genome. Here, we analyze two genomic-scale data sets, with ∼10,000 putative noncoding and exonic loci, respectively, to estimate the species tree for the major groups of gibbons. We used the Bayesian full-likelihood method bpp under the multispecies coalescent model, which naturally accommodates incomplete lineage sorting and uncertainties in the gene trees. For comparison, we included three heuristic coalescent-based methods (mp-est, SVDQuartets, and astral) as well as concatenation. From both data sets, we infer the phylogeny for the four extant gibbon genera to be (Hylobates, (Nomascus, (Hoolock, Symphalangus))). We used simulation guided by the real data to evaluate the accuracy of the methods used. Astral, while not as efficient as bpp, performed well in estimation of the species tree even in the presence of excessive incomplete lineage sorting. Concatenation, mp-est and SVDQuartets were unreliable when the species tree contains very short internal branches. A likelihood ratio test of gene flow suggests a small amount of migration from Hylobates moloch to H. pileatus, while cross-genera migration is absent or rare. Our results highlight the utility of coalescent-based methods in addressing challenging species tree problems characterized by short internal branches and rampant gene tree-species tree discordance. PMID:29087487
Sensory stimulation augments the effects of massed practice training in persons with tetraplegia.
Beekhuizen, Kristina S; Field-Fote, Edelle C
2008-04-01
To compare functional changes and cortical neuroplasticity associated with hand and upper extremity use after massed (repetitive task-oriented practice) training, somatosensory stimulation, massed practice training combined with somatosensory stimulation, or no intervention, in persons with chronic incomplete tetraplegia. Participants were randomly assigned to 1 of 4 groups: massed practice training combined with somatosensory peripheral nerve stimulation (MP+SS), somatosensory peripheral nerve stimulation only (SS), massed practice training only (MP), and no intervention (control). University medical school setting. Twenty-four subjects with chronic incomplete tetraplegia. Intervention sessions were 2 hours per session, 5 days a week for 3 weeks. Massed practice training consisted of repetitive practice of functional tasks requiring skilled hand and upper-extremity use. Somatosensory stimulation consisted of median nerve stimulation with intensity set below motor threshold. Pre- and post-testing assessed changes in functional hand use (Jebsen-Taylor Hand Function Test), functional upper-extremity use (Wolf Motor Function Test), pinch grip strength (key pinch force), sensory function (monofilament testing), and changes in cortical excitation (motor evoked potential threshold). The 3 groups showed significant improvements in hand function after training. The MP+SS and SS groups had significant improvements in upper-extremity function and pinch strength compared with the control group, but only the MP+SS group had a significant change in sensory scores compared with the control group. The MP+SS and MP groups had greater change in threshold measures of cortical excitability. People with chronic incomplete tetraplegia obtain functional benefits from massed practice of task-oriented skills. Somatosensory stimulation appears to be a valuable adjunct to training programs designed to improve hand and upper-extremity function in these subjects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, H.; Han, C.; Choi, J.-Y.
2014-05-20
Characterizing a microlensing planet is done by modeling an observed lensing light curve. In this process, one is often confronted with solutions of different lensing parameters that result in similar light curves, causing difficulties in uniquely interpreting the lens system, and thus understanding the causes of different types of degeneracy is important. In this work, we show that incomplete coverage of a planetary perturbation can result in degenerate solutions even for events where the planetary signal is detected with a high level of statistical significance. We demonstrate the degeneracy for the observed event OGLE-2012-BLG-0455/MOA-2012-BLG-206. The peak of this high-magnification event (A_max ∼ 400) exhibits very strong deviation from a point-lens model, with Δχ² ≳ 4000 for data sets with a total of 6963 measurements. From detailed modeling of the light curve, we find that the deviation can be explained by four distinct solutions, i.e., two very different sets of solutions, each with a twofold degeneracy. While the twofold (so-called close/wide) degeneracy is well understood, the degeneracy between the radically different solutions was not previously known. The model light curves of this degeneracy differ substantially in the parts that were not covered by observation, indicating that the degeneracy is caused by the incomplete coverage of the perturbation. It is expected that the frequency of the degeneracy introduced in this work will be greatly reduced with the improvement of the current lensing survey and follow-up experiments and the advent of new surveys.
A preliminary taxonomy of medical errors in family practice
Dovey, S; Meyers, D; Phillips, R; Green, L; Fryer, G; Galliher, J; Kappus, J; Grob, P
2002-01-01
Objective: To develop a preliminary taxonomy of primary care medical errors. Design: Qualitative analysis to identify categories of error reported during a randomized controlled trial of computer and paper reporting methods. Setting: The National Network for Family Practice and Primary Care Research. Participants: Family physicians. Main outcome measures: Medical error category, context, and consequence. Results: Forty two physicians made 344 reports: 284 (82.6%) arose from healthcare systems dysfunction; 46 (13.4%) were errors due to gaps in knowledge or skills; and 14 (4.1%) were reports of adverse events, not errors. The main subcategories were: administrative failures (102; 30.9% of errors), investigation failures (82; 24.8%), treatment delivery lapses (76; 23.0%), miscommunication (19; 5.8%), payment systems problems (4; 1.2%), error in the execution of a clinical task (19; 5.8%), wrong treatment decision (14; 4.2%), and wrong diagnosis (13; 3.9%). Most reports were of errors that were recognized and occurred in reporters' practices. Affected patients ranged in age from 8 months to 100 years, were of both sexes, and represented all major US ethnic groups. Almost half the reports were of events which had adverse consequences. Ten errors resulted in patients being admitted to hospital and one patient died. Conclusions: This medical error taxonomy, developed from self-reports of errors observed by family physicians during their routine clinical practice, emphasizes problems in healthcare processes and acknowledges medical errors arising from shortfalls in clinical knowledge and skills. Patient safety strategies with most effect in primary care settings need to be broader than the current focus on medication errors. PMID:12486987
A description of medication errors reported by pharmacists in a neonatal intensive care unit.
Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila
2017-02-01
Background Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar based on a standard electronic system reported by pharmacists. Setting Neonatal intensive care unit, Doha, Qatar. Method This was a retrospective cross-sectional study on medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure Data collected included patient information, and incident details including error category, medications involved, and follow-up completed. Results A total of 201 NICU pharmacists-reported medication errors were submitted during the study period. All reported errors did not reach the patient and did not cause harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports with the anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU as these were responsible for the greatest numbers of medication errors.
Measuring physical activity during pregnancy.
Harrison, Cheryce L; Thompson, Russell G; Teede, Helena J; Lombard, Catherine B
2011-03-21
Currently, little is known about physical activity patterns in pregnancy with prior estimates predominantly based on subjective assessment measures that are prone to error. Given the increasing obesity rates and the importance of physical activity in pregnancy, we evaluated the relationship and agreement between subjective and objective physical activity assessment tools to inform researchers and clinicians on optimal assessment of physical activity in pregnancy. 48 pregnant women between 26-28 weeks gestation were recruited. The Yamax pedometer and Actigraph accelerometer were worn for 5-7 days under free living conditions and thereafter the International Physical Activity Questionnaire (IPAQ) was completed. IPAQ and pedometer estimates of activity were compared to the more robust and accurate accelerometer data. Of 48 women recruited, 30 women completed the study (mean age: 33.6 ± 4.7 years; mean BMI: 31.2 ± 5.1 kg/m²) and 18 were excluded (failure to wear [n = 8] and incomplete data [n = 10]). The accelerometer and pedometer correlated significantly on estimation of daily steps (ρ = 0.69, p < 0.01) and had good absolute agreement with low systematic error (mean difference: 505 ± 1498 steps/day). Accelerometer and IPAQ estimates of total, light and moderate Metabolic Equivalent minutes/day (MET min⁻¹ day⁻¹) were not significantly correlated and there was poor absolute agreement. Relative to the accelerometer, the IPAQ underpredicted daily total METs (105.76 ± 259.13 min⁻¹ day⁻¹) and light METs (255.55 ± 128.41 min⁻¹ day⁻¹) and overpredicted moderate METs (-112.25 ± 166.41 min⁻¹ day⁻¹). Compared with the accelerometer, the pedometer appears to provide a reliable estimate of physical activity in pregnancy, whereas the subjective IPAQ measure performed less accurately in this setting. Future research measuring activity in pregnancy should optimally encompass objective measures of physical activity. Australian New Zealand Clinical Trial Registry Number: ACTRN12608000233325. Registered 7/5/2008.
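The agreement statistics reported above combine a rank correlation with a Bland-Altman-style mean difference and limits of agreement; a hedged Python sketch with synthetic step counts (not the study data) is shown below.

```python
# Hedged sketch of the agreement statistics above: Spearman's rho between
# daily-step estimates from two devices, plus the mean difference and
# Bland–Altman limits of agreement. The step counts are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(11)
accelerometer_steps = rng.normal(7000, 2000, size=30).clip(min=0)
pedometer_steps = accelerometer_steps + rng.normal(500, 1500, size=30)

rho, p_value = spearmanr(accelerometer_steps, pedometer_steps)
diff = pedometer_steps - accelerometer_steps
mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
print(f"rho = {rho:.2f} (p = {p_value:.3f})")
print(f"mean difference = {mean_diff:.0f} ± {sd_diff:.0f} steps/day")
print(f"95% limits of agreement: {mean_diff - 1.96*sd_diff:.0f} to {mean_diff + 1.96*sd_diff:.0f}")
```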
Usability study of a computer-based self-management system for older adults with chronic diseases.
Or, Calvin; Tao, Da
2012-11-08
Usability can influence patients' acceptance and adoption of a health information technology. However, little research has been conducted to study the usability of a self-management health care system, especially one geared toward elderly patients. This usability study evaluated a new computer-based self-management system interface for older adults with chronic diseases, using a paper prototype approach. Fifty older adults with different chronic diseases participated. Two usability evaluation methods were involved: (1) a heuristics evaluation and (2) end-user testing with a think-aloud testing method, audio recording, videotaping, and interviewing. A set of usability metrics was employed to determine the overall system usability, including task incompletion rate, task completion time, frequency of error, frequency of help, satisfaction, perceived usefulness, and perceived ease of use. Interviews were used to elicit participants' comments on the system design. The quantitative data were analyzed using descriptive statistics and the qualitative data were analyzed for content. The participants were able to perform the predesigned self-management tasks with the current system design and they expressed mostly positive responses about the perceived usability measures regarding the system interface. However, the heuristics evaluation, performance measures, and interviews revealed a number of usability problems related to system navigation, information search and interpretation, information presentation, and readability. Design recommendations for further system interface modifications were discussed. This study verified the usability of the self-management system developed for older adults with chronic diseases. Also, we demonstrated that our usability evaluation approach could be used to quickly and effectively identify usability problems in a health care information system at an early stage of the system development process using a paper prototype. Conducting a usability evaluation is an essential step in system development to ensure that the system features match the users' true needs, expectations, and characteristics, and also to minimize the likelihood of the users committing user errors and having difficulties using the system.
Factors affecting the concordance between orthologous gene trees and species tree in bacteria.
Castillo-Ramírez, Santiago; González, Víctor
2008-10-30
As originally defined, orthologous genes implied a reflection of the history of the species. In recent years, many studies have examined the concordance between orthologous gene trees and species trees in bacteria. These studies have produced contradictory results that may have been influenced by orthologous gene misidentification and artefactual phylogenetic reconstructions. Here, using a method that allows the detection and exclusion of false positives during identification of orthologous genes, we address the question of whether putative orthologous genes within bacteria really reflect the history of the species. We identified a set of 370 orthologous genes from the bacterial order Rhizobiales. Although manifesting strong vertical signal, almost every orthologous gene had a distinct phylogeny, and the most common topology among the orthologous gene trees did not correspond with the best estimate of the species tree. However, each orthologous gene tree shared an average of 70% of its bipartitions with the best estimate of the species tree. Stochastic error related to gene size affected the concordance between the best estimate of the species tree and the orthologous gene trees, although this effect was weak and distributed unevenly among the functional categories. The nodes showing the greatest discordance were those defined by the shortest internal branches in the best estimate of the species tree. Moreover, a clear bias was evident with respect to the function of the orthologous genes, and the degree of divergence among the orthologous genes appeared to be related to their functional classification. Orthologous genes do not reflect the history of the species when taken as individual markers, but they do when taken as a whole. Stochastic error affected the concordance of orthologous genes with the species tree, albeit weakly. We conclude that two important biological causes of discordance among orthologous genes are incomplete lineage sorting and functional restriction.
Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; Sigloch, Karin
2016-11-01
Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
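The empirical-noise step described above amounts to fitting a log-normal distribution to observed decorrelation values D = 1 - CC and then evaluating new misfits under that fit. The Python sketch below illustrates the idea with synthetic D values and scipy's lognorm; it is not the authors' likelihood implementation.

```python
# Hedged sketch: fit a log-normal distribution to observed decorrelation
# values D = 1 - CC and evaluate the log-likelihood of new measurements under
# the fit. The D values here are synthetic placeholders, not the 900+
# reference solutions described above.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(5)
D_reference = rng.lognormal(mean=-2.5, sigma=0.6, size=900)   # synthetic D = 1 - CC

# Fit a log-normal with the location parameter fixed at zero
shape, loc, scale = lognorm.fit(D_reference, floc=0.0)

D_new = np.array([0.05, 0.10, 0.20])
log_likelihood = lognorm.logpdf(D_new, shape, loc=loc, scale=scale).sum()
print(f"fitted sigma = {shape:.2f}, median D = {scale:.3f}, logL = {log_likelihood:.2f}")
```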
Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses
Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy
2015-01-01
Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
Decryption with incomplete cyphertext and multiple-information encryption in phase space.
Xu, Xiaobin; Wu, Quanying; Liu, Jun; Situ, Guohai
2016-01-25
Recently, we have demonstrated that information encryption in phase space offers security enhancement over the traditional encryption schemes operating in real space. However, there is also an important issue with this technique: the increased cost of data transmission and storage. To address this issue, here we investigate the problem of decryption using incomplete cyphertext. We show that the analytic solution under the traditional framework sets the lower limit of decryption performance. More importantly, we demonstrate that one just needs a small amount of cyphertext to recover the plaintext signal faithfully using compressive sensing, meaning that the amount of data that needs to be transmitted and stored can be significantly reduced. This leads to multiple information encryption so that we can use the system bandwidth more effectively. We also provide an optical experimental result to demonstrate the plaintext recovered in phase space.
Genetic mapping in the presence of genotyping errors.
Cartwright, Dustin A; Troggio, Michela; Velasco, Riccardo; Gutin, Alexander
2007-08-01
Genetic maps are built using the genotypes of many related individuals. Genotyping errors in these data sets can distort genetic maps, especially by inflating the distances. We have extended the traditional likelihood model used for genetic mapping to include the possibility of genotyping errors. Each individual marker is assigned an error rate, which is inferred from the data, just as the genetic distances are. We have developed a software package, called TMAP, which uses this model to find maximum-likelihood maps for phase-known pedigrees. We have tested our methods using a data set in Vitis and on simulated data and confirmed that our method dramatically reduces the inflationary effect caused by increasing the number of markers and leads to more accurate orders.
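A simple way to write such a per-marker error model (a hedged illustration, not necessarily TMAP's exact formulation) treats the observed genotype as equal to the true genotype with probability 1 - ε_m and as any of the other classes otherwise:

```latex
% Hedged illustration of a per-marker genotyping-error model (not necessarily
% TMAP's exact formulation): marker m has error rate \epsilon_m, and an
% observed genotype O is related to the true genotype G by
P(O = o \mid G = g) =
\begin{cases}
1 - \epsilon_m, & o = g, \\
\epsilon_m / (k - 1), & o \neq g,
\end{cases}
```

where k is the number of genotype classes at marker m; these emission terms enter the usual map likelihood, and the ε_m are estimated jointly with the genetic distances.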
Njuguna, Henry N; Caselton, Deborah L; Arunga, Geoffrey O; Emukule, Gideon O; Kinyanjui, Dennis K; Kalani, Rosalia M; Kinkade, Carl; Muthoka, Phillip M; Katz, Mark A; Mott, Joshua A
2014-12-24
For disease surveillance, manual data collection using paper-based questionnaires can be time consuming and prone to errors. We introduced smartphone data collection to replace paper-based data collection for an influenza sentinel surveillance system in four hospitals in Kenya. We compared the quality, cost and timeliness of data collection between the smartphone data collection system and the paper-based system. Since 2006, the Kenya Ministry of Health (MoH) with technical support from the Kenya Medical Research Institute/Centers for Disease Control and Prevention (KEMRI/CDC) conducted hospital-based sentinel surveillance for influenza in Kenya. In May 2011, the MOH replaced paper-based collection with an electronic data collection system using Field Adapted Survey Toolkit (FAST) on HTC Touch Pro2 smartphones at four sentinel sites. We compared 880 paper-based questionnaires dated Jan 2010-Jun 2011 and 880 smartphone questionnaires dated May 2011-Jun 2012 from the four surveillance sites. For each site, we compared the quality, cost and timeliness of each data collection system. Incomplete records were more likely seen in data collected using pen-and-paper compared to data collected using smartphones (adjusted incidence rate ratio (aIRR) 7, 95% CI: 4.4-10.3). Errors and inconsistent answers were also more likely to be seen in data collected using pen-and-paper compared to data collected using smartphones (aIRR: 25, 95% CI: 12.5-51.8). Smartphone data was uploaded into the database in a median time of 7 days while paper-based data took a median of 21 days to be entered (p < 0.01). It cost USD 1,501 (9.4%) more to establish the smartphone data collection system ($17,500) than the pen-and-paper system (USD $15,999). During two years, however, the smartphone data collection system was $3,801 (7%) less expensive to operate ($50,200) when compared to pen-and-paper system ($54,001). Compared to paper-based data collection, an electronic data collection system produced fewer incomplete data, fewer errors and inconsistent responses and delivered data faster. Although start-up costs were higher, the overall costs of establishing and running the electronic data collection system were lower compared to paper-based data collection system. Electronic data collection using smartphones has potential to improve timeliness, data integrity and reduce costs.
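For orientation, an unadjusted incidence rate ratio with a log-scale Wald confidence interval can be computed as in the Python sketch below; the counts are placeholders, and the aIRRs quoted above come from adjusted models.

```python
# Hedged sketch of an unadjusted incidence rate ratio (IRR) with a Wald-type
# 95% CI, of the kind used to compare error counts between paper-based and
# smartphone data collection. The counts below are placeholders.
import math

def incidence_rate_ratio(events_a, denom_a, events_b, denom_b):
    """IRR of group A relative to group B with a log-scale Wald 95% CI."""
    irr = (events_a / denom_a) / (events_b / denom_b)
    se_log = math.sqrt(1.0 / events_a + 1.0 / events_b)
    lo = math.exp(math.log(irr) - 1.96 * se_log)
    hi = math.exp(math.log(irr) + 1.96 * se_log)
    return irr, (lo, hi)

# Placeholder counts: incomplete records per 880 questionnaires in each arm
irr, ci = incidence_rate_ratio(events_a=140, denom_a=880, events_b=20, denom_b=880)
print(f"IRR = {irr:.1f}, 95% CI {ci[0]:.1f}-{ci[1]:.1f}")
```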
Factors affecting the perception of Korean-accented American English
NASA Astrophysics Data System (ADS)
Cho, Kwansun; Harris, John G.; Shrivastav, Rahul
2005-09-01
This experiment examines the relative contribution of two factors, intonation and articulation errors, to the perception of foreign accent in Korean-accented American English. Ten native speakers of Korean and ten native speakers of American English were asked to read ten English sentences. These sentences were then modified using high-quality speech resynthesis techniques [STRAIGHT, Kawahara et al., Speech Commun. 27, 187-207 (1999)] to generate four sets of stimuli. In the first two sets of stimuli, the intonation patterns of the Korean speakers and American speakers were switched with one another. The articulatory errors for each speaker were not modified. In the final two sets, the sentences from the Korean and American speakers were resynthesized without any modifications. Fifteen listeners were asked to rate all the stimuli for the degree of foreign accent. Preliminary results show that, for native speakers of American English, articulation errors may play a greater role in the perception of foreign accent than errors in intonation patterns. [Work supported by KAIM.]
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately zero, and 2) a systematic error that would be detected by the control program with 90% probability should not be larger than half the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
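As an arithmetic illustration of the quoted limits (our computation, assuming analytical and intra-individual variation combine in quadrature; this is not a derivation given by the study):

```latex
% Combined analytical and intra-individual standard deviation (our notation).
\[
\sigma_{\mathrm{comb}} = \sqrt{\sigma_A^{2} + \sigma_I^{2}},
\qquad
\sigma_A \le 0.15\,\sigma_I
\;\Rightarrow\;
\sigma_{\mathrm{comb}} \le \sqrt{1 + 0.15^{2}}\,\sigma_I \approx 1.011\,\sigma_I .
\]
```

Requirement 2) then caps the systematic error that must be detected with 90% probability at half of this combined standard deviation.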
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
How well does multiple OCR error correction generalize?
NASA Astrophysics Data System (ADS)
Lund, William B.; Ringger, Eric K.; Walker, Daniel D.
2013-12-01
As the digitization of historical documents, such as newspapers, becomes more common, the archive patron's need for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. demonstrating the applicability of novel methods for correcting optical character recognition (OCR) output on disparate data sets, including a new synthetic training set; 2. enhancing the correction algorithm with novel features; and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRFs) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate (WER) on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those obtained using the entire training corpus, effectively reducing both the complexity of the training process and the size of the learned correction model.
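To make the feature design concrete, here is a hedged, illustrative sketch of the kind of per-token feature dictionary a CRF-based corrector could use; the function name, feature names and toy lexicon are ours, not the paper's, although lexicon membership and recurrence of hypothesis tokens are the feature families described above.

```python
# Hedged sketch: per-token features for an OCR correction CRF.
# `hypotheses` is a list of candidate strings for the tokens of one line.
def token_features(hypotheses, lexicon, index):
    """Build a feature dictionary for the OCR hypothesis at `index`."""
    token = hypotheses[index]
    return {
        "token.lower": token.lower(),
        "token.in_lexicon": token.lower() in lexicon,   # lexical feature
        "token.recurs": hypotheses.count(token) > 1,    # recurrence feature
        "token.is_digit": token.isdigit(),
        "token.length": len(token),
    }

# Toy example: a misrecognized "the" alongside two correct hypotheses.
print(token_features(["tbe", "the", "the"], {"the", "cat"}, 0))
```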
The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.
Hutton, Kevin; Ding, Qian; Wellman, Gregory
2017-02-24
Adoption of bar-coding technology has risen drastically in U.S. health systems in the past decade. However, few studies have addressed the impact of bar-coding technology with strong prospective methodologies, and the research that has been conducted covers both in-pharmacy and bedside implementations. This systematic literature review examines the effectiveness of bar-coding technology in preventing medication errors, and the types of medication errors that may be prevented, in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with outcomes other than medication errors, such as efficiency or workarounds, were excluded. The outcomes were measured and findings were summarized for each retained study. A total of 2603 articles were initially identified, and 10 studies, which used prospective before-and-after study designs, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining study was conducted in the United Kingdom. One article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly the targeted wrong-dose, wrong-drug, wrong-patient, unauthorized-drug, and wrong-route errors.
Set-up uncertainties: online correction with X-ray volume imaging.
Kataria, Tejinder; Abhishek, Ashu; Chadha, Pranav; Nandigam, Janardhan
2011-01-01
To determine interfractional three-dimensional set-up errors using X-ray volumetric imaging (XVI). Between December 2007 and August 2009, 125 patients underwent image-guided radiotherapy using online XVI. After matching of reference and acquired volume-view images, set-up errors in the three translational directions were recorded and corrected online before treatment each day. Mean displacements, population systematic (Σ) and random (σ) errors were calculated and analyzed using SPSS (v16) software. The optimum clinical target volume (CTV) to planning target volume (PTV) margin was calculated using Van Herk's (2.5Σ + 0.7σ) and Stroom's (2Σ + 0.7σ) formulas. Patients were grouped into 4 cohorts, namely brain, head and neck, thorax, and abdomen-pelvis. The mean vector displacements recorded were 0.18 cm, 0.15 cm, 0.36 cm, and 0.35 cm for brain, head and neck, thorax, and abdomen-pelvis, respectively. Analysis of individual mean set-up errors revealed good agreement with the proposed 0.3 cm isotropic margins for brain and 0.5 cm isotropic margins for head and neck. Similarly, the proposed 0.5 cm circumferential and 1 cm craniocaudal margins were in agreement with the thorax and abdomen-pelvis cases. The calculated mean displacements were well within the CTV-PTV margin estimates of Van Herk (90% population coverage to a minimum of 95% of the prescribed dose) and Stroom (99% target volume coverage by 95% of the prescribed dose). Employing these individualized margins in a particular cohort ensures target coverage comparable to that described in the literature, which is further improved if XVI-aided set-up error detection and correction is used before treatment.
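A minimal sketch of the two margin recipes quoted above; the formulas are taken from the abstract, but the per-cohort error values below are illustrative placeholders, not the study's measured errors.

```python
# Hedged sketch: CTV-to-PTV margin recipes (Van Herk and Stroom).
# sigma_sys (Σ) and sigma_rand (σ) are population systematic and random
# set-up errors, both in cm.

def van_herk_margin(sigma_sys, sigma_rand):
    """Van Herk margin: 2.5*Σ + 0.7*σ."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

def stroom_margin(sigma_sys, sigma_rand):
    """Stroom margin: 2*Σ + 0.7*σ."""
    return 2.0 * sigma_sys + 0.7 * sigma_rand

if __name__ == "__main__":
    # Hypothetical cohort errors (cm), for illustration only.
    cohorts = {"brain": (0.08, 0.10), "head-neck": (0.12, 0.15)}
    for name, (sig_sys, sig_rand) in cohorts.items():
        print(name,
              round(van_herk_margin(sig_sys, sig_rand), 2),
              round(stroom_margin(sig_sys, sig_rand), 2))
```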
Parameter Estimation in Rasch Models for Examinee-Selected Items
ERIC Educational Resources Information Center
Liu, Chen-Wei; Wang, Wen-Chung
2017-01-01
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Chemical name extraction based on automatic training data generation and rich feature set.
Yan, Su; Spangler, W Scott; Chen, Ying
2013-01-01
The automation of extracting chemical names from text has significant value for biomedical and life science research. A major barrier in this task is the difficulty of obtaining a sizable, good-quality data set to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features that do not rely on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.
NASA Astrophysics Data System (ADS)
Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin
2018-02-01
In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions that can only account for very limited classes of images. A more reasonable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified that the proposed reconstruction method offers promising capabilities over conventional regularization.
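One plausible formalization of the constrained program described above, written in our own notation (the paper's exact data-fidelity constraint may differ, e.g. strict equality rather than a tolerance):

```latex
% Our notation: x = image, A = CT system matrix, b = measured (incomplete)
% projection data, \Psi_{\mathrm{BM}}(x) = block-matching transform coefficients,
% \varepsilon = data-fidelity tolerance.
\[
\min_{x \ge 0} \; \bigl\| \Psi_{\mathrm{BM}}(x) \bigr\|_{1}
\quad \text{subject to} \quad \lVert A x - b \rVert_{2} \le \varepsilon .
\]
```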
NASA Astrophysics Data System (ADS)
Gavrishchaka, V. V.; Ganguli, S. B.
2001-12-01
Reliable forecasting of rare events in a complex dynamical system is a challenging problem that is important for many practical applications. Due to the nature of rare events, the data set available for construction of a statistical and/or machine learning model is often very limited and incomplete. Therefore, many widely used approaches, including such robust algorithms as neural networks, can easily become inadequate for rare event prediction. Moreover, in many practical cases models with high-dimensional inputs are required. This limits applications of existing rare event modeling techniques (e.g., extreme value theory) that focus on univariate cases, since these approaches are not easily extended to multivariate cases. The support vector machine (SVM) is a machine learning system that can provide optimal generalization using very limited and incomplete training data sets and can efficiently handle high-dimensional data. These features may make the SVM suitable for modeling rare events in some applications. We have applied an SVM-based system to the problem of large-amplitude substorm prediction and extreme event forecasting in stock and currency exchange markets. Encouraging preliminary results will be presented and other possible applications of the system will be discussed.
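A hedged, minimal sketch of an SVM classifier set up for a small, imbalanced (rare-event) training set; the synthetic data, feature dimension and class-weighting choice are our assumptions and are not taken from the abstract.

```python
# Hedged sketch: SVM on a small, high-dimensional, imbalanced data set.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                      # 200 samples, 50 features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 1.8).astype(int)     # minority class, roughly 5% of samples

model = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf", C=1.0, class_weight="balanced"),  # reweight the rare class
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```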
Muessig, L; Hauser, J; Wills, T J; Cacucci, F
2016-08-01
Place cells are hippocampal pyramidal cells that are active when an animal visits a restricted area of the environment, and collectively their activity constitutes a neural representation of space. Place cell populations in the adult rat hippocampus display fundamental properties consistent with an associative memory network: the ability to 1) generate new and distinct spatial firing patterns when encountering novel spatial contexts or changes in sensory input ("remapping") and 2) reinstate previously stored firing patterns when encountering a familiar context, including on the basis of an incomplete/degraded set of sensory cues ("pattern completion"). To date, it is unknown when these spatial memory responses emerge during brain development. Here, we show that, from the age of first exploration (postnatal day 16) onwards, place cell populations already exhibit these key features: they generate new representations upon exposure to a novel context and can reactivate familiar representations on the basis of an incomplete set of sensory cues. These results demonstrate that, as early as exploratory behaviors emerge, and despite the absence of an adult-like grid cell network, the developing hippocampus processes incoming sensory information as an associative memory network. © The Author 2016. Published by Oxford University Press.
Kinematic markers dissociate error correction from sensorimotor realignment during prism adaptation.
O'Shea, Jacinta; Gaveau, Valérie; Kandel, Matthieu; Koga, Kazuo; Susami, Kenji; Prablanc, Claude; Rossetti, Yves
2014-03-01
This study investigated the motor control mechanisms that enable healthy individuals to adapt their pointing movements during prism exposure to a rightward optical shift. In the prism adaptation literature, two processes are typically distinguished. Strategic motor adjustments are thought to drive the pattern of rapid endpoint error correction typically observed during the early stage of prism exposure. This is distinguished from so-called 'true sensorimotor realignment', normally measured with a different pointing task, at the end of prism exposure, which reveals a compensatory leftward 'prism after-effect'. Here, we tested whether each mode of motor compensation - strategic adjustments versus 'true sensorimotor realignment' - could be distinguished, by analyzing patterns of kinematic change during prism exposure. We hypothesized that fast feedforward versus slower feedback error corrective processes would map onto two distinct phases of the reach trajectory. Specifically, we predicted that feedforward adjustments would drive rapid compensation of the initial (acceleration) phase of the reach, resulting in the rapid reduction of endpoint errors typically observed early during prism exposure. By contrast, we expected visual-proprioceptive realignment to unfold more slowly and to reflect feedback influences during the terminal (deceleration) phase of the reach. The results confirmed these hypotheses. Rapid error reduction during the early stage of prism exposure was achieved by trial-by-trial adjustments of the motor plan, which were proportional to the endpoint error feedback from the previous trial. By contrast, compensation of the terminal reach phase unfolded slowly across the duration of prism exposure. Even after 100 trials of pointing through prisms, adaptation was incomplete, with participants continuing to exhibit a small rightward shift in both the reach endpoints and in the terminal phase of reach trajectories. Individual differences in the degree of adaptation of the terminal reach phase predicted the magnitude of prism after-effects. In summary, this study identifies distinct kinematic signatures of fast strategic versus slow sensorimotor realignment processes, which combine to adjust motor performance to compensate for a prismatic shift. © 2013 Elsevier Ltd. All rights reserved.
Rethinking big data: A review on the data quality and usage issues
NASA Astrophysics Data System (ADS)
Liu, Jianzheng; Li, Jie; Li, Weifeng; Wu, Jiansheng
2016-05-01
The recent explosion of big data publications has well documented the rise of big data and its ongoing prevalence. Different types of "big data" have emerged and have greatly enriched spatial information sciences and related fields in terms of breadth and granularity. Studies that were difficult to conduct in the past due to limited data availability can now be carried out. However, big data brings many "big errors" in data quality and data usage, and it cannot be used as a substitute for sound research design and solid theories. We identified and summarized the problems faced by current big data studies with regard to data collection, processing and analysis: inauthentic data collection, information incompleteness and noise, unrepresentativeness, consistency and reliability issues, and ethical issues. Cases of empirical studies are provided as evidence for each problem. We propose that big data research should closely follow good scientific practice to provide reliable and scientific "stories", as well as explore and develop techniques and methods to mitigate or rectify the "big errors" brought by big data.
NASA Astrophysics Data System (ADS)
Shukri, S. Ahmad; Millar, R.; Gratton, G.; Garner, M.; Noh, H. Mohd
2017-12-01
Documentation errors and human errors are often claimed to be contributory factors in aircraft maintenance mistakes. This paper highlights the preliminary results of the third phase of a four-phase research project on the communication media utilised in an aircraft maintenance organisation. The second phase examined the probability of success or failure of 60 subjects in completing a task; in this third phase, the same subjects were interviewed immediately after completing the task using the Root Cause Analysis (RCA) method. The root cause of the subjects' inability to finish the task when using only the written manual was found to be the absence of diagrams. However, haste was identified as the root cause of task incompletion when both the manual and the diagram were given to the participants. Those who were able to complete the task succeeded by referring to both the manual and the diagram simultaneously.
de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique
2016-10-01
The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcomes data with possibly informative intermittent missing items. A simulation study was performed in order to assess and compare the performance of classical test theory and the Rasch model in terms of bias, control of the type I error, and power of the test of the time effect. The type I error was controlled for both classical test theory and the Rasch model, whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, the Rasch model remained unbiased and displayed higher power than classical test theory. The Rasch model performed better than the classical test theory approach for the analysis of longitudinal patient-reported outcomes with possibly informative intermittent missing items, mainly in terms of power. This study highlights the interest of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcomes data. © The Author(s) 2013.
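For reference, the dichotomous Rasch model underlying the comparison can be written as follows (standard formulation in our notation; the longitudinal models in the study additionally include a time effect on the latent trait, which is not shown here):

```latex
% Standard dichotomous Rasch model: \theta_i = latent trait of patient i,
% b_j = difficulty of item j.
\[
P\bigl(X_{ij} = 1 \mid \theta_i, b_j\bigr)
  = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)} .
\]
```

The classical test theory score, by contrast, is simply the (rescaled) sum of the observed item responses, which is why missing items affect it more directly.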
Multidisciplinary optimization of an HSCT wing using a response surface methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giunta, A.A.; Grossman, B.; Mason, W.H.
1994-12-31
Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
A closed-chamber method to measure greenhouse gas fluxes from dry aquatic sediments
NASA Astrophysics Data System (ADS)
Lesmeister, Lukas; Koschorreck, Matthias
2017-06-01
Recent research indicates that greenhouse gas (GHG) emissions from dry aquatic sediments are a relevant process in the freshwater carbon cycle. However, fluxes are difficult to measure because of the often rocky substrate and the dynamic nature of the habitat. Here we tested the performance of different materials for sealing a closed chamber to stony ground, in both laboratory and field experiments. Using on-site material consistently resulted in elevated fluxes. The artefact was caused both by outgassing of the material and by gas production; its magnitude was site dependent, with the measured CO2 flux increasing by between 10 and 208%. Errors due to incomplete sealing proved to be more severe than errors due to non-inert sealing material. Pottery clay as sealing material provided a tight seal between the chamber and the ground, and no production of gases was detected. With this approach it is possible to obtain reliable gas fluxes from hard-substrate sites without using a permanent collar. Our test experiments confirmed that CO2 fluxes from dry aquatic sediments are similar to CO2 fluxes from terrestrial soils.
Processing medical data: a systematic review
2013-01-01
Background Medical data recording is one of the basic clinical tools. The Electronic Health Record (EHR) is important for data processing, communication, efficient and effective access to patient information, confidentiality, and ethical and/or legal issues. Clinical records promote and support communication among service providers and hence raise the quality of healthcare. The quality of records reflects the quality of care offered to patients. Methods Qualitative analysis was undertaken for this systematic review. We reviewed 40 materials published from 1999 to 2013, retrieved from databases including ovidMEDLINE and ovidEMBASE. Two reviewers independently screened materials on medical data recording, documentation, and information processing and communication. Finally, all selected references were summarized, reconciled and compiled into one document. Result Patients have died and/or suffered as a result of poor-quality medical records. Electronic health records minimize errors and save time and money otherwise wasted on processing medical data. Conclusion Many countries report incompleteness, inappropriateness and illegibility of records. Therefore, creating awareness of the magnitude of the problem is of paramount importance, as the availability of correct patient information has great potential for reducing errors and supporting care. PMID:24107106
NASA Astrophysics Data System (ADS)
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
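A hedged sketch of one way such a prediction model could be set up; the choice of regression model, the 6-dimensional pose parameterization and the synthetic data are our assumptions, since the abstract does not commit to a specific model.

```python
# Hedged sketch: learn a mapping from a CV pose-measurement vector to the
# expected measurement error, using synthetic data in place of real
# CV/ground-truth pairs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
poses = rng.uniform(-1.0, 1.0, size=(500, 6))     # x, y, z, roll, pitch, yaw (synthetic)
errors = 0.02 + 0.05 * np.abs(poses[:, 5]) + rng.normal(0, 0.005, 500)

X_train, X_test, y_train, y_test = train_test_split(poses, errors, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out poses:", round(model.score(X_test, y_test), 3))
```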
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Common errors in multidrug-resistant tuberculosis management.
Monedero, Ignacio; Caminero, Jose A
2014-02-01
Multidrug-resistant tuberculosis (MDR-TB), defined as resistance to at least rifampicin and isoniazid, has an increasing burden and threatens TB control. Diagnosis is limited and usually delayed, while treatment is long lasting, toxic and poorly effective. MDR-TB management in scarce-resource settings is demanding; however, it is feasible and extremely necessary. In these settings, cure rates do not usually exceed 60-70%, and MDR-TB management is novel for many TB programs. In this challenging scenario, both clinical and programmatic errors are likely to occur. The majority of these errors may be prevented or alleviated with appropriate and timely training, in addition to uninterrupted procurement of high-quality drugs, updated national guidelines and laws, and an overall improvement in management capacities. As long as new diagnostic tools and shorter, less toxic treatments remain unavailable in developing countries, MDR-TB management will remain complex in scarce-resource settings. Focusing special attention on the common errors in diagnosis, regimen design and, especially, treatment delivery may benefit patients and programs working with the current, outdated tools. The present article is a compilation of typical errors repeatedly observed by the authors in a wide range of countries during technical assistance missions and trainings.
A regularization corrected score method for nonlinear regression models with covariate error.
Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna
2013-03-01
Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.
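For intuition about why ignoring covariate error biases estimates, the textbook attenuation result for simple linear regression is useful; this is background in our notation, not the paper's corrected-score estimator, which handles general nonlinear likelihood models.

```latex
% Classical additive measurement error and the resulting attenuation of the
% naive slope in simple linear regression (standard result, our notation).
\[
W = X + U, \qquad \mathbb{E}[U] = 0, \quad U \perp X,
\qquad
\hat{\beta}_{\mathrm{naive}} \;\xrightarrow{\;p\;}\; \lambda \beta,
\qquad
\lambda = \frac{\sigma_X^{2}}{\sigma_X^{2} + \sigma_U^{2}} < 1 .
\]
```

Regressing the outcome on the error-prone surrogate W thus shrinks the estimated slope toward zero; corrected-score methods remove this kind of bias without assuming a distribution for the true covariate X.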
Retention-error patterns in complex alphanumeric serial-recall tasks.
Mathy, Fabien; Varré, Jean-Stéphane
2013-01-01
We propose a new method based on an algorithm usually dedicated to DNA sequence alignment in order to both reliably score short-term memory performance on immediate serial-recall tasks and analyse retention-error patterns. There can be considerable confusion on how performance on immediate serial list recall tasks is scored, especially when the to-be-remembered items are sampled with replacement. We discuss the utility of sequence-alignment algorithms to compare the stimuli to the participants' responses. The idea is that deletion, substitution, translocation, and insertion errors, which are typical in DNA, are also typical putative errors in short-term memory (respectively omission, confusion, permutation, and intrusion errors). We analyse four data sets in which alphanumeric lists included a few (or many) repetitions. After examining the method on two simple data sets, we show that sequence alignment offers 1) a compelling method for measuring capacity in terms of chunks when many regularities are introduced in the material (third data set) and 2) a reliable estimator of individual differences in short-term memory capacity. This study illustrates the difficulty of arriving at a good measure of short-term memory performance, and also attempts to characterise the primary factors underpinning remembering and forgetting.
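As an illustration of the scoring idea, the sketch below aligns a recall response to the presented list and maps the edit operations onto the error taxonomy above. It uses a standard-library aligner as a stand-in, not the authors' DNA-alignment implementation, and it does not detect translocation/permutation errors, which their method handles.

```python
# Hedged sketch: score a serial-recall response against the presented list.
from difflib import SequenceMatcher

def score_recall(stimulus, response):
    errors = {"omission": 0, "intrusion": 0, "confusion": 0}
    matcher = SequenceMatcher(a=stimulus, b=response, autojunk=False)
    correct = 0
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            correct += i2 - i1
        elif op == "delete":            # presented item never recalled
            errors["omission"] += i2 - i1
        elif op == "insert":            # recalled item never presented
            errors["intrusion"] += j2 - j1
        elif op == "replace":           # item recalled in place of another
            errors["confusion"] += max(i2 - i1, j2 - j1)
    return correct, errors

# Toy example: stimulus list versus a partly incorrect recall.
print(score_recall(list("7K3MQ9"), list("7KM3X")))
```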
A Modified MinMax k-Means Algorithm Based on PSO
2016-01-01
The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in its execution. Since different parameter values lead to different clustering errors, it is crucial to choose them appropriately. The original algorithm provides a practical framework that extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that, once the maximum exponent parameter is set, the programme can reach the lowest intra-cluster errors. However, our experiments show that this is not always correct. In this paper, we modify the MinMax k-means algorithm using particle swarm optimization (PSO) to determine the parameter values that allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several commonly used data sets under different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm reaches the lowest clustering errors automatically. PMID:27656201
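To make the objective concrete, here is a hedged sketch of the quantity MinMax k-means targets: the largest per-cluster sum of squared errors for a given assignment. The data, labels and function name are illustrative, not taken from the paper.

```python
# Hedged sketch: the maximum intra-cluster error for a fixed assignment.
import numpy as np

def max_intracluster_error(X, labels, centers):
    """Return the largest per-cluster sum of squared errors."""
    sses = []
    for k, c in enumerate(centers):
        members = X[labels == k]
        if len(members):
            sses.append(np.sum((members - c) ** 2))
    return max(sses)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.2, 4.9], [9.0, 0.0]])
labels = np.array([0, 0, 1, 1, 1])
centers = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
print(max_intracluster_error(X, labels, centers))
```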
The Cut-Score Operating Function: A New Tool to Aid in Standard Setting
ERIC Educational Resources Information Center
Grabovsky, Irina; Wainer, Howard
2017-01-01
In this essay, we describe the construction and use of the Cut-Score Operating Function in aiding standard setting decisions. The Cut-Score Operating Function shows the relation between the cut-score chosen and the consequent error rate. It allows error rates to be defined by multiple loss functions and will show the behavior of each loss…
A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps
NASA Astrophysics Data System (ADS)
Brown, Scott
Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors and others who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. The methods included four fuzzy categorical models: the fuzzy kappa model, fuzzy inference, cell aggregation, and the epsilon band. The fifth method used conventional crisp classification. I applied these methods to a case-study map and to simulated data in two sets: a test set with misregistration error, and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. Rough-set epsilon bands show the greatest increase in similarity for test maps relative to the control data. Conversely, the fuzzy inference model reports a decrease in test-map similarity.
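A hedged sketch of the crisp baseline mentioned above: per-cell agreement between two categorical rasters summarized by raw accuracy and the kappa statistic. The fuzzy models from the study are not reproduced here, and the two toy rasters are invented.

```python
# Hedged sketch: crisp agreement between two categorical rasters.
import numpy as np
from sklearn.metrics import cohen_kappa_score

reference = np.array([[1, 1, 2], [1, 2, 2], [3, 3, 2]])
test_map  = np.array([[1, 2, 2], [1, 2, 2], [3, 3, 3]])

raw_accuracy = np.mean(reference == test_map)
kappa = cohen_kappa_score(reference.ravel(), test_map.ravel())
print(raw_accuracy, kappa)
```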
Accounting for misclassification error in retrospective smoking data.
Kenkel, Donald S; Lillard, Dean R; Mathios, Alan D
2004-10-01
Recent waves of major longitudinal surveys in the US and other countries include retrospective questions about the timing of smoking initiation and cessation, creating a potentially important but under-utilized source of information on smoking behavior over the life course. In this paper, we explore the extent of, consequences of, and possible solutions to misclassification errors in models of smoking participation that use data generated from retrospective reports. In our empirical work, we exploit the fact that the National Longitudinal Survey of Youth 1979 provides both contemporaneous and retrospective information about smoking status in certain years. We compare the results from four sets of models of smoking participation. The first set of results comes from baseline probit models of smoking participation estimated from contemporaneously reported information. The second set comes from models that are identical except that the dependent variable is based on retrospective information. The last two sets come from models that take a parametric approach to accounting for a simple form of misclassification error. Our preliminary results suggest that accounting for misclassification error is important. However, the adjusted maximum likelihood estimation approach to accounting for misclassification does not always perform as expected. Copyright 2004 John Wiley & Sons, Ltd.
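One common parametric formulation of the kind described, written in our notation (the paper's exact specification may differ): let ỹ denote reported smoking status and y the true status.

```latex
% \tilde{y} = reported status, y = true status,
% \alpha_0 = P(\tilde{y}=1 \mid y=0), \alpha_1 = P(\tilde{y}=0 \mid y=1).
\[
P(\tilde{y} = 1 \mid x)
  = \alpha_0 + (1 - \alpha_0 - \alpha_1)\,\Phi(x^{\top}\beta) ,
\]
```

where Φ(x'β) is the probit probability of true smoking participation; the misclassification rates α₀ and α₁ can then be estimated jointly with β by maximum likelihood.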
Accurate Classification of RNA Structures Using Topological Fingerprints
Li, Kejie; Gribskov, Michael
2016-01-01
While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571
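A hedged, greatly simplified sketch of fingerprint comparison: if each structure's fingerprint is reduced to a set of subgraph identifiers, two fingerprints can be compared with a Jaccard-style similarity. The identifiers and the scoring choice below are ours; the actual XIOS fingerprint matching is more elaborate.

```python
# Hedged sketch: compare two RNA "fingerprints" as sets of subgraph labels.
def jaccard(fp_a, fp_b):
    """Similarity between two sets of subgraph identifiers."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

fingerprint_reference = {"g1", "g2", "g5", "g7"}   # invented labels
fingerprint_query     = {"g1", "g2", "g7", "g9"}
print(jaccard(fingerprint_reference, fingerprint_query))
```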